Test Report: Docker_Linux_docker_arm64 19640

e5b440675da001c9bcd97e7df406aef1ef05cbc8:2024-09-14:36202

Test failures (1/343)

| Order | Failed test                  | Duration (s) |
|-------|------------------------------|--------------|
| 33    | TestAddons/parallel/Registry | 73.52        |
TestAddons/parallel/Registry (73.52s)

=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry
=== CONT  TestAddons/parallel/Registry
addons_test.go:332: registry stabilized in 3.114446ms
addons_test.go:334: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-66c9cd494c-696bs" [7aa69fd1-c981-411c-bdd9-00cacd8b1736] Running
addons_test.go:334: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 5.003713001s
addons_test.go:337: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-proxy-sgv4q" [a5a96a3a-0fec-4005-b778-df5a7261b085] Running
addons_test.go:337: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.004181314s
addons_test.go:342: (dbg) Run:  kubectl --context addons-467916 delete po -l run=registry-test --now
addons_test.go:347: (dbg) Run:  kubectl --context addons-467916 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:347: (dbg) Non-zero exit: kubectl --context addons-467916 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": exit status 1 (1m0.114966046s)

-- stdout --
	pod "registry-test" deleted
-- /stdout --
** stderr **
	error: timed out waiting for the condition
** /stderr **
addons_test.go:349: failed to hit registry.kube-system.svc.cluster.local. args "kubectl --context addons-467916 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c \"wget --spider -S http://registry.kube-system.svc.cluster.local\"" failed: exit status 1
addons_test.go:353: expected curl response be "HTTP/1.1 200", but got *pod "registry-test" deleted
*
addons_test.go:361: (dbg) Run:  out/minikube-linux-arm64 -p addons-467916 ip
2024/09/13 23:40:17 [DEBUG] GET http://192.168.49.2:5000
addons_test.go:390: (dbg) Run:  out/minikube-linux-arm64 -p addons-467916 addons disable registry --alsologtostderr -v=1
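For by-hand triage, the failing in-cluster probe can be replayed outside the test harness. A minimal sketch (the helper script itself is hypothetical; it only reconstructs the exact kubectl command the test ran at addons_test.go:347, and assumes the addons-467916 profile from this run is still up):

```shell
#!/bin/sh
# Hypothetical helper: rebuilds the registry probe that failed above.
# CONTEXT and URL are taken from the log; adjust if your profile differs.
CONTEXT="addons-467916"
URL="http://registry.kube-system.svc.cluster.local"

# --spider asks wget to check reachability without downloading the body;
# -S prints the response headers the test matches against "HTTP/1.1 200".
CMD="kubectl --context $CONTEXT run --rm registry-test --restart=Never \
  --image=gcr.io/k8s-minikube/busybox -it -- sh -c \"wget --spider -S $URL\""

# Dry run: print the command rather than executing it, since running it
# needs the live cluster. Drop the echo (or eval "$CMD") to actually probe.
echo "$CMD"
```

Since both registry pods reported healthy earlier in the log, a timeout from this probe would suggest in-cluster Service or DNS resolution rather than the registry pods themselves.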
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestAddons/parallel/Registry]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect addons-467916
helpers_test.go:235: (dbg) docker inspect addons-467916:
-- stdout --
	[
	    {
	        "Id": "3e46c205fd9fdf9936595f6805f92ef284ae2b2eb199ae5ba7bc069101ccb7ee",
	        "Created": "2024-09-13T23:27:04.116129171Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 8789,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2024-09-13T23:27:04.30290694Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:fe3365929e6ce54b4c06f0bc3d1500dff08f535844ef4978f2c45cd67c542134",
	        "ResolvConfPath": "/var/lib/docker/containers/3e46c205fd9fdf9936595f6805f92ef284ae2b2eb199ae5ba7bc069101ccb7ee/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/3e46c205fd9fdf9936595f6805f92ef284ae2b2eb199ae5ba7bc069101ccb7ee/hostname",
	        "HostsPath": "/var/lib/docker/containers/3e46c205fd9fdf9936595f6805f92ef284ae2b2eb199ae5ba7bc069101ccb7ee/hosts",
	        "LogPath": "/var/lib/docker/containers/3e46c205fd9fdf9936595f6805f92ef284ae2b2eb199ae5ba7bc069101ccb7ee/3e46c205fd9fdf9936595f6805f92ef284ae2b2eb199ae5ba7bc069101ccb7ee-json.log",
	        "Name": "/addons-467916",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "addons-467916:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "addons-467916",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4194304000,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8388608000,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/47ee68dcf8229ccc0c2a9065adf5a1c7211d5ae7bae61b2ec57b78a1d00ef3cd-init/diff:/var/lib/docker/overlay2/3914b9ea48552f1ad87d259f6b4331a4fff9b014ba19c452962370626d811e7e/diff",
	                "MergedDir": "/var/lib/docker/overlay2/47ee68dcf8229ccc0c2a9065adf5a1c7211d5ae7bae61b2ec57b78a1d00ef3cd/merged",
	                "UpperDir": "/var/lib/docker/overlay2/47ee68dcf8229ccc0c2a9065adf5a1c7211d5ae7bae61b2ec57b78a1d00ef3cd/diff",
	                "WorkDir": "/var/lib/docker/overlay2/47ee68dcf8229ccc0c2a9065adf5a1c7211d5ae7bae61b2ec57b78a1d00ef3cd/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "addons-467916",
	                "Source": "/var/lib/docker/volumes/addons-467916/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "addons-467916",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726243947-19640@sha256:bb1287c9c0ec51ba7d8272f0f8073d6e9758ad79ff87c787fdce1c3513743243",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "addons-467916",
	                "name.minikube.sigs.k8s.io": "addons-467916",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "d668f685b149ec02541db5d470038ced85653ac9194d9fce6a340eea8781e419",
	            "SandboxKey": "/var/run/docker/netns/d668f685b149",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32768"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32769"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32772"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32770"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32771"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "addons-467916": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "DriverOpts": null,
	                    "NetworkID": "1afd9de1c82fe6aec8833cf8738e974d4695724c6955f908d7f7be812ec356b6",
	                    "EndpointID": "d97d337eb2d7924858814dabd7febf98883653e3928a467469f8513eca34baf7",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "addons-467916",
	                        "3e46c205fd9f"
	                    ]
	                }
	            }
	        }
	    }
	]
-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p addons-467916 -n addons-467916
helpers_test.go:244: <<< TestAddons/parallel/Registry FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestAddons/parallel/Registry]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 -p addons-467916 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-arm64 -p addons-467916 logs -n 25: (1.263189832s)
helpers_test.go:252: TestAddons/parallel/Registry logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------------------------------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	| Command |                                            Args                                             |        Profile         |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------------------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only                                                                     | download-only-385391   | jenkins | v1.34.0 | 13 Sep 24 23:26 UTC |                     |
	|         | -p download-only-385391                                                                     |                        |         |         |                     |                     |
	|         | --force --alsologtostderr                                                                   |                        |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                                                                |                        |         |         |                     |                     |
	|         | --container-runtime=docker                                                                  |                        |         |         |                     |                     |
	|         | --driver=docker                                                                             |                        |         |         |                     |                     |
	|         | --container-runtime=docker                                                                  |                        |         |         |                     |                     |
	| delete  | --all                                                                                       | minikube               | jenkins | v1.34.0 | 13 Sep 24 23:26 UTC | 13 Sep 24 23:26 UTC |
	| delete  | -p download-only-385391                                                                     | download-only-385391   | jenkins | v1.34.0 | 13 Sep 24 23:26 UTC | 13 Sep 24 23:26 UTC |
	| start   | -o=json --download-only                                                                     | download-only-915155   | jenkins | v1.34.0 | 13 Sep 24 23:26 UTC |                     |
	|         | -p download-only-915155                                                                     |                        |         |         |                     |                     |
	|         | --force --alsologtostderr                                                                   |                        |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                                                                |                        |         |         |                     |                     |
	|         | --container-runtime=docker                                                                  |                        |         |         |                     |                     |
	|         | --driver=docker                                                                             |                        |         |         |                     |                     |
	|         | --container-runtime=docker                                                                  |                        |         |         |                     |                     |
	| delete  | --all                                                                                       | minikube               | jenkins | v1.34.0 | 13 Sep 24 23:26 UTC | 13 Sep 24 23:26 UTC |
	| delete  | -p download-only-915155                                                                     | download-only-915155   | jenkins | v1.34.0 | 13 Sep 24 23:26 UTC | 13 Sep 24 23:26 UTC |
	| delete  | -p download-only-385391                                                                     | download-only-385391   | jenkins | v1.34.0 | 13 Sep 24 23:26 UTC | 13 Sep 24 23:26 UTC |
	| delete  | -p download-only-915155                                                                     | download-only-915155   | jenkins | v1.34.0 | 13 Sep 24 23:26 UTC | 13 Sep 24 23:26 UTC |
	| start   | --download-only -p                                                                          | download-docker-757156 | jenkins | v1.34.0 | 13 Sep 24 23:26 UTC |                     |
	|         | download-docker-757156                                                                      |                        |         |         |                     |                     |
	|         | --alsologtostderr                                                                           |                        |         |         |                     |                     |
	|         | --driver=docker                                                                             |                        |         |         |                     |                     |
	|         | --container-runtime=docker                                                                  |                        |         |         |                     |                     |
	| delete  | -p download-docker-757156                                                                   | download-docker-757156 | jenkins | v1.34.0 | 13 Sep 24 23:26 UTC | 13 Sep 24 23:26 UTC |
	| start   | --download-only -p                                                                          | binary-mirror-598186   | jenkins | v1.34.0 | 13 Sep 24 23:26 UTC |                     |
	|         | binary-mirror-598186                                                                        |                        |         |         |                     |                     |
	|         | --alsologtostderr                                                                           |                        |         |         |                     |                     |
	|         | --binary-mirror                                                                             |                        |         |         |                     |                     |
	|         | http://127.0.0.1:36811                                                                      |                        |         |         |                     |                     |
	|         | --driver=docker                                                                             |                        |         |         |                     |                     |
	|         | --container-runtime=docker                                                                  |                        |         |         |                     |                     |
	| delete  | -p binary-mirror-598186                                                                     | binary-mirror-598186   | jenkins | v1.34.0 | 13 Sep 24 23:26 UTC | 13 Sep 24 23:26 UTC |
	| addons  | disable dashboard -p                                                                        | addons-467916          | jenkins | v1.34.0 | 13 Sep 24 23:26 UTC |                     |
	|         | addons-467916                                                                               |                        |         |         |                     |                     |
	| addons  | enable dashboard -p                                                                         | addons-467916          | jenkins | v1.34.0 | 13 Sep 24 23:26 UTC |                     |
	|         | addons-467916                                                                               |                        |         |         |                     |                     |
	| start   | -p addons-467916 --wait=true                                                                | addons-467916          | jenkins | v1.34.0 | 13 Sep 24 23:26 UTC | 13 Sep 24 23:30 UTC |
	|         | --memory=4000 --alsologtostderr                                                             |                        |         |         |                     |                     |
	|         | --addons=registry                                                                           |                        |         |         |                     |                     |
	|         | --addons=metrics-server                                                                     |                        |         |         |                     |                     |
	|         | --addons=volumesnapshots                                                                    |                        |         |         |                     |                     |
	|         | --addons=csi-hostpath-driver                                                                |                        |         |         |                     |                     |
	|         | --addons=gcp-auth                                                                           |                        |         |         |                     |                     |
	|         | --addons=cloud-spanner                                                                      |                        |         |         |                     |                     |
	|         | --addons=inspektor-gadget                                                                   |                        |         |         |                     |                     |
	|         | --addons=storage-provisioner-rancher                                                        |                        |         |         |                     |                     |
	|         | --addons=nvidia-device-plugin                                                               |                        |         |         |                     |                     |
	|         | --addons=yakd --addons=volcano                                                              |                        |         |         |                     |                     |
	|         | --driver=docker                                                                             |                        |         |         |                     |                     |
	|         | --container-runtime=docker                                                                  |                        |         |         |                     |                     |
	|         | --addons=ingress                                                                            |                        |         |         |                     |                     |
	|         | --addons=ingress-dns                                                                        |                        |         |         |                     |                     |
	| addons  | addons-467916 addons disable                                                                | addons-467916          | jenkins | v1.34.0 | 13 Sep 24 23:30 UTC | 13 Sep 24 23:31 UTC |
	|         | volcano --alsologtostderr -v=1                                                              |                        |         |         |                     |                     |
	| addons  | addons-467916 addons disable                                                                | addons-467916          | jenkins | v1.34.0 | 13 Sep 24 23:39 UTC | 13 Sep 24 23:39 UTC |
	|         | yakd --alsologtostderr -v=1                                                                 |                        |         |         |                     |                     |
	| addons  | disable nvidia-device-plugin                                                                | addons-467916          | jenkins | v1.34.0 | 13 Sep 24 23:39 UTC | 13 Sep 24 23:39 UTC |
	|         | -p addons-467916                                                                            |                        |         |         |                     |                     |
	| ssh     | addons-467916 ssh cat                                                                       | addons-467916          | jenkins | v1.34.0 | 13 Sep 24 23:39 UTC | 13 Sep 24 23:39 UTC |
	|         | /opt/local-path-provisioner/pvc-57a7b523-7db8-4825-ad64-698dbbbd6c68_default_test-pvc/file1 |                        |         |         |                     |                     |
	| addons  | addons-467916 addons disable                                                                | addons-467916          | jenkins | v1.34.0 | 13 Sep 24 23:39 UTC |                     |
	|         | storage-provisioner-rancher                                                                 |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| ip      | addons-467916 ip                                                                            | addons-467916          | jenkins | v1.34.0 | 13 Sep 24 23:40 UTC | 13 Sep 24 23:40 UTC |
	| addons  | addons-467916 addons disable                                                                | addons-467916          | jenkins | v1.34.0 | 13 Sep 24 23:40 UTC | 13 Sep 24 23:40 UTC |
	|         | registry --alsologtostderr                                                                  |                        |         |         |                     |                     |
	|         | -v=1                                                                                        |                        |         |         |                     |                     |
	|---------|---------------------------------------------------------------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/13 23:26:39
	Running on machine: ip-172-31-30-239
	Binary: Built with gc go1.23.0 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0913 23:26:39.577701    8290 out.go:345] Setting OutFile to fd 1 ...
	I0913 23:26:39.577894    8290 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0913 23:26:39.577912    8290 out.go:358] Setting ErrFile to fd 2...
	I0913 23:26:39.577937    8290 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0913 23:26:39.578310    8290 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19640-2224/.minikube/bin
	I0913 23:26:39.578910    8290 out.go:352] Setting JSON to false
	I0913 23:26:39.579742    8290 start.go:129] hostinfo: {"hostname":"ip-172-31-30-239","uptime":547,"bootTime":1726269452,"procs":146,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1069-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I0913 23:26:39.579843    8290 start.go:139] virtualization:  
	I0913 23:26:39.583146    8290 out.go:177] * [addons-467916] minikube v1.34.0 on Ubuntu 20.04 (arm64)
	I0913 23:26:39.586117    8290 out.go:177]   - MINIKUBE_LOCATION=19640
	I0913 23:26:39.586169    8290 notify.go:220] Checking for updates...
	I0913 23:26:39.591437    8290 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0913 23:26:39.593791    8290 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19640-2224/kubeconfig
	I0913 23:26:39.596783    8290 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19640-2224/.minikube
	I0913 23:26:39.599059    8290 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0913 23:26:39.601531    8290 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0913 23:26:39.604094    8290 driver.go:394] Setting default libvirt URI to qemu:///system
	I0913 23:26:39.635910    8290 docker.go:123] docker version: linux-27.2.1:Docker Engine - Community
	I0913 23:26:39.636048    8290 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0913 23:26:39.695645    8290 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:29 OomKillDisable:true NGoroutines:44 SystemTime:2024-09-13 23:26:39.686432056 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1069-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214827008 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:27.2.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.2]] Warnings:<nil>}}
	I0913 23:26:39.695752    8290 docker.go:318] overlay module found
	I0913 23:26:39.699557    8290 out.go:177] * Using the docker driver based on user configuration
	I0913 23:26:39.701375    8290 start.go:297] selected driver: docker
	I0913 23:26:39.701394    8290 start.go:901] validating driver "docker" against <nil>
	I0913 23:26:39.701409    8290 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0913 23:26:39.702039    8290 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0913 23:26:39.755309    8290 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:29 OomKillDisable:true NGoroutines:44 SystemTime:2024-09-13 23:26:39.745739903 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1069-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214827008 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:27.2.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.2]] Warnings:<nil>}}
	I0913 23:26:39.755528    8290 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0913 23:26:39.755775    8290 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0913 23:26:39.757923    8290 out.go:177] * Using Docker driver with root privileges
	I0913 23:26:39.760147    8290 cni.go:84] Creating CNI manager for ""
	I0913 23:26:39.760227    8290 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0913 23:26:39.760242    8290 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0913 23:26:39.760410    8290 start.go:340] cluster config:
	{Name:addons-467916 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726243947-19640@sha256:bb1287c9c0ec51ba7d8272f0f8073d6e9758ad79ff87c787fdce1c3513743243 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-467916 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0913 23:26:39.762534    8290 out.go:177] * Starting "addons-467916" primary control-plane node in "addons-467916" cluster
	I0913 23:26:39.764555    8290 cache.go:121] Beginning downloading kic base image for docker with docker
	I0913 23:26:39.766328    8290 out.go:177] * Pulling base image v0.0.45-1726243947-19640 ...
	I0913 23:26:39.768386    8290 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0913 23:26:39.768437    8290 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19640-2224/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I0913 23:26:39.768448    8290 cache.go:56] Caching tarball of preloaded images
	I0913 23:26:39.768484    8290 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726243947-19640@sha256:bb1287c9c0ec51ba7d8272f0f8073d6e9758ad79ff87c787fdce1c3513743243 in local docker daemon
	I0913 23:26:39.768541    8290 preload.go:172] Found /home/jenkins/minikube-integration/19640-2224/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0913 23:26:39.768552    8290 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I0913 23:26:39.768903    8290 profile.go:143] Saving config to /home/jenkins/minikube-integration/19640-2224/.minikube/profiles/addons-467916/config.json ...
	I0913 23:26:39.768931    8290 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19640-2224/.minikube/profiles/addons-467916/config.json: {Name:mk04149802514247e0a4c072b53e2179beee3eef Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0913 23:26:39.783909    8290 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726243947-19640@sha256:bb1287c9c0ec51ba7d8272f0f8073d6e9758ad79ff87c787fdce1c3513743243 to local cache
	I0913 23:26:39.784032    8290 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726243947-19640@sha256:bb1287c9c0ec51ba7d8272f0f8073d6e9758ad79ff87c787fdce1c3513743243 in local cache directory
	I0913 23:26:39.784056    8290 image.go:66] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726243947-19640@sha256:bb1287c9c0ec51ba7d8272f0f8073d6e9758ad79ff87c787fdce1c3513743243 in local cache directory, skipping pull
	I0913 23:26:39.784062    8290 image.go:135] gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726243947-19640@sha256:bb1287c9c0ec51ba7d8272f0f8073d6e9758ad79ff87c787fdce1c3513743243 exists in cache, skipping pull
	I0913 23:26:39.784069    8290 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726243947-19640@sha256:bb1287c9c0ec51ba7d8272f0f8073d6e9758ad79ff87c787fdce1c3513743243 as a tarball
	I0913 23:26:39.784078    8290 cache.go:162] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726243947-19640@sha256:bb1287c9c0ec51ba7d8272f0f8073d6e9758ad79ff87c787fdce1c3513743243 from local cache
	I0913 23:26:57.054717    8290 cache.go:164] successfully loaded and using gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726243947-19640@sha256:bb1287c9c0ec51ba7d8272f0f8073d6e9758ad79ff87c787fdce1c3513743243 from cached tarball
	I0913 23:26:57.054758    8290 cache.go:194] Successfully downloaded all kic artifacts
	I0913 23:26:57.054788    8290 start.go:360] acquireMachinesLock for addons-467916: {Name:mk5ad12b981cfbd5c2639bc0eec7dbb61e9fe6ac Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0913 23:26:57.054927    8290 start.go:364] duration metric: took 106.387µs to acquireMachinesLock for "addons-467916"
	I0913 23:26:57.054958    8290 start.go:93] Provisioning new machine with config: &{Name:addons-467916 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726243947-19640@sha256:bb1287c9c0ec51ba7d8272f0f8073d6e9758ad79ff87c787fdce1c3513743243 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-467916 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0913 23:26:57.055043    8290 start.go:125] createHost starting for "" (driver="docker")
	I0913 23:26:57.057453    8290 out.go:235] * Creating docker container (CPUs=2, Memory=4000MB) ...
	I0913 23:26:57.057702    8290 start.go:159] libmachine.API.Create for "addons-467916" (driver="docker")
	I0913 23:26:57.057742    8290 client.go:168] LocalClient.Create starting
	I0913 23:26:57.057886    8290 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/19640-2224/.minikube/certs/ca.pem
	I0913 23:26:57.273905    8290 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/19640-2224/.minikube/certs/cert.pem
	I0913 23:26:57.594714    8290 cli_runner.go:164] Run: docker network inspect addons-467916 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0913 23:26:57.610530    8290 cli_runner.go:211] docker network inspect addons-467916 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0913 23:26:57.610629    8290 network_create.go:284] running [docker network inspect addons-467916] to gather additional debugging logs...
	I0913 23:26:57.610647    8290 cli_runner.go:164] Run: docker network inspect addons-467916
	W0913 23:26:57.626440    8290 cli_runner.go:211] docker network inspect addons-467916 returned with exit code 1
	I0913 23:26:57.626470    8290 network_create.go:287] error running [docker network inspect addons-467916]: docker network inspect addons-467916: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network addons-467916 not found
	I0913 23:26:57.626483    8290 network_create.go:289] output of [docker network inspect addons-467916]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network addons-467916 not found
	
	** /stderr **
	I0913 23:26:57.626591    8290 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0913 23:26:57.642366    8290 network.go:206] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x4001ac60f0}
	I0913 23:26:57.642419    8290 network_create.go:124] attempt to create docker network addons-467916 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I0913 23:26:57.642493    8290 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=addons-467916 addons-467916
	I0913 23:26:57.716159    8290 network_create.go:108] docker network addons-467916 192.168.49.0/24 created
	I0913 23:26:57.716201    8290 kic.go:121] calculated static IP "192.168.49.2" for the "addons-467916" container
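The static address in the line above follows mechanically from the subnet reserved just before it: the gateway takes host .1 and the node container takes host .2. A minimal shell sketch of that derivation (the subnet value is copied from the log; variable names are illustrative):

```shell
# Derive the gateway and the first node IP from a /24 subnet,
# mirroring minikube's choice of 192.168.49.1 / 192.168.49.2.
subnet="192.168.49.0/24"       # value taken from the log above
base="${subnet%.*}"            # strip the trailing ".0/24" -> 192.168.49
gateway="${base}.1"
static_ip="${base}.2"
echo "gateway=${gateway} node=${static_ip}"
```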
	I0913 23:26:57.716315    8290 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0913 23:26:57.731375    8290 cli_runner.go:164] Run: docker volume create addons-467916 --label name.minikube.sigs.k8s.io=addons-467916 --label created_by.minikube.sigs.k8s.io=true
	I0913 23:26:57.748703    8290 oci.go:103] Successfully created a docker volume addons-467916
	I0913 23:26:57.748792    8290 cli_runner.go:164] Run: docker run --rm --name addons-467916-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-467916 --entrypoint /usr/bin/test -v addons-467916:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726243947-19640@sha256:bb1287c9c0ec51ba7d8272f0f8073d6e9758ad79ff87c787fdce1c3513743243 -d /var/lib
	I0913 23:26:59.952003    8290 cli_runner.go:217] Completed: docker run --rm --name addons-467916-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-467916 --entrypoint /usr/bin/test -v addons-467916:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726243947-19640@sha256:bb1287c9c0ec51ba7d8272f0f8073d6e9758ad79ff87c787fdce1c3513743243 -d /var/lib: (2.203166333s)
	I0913 23:26:59.952030    8290 oci.go:107] Successfully prepared a docker volume addons-467916
	I0913 23:26:59.952071    8290 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0913 23:26:59.952090    8290 kic.go:194] Starting extracting preloaded images to volume ...
	I0913 23:26:59.952153    8290 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/19640-2224/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v addons-467916:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726243947-19640@sha256:bb1287c9c0ec51ba7d8272f0f8073d6e9758ad79ff87c787fdce1c3513743243 -I lz4 -xf /preloaded.tar -C /extractDir
	I0913 23:27:04.033539    8290 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/19640-2224/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v addons-467916:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726243947-19640@sha256:bb1287c9c0ec51ba7d8272f0f8073d6e9758ad79ff87c787fdce1c3513743243 -I lz4 -xf /preloaded.tar -C /extractDir: (4.081336528s)
	I0913 23:27:04.033575    8290 kic.go:203] duration metric: took 4.081480519s to extract preloaded images to volume ...
	W0913 23:27:04.033728    8290 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I0913 23:27:04.033851    8290 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0913 23:27:04.101608    8290 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname addons-467916 --name addons-467916 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-467916 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=addons-467916 --network addons-467916 --ip 192.168.49.2 --volume addons-467916:/var --security-opt apparmor=unconfined --memory=4000mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726243947-19640@sha256:bb1287c9c0ec51ba7d8272f0f8073d6e9758ad79ff87c787fdce1c3513743243
	I0913 23:27:04.472068    8290 cli_runner.go:164] Run: docker container inspect addons-467916 --format={{.State.Running}}
	I0913 23:27:04.492723    8290 cli_runner.go:164] Run: docker container inspect addons-467916 --format={{.State.Status}}
	I0913 23:27:04.514098    8290 cli_runner.go:164] Run: docker exec addons-467916 stat /var/lib/dpkg/alternatives/iptables
	I0913 23:27:04.588593    8290 oci.go:144] the created container "addons-467916" has a running status.
	I0913 23:27:04.588623    8290 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/19640-2224/.minikube/machines/addons-467916/id_rsa...
	I0913 23:27:05.902224    8290 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/19640-2224/.minikube/machines/addons-467916/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0913 23:27:05.921213    8290 cli_runner.go:164] Run: docker container inspect addons-467916 --format={{.State.Status}}
	I0913 23:27:05.936789    8290 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0913 23:27:05.936815    8290 kic_runner.go:114] Args: [docker exec --privileged addons-467916 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0913 23:27:05.991015    8290 cli_runner.go:164] Run: docker container inspect addons-467916 --format={{.State.Status}}
	I0913 23:27:06.025942    8290 machine.go:93] provisionDockerMachine start ...
	I0913 23:27:06.026044    8290 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-467916
	I0913 23:27:06.043849    8290 main.go:141] libmachine: Using SSH client type: native
	I0913 23:27:06.044129    8290 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x41abe0] 0x41d420 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I0913 23:27:06.044148    8290 main.go:141] libmachine: About to run SSH command:
	hostname
	I0913 23:27:06.163944    8290 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-467916
	
	I0913 23:27:06.163969    8290 ubuntu.go:169] provisioning hostname "addons-467916"
	I0913 23:27:06.164031    8290 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-467916
	I0913 23:27:06.181336    8290 main.go:141] libmachine: Using SSH client type: native
	I0913 23:27:06.181582    8290 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x41abe0] 0x41d420 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I0913 23:27:06.181600    8290 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-467916 && echo "addons-467916" | sudo tee /etc/hostname
	I0913 23:27:06.315760    8290 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-467916
	
	I0913 23:27:06.315839    8290 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-467916
	I0913 23:27:06.333635    8290 main.go:141] libmachine: Using SSH client type: native
	I0913 23:27:06.333889    8290 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x41abe0] 0x41d420 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I0913 23:27:06.333912    8290 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-467916' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-467916/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-467916' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0913 23:27:06.452438    8290 main.go:141] libmachine: SSH cmd err, output: <nil>: 
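The script the provisioner just ran over SSH is an idempotent `/etc/hosts` patch: it rewrites the `127.0.1.1` entry, or appends one, only when the new hostname is missing. The same logic can be exercised safely against a scratch copy of the hosts file (the temp path and the pre-existing `old-name` entry below are illustrative; POSIX character classes stand in for GNU's `\s`):

```shell
# Apply minikube's hostname patch to a temporary hosts file.
hosts=$(mktemp)
name="addons-467916"
printf '127.0.0.1 localhost\n127.0.1.1 old-name\n' > "$hosts"
if ! grep -Eq "[[:space:]]${name}\$" "$hosts"; then
  if grep -Eq '^127\.0\.1\.1[[:space:]]' "$hosts"; then
    # A 127.0.1.1 line exists: rewrite it in place.
    sed -i -E "s/^127\.0\.1\.1[[:space:]].*/127.0.1.1 ${name}/" "$hosts"
  else
    # No 127.0.1.1 line yet: append one.
    echo "127.0.1.1 ${name}" >> "$hosts"
  fi
fi
cat "$hosts"
```

Running the block a second time is a no-op, which is why the real command can be re-sent on every provision.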
	I0913 23:27:06.452467    8290 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/19640-2224/.minikube CaCertPath:/home/jenkins/minikube-integration/19640-2224/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19640-2224/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19640-2224/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19640-2224/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19640-2224/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19640-2224/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19640-2224/.minikube}
	I0913 23:27:06.452498    8290 ubuntu.go:177] setting up certificates
	I0913 23:27:06.452509    8290 provision.go:84] configureAuth start
	I0913 23:27:06.452572    8290 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-467916
	I0913 23:27:06.469718    8290 provision.go:143] copyHostCerts
	I0913 23:27:06.469801    8290 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19640-2224/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19640-2224/.minikube/key.pem (1679 bytes)
	I0913 23:27:06.469928    8290 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19640-2224/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19640-2224/.minikube/ca.pem (1082 bytes)
	I0913 23:27:06.469993    8290 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19640-2224/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19640-2224/.minikube/cert.pem (1123 bytes)
	I0913 23:27:06.470045    8290 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19640-2224/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19640-2224/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19640-2224/.minikube/certs/ca-key.pem org=jenkins.addons-467916 san=[127.0.0.1 192.168.49.2 addons-467916 localhost minikube]
	I0913 23:27:07.304409    8290 provision.go:177] copyRemoteCerts
	I0913 23:27:07.304473    8290 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0913 23:27:07.304522    8290 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-467916
	I0913 23:27:07.320622    8290 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19640-2224/.minikube/machines/addons-467916/id_rsa Username:docker}
	I0913 23:27:07.408812    8290 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-2224/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0913 23:27:07.432584    8290 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-2224/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0913 23:27:07.456076    8290 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-2224/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0913 23:27:07.479287    8290 provision.go:87] duration metric: took 1.026752193s to configureAuth
	I0913 23:27:07.479309    8290 ubuntu.go:193] setting minikube options for container-runtime
	I0913 23:27:07.479493    8290 config.go:182] Loaded profile config "addons-467916": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0913 23:27:07.479559    8290 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-467916
	I0913 23:27:07.495849    8290 main.go:141] libmachine: Using SSH client type: native
	I0913 23:27:07.496093    8290 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x41abe0] 0x41d420 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I0913 23:27:07.496107    8290 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0913 23:27:07.616432    8290 main.go:141] libmachine: SSH cmd err, output: <nil>: overlay
	
	I0913 23:27:07.616451    8290 ubuntu.go:71] root file system type: overlay
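That conclusion comes from the one-liner run over SSH just above; the same probe works on any GNU/Linux host (`--output` is a GNU coreutils extension to `df`):

```shell
# Report the filesystem type backing /; minikube uses this answer to
# pick a suitable Docker storage driver (the log above saw "overlay").
fstype=$(df --output=fstype / | tail -n 1)
echo "root filesystem type: ${fstype}"
```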
	I0913 23:27:07.616571    8290 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0913 23:27:07.616640    8290 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-467916
	I0913 23:27:07.633127    8290 main.go:141] libmachine: Using SSH client type: native
	I0913 23:27:07.633439    8290 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x41abe0] 0x41d420 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I0913 23:27:07.633529    8290 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0913 23:27:07.763478    8290 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this option.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0913 23:27:07.763569    8290 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-467916
	I0913 23:27:07.780828    8290 main.go:141] libmachine: Using SSH client type: native
	I0913 23:27:07.781069    8290 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x41abe0] 0x41d420 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I0913 23:27:07.781086    8290 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0913 23:27:08.537114    8290 main.go:141] libmachine: SSH cmd err, output: <nil>: --- /lib/systemd/system/docker.service	2024-09-06 12:06:36.000000000 +0000
	+++ /lib/systemd/system/docker.service.new	2024-09-13 23:27:07.756597931 +0000
	@@ -1,46 +1,49 @@
	 [Unit]
	 Description=Docker Application Container Engine
	 Documentation=https://docs.docker.com
	-After=network-online.target docker.socket firewalld.service containerd.service time-set.target
	-Wants=network-online.target containerd.service
	+BindsTo=containerd.service
	+After=network-online.target firewalld.service containerd.service
	+Wants=network-online.target
	 Requires=docker.socket
	+StartLimitBurst=3
	+StartLimitIntervalSec=60
	 
	 [Service]
	 Type=notify
	-# the default is not to use systemd for cgroups because the delegate issues still
	-# exists and systemd currently does not support the cgroup feature set required
	-# for containers run by docker
	-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
	-ExecReload=/bin/kill -s HUP $MAINPID
	-TimeoutStartSec=0
	-RestartSec=2
	-Restart=always
	+Restart=on-failure
	 
	-# Note that StartLimit* options were moved from "Service" to "Unit" in systemd 229.
	-# Both the old, and new location are accepted by systemd 229 and up, so using the old location
	-# to make them work for either version of systemd.
	-StartLimitBurst=3
	 
	-# Note that StartLimitInterval was renamed to StartLimitIntervalSec in systemd 230.
	-# Both the old, and new name are accepted by systemd 230 and up, so using the old name to make
	-# this option work for either version of systemd.
	-StartLimitInterval=60s
	+
	+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	+# The base configuration already specifies an 'ExecStart=...' command. The first directive
	+# here is to clear out that command inherited from the base configuration. Without this,
	+# the command from the base configuration and the command specified here are treated as
	+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	+# will catch this invalid input and refuse to start the service with an error like:
	+#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	+
	+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	+ExecStart=
	+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	+ExecReload=/bin/kill -s HUP $MAINPID
	 
	 # Having non-zero Limit*s causes performance problems due to accounting overhead
	 # in the kernel. We recommend using cgroups to do container-local accounting.
	+LimitNOFILE=infinity
	 LimitNPROC=infinity
	 LimitCORE=infinity
	 
	-# Comment TasksMax if your systemd version does not support it.
	-# Only systemd 226 and above support this option.
	+# Uncomment TasksMax if your systemd version supports it.
	+# Only systemd 226 and above support this option.
	 TasksMax=infinity
	+TimeoutStartSec=0
	 
	 # set delegate yes so that systemd does not reset the cgroups of docker containers
	 Delegate=yes
	 
	 # kill only the docker process, not all processes in the cgroup
	 KillMode=process
	-OOMScoreAdjust=-500
	 
	 [Install]
	 WantedBy=multi-user.target
	Synchronizing state of docker.service with SysV service script with /lib/systemd/systemd-sysv-install.
	Executing: /lib/systemd/systemd-sysv-install enable docker
	
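	The `diff -u … || { mv …; … }` command above swaps the new unit file in (and reloads/restarts docker) only when the rendered file actually differs from what is on disk. A minimal sketch of the same idiom on throwaway files; the paths here are temporary stand-ins and the systemctl calls are omitted:

```shell
# Replace-only-if-changed: diff exits non-zero when the files differ,
# so the block after || runs exactly when an update is needed.
set -eu
dir=$(mktemp -d)
printf 'ExecStart=/usr/bin/dockerd -H fd://\n' > "$dir/docker.service"
printf 'ExecStart=\nExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376\n' > "$dir/docker.service.new"

diff -u "$dir/docker.service" "$dir/docker.service.new" || {
  mv "$dir/docker.service.new" "$dir/docker.service"
  # minikube additionally runs: systemctl daemon-reload && systemctl restart docker
}
```

	When the files are identical, `diff` exits 0 and the unit is left untouched, so a no-op provision pass never restarts the daemon.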
	I0913 23:27:08.537151    8290 machine.go:96] duration metric: took 2.511184289s to provisionDockerMachine
	I0913 23:27:08.537163    8290 client.go:171] duration metric: took 11.479410389s to LocalClient.Create
	I0913 23:27:08.537175    8290 start.go:167] duration metric: took 11.479473552s to libmachine.API.Create "addons-467916"
	I0913 23:27:08.537187    8290 start.go:293] postStartSetup for "addons-467916" (driver="docker")
	I0913 23:27:08.537197    8290 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0913 23:27:08.537264    8290 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0913 23:27:08.537306    8290 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-467916
	I0913 23:27:08.554478    8290 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19640-2224/.minikube/machines/addons-467916/id_rsa Username:docker}
	I0913 23:27:08.645199    8290 ssh_runner.go:195] Run: cat /etc/os-release
	I0913 23:27:08.648356    8290 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0913 23:27:08.648401    8290 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0913 23:27:08.648414    8290 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0913 23:27:08.648424    8290 info.go:137] Remote host: Ubuntu 22.04.4 LTS
	I0913 23:27:08.648437    8290 filesync.go:126] Scanning /home/jenkins/minikube-integration/19640-2224/.minikube/addons for local assets ...
	I0913 23:27:08.648503    8290 filesync.go:126] Scanning /home/jenkins/minikube-integration/19640-2224/.minikube/files for local assets ...
	I0913 23:27:08.648527    8290 start.go:296] duration metric: took 111.334592ms for postStartSetup
	I0913 23:27:08.648835    8290 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-467916
	I0913 23:27:08.665794    8290 profile.go:143] Saving config to /home/jenkins/minikube-integration/19640-2224/.minikube/profiles/addons-467916/config.json ...
	I0913 23:27:08.666084    8290 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0913 23:27:08.666139    8290 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-467916
	I0913 23:27:08.682974    8290 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19640-2224/.minikube/machines/addons-467916/id_rsa Username:docker}
	I0913 23:27:08.768782    8290 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0913 23:27:08.773074    8290 start.go:128] duration metric: took 11.718015545s to createHost
	I0913 23:27:08.773097    8290 start.go:83] releasing machines lock for "addons-467916", held for 11.718157s
	I0913 23:27:08.773195    8290 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-467916
	I0913 23:27:08.789224    8290 ssh_runner.go:195] Run: cat /version.json
	I0913 23:27:08.789274    8290 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0913 23:27:08.789354    8290 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-467916
	I0913 23:27:08.789280    8290 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-467916
	I0913 23:27:08.806526    8290 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19640-2224/.minikube/machines/addons-467916/id_rsa Username:docker}
	I0913 23:27:08.816743    8290 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19640-2224/.minikube/machines/addons-467916/id_rsa Username:docker}
	I0913 23:27:08.899621    8290 ssh_runner.go:195] Run: systemctl --version
	I0913 23:27:09.035483    8290 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0913 23:27:09.040040    8290 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
	I0913 23:27:09.065645    8290 cni.go:230] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
	I0913 23:27:09.065783    8290 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0913 23:27:09.095765    8290 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
	I0913 23:27:09.095791    8290 start.go:495] detecting cgroup driver to use...
	I0913 23:27:09.095824    8290 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I0913 23:27:09.095929    8290 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0913 23:27:09.112003    8290 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
	I0913 23:27:09.122540    8290 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0913 23:27:09.132264    8290 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0913 23:27:09.132389    8290 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0913 23:27:09.142755    8290 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0913 23:27:09.153155    8290 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0913 23:27:09.163223    8290 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0913 23:27:09.173089    8290 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0913 23:27:09.182761    8290 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0913 23:27:09.192632    8290 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0913 23:27:09.202345    8290 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0913 23:27:09.212681    8290 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0913 23:27:09.221159    8290 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0913 23:27:09.229652    8290 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0913 23:27:09.309485    8290 ssh_runner.go:195] Run: sudo systemctl restart containerd
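	The run of `sed -i -r` commands above rewrites containerd's config.toml in place before the restart. A self-contained sketch of two of those substitutions against a scratch file (the TOML fragment below is illustrative, not the full containerd config):

```shell
set -eu
cfg=$(mktemp)
cat > "$cfg" <<'EOF'
[plugins."io.containerd.grpc.v1.cri"]
  sandbox_image = "registry.k8s.io/pause:3.9"
[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
  SystemdCgroup = true
EOF

# The captured ( *) group, re-emitted as \1, preserves the original
# indentation while the value on the line is replaced.
sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' "$cfg"
sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' "$cfg"
```

	Anchoring on `key = ` with a leading-indent capture keeps the edit idempotent: re-running the same sed lines leaves an already-patched file unchanged.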
	I0913 23:27:09.408877    8290 start.go:495] detecting cgroup driver to use...
	I0913 23:27:09.408923    8290 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I0913 23:27:09.408977    8290 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0913 23:27:09.422917    8290 cruntime.go:279] skipping containerd shutdown because we are bound to it
	I0913 23:27:09.422984    8290 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0913 23:27:09.435329    8290 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0913 23:27:09.454174    8290 ssh_runner.go:195] Run: which cri-dockerd
	I0913 23:27:09.458214    8290 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0913 23:27:09.469169    8290 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (190 bytes)
	I0913 23:27:09.490144    8290 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0913 23:27:09.597464    8290 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0913 23:27:09.689195    8290 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0913 23:27:09.689346    8290 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0913 23:27:09.709582    8290 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0913 23:27:09.807979    8290 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0913 23:27:10.115668    8290 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0913 23:27:10.128742    8290 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0913 23:27:10.141686    8290 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0913 23:27:10.228646    8290 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0913 23:27:10.326328    8290 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0913 23:27:10.414181    8290 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0913 23:27:10.429596    8290 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0913 23:27:10.441352    8290 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0913 23:27:10.531405    8290 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0913 23:27:10.609864    8290 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0913 23:27:10.610051    8290 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0913 23:27:10.615269    8290 start.go:563] Will wait 60s for crictl version
	I0913 23:27:10.615445    8290 ssh_runner.go:195] Run: which crictl
	I0913 23:27:10.619765    8290 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0913 23:27:10.657458    8290 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  27.2.1
	RuntimeApiVersion:  v1
	I0913 23:27:10.657541    8290 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0913 23:27:10.681378    8290 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0913 23:27:10.707658    8290 out.go:235] * Preparing Kubernetes v1.31.1 on Docker 27.2.1 ...
	I0913 23:27:10.707820    8290 cli_runner.go:164] Run: docker network inspect addons-467916 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0913 23:27:10.725569    8290 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I0913 23:27:10.729528    8290 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
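	The `{ grep -v …; echo …; } > /tmp/h.$$` pipeline above is an idempotent hosts-file update: any existing line for the name is filtered out before the fresh entry is appended, so repeated runs never accumulate duplicates. The same pattern on a scratch file; the helper name is made up for this sketch:

```shell
set -eu
hosts=$(mktemp)
printf '127.0.0.1\tlocalhost\n192.168.49.1\thost.minikube.internal\n' > "$hosts"

# update_hosts_entry FILE IP NAME: drop any stale line ending in
# "<tab>NAME", then append a fresh "IP<tab>NAME" entry.
update_hosts_entry() {
  { grep -v $'\t'"$3"'$' "$1"; printf '%s\t%s\n' "$2" "$3"; } > "$1.tmp"
  mv "$1.tmp" "$1"
}

update_hosts_entry "$hosts" 192.168.49.9 host.minikube.internal
```

	After the call the file holds exactly one `host.minikube.internal` line, carrying the new IP.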
	I0913 23:27:10.740560    8290 kubeadm.go:883] updating cluster {Name:addons-467916 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726243947-19640@sha256:bb1287c9c0ec51ba7d8272f0f8073d6e9758ad79ff87c787fdce1c3513743243 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-467916 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0913 23:27:10.740676    8290 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0913 23:27:10.740729    8290 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0913 23:27:10.759116    8290 docker.go:685] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.31.1
	registry.k8s.io/kube-scheduler:v1.31.1
	registry.k8s.io/kube-controller-manager:v1.31.1
	registry.k8s.io/kube-proxy:v1.31.1
	registry.k8s.io/coredns/coredns:v1.11.3
	registry.k8s.io/etcd:3.5.15-0
	registry.k8s.io/pause:3.10
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0913 23:27:10.759136    8290 docker.go:615] Images already preloaded, skipping extraction
	I0913 23:27:10.759197    8290 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0913 23:27:10.776482    8290 docker.go:685] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.31.1
	registry.k8s.io/kube-controller-manager:v1.31.1
	registry.k8s.io/kube-scheduler:v1.31.1
	registry.k8s.io/kube-proxy:v1.31.1
	registry.k8s.io/coredns/coredns:v1.11.3
	registry.k8s.io/etcd:3.5.15-0
	registry.k8s.io/pause:3.10
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0913 23:27:10.776506    8290 cache_images.go:84] Images are preloaded, skipping loading
	I0913 23:27:10.776515    8290 kubeadm.go:934] updating node { 192.168.49.2 8443 v1.31.1 docker true true} ...
	I0913 23:27:10.776610    8290 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=addons-467916 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:addons-467916 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0913 23:27:10.776678    8290 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0913 23:27:10.821874    8290 cni.go:84] Creating CNI manager for ""
	I0913 23:27:10.821898    8290 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0913 23:27:10.821908    8290 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0913 23:27:10.821927    8290 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-467916 NodeName:addons-467916 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0913 23:27:10.822074    8290 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "addons-467916"
	  kubeletExtraArgs:
	    node-ip: 192.168.49.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0913 23:27:10.822145    8290 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0913 23:27:10.831044    8290 binaries.go:44] Found k8s binaries, skipping transfer
	I0913 23:27:10.831110    8290 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0913 23:27:10.839884    8290 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (312 bytes)
	I0913 23:27:10.858089    8290 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0913 23:27:10.876064    8290 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2155 bytes)
	I0913 23:27:10.895420    8290 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I0913 23:27:10.898725    8290 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0913 23:27:10.909860    8290 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0913 23:27:11.002206    8290 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0913 23:27:11.018917    8290 certs.go:68] Setting up /home/jenkins/minikube-integration/19640-2224/.minikube/profiles/addons-467916 for IP: 192.168.49.2
	I0913 23:27:11.018942    8290 certs.go:194] generating shared ca certs ...
	I0913 23:27:11.018958    8290 certs.go:226] acquiring lock for ca certs: {Name:mke6c9de844bb9fc9b4944cb0ab16376aa09d478 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0913 23:27:11.019091    8290 certs.go:240] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/19640-2224/.minikube/ca.key
	I0913 23:27:11.322382    8290 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19640-2224/.minikube/ca.crt ...
	I0913 23:27:11.322414    8290 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19640-2224/.minikube/ca.crt: {Name:mk46c833877310472e94b4a49705826b23ed2b0a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0913 23:27:11.322616    8290 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19640-2224/.minikube/ca.key ...
	I0913 23:27:11.322630    8290 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19640-2224/.minikube/ca.key: {Name:mk385278f95ba86b734588deabd10759857dd2d9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0913 23:27:11.322723    8290 certs.go:240] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19640-2224/.minikube/proxy-client-ca.key
	I0913 23:27:11.636113    8290 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19640-2224/.minikube/proxy-client-ca.crt ...
	I0913 23:27:11.636144    8290 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19640-2224/.minikube/proxy-client-ca.crt: {Name:mkf7a697d6d5f4cfbfbd712ad76d0f10bc98cf0a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0913 23:27:11.636341    8290 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19640-2224/.minikube/proxy-client-ca.key ...
	I0913 23:27:11.636355    8290 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19640-2224/.minikube/proxy-client-ca.key: {Name:mk29b2c582910936b929af0689dde79db4f28a5f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0913 23:27:11.636441    8290 certs.go:256] generating profile certs ...
	I0913 23:27:11.636536    8290 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19640-2224/.minikube/profiles/addons-467916/client.key
	I0913 23:27:11.636564    8290 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19640-2224/.minikube/profiles/addons-467916/client.crt with IP's: []
	I0913 23:27:12.170490    8290 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19640-2224/.minikube/profiles/addons-467916/client.crt ...
	I0913 23:27:12.170522    8290 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19640-2224/.minikube/profiles/addons-467916/client.crt: {Name:mka3cac971e055918c946fa5a3e7d77ada67330a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0913 23:27:12.170742    8290 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19640-2224/.minikube/profiles/addons-467916/client.key ...
	I0913 23:27:12.170755    8290 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19640-2224/.minikube/profiles/addons-467916/client.key: {Name:mk15c3fa72a3141637a77c84cf468fba4621e57d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0913 23:27:12.170838    8290 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19640-2224/.minikube/profiles/addons-467916/apiserver.key.ee110447
	I0913 23:27:12.170861    8290 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19640-2224/.minikube/profiles/addons-467916/apiserver.crt.ee110447 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2]
	I0913 23:27:12.389799    8290 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19640-2224/.minikube/profiles/addons-467916/apiserver.crt.ee110447 ...
	I0913 23:27:12.389829    8290 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19640-2224/.minikube/profiles/addons-467916/apiserver.crt.ee110447: {Name:mk4124a31d6970143e80a930c1148659c7179b2c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0913 23:27:12.390003    8290 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19640-2224/.minikube/profiles/addons-467916/apiserver.key.ee110447 ...
	I0913 23:27:12.390018    8290 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19640-2224/.minikube/profiles/addons-467916/apiserver.key.ee110447: {Name:mk846e1c22a4e5922e9d08f5162d8f8865de342d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0913 23:27:12.390105    8290 certs.go:381] copying /home/jenkins/minikube-integration/19640-2224/.minikube/profiles/addons-467916/apiserver.crt.ee110447 -> /home/jenkins/minikube-integration/19640-2224/.minikube/profiles/addons-467916/apiserver.crt
	I0913 23:27:12.390188    8290 certs.go:385] copying /home/jenkins/minikube-integration/19640-2224/.minikube/profiles/addons-467916/apiserver.key.ee110447 -> /home/jenkins/minikube-integration/19640-2224/.minikube/profiles/addons-467916/apiserver.key
	I0913 23:27:12.390247    8290 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19640-2224/.minikube/profiles/addons-467916/proxy-client.key
	I0913 23:27:12.390266    8290 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19640-2224/.minikube/profiles/addons-467916/proxy-client.crt with IP's: []
	I0913 23:27:12.662509    8290 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19640-2224/.minikube/profiles/addons-467916/proxy-client.crt ...
	I0913 23:27:12.662538    8290 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19640-2224/.minikube/profiles/addons-467916/proxy-client.crt: {Name:mkc0458b0b97db08c69fd3b979d0d6b6338ae757 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0913 23:27:12.662698    8290 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19640-2224/.minikube/profiles/addons-467916/proxy-client.key ...
	I0913 23:27:12.662712    8290 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19640-2224/.minikube/profiles/addons-467916/proxy-client.key: {Name:mk8c38fbbd0735c6f51650a63095bfcbaab44887 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0913 23:27:12.662884    8290 certs.go:484] found cert: /home/jenkins/minikube-integration/19640-2224/.minikube/certs/ca-key.pem (1679 bytes)
	I0913 23:27:12.662926    8290 certs.go:484] found cert: /home/jenkins/minikube-integration/19640-2224/.minikube/certs/ca.pem (1082 bytes)
	I0913 23:27:12.662957    8290 certs.go:484] found cert: /home/jenkins/minikube-integration/19640-2224/.minikube/certs/cert.pem (1123 bytes)
	I0913 23:27:12.662986    8290 certs.go:484] found cert: /home/jenkins/minikube-integration/19640-2224/.minikube/certs/key.pem (1679 bytes)
	I0913 23:27:12.663574    8290 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-2224/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0913 23:27:12.689111    8290 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-2224/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0913 23:27:12.712935    8290 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-2224/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0913 23:27:12.737071    8290 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-2224/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0913 23:27:12.761156    8290 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-2224/.minikube/profiles/addons-467916/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0913 23:27:12.785686    8290 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-2224/.minikube/profiles/addons-467916/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0913 23:27:12.808516    8290 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-2224/.minikube/profiles/addons-467916/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0913 23:27:12.831646    8290 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-2224/.minikube/profiles/addons-467916/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0913 23:27:12.855623    8290 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19640-2224/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0913 23:27:12.879806    8290 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0913 23:27:12.897405    8290 ssh_runner.go:195] Run: openssl version
	I0913 23:27:12.902743    8290 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0913 23:27:12.911980    8290 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0913 23:27:12.915298    8290 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 13 23:27 /usr/share/ca-certificates/minikubeCA.pem
	I0913 23:27:12.915408    8290 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0913 23:27:12.922217    8290 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0913 23:27:12.931232    8290 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0913 23:27:12.934527    8290 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0913 23:27:12.934572    8290 kubeadm.go:392] StartCluster: {Name:addons-467916 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726243947-19640@sha256:bb1287c9c0ec51ba7d8272f0f8073d6e9758ad79ff87c787fdce1c3513743243 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-467916 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0913 23:27:12.934745    8290 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0913 23:27:12.952155    8290 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0913 23:27:12.961003    8290 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0913 23:27:12.969686    8290 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I0913 23:27:12.969776    8290 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0913 23:27:12.978781    8290 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0913 23:27:12.978802    8290 kubeadm.go:157] found existing configuration files:
	
	I0913 23:27:12.978853    8290 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0913 23:27:12.987480    8290 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0913 23:27:12.987558    8290 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0913 23:27:12.996140    8290 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0913 23:27:13.006914    8290 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0913 23:27:13.006985    8290 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0913 23:27:13.017016    8290 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0913 23:27:13.026980    8290 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0913 23:27:13.027098    8290 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0913 23:27:13.035693    8290 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0913 23:27:13.044581    8290 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0913 23:27:13.044696    8290 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0913 23:27:13.053396    8290 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0913 23:27:13.099172    8290 kubeadm.go:310] [init] Using Kubernetes version: v1.31.1
	I0913 23:27:13.099513    8290 kubeadm.go:310] [preflight] Running pre-flight checks
	I0913 23:27:13.122115    8290 kubeadm.go:310] [preflight] The system verification failed. Printing the output from the verification:
	I0913 23:27:13.122189    8290 kubeadm.go:310] KERNEL_VERSION: 5.15.0-1069-aws
	I0913 23:27:13.122232    8290 kubeadm.go:310] OS: Linux
	I0913 23:27:13.122285    8290 kubeadm.go:310] CGROUPS_CPU: enabled
	I0913 23:27:13.122338    8290 kubeadm.go:310] CGROUPS_CPUACCT: enabled
	I0913 23:27:13.122387    8290 kubeadm.go:310] CGROUPS_CPUSET: enabled
	I0913 23:27:13.122439    8290 kubeadm.go:310] CGROUPS_DEVICES: enabled
	I0913 23:27:13.122492    8290 kubeadm.go:310] CGROUPS_FREEZER: enabled
	I0913 23:27:13.122543    8290 kubeadm.go:310] CGROUPS_MEMORY: enabled
	I0913 23:27:13.122592    8290 kubeadm.go:310] CGROUPS_PIDS: enabled
	I0913 23:27:13.122643    8290 kubeadm.go:310] CGROUPS_HUGETLB: enabled
	I0913 23:27:13.122693    8290 kubeadm.go:310] CGROUPS_BLKIO: enabled
	I0913 23:27:13.185441    8290 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0913 23:27:13.185552    8290 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0913 23:27:13.185682    8290 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0913 23:27:13.197089    8290 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0913 23:27:13.201743    8290 out.go:235]   - Generating certificates and keys ...
	I0913 23:27:13.201923    8290 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0913 23:27:13.202018    8290 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0913 23:27:14.524181    8290 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0913 23:27:14.832188    8290 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0913 23:27:15.428940    8290 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0913 23:27:16.231128    8290 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0913 23:27:16.576630    8290 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0913 23:27:16.576778    8290 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [addons-467916 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0913 23:27:17.231345    8290 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0913 23:27:17.231508    8290 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [addons-467916 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0913 23:27:18.063218    8290 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0913 23:27:18.508888    8290 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0913 23:27:18.817793    8290 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0913 23:27:18.818053    8290 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0913 23:27:19.383649    8290 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0913 23:27:20.631687    8290 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0913 23:27:21.014276    8290 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0913 23:27:22.516190    8290 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0913 23:27:22.706009    8290 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0913 23:27:22.706742    8290 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0913 23:27:22.709899    8290 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0913 23:27:22.712880    8290 out.go:235]   - Booting up control plane ...
	I0913 23:27:22.712989    8290 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0913 23:27:22.713065    8290 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0913 23:27:22.713537    8290 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0913 23:27:22.724149    8290 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0913 23:27:22.730590    8290 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0913 23:27:22.730643    8290 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0913 23:27:22.840871    8290 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0913 23:27:22.840990    8290 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0913 23:27:24.338586    8290 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.500799391s
	I0913 23:27:24.338674    8290 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0913 23:27:30.839681    8290 kubeadm.go:310] [api-check] The API server is healthy after 6.501077284s
	I0913 23:27:30.860677    8290 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0913 23:27:30.879205    8290 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0913 23:27:30.904955    8290 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0913 23:27:30.905344    8290 kubeadm.go:310] [mark-control-plane] Marking the node addons-467916 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0913 23:27:30.916650    8290 kubeadm.go:310] [bootstrap-token] Using token: ns0d5k.dszbfz5xhw6jyb83
	I0913 23:27:30.918780    8290 out.go:235]   - Configuring RBAC rules ...
	I0913 23:27:30.918908    8290 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0913 23:27:30.926893    8290 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0913 23:27:30.936857    8290 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0913 23:27:30.941065    8290 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0913 23:27:30.945468    8290 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0913 23:27:30.949722    8290 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0913 23:27:31.247305    8290 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0913 23:27:31.674618    8290 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0913 23:27:32.246386    8290 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0913 23:27:32.247448    8290 kubeadm.go:310] 
	I0913 23:27:32.247522    8290 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0913 23:27:32.247536    8290 kubeadm.go:310] 
	I0913 23:27:32.247614    8290 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0913 23:27:32.247626    8290 kubeadm.go:310] 
	I0913 23:27:32.247651    8290 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0913 23:27:32.247713    8290 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0913 23:27:32.247767    8290 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0913 23:27:32.247775    8290 kubeadm.go:310] 
	I0913 23:27:32.247830    8290 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0913 23:27:32.247838    8290 kubeadm.go:310] 
	I0913 23:27:32.247885    8290 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0913 23:27:32.247892    8290 kubeadm.go:310] 
	I0913 23:27:32.247944    8290 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0913 23:27:32.248021    8290 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0913 23:27:32.248091    8290 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0913 23:27:32.248100    8290 kubeadm.go:310] 
	I0913 23:27:32.248182    8290 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0913 23:27:32.248260    8290 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0913 23:27:32.248269    8290 kubeadm.go:310] 
	I0913 23:27:32.248375    8290 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token ns0d5k.dszbfz5xhw6jyb83 \
	I0913 23:27:32.248480    8290 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:a02fafd37eb5697b39cc448a810e806b543dd44de2192168fd6cb8b8e596eb08 \
	I0913 23:27:32.248504    8290 kubeadm.go:310] 	--control-plane 
	I0913 23:27:32.248512    8290 kubeadm.go:310] 
	I0913 23:27:32.248596    8290 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0913 23:27:32.248604    8290 kubeadm.go:310] 
	I0913 23:27:32.248684    8290 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token ns0d5k.dszbfz5xhw6jyb83 \
	I0913 23:27:32.248788    8290 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:a02fafd37eb5697b39cc448a810e806b543dd44de2192168fd6cb8b8e596eb08 
	I0913 23:27:32.252582    8290 kubeadm.go:310] W0913 23:27:13.095124    1811 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0913 23:27:32.252879    8290 kubeadm.go:310] W0913 23:27:13.096608    1811 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0913 23:27:32.253091    8290 kubeadm.go:310] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1069-aws\n", err: exit status 1
	I0913 23:27:32.253201    8290 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0913 23:27:32.253219    8290 cni.go:84] Creating CNI manager for ""
	I0913 23:27:32.253237    8290 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0913 23:27:32.255979    8290 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0913 23:27:32.258118    8290 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0913 23:27:32.266923    8290 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0913 23:27:32.285436    8290 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0913 23:27:32.285561    8290 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0913 23:27:32.285634    8290 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-467916 minikube.k8s.io/updated_at=2024_09_13T23_27_32_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=7ca96ba7bd97af6e0063398921096f1cca785d18 minikube.k8s.io/name=addons-467916 minikube.k8s.io/primary=true
	I0913 23:27:32.423604    8290 ops.go:34] apiserver oom_adj: -16
	I0913 23:27:32.423690    8290 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0913 23:27:32.924386    8290 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0913 23:27:33.424531    8290 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0913 23:27:33.923837    8290 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0913 23:27:34.424469    8290 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0913 23:27:34.924539    8290 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0913 23:27:35.424556    8290 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0913 23:27:35.923963    8290 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0913 23:27:36.025741    8290 kubeadm.go:1113] duration metric: took 3.740225923s to wait for elevateKubeSystemPrivileges
	I0913 23:27:36.025774    8290 kubeadm.go:394] duration metric: took 23.091206103s to StartCluster
	I0913 23:27:36.025793    8290 settings.go:142] acquiring lock: {Name:mk4d87a2d88a9eb90d9feb46e3214fc54dd9b060 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0913 23:27:36.025931    8290 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19640-2224/kubeconfig
	I0913 23:27:36.026373    8290 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19640-2224/kubeconfig: {Name:mk7508402d861ffa6f5702c641cc45695788b09a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0913 23:27:36.026612    8290 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0913 23:27:36.026718    8290 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0913 23:27:36.026983    8290 config.go:182] Loaded profile config "addons-467916": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0913 23:27:36.027030    8290 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
	I0913 23:27:36.027107    8290 addons.go:69] Setting yakd=true in profile "addons-467916"
	I0913 23:27:36.027124    8290 addons.go:234] Setting addon yakd=true in "addons-467916"
	I0913 23:27:36.027151    8290 host.go:66] Checking if "addons-467916" exists ...
	I0913 23:27:36.027719    8290 cli_runner.go:164] Run: docker container inspect addons-467916 --format={{.State.Status}}
	I0913 23:27:36.028234    8290 addons.go:69] Setting cloud-spanner=true in profile "addons-467916"
	I0913 23:27:36.028258    8290 addons.go:234] Setting addon cloud-spanner=true in "addons-467916"
	I0913 23:27:36.028316    8290 host.go:66] Checking if "addons-467916" exists ...
	I0913 23:27:36.028566    8290 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-467916"
	I0913 23:27:36.028582    8290 addons.go:234] Setting addon nvidia-device-plugin=true in "addons-467916"
	I0913 23:27:36.028607    8290 host.go:66] Checking if "addons-467916" exists ...
	I0913 23:27:36.028826    8290 cli_runner.go:164] Run: docker container inspect addons-467916 --format={{.State.Status}}
	I0913 23:27:36.029039    8290 cli_runner.go:164] Run: docker container inspect addons-467916 --format={{.State.Status}}
	I0913 23:27:36.031818    8290 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-467916"
	I0913 23:27:36.031949    8290 addons.go:234] Setting addon csi-hostpath-driver=true in "addons-467916"
	I0913 23:27:36.032503    8290 host.go:66] Checking if "addons-467916" exists ...
	I0913 23:27:36.035304    8290 addons.go:69] Setting default-storageclass=true in profile "addons-467916"
	I0913 23:27:36.035401    8290 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-467916"
	I0913 23:27:36.036141    8290 cli_runner.go:164] Run: docker container inspect addons-467916 --format={{.State.Status}}
	I0913 23:27:36.032400    8290 addons.go:69] Setting registry=true in profile "addons-467916"
	I0913 23:27:36.036695    8290 addons.go:234] Setting addon registry=true in "addons-467916"
	I0913 23:27:36.036729    8290 host.go:66] Checking if "addons-467916" exists ...
	I0913 23:27:36.036821    8290 addons.go:69] Setting gcp-auth=true in profile "addons-467916"
	I0913 23:27:36.036846    8290 mustload.go:65] Loading cluster: addons-467916
	I0913 23:27:36.037024    8290 config.go:182] Loaded profile config "addons-467916": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0913 23:27:36.037157    8290 cli_runner.go:164] Run: docker container inspect addons-467916 --format={{.State.Status}}
	I0913 23:27:36.037326    8290 addons.go:69] Setting ingress=true in profile "addons-467916"
	I0913 23:27:36.037345    8290 addons.go:234] Setting addon ingress=true in "addons-467916"
	I0913 23:27:36.037376    8290 host.go:66] Checking if "addons-467916" exists ...
	I0913 23:27:36.037791    8290 cli_runner.go:164] Run: docker container inspect addons-467916 --format={{.State.Status}}
	I0913 23:27:36.032409    8290 addons.go:69] Setting storage-provisioner=true in profile "addons-467916"
	I0913 23:27:36.046439    8290 addons.go:234] Setting addon storage-provisioner=true in "addons-467916"
	I0913 23:27:36.046516    8290 host.go:66] Checking if "addons-467916" exists ...
	I0913 23:27:36.046547    8290 out.go:177] * Verifying Kubernetes components...
	I0913 23:27:36.047040    8290 cli_runner.go:164] Run: docker container inspect addons-467916 --format={{.State.Status}}
	I0913 23:27:36.047945    8290 cli_runner.go:164] Run: docker container inspect addons-467916 --format={{.State.Status}}
	I0913 23:27:36.056582    8290 cli_runner.go:164] Run: docker container inspect addons-467916 --format={{.State.Status}}
	I0913 23:27:36.046439    8290 addons.go:69] Setting ingress-dns=true in profile "addons-467916"
	I0913 23:27:36.062001    8290 addons.go:234] Setting addon ingress-dns=true in "addons-467916"
	I0913 23:27:36.062071    8290 host.go:66] Checking if "addons-467916" exists ...
	I0913 23:27:36.062588    8290 cli_runner.go:164] Run: docker container inspect addons-467916 --format={{.State.Status}}
	I0913 23:27:36.032421    8290 addons.go:69] Setting volcano=true in profile "addons-467916"
	I0913 23:27:36.072594    8290 addons.go:234] Setting addon volcano=true in "addons-467916"
	I0913 23:27:36.072667    8290 host.go:66] Checking if "addons-467916" exists ...
	I0913 23:27:36.073211    8290 cli_runner.go:164] Run: docker container inspect addons-467916 --format={{.State.Status}}
	I0913 23:27:36.032424    8290 addons.go:69] Setting volumesnapshots=true in profile "addons-467916"
	I0913 23:27:36.090107    8290 addons.go:234] Setting addon volumesnapshots=true in "addons-467916"
	I0913 23:27:36.090166    8290 host.go:66] Checking if "addons-467916" exists ...
	I0913 23:27:36.090683    8290 cli_runner.go:164] Run: docker container inspect addons-467916 --format={{.State.Status}}
	I0913 23:27:36.046451    8290 addons.go:69] Setting inspektor-gadget=true in profile "addons-467916"
	I0913 23:27:36.108663    8290 addons.go:234] Setting addon inspektor-gadget=true in "addons-467916"
	I0913 23:27:36.108707    8290 host.go:66] Checking if "addons-467916" exists ...
	I0913 23:27:36.109390    8290 cli_runner.go:164] Run: docker container inspect addons-467916 --format={{.State.Status}}
	I0913 23:27:36.112637    8290 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0913 23:27:36.046456    8290 addons.go:69] Setting metrics-server=true in profile "addons-467916"
	I0913 23:27:36.137545    8290 addons.go:234] Setting addon metrics-server=true in "addons-467916"
	I0913 23:27:36.137586    8290 host.go:66] Checking if "addons-467916" exists ...
	I0913 23:27:36.138062    8290 cli_runner.go:164] Run: docker container inspect addons-467916 --format={{.State.Status}}
	I0913 23:27:36.032416    8290 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-467916"
	I0913 23:27:36.162731    8290 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-467916"
	I0913 23:27:36.163073    8290 cli_runner.go:164] Run: docker container inspect addons-467916 --format={{.State.Status}}
	I0913 23:27:36.236258    8290 out.go:177]   - Using image registry.k8s.io/ingress-nginx/controller:v1.11.2
	I0913 23:27:36.238882    8290 out.go:177]   - Using image docker.io/marcnuri/yakd:0.0.5
	I0913 23:27:36.242083    8290 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I0913 23:27:36.243856    8290 out.go:177]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.16.2
	I0913 23:27:36.243996    8290 out.go:177]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.6
	I0913 23:27:36.245953    8290 out.go:177]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.23
	I0913 23:27:36.247866    8290 addons.go:431] installing /etc/kubernetes/addons/deployment.yaml
	I0913 23:27:36.247885    8290 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I0913 23:27:36.247948    8290 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-467916
	I0913 23:27:36.250347    8290 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I0913 23:27:36.252733    8290 addons.go:431] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I0913 23:27:36.252794    8290 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I0913 23:27:36.252890    8290 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-467916
	I0913 23:27:36.284262    8290 out.go:177]   - Using image docker.io/registry:2.8.3
	I0913 23:27:36.284443    8290 addons.go:431] installing /etc/kubernetes/addons/yakd-ns.yaml
	I0913 23:27:36.284458    8290 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I0913 23:27:36.284543    8290 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-467916
	I0913 23:27:36.285737    8290 host.go:66] Checking if "addons-467916" exists ...
	I0913 23:27:36.308055    8290 addons.go:431] installing /etc/kubernetes/addons/registry-rc.yaml
	I0913 23:27:36.308075    8290 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I0913 23:27:36.308137    8290 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-467916
	I0913 23:27:36.335274    8290 addons.go:431] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0913 23:27:36.335300    8290 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I0913 23:27:36.335354    8290 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-467916
	I0913 23:27:36.339405    8290 out.go:177]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.32.0
	I0913 23:27:36.340414    8290 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I0913 23:27:36.344362    8290 addons.go:431] installing /etc/kubernetes/addons/ig-namespace.yaml
	I0913 23:27:36.344389    8290 ssh_runner.go:362] scp inspektor-gadget/ig-namespace.yaml --> /etc/kubernetes/addons/ig-namespace.yaml (55 bytes)
	I0913 23:27:36.344464    8290 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-467916
	I0913 23:27:36.352248    8290 out.go:177]   - Using image docker.io/volcanosh/vc-webhook-manager:v1.9.0
	I0913 23:27:36.353565    8290 out.go:177]   - Using image docker.io/volcanosh/vc-controller-manager:v1.9.0
	I0913 23:27:36.356429    8290 out.go:177]   - Using image docker.io/volcanosh/vc-scheduler:v1.9.0
	I0913 23:27:36.357213    8290 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0913 23:27:36.357293    8290 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I0913 23:27:36.367203    8290 addons.go:234] Setting addon default-storageclass=true in "addons-467916"
	I0913 23:27:36.397272    8290 host.go:66] Checking if "addons-467916" exists ...
	I0913 23:27:36.397845    8290 cli_runner.go:164] Run: docker container inspect addons-467916 --format={{.State.Status}}
	I0913 23:27:36.409319    8290 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I0913 23:27:36.409745    8290 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0913 23:27:36.412538    8290 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0913 23:27:36.412556    8290 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0913 23:27:36.412629    8290 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-467916
	I0913 23:27:36.435411    8290 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I0913 23:27:36.435486    8290 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I0913 23:27:36.435599    8290 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-467916
	I0913 23:27:36.383366    8290 addons.go:431] installing /etc/kubernetes/addons/volcano-deployment.yaml
	I0913 23:27:36.452432    8290 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volcano-deployment.yaml (434001 bytes)
	I0913 23:27:36.452462    8290 out.go:177]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I0913 23:27:36.455463    8290 out.go:177]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I0913 23:27:36.458214    8290 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I0913 23:27:36.460476    8290 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I0913 23:27:36.462734    8290 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I0913 23:27:36.466483    8290 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I0913 23:27:36.452508    8290 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-467916
	I0913 23:27:36.473198    8290 out.go:177]   - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.3
	I0913 23:27:36.473534    8290 out.go:177]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.7.2
	I0913 23:27:36.473790    8290 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19640-2224/.minikube/machines/addons-467916/id_rsa Username:docker}
	I0913 23:27:36.474671    8290 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19640-2224/.minikube/machines/addons-467916/id_rsa Username:docker}
	I0913 23:27:36.476344    8290 addons.go:234] Setting addon storage-provisioner-rancher=true in "addons-467916"
	I0913 23:27:36.476380    8290 host.go:66] Checking if "addons-467916" exists ...
	I0913 23:27:36.476869    8290 cli_runner.go:164] Run: docker container inspect addons-467916 --format={{.State.Status}}
	I0913 23:27:36.477116    8290 addons.go:431] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0913 23:27:36.477126    8290 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2442 bytes)
	I0913 23:27:36.477164    8290 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-467916
	I0913 23:27:36.490447    8290 addons.go:431] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I0913 23:27:36.490470    8290 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I0913 23:27:36.490544    8290 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-467916
	I0913 23:27:36.490819    8290 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19640-2224/.minikube/machines/addons-467916/id_rsa Username:docker}
	I0913 23:27:36.524524    8290 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0913 23:27:36.524546    8290 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0913 23:27:36.524620    8290 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-467916
	I0913 23:27:36.525208    8290 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19640-2224/.minikube/machines/addons-467916/id_rsa Username:docker}
	I0913 23:27:36.544968    8290 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19640-2224/.minikube/machines/addons-467916/id_rsa Username:docker}
	I0913 23:27:36.570506    8290 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19640-2224/.minikube/machines/addons-467916/id_rsa Username:docker}
	I0913 23:27:36.592515    8290 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19640-2224/.minikube/machines/addons-467916/id_rsa Username:docker}
	I0913 23:27:36.596146    8290 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19640-2224/.minikube/machines/addons-467916/id_rsa Username:docker}
	I0913 23:27:36.601719    8290 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0913 23:27:36.601740    8290 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0913 23:27:36.601812    8290 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-467916
	I0913 23:27:36.640561    8290 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19640-2224/.minikube/machines/addons-467916/id_rsa Username:docker}
	I0913 23:27:36.647304    8290 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19640-2224/.minikube/machines/addons-467916/id_rsa Username:docker}
	I0913 23:27:36.673519    8290 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0913 23:27:36.684135    8290 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19640-2224/.minikube/machines/addons-467916/id_rsa Username:docker}
	I0913 23:27:36.688726    8290 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19640-2224/.minikube/machines/addons-467916/id_rsa Username:docker}
	I0913 23:27:36.689327    8290 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19640-2224/.minikube/machines/addons-467916/id_rsa Username:docker}
	I0913 23:27:36.693474    8290 out.go:177]   - Using image docker.io/busybox:stable
	I0913 23:27:36.695038    8290 out.go:177]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I0913 23:27:36.697461    8290 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0913 23:27:36.697483    8290 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I0913 23:27:36.697549    8290 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-467916
	I0913 23:27:36.722990    8290 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19640-2224/.minikube/machines/addons-467916/id_rsa Username:docker}
	I0913 23:27:37.195566    8290 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I0913 23:27:37.229982    8290 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I0913 23:27:37.230064    8290 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I0913 23:27:37.284506    8290 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I0913 23:27:37.284584    8290 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I0913 23:27:37.286844    8290 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0913 23:27:37.297155    8290 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I0913 23:27:37.303921    8290 addons.go:431] installing /etc/kubernetes/addons/registry-svc.yaml
	I0913 23:27:37.303998    8290 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I0913 23:27:37.309922    8290 addons.go:431] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I0913 23:27:37.309998    8290 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I0913 23:27:37.322799    8290 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/volcano-deployment.yaml
	I0913 23:27:37.406904    8290 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0913 23:27:37.417585    8290 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0913 23:27:37.474794    8290 addons.go:431] installing /etc/kubernetes/addons/yakd-sa.yaml
	I0913 23:27:37.474866    8290 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I0913 23:27:37.483195    8290 addons.go:431] installing /etc/kubernetes/addons/ig-serviceaccount.yaml
	I0913 23:27:37.483278    8290 ssh_runner.go:362] scp inspektor-gadget/ig-serviceaccount.yaml --> /etc/kubernetes/addons/ig-serviceaccount.yaml (80 bytes)
	I0913 23:27:37.547258    8290 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0913 23:27:37.562665    8290 addons.go:431] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I0913 23:27:37.562756    8290 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I0913 23:27:37.580409    8290 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0913 23:27:37.629435    8290 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0913 23:27:37.629511    8290 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I0913 23:27:37.738742    8290 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I0913 23:27:37.738823    8290 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I0913 23:27:37.794670    8290 addons.go:431] installing /etc/kubernetes/addons/registry-proxy.yaml
	I0913 23:27:37.794741    8290 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I0913 23:27:37.988680    8290 addons.go:431] installing /etc/kubernetes/addons/yakd-crb.yaml
	I0913 23:27:37.988761    8290 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I0913 23:27:38.029406    8290 addons.go:431] installing /etc/kubernetes/addons/ig-role.yaml
	I0913 23:27:38.029488    8290 ssh_runner.go:362] scp inspektor-gadget/ig-role.yaml --> /etc/kubernetes/addons/ig-role.yaml (210 bytes)
	I0913 23:27:38.124599    8290 addons.go:431] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I0913 23:27:38.124681    8290 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I0913 23:27:38.173688    8290 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0913 23:27:38.173769    8290 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0913 23:27:38.237640    8290 addons.go:431] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I0913 23:27:38.237720    8290 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I0913 23:27:38.260187    8290 addons.go:431] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I0913 23:27:38.260267    8290 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I0913 23:27:38.270805    8290 addons.go:431] installing /etc/kubernetes/addons/yakd-svc.yaml
	I0913 23:27:38.270876    8290 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I0913 23:27:38.297801    8290 addons.go:431] installing /etc/kubernetes/addons/ig-rolebinding.yaml
	I0913 23:27:38.297872    8290 ssh_runner.go:362] scp inspektor-gadget/ig-rolebinding.yaml --> /etc/kubernetes/addons/ig-rolebinding.yaml (244 bytes)
	I0913 23:27:38.302017    8290 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I0913 23:27:38.302181    8290 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I0913 23:27:38.302308    8290 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I0913 23:27:38.379679    8290 addons.go:431] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I0913 23:27:38.379706    8290 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I0913 23:27:38.406727    8290 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0913 23:27:38.406753    8290 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0913 23:27:38.431726    8290 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I0913 23:27:38.431747    8290 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I0913 23:27:38.442950    8290 addons.go:431] installing /etc/kubernetes/addons/yakd-dp.yaml
	I0913 23:27:38.443022    8290 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I0913 23:27:38.449401    8290 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrole.yaml
	I0913 23:27:38.449478    8290 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrole.yaml --> /etc/kubernetes/addons/ig-clusterrole.yaml (1485 bytes)
	I0913 23:27:38.586357    8290 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I0913 23:27:38.586429    8290 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I0913 23:27:38.610794    8290 addons.go:431] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0913 23:27:38.610866    8290 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I0913 23:27:38.626914    8290 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0913 23:27:38.700778    8290 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I0913 23:27:38.715851    8290 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I0913 23:27:38.715923    8290 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I0913 23:27:38.727373    8290 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrolebinding.yaml
	I0913 23:27:38.727450    8290 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrolebinding.yaml --> /etc/kubernetes/addons/ig-clusterrolebinding.yaml (274 bytes)
	I0913 23:27:38.794937    8290 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0913 23:27:38.965800    8290 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0913 23:27:38.965879    8290 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I0913 23:27:38.993702    8290 addons.go:431] installing /etc/kubernetes/addons/ig-crd.yaml
	I0913 23:27:38.993780    8290 ssh_runner.go:362] scp inspektor-gadget/ig-crd.yaml --> /etc/kubernetes/addons/ig-crd.yaml (5216 bytes)
	I0913 23:27:39.072109    8290 addons.go:431] installing /etc/kubernetes/addons/ig-daemonset.yaml
	I0913 23:27:39.072182    8290 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-daemonset.yaml (7735 bytes)
	I0913 23:27:39.272664    8290 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml
	I0913 23:27:39.305655    8290 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (2.632100148s)
	I0913 23:27:39.306490    8290 node_ready.go:35] waiting up to 6m0s for node "addons-467916" to be "Ready" ...
	I0913 23:27:39.306749    8290 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (2.896977424s)
	I0913 23:27:39.306787    8290 start.go:971] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS's ConfigMap
	I0913 23:27:39.311285    8290 node_ready.go:49] node "addons-467916" has status "Ready":"True"
	I0913 23:27:39.311361    8290 node_ready.go:38] duration metric: took 4.853325ms for node "addons-467916" to be "Ready" ...
	I0913 23:27:39.311385    8290 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0913 23:27:39.324400    8290 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-7w5hb" in "kube-system" namespace to be "Ready" ...
	I0913 23:27:39.324681    8290 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0913 23:27:39.817551    8290 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-467916" context rescaled to 1 replicas
	I0913 23:27:41.338073    8290 pod_ready.go:103] pod "coredns-7c65d6cfc9-7w5hb" in "kube-system" namespace has status "Ready":"False"
	I0913 23:27:43.344439    8290 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I0913 23:27:43.344546    8290 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-467916
	I0913 23:27:43.370534    8290 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19640-2224/.minikube/machines/addons-467916/id_rsa Username:docker}
	I0913 23:27:43.833625    8290 pod_ready.go:103] pod "coredns-7c65d6cfc9-7w5hb" in "kube-system" namespace has status "Ready":"False"
	I0913 23:27:44.082573    8290 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I0913 23:27:44.363869    8290 addons.go:234] Setting addon gcp-auth=true in "addons-467916"
	I0913 23:27:44.363968    8290 host.go:66] Checking if "addons-467916" exists ...
	I0913 23:27:44.364533    8290 cli_runner.go:164] Run: docker container inspect addons-467916 --format={{.State.Status}}
	I0913 23:27:44.387876    8290 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I0913 23:27:44.387927    8290 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-467916
	I0913 23:27:44.409302    8290 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19640-2224/.minikube/machines/addons-467916/id_rsa Username:docker}
	I0913 23:27:46.334861    8290 pod_ready.go:103] pod "coredns-7c65d6cfc9-7w5hb" in "kube-system" namespace has status "Ready":"False"
	I0913 23:27:46.823275    8290 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (9.526043029s)
	I0913 23:27:46.823229    8290 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (9.536301642s)
	I0913 23:27:46.824112    8290 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (9.628470166s)
	I0913 23:27:46.824178    8290 addons.go:475] Verifying addon ingress=true in "addons-467916"
	I0913 23:27:46.827900    8290 out.go:177] * Verifying ingress addon...
	I0913 23:27:46.831255    8290 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I0913 23:27:46.843742    8290 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I0913 23:27:46.843814    8290 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 23:27:47.348006    8290 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 23:27:47.900218    8290 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 23:27:48.467560    8290 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 23:27:48.505983    8290 pod_ready.go:103] pod "coredns-7c65d6cfc9-7w5hb" in "kube-system" namespace has status "Ready":"False"
	I0913 23:27:48.856390    8290 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 23:27:49.374547    8290 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/volcano-deployment.yaml: (12.051665892s)
	I0913 23:27:49.374616    8290 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (11.967650073s)
	I0913 23:27:49.374677    8290 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (11.957025849s)
	I0913 23:27:49.374726    8290 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (11.827391201s)
	I0913 23:27:49.374773    8290 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (11.79429732s)
	I0913 23:27:49.374941    8290 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (11.072591187s)
	I0913 23:27:49.374960    8290 addons.go:475] Verifying addon registry=true in "addons-467916"
	I0913 23:27:49.375242    8290 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (10.748245963s)
	I0913 23:27:49.375262    8290 addons.go:475] Verifying addon metrics-server=true in "addons-467916"
	I0913 23:27:49.375306    8290 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (10.674454104s)
	I0913 23:27:49.375529    8290 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml: (10.102781847s)
	I0913 23:27:49.375439    8290 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (10.580423515s)
	W0913 23:27:49.375731    8290 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0913 23:27:49.375751    8290 retry.go:31] will retry after 158.990789ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0913 23:27:49.379120    8290 out.go:177] * Verifying registry addon...
	I0913 23:27:49.379237    8290 out.go:177] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-467916 service yakd-dashboard -n yakd-dashboard
	
	I0913 23:27:49.382960    8290 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I0913 23:27:49.396746    8290 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (10.072023753s)
	I0913 23:27:49.396781    8290 addons.go:475] Verifying addon csi-hostpath-driver=true in "addons-467916"
	I0913 23:27:49.397157    8290 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (5.009258116s)
	I0913 23:27:49.399241    8290 out.go:177] * Verifying csi-hostpath-driver addon...
	I0913 23:27:49.399376    8290 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I0913 23:27:49.402518    8290 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I0913 23:27:49.405441    8290 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.2
	I0913 23:27:49.407877    8290 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I0913 23:27:49.407904    8290 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I0913 23:27:49.428464    8290 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 23:27:49.470518    8290 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I0913 23:27:49.470592    8290 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 23:27:49.476704    8290 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0913 23:27:49.476776    8290 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 23:27:49.483778    8290 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I0913 23:27:49.483849    8290 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I0913 23:27:49.535717    8290 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0913 23:27:49.574193    8290 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0913 23:27:49.574222    8290 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I0913 23:27:49.750965    8290 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0913 23:27:49.849721    8290 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 23:27:49.887746    8290 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 23:27:49.908028    8290 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 23:27:50.335516    8290 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 23:27:50.435430    8290 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 23:27:50.437309    8290 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 23:27:50.832924    8290 pod_ready.go:103] pod "coredns-7c65d6cfc9-7w5hb" in "kube-system" namespace has status "Ready":"False"
	I0913 23:27:50.839540    8290 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 23:27:50.938826    8290 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 23:27:50.940218    8290 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 23:27:51.335753    8290 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 23:27:51.437972    8290 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 23:27:51.439832    8290 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 23:27:51.770629    8290 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (2.2348147s)
	I0913 23:27:51.770759    8290 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml: (2.019768615s)
	I0913 23:27:51.774345    8290 addons.go:475] Verifying addon gcp-auth=true in "addons-467916"
	I0913 23:27:51.777556    8290 out.go:177] * Verifying gcp-auth addon...
	I0913 23:27:51.781726    8290 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I0913 23:27:51.784851    8290 kapi.go:86] Found 0 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0913 23:27:51.886813    8290 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 23:27:51.887776    8290 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 23:27:51.907754    8290 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 23:27:52.335477    8290 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 23:27:52.386857    8290 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 23:27:52.407732    8290 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 23:27:52.835088    8290 pod_ready.go:103] pod "coredns-7c65d6cfc9-7w5hb" in "kube-system" namespace has status "Ready":"False"
	I0913 23:27:52.836645    8290 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 23:27:52.886557    8290 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 23:27:52.907462    8290 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 23:27:53.336781    8290 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 23:27:53.387889    8290 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 23:27:53.408102    8290 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 23:27:53.835378    8290 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 23:27:53.887338    8290 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 23:27:53.907594    8290 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 23:27:54.386777    8290 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 23:27:54.389003    8290 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 23:27:54.487569    8290 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 23:27:54.835411    8290 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 23:27:54.886928    8290 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 23:27:54.907976    8290 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 23:27:55.331070    8290 pod_ready.go:103] pod "coredns-7c65d6cfc9-7w5hb" in "kube-system" namespace has status "Ready":"False"
	I0913 23:27:55.336206    8290 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 23:27:55.387102    8290 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 23:27:55.408895    8290 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 23:27:55.836263    8290 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 23:27:55.887517    8290 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 23:27:55.908812    8290 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 23:27:56.335686    8290 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 23:27:56.387070    8290 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 23:27:56.407872    8290 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 23:27:56.835897    8290 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 23:27:56.886817    8290 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 23:27:56.907556    8290 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 23:27:57.332360    8290 pod_ready.go:103] pod "coredns-7c65d6cfc9-7w5hb" in "kube-system" namespace has status "Ready":"False"
	I0913 23:27:57.336069    8290 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 23:27:57.386543    8290 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 23:27:57.407633    8290 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 23:27:57.836028    8290 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 23:27:57.886730    8290 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 23:27:57.907239    8290 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 23:27:58.335712    8290 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 23:27:58.387573    8290 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 23:27:58.407139    8290 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 23:27:58.836018    8290 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 23:27:58.889374    8290 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 23:27:58.907104    8290 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 23:27:59.332734    8290 pod_ready.go:103] pod "coredns-7c65d6cfc9-7w5hb" in "kube-system" namespace has status "Ready":"False"
	I0913 23:27:59.336236    8290 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 23:27:59.387621    8290 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 23:27:59.407064    8290 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 23:27:59.835706    8290 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 23:27:59.887613    8290 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 23:27:59.907190    8290 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 23:28:00.357526    8290 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 23:28:00.414219    8290 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 23:28:00.426634    8290 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 23:28:00.835460    8290 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 23:28:00.887121    8290 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 23:28:00.908556    8290 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 23:28:01.335406    8290 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 23:28:01.392547    8290 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 23:28:01.408845    8290 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 23:28:01.831550    8290 pod_ready.go:103] pod "coredns-7c65d6cfc9-7w5hb" in "kube-system" namespace has status "Ready":"False"
	I0913 23:28:01.887244    8290 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 23:28:01.888265    8290 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 23:28:01.908763    8290 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 23:28:02.335843    8290 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 23:28:02.386437    8290 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 23:28:02.413368    8290 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 23:28:02.835534    8290 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 23:28:02.887208    8290 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 23:28:02.907975    8290 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 23:28:03.335636    8290 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 23:28:03.387310    8290 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 23:28:03.409171    8290 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 23:28:03.835854    8290 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 23:28:03.887126    8290 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 23:28:03.908462    8290 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 23:28:04.330315    8290 pod_ready.go:103] pod "coredns-7c65d6cfc9-7w5hb" in "kube-system" namespace has status "Ready":"False"
	I0913 23:28:04.336013    8290 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 23:28:04.387265    8290 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 23:28:04.408218    8290 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 23:28:04.835029    8290 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 23:28:04.886732    8290 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 23:28:04.908064    8290 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 23:28:05.389065    8290 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 23:28:05.390010    8290 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 23:28:05.407615    8290 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 23:28:05.845741    8290 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 23:28:05.898854    8290 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 23:28:05.909470    8290 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 23:28:06.340196    8290 pod_ready.go:103] pod "coredns-7c65d6cfc9-7w5hb" in "kube-system" namespace has status "Ready":"False"
	I0913 23:28:06.390859    8290 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 23:28:06.392732    8290 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 23:28:06.408054    8290 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 23:28:06.834846    8290 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 23:28:06.887594    8290 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 23:28:06.907127    8290 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 23:28:07.336911    8290 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 23:28:07.388105    8290 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 23:28:07.408966    8290 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 23:28:07.835737    8290 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 23:28:07.887233    8290 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 23:28:07.907518    8290 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 23:28:08.334963    8290 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 23:28:08.386262    8290 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 23:28:08.408045    8290 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 23:28:08.831119    8290 pod_ready.go:103] pod "coredns-7c65d6cfc9-7w5hb" in "kube-system" namespace has status "Ready":"False"
	I0913 23:28:08.835356    8290 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 23:28:08.890055    8290 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 23:28:08.908412    8290 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 23:28:09.335981    8290 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 23:28:09.387343    8290 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 23:28:09.408311    8290 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 23:28:09.835592    8290 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 23:28:09.887157    8290 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 23:28:09.908804    8290 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 23:28:10.336135    8290 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 23:28:10.386750    8290 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 23:28:10.407468    8290 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 23:28:10.831549    8290 pod_ready.go:103] pod "coredns-7c65d6cfc9-7w5hb" in "kube-system" namespace has status "Ready":"False"
	I0913 23:28:10.835116    8290 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 23:28:10.887707    8290 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0913 23:28:10.908910    8290 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 23:28:11.382501    8290 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 23:28:11.392410    8290 kapi.go:107] duration metric: took 22.00944668s to wait for kubernetes.io/minikube-addons=registry ...
	I0913 23:28:11.408424    8290 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 23:28:11.835066    8290 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 23:28:11.907799    8290 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 23:28:12.387791    8290 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 23:28:12.408213    8290 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 23:28:12.831990    8290 pod_ready.go:103] pod "coredns-7c65d6cfc9-7w5hb" in "kube-system" namespace has status "Ready":"False"
	I0913 23:28:12.836383    8290 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 23:28:12.908375    8290 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 23:28:13.345176    8290 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 23:28:13.411646    8290 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 23:28:13.887772    8290 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 23:28:13.908677    8290 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 23:28:14.341672    8290 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 23:28:14.407965    8290 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 23:28:14.836524    8290 pod_ready.go:103] pod "coredns-7c65d6cfc9-7w5hb" in "kube-system" namespace has status "Ready":"False"
	I0913 23:28:14.842337    8290 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 23:28:14.907396    8290 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 23:28:15.344250    8290 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 23:28:15.407984    8290 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 23:28:15.887835    8290 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 23:28:15.908011    8290 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 23:28:16.336347    8290 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 23:28:16.408074    8290 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 23:28:16.836262    8290 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 23:28:16.907543    8290 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 23:28:17.331536    8290 pod_ready.go:103] pod "coredns-7c65d6cfc9-7w5hb" in "kube-system" namespace has status "Ready":"False"
	I0913 23:28:17.337940    8290 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 23:28:17.408136    8290 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 23:28:17.844035    8290 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 23:28:17.915096    8290 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 23:28:18.337971    8290 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 23:28:18.407721    8290 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 23:28:18.835867    8290 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 23:28:18.907146    8290 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 23:28:19.331114    8290 pod_ready.go:93] pod "coredns-7c65d6cfc9-7w5hb" in "kube-system" namespace has status "Ready":"True"
	I0913 23:28:19.331186    8290 pod_ready.go:82] duration metric: took 40.006747537s for pod "coredns-7c65d6cfc9-7w5hb" in "kube-system" namespace to be "Ready" ...
	I0913 23:28:19.331214    8290 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-zlw74" in "kube-system" namespace to be "Ready" ...
	I0913 23:28:19.334013    8290 pod_ready.go:98] error getting pod "coredns-7c65d6cfc9-zlw74" in "kube-system" namespace (skipping!): pods "coredns-7c65d6cfc9-zlw74" not found
	I0913 23:28:19.334079    8290 pod_ready.go:82] duration metric: took 2.843697ms for pod "coredns-7c65d6cfc9-zlw74" in "kube-system" namespace to be "Ready" ...
	E0913 23:28:19.334107    8290 pod_ready.go:67] WaitExtra: waitPodCondition: error getting pod "coredns-7c65d6cfc9-zlw74" in "kube-system" namespace (skipping!): pods "coredns-7c65d6cfc9-zlw74" not found
	I0913 23:28:19.334128    8290 pod_ready.go:79] waiting up to 6m0s for pod "etcd-addons-467916" in "kube-system" namespace to be "Ready" ...
	I0913 23:28:19.340492    8290 pod_ready.go:93] pod "etcd-addons-467916" in "kube-system" namespace has status "Ready":"True"
	I0913 23:28:19.340561    8290 pod_ready.go:82] duration metric: took 6.399859ms for pod "etcd-addons-467916" in "kube-system" namespace to be "Ready" ...
	I0913 23:28:19.340587    8290 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-addons-467916" in "kube-system" namespace to be "Ready" ...
	I0913 23:28:19.345452    8290 pod_ready.go:93] pod "kube-apiserver-addons-467916" in "kube-system" namespace has status "Ready":"True"
	I0913 23:28:19.345521    8290 pod_ready.go:82] duration metric: took 4.91326ms for pod "kube-apiserver-addons-467916" in "kube-system" namespace to be "Ready" ...
	I0913 23:28:19.345548    8290 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-addons-467916" in "kube-system" namespace to be "Ready" ...
	I0913 23:28:19.353611    8290 pod_ready.go:93] pod "kube-controller-manager-addons-467916" in "kube-system" namespace has status "Ready":"True"
	I0913 23:28:19.353673    8290 pod_ready.go:82] duration metric: took 8.102687ms for pod "kube-controller-manager-addons-467916" in "kube-system" namespace to be "Ready" ...
	I0913 23:28:19.353699    8290 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-hgw86" in "kube-system" namespace to be "Ready" ...
	I0913 23:28:19.387601    8290 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 23:28:19.407410    8290 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 23:28:19.528848    8290 pod_ready.go:93] pod "kube-proxy-hgw86" in "kube-system" namespace has status "Ready":"True"
	I0913 23:28:19.528915    8290 pod_ready.go:82] duration metric: took 175.194207ms for pod "kube-proxy-hgw86" in "kube-system" namespace to be "Ready" ...
	I0913 23:28:19.528941    8290 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-addons-467916" in "kube-system" namespace to be "Ready" ...
	I0913 23:28:19.887706    8290 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 23:28:19.907298    8290 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 23:28:19.929440    8290 pod_ready.go:93] pod "kube-scheduler-addons-467916" in "kube-system" namespace has status "Ready":"True"
	I0913 23:28:19.929511    8290 pod_ready.go:82] duration metric: took 400.549184ms for pod "kube-scheduler-addons-467916" in "kube-system" namespace to be "Ready" ...
	I0913 23:28:19.929534    8290 pod_ready.go:39] duration metric: took 40.618120931s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0913 23:28:19.929580    8290 api_server.go:52] waiting for apiserver process to appear ...
	I0913 23:28:19.929657    8290 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 23:28:19.948612    8290 api_server.go:72] duration metric: took 43.921951115s to wait for apiserver process to appear ...
	I0913 23:28:19.948640    8290 api_server.go:88] waiting for apiserver healthz status ...
	I0913 23:28:19.948662    8290 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0913 23:28:19.960123    8290 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I0913 23:28:19.961925    8290 api_server.go:141] control plane version: v1.31.1
	I0913 23:28:19.961955    8290 api_server.go:131] duration metric: took 13.307301ms to wait for apiserver health ...
	I0913 23:28:19.961965    8290 system_pods.go:43] waiting for kube-system pods to appear ...
	I0913 23:28:20.138300    8290 system_pods.go:59] 17 kube-system pods found
	I0913 23:28:20.138338    8290 system_pods.go:61] "coredns-7c65d6cfc9-7w5hb" [1a8e6fd6-e7ce-42bb-a0dc-17c7ef331368] Running
	I0913 23:28:20.138350    8290 system_pods.go:61] "csi-hostpath-attacher-0" [21d30727-a22c-493c-bee1-0f962e0783bd] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0913 23:28:20.138358    8290 system_pods.go:61] "csi-hostpath-resizer-0" [228807fd-6f3a-4202-847a-97def273edd0] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0913 23:28:20.138368    8290 system_pods.go:61] "csi-hostpathplugin-fdq4p" [e088fb74-87f7-438a-8f3d-3dcff61456de] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0913 23:28:20.138374    8290 system_pods.go:61] "etcd-addons-467916" [a31bc6c2-28b8-4ff9-a638-1d125a67e089] Running
	I0913 23:28:20.138380    8290 system_pods.go:61] "kube-apiserver-addons-467916" [462f0c74-5224-4f68-9c2e-e114bf29b749] Running
	I0913 23:28:20.138388    8290 system_pods.go:61] "kube-controller-manager-addons-467916" [b8b46b58-e4cd-43a1-bd9f-9817b89c2be1] Running
	I0913 23:28:20.138394    8290 system_pods.go:61] "kube-ingress-dns-minikube" [1ac07004-e505-4f58-b4fb-b3f28ae1857f] Running
	I0913 23:28:20.138405    8290 system_pods.go:61] "kube-proxy-hgw86" [273ffd46-671f-4979-b1f4-1c08d9f7087a] Running
	I0913 23:28:20.138409    8290 system_pods.go:61] "kube-scheduler-addons-467916" [74d2a1ca-15cf-48f5-9e03-1abfe46facdd] Running
	I0913 23:28:20.138416    8290 system_pods.go:61] "metrics-server-84c5f94fbc-cx9cr" [c35bd9f6-56a5-41e8-a831-926ba6fe5266] Running / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0913 23:28:20.138427    8290 system_pods.go:61] "nvidia-device-plugin-daemonset-4zhpp" [425cd5c4-637d-474a-884b-c509d13eb5e5] Running
	I0913 23:28:20.138432    8290 system_pods.go:61] "registry-66c9cd494c-696bs" [7aa69fd1-c981-411c-bdd9-00cacd8b1736] Running
	I0913 23:28:20.138436    8290 system_pods.go:61] "registry-proxy-sgv4q" [a5a96a3a-0fec-4005-b778-df5a7261b085] Running
	I0913 23:28:20.138443    8290 system_pods.go:61] "snapshot-controller-56fcc65765-hfmpp" [efcea3c1-70d5-407c-9cec-4cc4c616c25b] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0913 23:28:20.138453    8290 system_pods.go:61] "snapshot-controller-56fcc65765-qt5qc" [db54941b-e2e8-4753-ab92-95f051c14d7a] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0913 23:28:20.138457    8290 system_pods.go:61] "storage-provisioner" [cb1bc2b7-822a-40e3-bed6-6930d1f510d9] Running
	I0913 23:28:20.138464    8290 system_pods.go:74] duration metric: took 176.492626ms to wait for pod list to return data ...
	I0913 23:28:20.138476    8290 default_sa.go:34] waiting for default service account to be created ...
	I0913 23:28:20.329390    8290 default_sa.go:45] found service account: "default"
	I0913 23:28:20.329426    8290 default_sa.go:55] duration metric: took 190.935136ms for default service account to be created ...
	I0913 23:28:20.329440    8290 system_pods.go:116] waiting for k8s-apps to be running ...
	I0913 23:28:20.336262    8290 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 23:28:20.407262    8290 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 23:28:20.535946    8290 system_pods.go:86] 17 kube-system pods found
	I0913 23:28:20.535982    8290 system_pods.go:89] "coredns-7c65d6cfc9-7w5hb" [1a8e6fd6-e7ce-42bb-a0dc-17c7ef331368] Running
	I0913 23:28:20.535993    8290 system_pods.go:89] "csi-hostpath-attacher-0" [21d30727-a22c-493c-bee1-0f962e0783bd] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0913 23:28:20.536000    8290 system_pods.go:89] "csi-hostpath-resizer-0" [228807fd-6f3a-4202-847a-97def273edd0] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0913 23:28:20.536010    8290 system_pods.go:89] "csi-hostpathplugin-fdq4p" [e088fb74-87f7-438a-8f3d-3dcff61456de] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0913 23:28:20.536015    8290 system_pods.go:89] "etcd-addons-467916" [a31bc6c2-28b8-4ff9-a638-1d125a67e089] Running
	I0913 23:28:20.536020    8290 system_pods.go:89] "kube-apiserver-addons-467916" [462f0c74-5224-4f68-9c2e-e114bf29b749] Running
	I0913 23:28:20.536025    8290 system_pods.go:89] "kube-controller-manager-addons-467916" [b8b46b58-e4cd-43a1-bd9f-9817b89c2be1] Running
	I0913 23:28:20.536031    8290 system_pods.go:89] "kube-ingress-dns-minikube" [1ac07004-e505-4f58-b4fb-b3f28ae1857f] Running
	I0913 23:28:20.536035    8290 system_pods.go:89] "kube-proxy-hgw86" [273ffd46-671f-4979-b1f4-1c08d9f7087a] Running
	I0913 23:28:20.536040    8290 system_pods.go:89] "kube-scheduler-addons-467916" [74d2a1ca-15cf-48f5-9e03-1abfe46facdd] Running
	I0913 23:28:20.536053    8290 system_pods.go:89] "metrics-server-84c5f94fbc-cx9cr" [c35bd9f6-56a5-41e8-a831-926ba6fe5266] Running / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0913 23:28:20.536058    8290 system_pods.go:89] "nvidia-device-plugin-daemonset-4zhpp" [425cd5c4-637d-474a-884b-c509d13eb5e5] Running
	I0913 23:28:20.536063    8290 system_pods.go:89] "registry-66c9cd494c-696bs" [7aa69fd1-c981-411c-bdd9-00cacd8b1736] Running
	I0913 23:28:20.536073    8290 system_pods.go:89] "registry-proxy-sgv4q" [a5a96a3a-0fec-4005-b778-df5a7261b085] Running
	I0913 23:28:20.536080    8290 system_pods.go:89] "snapshot-controller-56fcc65765-hfmpp" [efcea3c1-70d5-407c-9cec-4cc4c616c25b] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0913 23:28:20.536087    8290 system_pods.go:89] "snapshot-controller-56fcc65765-qt5qc" [db54941b-e2e8-4753-ab92-95f051c14d7a] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0913 23:28:20.536095    8290 system_pods.go:89] "storage-provisioner" [cb1bc2b7-822a-40e3-bed6-6930d1f510d9] Running
	I0913 23:28:20.536102    8290 system_pods.go:126] duration metric: took 206.655974ms to wait for k8s-apps to be running ...
	I0913 23:28:20.536112    8290 system_svc.go:44] waiting for kubelet service to be running ....
	I0913 23:28:20.536168    8290 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0913 23:28:20.552104    8290 system_svc.go:56] duration metric: took 15.984714ms WaitForService to wait for kubelet
	I0913 23:28:20.552132    8290 kubeadm.go:582] duration metric: took 44.525494577s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0913 23:28:20.552151    8290 node_conditions.go:102] verifying NodePressure condition ...
	I0913 23:28:20.729780    8290 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I0913 23:28:20.729813    8290 node_conditions.go:123] node cpu capacity is 2
	I0913 23:28:20.729827    8290 node_conditions.go:105] duration metric: took 177.66982ms to run NodePressure ...
	I0913 23:28:20.729840    8290 start.go:241] waiting for startup goroutines ...
	I0913 23:28:20.836407    8290 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 23:28:20.908133    8290 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 23:28:21.336523    8290 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 23:28:21.406914    8290 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 23:28:21.837011    8290 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 23:28:21.907890    8290 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 23:28:22.418267    8290 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 23:28:22.419941    8290 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 23:28:22.889856    8290 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 23:28:22.907317    8290 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 23:28:23.340549    8290 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 23:28:23.408698    8290 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 23:28:23.835749    8290 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 23:28:23.907752    8290 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 23:28:24.336607    8290 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 23:28:24.408877    8290 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 23:28:24.836584    8290 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 23:28:24.908249    8290 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 23:28:25.336514    8290 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 23:28:25.407147    8290 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 23:28:25.850318    8290 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 23:28:25.910764    8290 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 23:28:26.335870    8290 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 23:28:26.407656    8290 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 23:28:26.835400    8290 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 23:28:26.909876    8290 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 23:28:27.336068    8290 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 23:28:27.407970    8290 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 23:28:27.835988    8290 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 23:28:27.910815    8290 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 23:28:28.335274    8290 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 23:28:28.452983    8290 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 23:28:28.835885    8290 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 23:28:28.909843    8290 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 23:28:29.336124    8290 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 23:28:29.407897    8290 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 23:28:29.836945    8290 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 23:28:29.907607    8290 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 23:28:30.336065    8290 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 23:28:30.407616    8290 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 23:28:30.835395    8290 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 23:28:30.914693    8290 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 23:28:31.336858    8290 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 23:28:31.409779    8290 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 23:28:31.835999    8290 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 23:28:31.908024    8290 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 23:28:32.336032    8290 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 23:28:32.408459    8290 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 23:28:32.836324    8290 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 23:28:32.907710    8290 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 23:28:33.336575    8290 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 23:28:33.406956    8290 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 23:28:33.835806    8290 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 23:28:33.907275    8290 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 23:28:34.389217    8290 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 23:28:34.487913    8290 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 23:28:34.837911    8290 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 23:28:34.910428    8290 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 23:28:35.337459    8290 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 23:28:35.406710    8290 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 23:28:35.836202    8290 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 23:28:35.908956    8290 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 23:28:36.335830    8290 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 23:28:36.407898    8290 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 23:28:36.836500    8290 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 23:28:36.907016    8290 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 23:28:37.336459    8290 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 23:28:37.406698    8290 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 23:28:37.887652    8290 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 23:28:37.906990    8290 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 23:28:38.336626    8290 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 23:28:38.407118    8290 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 23:28:38.894145    8290 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 23:28:38.908804    8290 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 23:28:39.335822    8290 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 23:28:39.407647    8290 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 23:28:39.836456    8290 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 23:28:39.907398    8290 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 23:28:40.339811    8290 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 23:28:40.408954    8290 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 23:28:40.835873    8290 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 23:28:40.907506    8290 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 23:28:41.335595    8290 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 23:28:41.407352    8290 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 23:28:41.836305    8290 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 23:28:41.907903    8290 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 23:28:42.388491    8290 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 23:28:42.406758    8290 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0913 23:28:42.835490    8290 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 23:28:42.907024    8290 kapi.go:107] duration metric: took 53.50450364s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I0913 23:28:43.335596    8290 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 23:28:43.835850    8290 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 23:28:44.335309    8290 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 23:28:44.835323    8290 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 23:28:45.339788    8290 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 23:28:45.835522    8290 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 23:28:46.336735    8290 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 23:28:46.835146    8290 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 23:28:47.336668    8290 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 23:28:47.836005    8290 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 23:28:48.335306    8290 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 23:28:48.835419    8290 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 23:28:49.334969    8290 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 23:28:49.836371    8290 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 23:28:50.336740    8290 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 23:28:50.836924    8290 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 23:28:51.336617    8290 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 23:28:51.837363    8290 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 23:28:52.335967    8290 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 23:28:52.835561    8290 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 23:28:53.335674    8290 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 23:28:53.837828    8290 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 23:28:54.336569    8290 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 23:28:54.836097    8290 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 23:28:55.335656    8290 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 23:28:55.837284    8290 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 23:28:56.337803    8290 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 23:28:56.888543    8290 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0913 23:28:57.336498    8290 kapi.go:107] duration metric: took 1m10.505269822s to wait for app.kubernetes.io/name=ingress-nginx ...
	I0913 23:29:13.785092    8290 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0913 23:29:13.785116    8290 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 23:29:14.285889    8290 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 23:29:14.786321    8290 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 23:29:15.285683    8290 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 23:29:15.785522    8290 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 23:29:16.285332    8290 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 23:29:16.785795    8290 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 23:29:17.285356    8290 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 23:29:17.786245    8290 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 23:29:18.285193    8290 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 23:29:18.785825    8290 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 23:29:19.285966    8290 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 23:29:19.786506    8290 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 23:29:20.285376    8290 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 23:29:20.784782    8290 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 23:29:21.286189    8290 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 23:29:21.786063    8290 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 23:29:22.286088    8290 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 23:29:22.785997    8290 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 23:29:23.285717    8290 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 23:29:23.785675    8290 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 23:29:24.286264    8290 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 23:29:24.785129    8290 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 23:29:25.285891    8290 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 23:29:25.785549    8290 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 23:29:26.285493    8290 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 23:29:26.785361    8290 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 23:29:27.285410    8290 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 23:29:27.785380    8290 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 23:29:28.286675    8290 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 23:29:28.785123    8290 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 23:29:29.285150    8290 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 23:29:29.785481    8290 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 23:29:30.286356    8290 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 23:29:30.785374    8290 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 23:29:31.286046    8290 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 23:29:31.785503    8290 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 23:29:32.285662    8290 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 23:29:32.786516    8290 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 23:29:33.285561    8290 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 23:29:33.784838    8290 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 23:29:34.285885    8290 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 23:29:34.786211    8290 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 23:29:35.285370    8290 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 23:29:35.784861    8290 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 23:29:36.285426    8290 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 23:29:36.786406    8290 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 23:29:37.285555    8290 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 23:29:37.785448    8290 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 23:29:38.285716    8290 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 23:29:38.785364    8290 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 23:29:39.285424    8290 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 23:29:39.785581    8290 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 23:29:40.285309    8290 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 23:29:40.785442    8290 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 23:29:41.284970    8290 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 23:29:41.785977    8290 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 23:29:42.286362    8290 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 23:29:42.786009    8290 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 23:29:43.286642    8290 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 23:29:43.784997    8290 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 23:29:44.285829    8290 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 23:29:44.785675    8290 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 23:29:45.296900    8290 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 23:29:45.786015    8290 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 23:29:46.286234    8290 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 23:29:46.785457    8290 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 23:29:47.284921    8290 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 23:29:47.786489    8290 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 23:29:48.285756    8290 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 23:29:48.786282    8290 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 23:29:49.285189    8290 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 23:29:49.786056    8290 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 23:29:50.285751    8290 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 23:29:50.785838    8290 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 23:29:51.285654    8290 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 23:29:51.792585    8290 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 23:29:52.284928    8290 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 23:29:52.785387    8290 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 23:29:53.285215    8290 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 23:29:53.785506    8290 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 23:29:54.286017    8290 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 23:29:54.793643    8290 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 23:29:55.285782    8290 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 23:29:55.785002    8290 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 23:29:56.285098    8290 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 23:29:56.785897    8290 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 23:29:57.285161    8290 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 23:29:57.785820    8290 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 23:29:58.285972    8290 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 23:29:58.785712    8290 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 23:29:59.285745    8290 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 23:29:59.785307    8290 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 23:30:00.312235    8290 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 23:30:00.787639    8290 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 23:30:01.285529    8290 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 23:30:01.785435    8290 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 23:30:02.285551    8290 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 23:30:02.785938    8290 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 23:30:03.285903    8290 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 23:30:03.784909    8290 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 23:30:04.285489    8290 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 23:30:04.785992    8290 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 23:30:05.285654    8290 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 23:30:05.785500    8290 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 23:30:06.285347    8290 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 23:30:06.786164    8290 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 23:30:07.285660    8290 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 23:30:07.785112    8290 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 23:30:08.286042    8290 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 23:30:08.785678    8290 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 23:30:09.285739    8290 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 23:30:09.784830    8290 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 23:30:10.285242    8290 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 23:30:10.785530    8290 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 23:30:11.284748    8290 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 23:30:11.785787    8290 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 23:30:12.286373    8290 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 23:30:12.786343    8290 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 23:30:13.285244    8290 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 23:30:13.786202    8290 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 23:30:14.285894    8290 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 23:30:14.785977    8290 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 23:30:15.285890    8290 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 23:30:15.785712    8290 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 23:30:16.284823    8290 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 23:30:16.785989    8290 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 23:30:17.285769    8290 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 23:30:17.785567    8290 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 23:30:18.285479    8290 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 23:30:18.786184    8290 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 23:30:19.286234    8290 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 23:30:19.785201    8290 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 23:30:20.285473    8290 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 23:30:20.785204    8290 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 23:30:21.288528    8290 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 23:30:21.785935    8290 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0913 23:30:22.285348    8290 kapi.go:107] duration metric: took 2m30.503620575s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I0913 23:30:22.288266    8290 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-467916 cluster.
	I0913 23:30:22.291166    8290 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I0913 23:30:22.293470    8290 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I0913 23:30:22.296334    8290 out.go:177] * Enabled addons: cloud-spanner, default-storageclass, volcano, nvidia-device-plugin, ingress-dns, storage-provisioner, metrics-server, inspektor-gadget, yakd, storage-provisioner-rancher, volumesnapshots, registry, csi-hostpath-driver, ingress, gcp-auth
	I0913 23:30:22.299281    8290 addons.go:510] duration metric: took 2m46.272246782s for enable addons: enabled=[cloud-spanner default-storageclass volcano nvidia-device-plugin ingress-dns storage-provisioner metrics-server inspektor-gadget yakd storage-provisioner-rancher volumesnapshots registry csi-hostpath-driver ingress gcp-auth]
	I0913 23:30:22.299344    8290 start.go:246] waiting for cluster config update ...
	I0913 23:30:22.299368    8290 start.go:255] writing updated cluster config ...
	I0913 23:30:22.299672    8290 ssh_runner.go:195] Run: rm -f paused
	I0913 23:30:22.653489    8290 start.go:600] kubectl: 1.31.0, cluster: 1.31.1 (minor skew: 0)
	I0913 23:30:22.655413    8290 out.go:177] * Done! kubectl is now configured to use "addons-467916" cluster and "default" namespace by default
	
	
	==> Docker <==
	Sep 13 23:39:26 addons-467916 dockerd[1285]: time="2024-09-13T23:39:26.629056696Z" level=warning msg="reference for unknown type: " digest="sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79" remote="docker.io/library/busybox@sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79"
	Sep 13 23:39:27 addons-467916 cri-dockerd[1544]: time="2024-09-13T23:39:27Z" level=info msg="Stop pulling image docker.io/busybox:stable@sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79: Status: Downloaded newer image for busybox@sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79"
	Sep 13 23:39:27 addons-467916 dockerd[1285]: time="2024-09-13T23:39:27.350862490Z" level=info msg="ignoring event" container=54662db5b73240d46b6613293a4f9a6ce0ca688d63ab275f59ef539216300f67 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 13 23:39:29 addons-467916 dockerd[1285]: time="2024-09-13T23:39:29.100746782Z" level=info msg="ignoring event" container=2ba6cbacffe42144f138d2a566172c8ca0850f2f9e859668ee906b1d972e53fa module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 13 23:39:30 addons-467916 dockerd[1285]: time="2024-09-13T23:39:30.799040728Z" level=info msg="Attempting next endpoint for pull after error: Head \"https://gcr.io/v2/k8s-minikube/busybox/manifests/latest\": unauthorized: authentication failed"
	Sep 13 23:39:30 addons-467916 dockerd[1285]: time="2024-09-13T23:39:30.801608096Z" level=error msg="Handler for POST /v1.43/images/create returned error: Head \"https://gcr.io/v2/k8s-minikube/busybox/manifests/latest\": unauthorized: authentication failed"
	Sep 13 23:39:31 addons-467916 cri-dockerd[1544]: time="2024-09-13T23:39:31Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/1a9ca758880e7c6e3a0b3300e812d7d9aa441ddf627b0b5818fbc58bf26bb408/resolv.conf as [nameserver 10.96.0.10 search default.svc.cluster.local svc.cluster.local cluster.local us-east-2.compute.internal options ndots:5]"
	Sep 13 23:39:32 addons-467916 cri-dockerd[1544]: time="2024-09-13T23:39:32Z" level=info msg="Stop pulling image busybox:stable: Status: Downloaded newer image for busybox:stable"
	Sep 13 23:39:32 addons-467916 dockerd[1285]: time="2024-09-13T23:39:32.460192344Z" level=info msg="ignoring event" container=3b6ee86c417dc5c58b22ee4eb4114100691a25cf6574b8f3c20b3bba672df967 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 13 23:39:34 addons-467916 dockerd[1285]: time="2024-09-13T23:39:34.250743626Z" level=info msg="ignoring event" container=1a9ca758880e7c6e3a0b3300e812d7d9aa441ddf627b0b5818fbc58bf26bb408 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 13 23:39:36 addons-467916 cri-dockerd[1544]: time="2024-09-13T23:39:36Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/79116ac703552414e55cfb8f6fa9d2ae0c9c1c84a84229efa23c8232041a3443/resolv.conf as [nameserver 10.96.0.10 search local-path-storage.svc.cluster.local svc.cluster.local cluster.local us-east-2.compute.internal options ndots:5]"
	Sep 13 23:39:36 addons-467916 dockerd[1285]: time="2024-09-13T23:39:36.489779023Z" level=info msg="ignoring event" container=d5b50901219be314ba34f4e431be4a5cdb99928b36a6d56bd28986d826d54cb7 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 13 23:39:36 addons-467916 cri-dockerd[1544]: time="2024-09-13T23:39:36Z" level=info msg="Stop pulling image ghcr.io/inspektor-gadget/inspektor-gadget:v0.32.0@sha256:03e677e1cf9d2c9bea454e3dbcbcef20b3022e987534a2874eb1abc5bc3e73ec: Status: Image is up to date for ghcr.io/inspektor-gadget/inspektor-gadget@sha256:03e677e1cf9d2c9bea454e3dbcbcef20b3022e987534a2874eb1abc5bc3e73ec"
	Sep 13 23:39:38 addons-467916 dockerd[1285]: time="2024-09-13T23:39:38.326934621Z" level=info msg="ignoring event" container=d30927f55ca868fbba2daad992e3897eaec97ddcf7efb5ee31bd18ebcdfc60fd module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 13 23:39:38 addons-467916 dockerd[1285]: time="2024-09-13T23:39:38.442880878Z" level=info msg="ignoring event" container=79116ac703552414e55cfb8f6fa9d2ae0c9c1c84a84229efa23c8232041a3443 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 13 23:39:56 addons-467916 dockerd[1285]: time="2024-09-13T23:39:56.736568601Z" level=info msg="Attempting next endpoint for pull after error: Head \"https://gcr.io/v2/k8s-minikube/busybox/manifests/latest\": unauthorized: authentication failed"
	Sep 13 23:39:56 addons-467916 dockerd[1285]: time="2024-09-13T23:39:56.738963817Z" level=error msg="Handler for POST /v1.43/images/create returned error: Head \"https://gcr.io/v2/k8s-minikube/busybox/manifests/latest\": unauthorized: authentication failed"
	Sep 13 23:40:06 addons-467916 dockerd[1285]: time="2024-09-13T23:40:06.413486281Z" level=info msg="Container failed to exit within 30s of signal 15 - using the force" container=6e641d616a3819ed0b124053996c4ac0bc0ff2d62e044518554e99dad182746b
	Sep 13 23:40:06 addons-467916 dockerd[1285]: time="2024-09-13T23:40:06.442853598Z" level=info msg="ignoring event" container=6e641d616a3819ed0b124053996c4ac0bc0ff2d62e044518554e99dad182746b module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 13 23:40:06 addons-467916 dockerd[1285]: time="2024-09-13T23:40:06.594799579Z" level=info msg="ignoring event" container=cb60f4e198dd1ea1fe8dd19e30fafe03eef3e1e259db97aba0396a8a609ac708 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 13 23:40:17 addons-467916 dockerd[1285]: time="2024-09-13T23:40:17.478448384Z" level=info msg="ignoring event" container=6bfc08a9ed056b967f4cfba3d372c3bf7f7dcf4fe342aa43040bdd3084603339 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 13 23:40:18 addons-467916 dockerd[1285]: time="2024-09-13T23:40:18.171204564Z" level=info msg="ignoring event" container=4dcfffebb5f1779832cab591a1ece3f1b6ec3a5a72606f40fecf7587349e4875 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 13 23:40:18 addons-467916 dockerd[1285]: time="2024-09-13T23:40:18.224060599Z" level=info msg="ignoring event" container=da9a19e162a3dda6dc0d4665e03997491f6495ccd0feb59be5d02a31c8fc5935 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 13 23:40:18 addons-467916 dockerd[1285]: time="2024-09-13T23:40:18.371158978Z" level=info msg="ignoring event" container=fe17bfd1708160b59c90fa3b9c6e64d73998f1c0992c54d09bc32cd38317282c module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 13 23:40:18 addons-467916 dockerd[1285]: time="2024-09-13T23:40:18.473365842Z" level=info msg="ignoring event" container=3cdad3daad7161e68c449de957d43ccbd1a02c62cedc8e6b13a565238ead571e module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                                        CREATED             STATE               NAME                                     ATTEMPT             POD ID              POD
	d30927f55ca86       ghcr.io/inspektor-gadget/inspektor-gadget@sha256:03e677e1cf9d2c9bea454e3dbcbcef20b3022e987534a2874eb1abc5bc3e73ec                            43 seconds ago      Exited              gadget                                   7                   8889f86811930       gadget-pfpfw
	d5b50901219be       fc9db2894f4e4                                                                                                                                43 seconds ago      Exited              helper-pod                               0                   79116ac703552       helper-pod-delete-pvc-57a7b523-7db8-4825-ad64-698dbbbd6c68
	3b6ee86c417dc       busybox@sha256:c230832bd3b0be59a6c47ed64294f9ce71e91b327957920b6929a0caa8353140                                                              47 seconds ago      Exited              busybox                                  0                   1a9ca758880e7       test-local-path
	33f3c35bcef62       gcr.io/k8s-minikube/gcp-auth-webhook@sha256:e6c5b3bc32072ea370d34c27836efd11b3519d25bd444c2a8efc339cff0e20fb                                 9 minutes ago       Running             gcp-auth                                 0                   7b3e0ee52645d       gcp-auth-89d5ffd79-5vxqg
	5e956531f7255       registry.k8s.io/ingress-nginx/controller@sha256:d5f8217feeac4887cb1ed21f27c2674e58be06bd8f5184cacea2a69abaf78dce                             11 minutes ago      Running             controller                               0                   f2b750e1cbf2d       ingress-nginx-controller-bc57996ff-prb6k
	9196c309ccf65       registry.k8s.io/sig-storage/csi-snapshotter@sha256:291334908ddf71a4661fd7f6d9d97274de8a5378a2b6fdfeb2ce73414a34f82f                          11 minutes ago      Running             csi-snapshotter                          0                   f42d9364a230d       csi-hostpathplugin-fdq4p
	cefb2f33ee2e8       registry.k8s.io/sig-storage/csi-provisioner@sha256:ee3b525d5b89db99da3b8eb521d9cd90cb6e9ef0fbb651e98bb37be78d36b5b8                          11 minutes ago      Running             csi-provisioner                          0                   f42d9364a230d       csi-hostpathplugin-fdq4p
	7bf8b2759b8fd       registry.k8s.io/sig-storage/livenessprobe@sha256:cacee2b5c36dd59d4c7e8469c05c9e4ef53ecb2df9025fa8c10cdaf61bce62f0                            11 minutes ago      Running             liveness-probe                           0                   f42d9364a230d       csi-hostpathplugin-fdq4p
	4e10f94c29b68       registry.k8s.io/sig-storage/hostpathplugin@sha256:92257881c1d6493cf18299a24af42330f891166560047902b8d431fb66b01af5                           11 minutes ago      Running             hostpath                                 0                   f42d9364a230d       csi-hostpathplugin-fdq4p
	fd17131ca41c1       registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:f1c25991bac2fbb7f5fcf91ed9438df31e30edee6bed5a780464238aa09ad24c                11 minutes ago      Running             node-driver-registrar                    0                   f42d9364a230d       csi-hostpathplugin-fdq4p
	5cd0105ac6ed4       registry.k8s.io/sig-storage/csi-external-health-monitor-controller@sha256:80b9ba94aa2afe24553d69bd165a6a51552d1582d68618ec00d3b804a7d9193c   11 minutes ago      Running             csi-external-health-monitor-controller   0                   f42d9364a230d       csi-hostpathplugin-fdq4p
	ab2ad6d46e82c       registry.k8s.io/sig-storage/csi-resizer@sha256:425d8f1b769398127767b06ed97ce62578a3179bcb99809ce93a1649e025ffe7                              11 minutes ago      Running             csi-resizer                              0                   3750b8886c569       csi-hostpath-resizer-0
	8884fcbf5cae1       registry.k8s.io/sig-storage/csi-attacher@sha256:9a685020911e2725ad019dbce6e4a5ab93d51e3d4557f115e64343345e05781b                             11 minutes ago      Running             csi-attacher                             0                   875929a374271       csi-hostpath-attacher-0
	6777d39ef86fc       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:a320a50cc91bd15fd2d6fa6de58bd98c1bd64b9a6f926ce23a600d87043455a3                   11 minutes ago      Exited              create                                   0                   b4305bfb473a3       ingress-nginx-admission-create-dlp7n
	402383d608f08       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:a320a50cc91bd15fd2d6fa6de58bd98c1bd64b9a6f926ce23a600d87043455a3                   11 minutes ago      Exited              patch                                    0                   9b0557a12dff9       ingress-nginx-admission-patch-q6ltk
	a78bf53183715       registry.k8s.io/sig-storage/snapshot-controller@sha256:823c75d0c45d1427f6d850070956d9ca657140a7bbf828381541d1d808475280                      11 minutes ago      Running             volume-snapshot-controller               0                   1207926fbcdfc       snapshot-controller-56fcc65765-hfmpp
	e869792a87bbe       registry.k8s.io/sig-storage/snapshot-controller@sha256:823c75d0c45d1427f6d850070956d9ca657140a7bbf828381541d1d808475280                      11 minutes ago      Running             volume-snapshot-controller               0                   01baabcdd836e       snapshot-controller-56fcc65765-qt5qc
	cd301ceae7031       registry.k8s.io/metrics-server/metrics-server@sha256:ffcb2bf004d6aa0a17d90e0247cf94f2865c8901dcab4427034c341951c239f9                        12 minutes ago      Running             metrics-server                           0                   16c0ebde5c0b6       metrics-server-84c5f94fbc-cx9cr
	5efe4e5f026ba       gcr.io/cloud-spanner-emulator/emulator@sha256:636fdfc528824bae5f0ea2eca6ae307fe81092f05ec21038008bc0d6100e52fc                               12 minutes ago      Running             cloud-spanner-emulator                   0                   f04f7caabd85a       cloud-spanner-emulator-769b77f747-k6w6l
	35967e8495f1e       gcr.io/k8s-minikube/minikube-ingress-dns@sha256:4211a1de532376c881851542238121b26792225faa36a7b02dccad88fd05797c                             12 minutes ago      Running             minikube-ingress-dns                     0                   2a3b36f0c653b       kube-ingress-dns-minikube
	22405ec6ec813       ba04bb24b9575                                                                                                                                12 minutes ago      Running             storage-provisioner                      0                   4837248c997ad       storage-provisioner
	a57304e14f012       2f6c962e7b831                                                                                                                                12 minutes ago      Running             coredns                                  0                   78389fd9e58f2       coredns-7c65d6cfc9-7w5hb
	937776f96a500       24a140c548c07                                                                                                                                12 minutes ago      Running             kube-proxy                               0                   5201a96a72bf3       kube-proxy-hgw86
	a11957d28d9af       279f381cb3736                                                                                                                                12 minutes ago      Running             kube-controller-manager                  0                   140730838c38a       kube-controller-manager-addons-467916
	8b0dcc71f91eb       d3f53a98c0a9d                                                                                                                                12 minutes ago      Running             kube-apiserver                           0                   ad423d966da1d       kube-apiserver-addons-467916
	a6183f4a9c5f3       7f8aa378bb47d                                                                                                                                12 minutes ago      Running             kube-scheduler                           0                   16f5cf3468c00       kube-scheduler-addons-467916
	55bb578134539       27e3830e14027                                                                                                                                12 minutes ago      Running             etcd                                     0                   3ec7f4a9395f2       etcd-addons-467916
	
	
	==> controller_ingress [5e956531f725] <==
	NGINX Ingress controller
	  Release:       v1.11.2
	  Build:         46e76e5916813cfca2a9b0bfdc34b69a0000f6b9
	  Repository:    https://github.com/kubernetes/ingress-nginx
	  nginx version: nginx/1.25.5
	
	-------------------------------------------------------------------------------
	
	I0913 23:28:56.962887       7 main.go:248] "Running in Kubernetes cluster" major="1" minor="31" git="v1.31.1" state="clean" commit="948afe5ca072329a73c8e79ed5938717a5cb3d21" platform="linux/arm64"
	I0913 23:28:57.780943       7 main.go:101] "SSL fake certificate created" file="/etc/ingress-controller/ssl/default-fake-certificate.pem"
	I0913 23:28:57.799765       7 ssl.go:535] "loading tls certificate" path="/usr/local/certificates/cert" key="/usr/local/certificates/key"
	I0913 23:28:57.813883       7 nginx.go:271] "Starting NGINX Ingress controller"
	I0913 23:28:57.831409       7 event.go:377] Event(v1.ObjectReference{Kind:"ConfigMap", Namespace:"ingress-nginx", Name:"ingress-nginx-controller", UID:"75f58484-7992-4bfc-ab1d-4a2291fb38a0", APIVersion:"v1", ResourceVersion:"700", FieldPath:""}): type: 'Normal' reason: 'CREATE' ConfigMap ingress-nginx/ingress-nginx-controller
	I0913 23:28:57.842253       7 event.go:377] Event(v1.ObjectReference{Kind:"ConfigMap", Namespace:"ingress-nginx", Name:"tcp-services", UID:"23810536-ae54-4beb-ac58-8a9dbb323959", APIVersion:"v1", ResourceVersion:"701", FieldPath:""}): type: 'Normal' reason: 'CREATE' ConfigMap ingress-nginx/tcp-services
	I0913 23:28:57.842532       7 event.go:377] Event(v1.ObjectReference{Kind:"ConfigMap", Namespace:"ingress-nginx", Name:"udp-services", UID:"9657921f-f9bc-4c96-b98f-8a829d9093b6", APIVersion:"v1", ResourceVersion:"702", FieldPath:""}): type: 'Normal' reason: 'CREATE' ConfigMap ingress-nginx/udp-services
	I0913 23:28:59.015980       7 nginx.go:317] "Starting NGINX process"
	I0913 23:28:59.016081       7 leaderelection.go:250] attempting to acquire leader lease ingress-nginx/ingress-nginx-leader...
	I0913 23:28:59.016633       7 nginx.go:337] "Starting validation webhook" address=":8443" certPath="/usr/local/certificates/cert" keyPath="/usr/local/certificates/key"
	I0913 23:28:59.016814       7 controller.go:193] "Configuration changes detected, backend reload required"
	I0913 23:28:59.031809       7 leaderelection.go:260] successfully acquired lease ingress-nginx/ingress-nginx-leader
	I0913 23:28:59.032071       7 status.go:85] "New leader elected" identity="ingress-nginx-controller-bc57996ff-prb6k"
	I0913 23:28:59.045142       7 status.go:219] "POD is not ready" pod="ingress-nginx/ingress-nginx-controller-bc57996ff-prb6k" node="addons-467916"
	I0913 23:28:59.069448       7 controller.go:213] "Backend successfully reloaded"
	I0913 23:28:59.069576       7 controller.go:224] "Initial sync, sleeping for 1 second"
	I0913 23:28:59.070034       7 event.go:377] Event(v1.ObjectReference{Kind:"Pod", Namespace:"ingress-nginx", Name:"ingress-nginx-controller-bc57996ff-prb6k", UID:"431a3ca8-a7ec-4bc1-8304-084a4a299ecc", APIVersion:"v1", ResourceVersion:"1281", FieldPath:""}): type: 'Normal' reason: 'RELOAD' NGINX reload triggered due to a change in configuration
	
	
	==> coredns [a57304e14f01] <==
	[INFO] 10.244.0.7:59691 - 35490 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000122543s
	[INFO] 10.244.0.7:54502 - 54359 "A IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 83 false 512" NXDOMAIN qr,rd,ra 83 0.002421077s
	[INFO] 10.244.0.7:54502 - 39769 "AAAA IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 83 false 512" NXDOMAIN qr,rd,ra 83 0.00209865s
	[INFO] 10.244.0.7:55183 - 28628 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000159983s
	[INFO] 10.244.0.7:55183 - 38105 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.00010921s
	[INFO] 10.244.0.7:51605 - 15994 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000152713s
	[INFO] 10.244.0.7:51605 - 56333 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000075126s
	[INFO] 10.244.0.7:48040 - 58675 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000266649s
	[INFO] 10.244.0.7:48040 - 43061 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000081961s
	[INFO] 10.244.0.7:57644 - 61791 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000058199s
	[INFO] 10.244.0.7:57644 - 64605 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000052988s
	[INFO] 10.244.0.7:34739 - 12124 "A IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 83 false 512" NXDOMAIN qr,rd,ra 83 0.001734844s
	[INFO] 10.244.0.7:34739 - 26202 "AAAA IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 83 false 512" NXDOMAIN qr,rd,ra 83 0.001546973s
	[INFO] 10.244.0.7:50361 - 5348 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000098018s
	[INFO] 10.244.0.7:50361 - 4582 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000077612s
	[INFO] 10.244.0.25:55979 - 52101 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000254572s
	[INFO] 10.244.0.25:34742 - 15603 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000147783s
	[INFO] 10.244.0.25:47185 - 44832 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000139421s
	[INFO] 10.244.0.25:48915 - 36200 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000124972s
	[INFO] 10.244.0.25:35028 - 25377 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000105239s
	[INFO] 10.244.0.25:57982 - 36897 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000101415s
	[INFO] 10.244.0.25:39169 - 55109 "AAAA IN storage.googleapis.com.us-east-2.compute.internal. udp 78 false 1232" NXDOMAIN qr,rd,ra 67 0.002517401s
	[INFO] 10.244.0.25:53461 - 60602 "A IN storage.googleapis.com.us-east-2.compute.internal. udp 78 false 1232" NXDOMAIN qr,rd,ra 67 0.002012639s
	[INFO] 10.244.0.25:35539 - 9059 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.002301427s
	[INFO] 10.244.0.25:57000 - 1806 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 610 0.001804328s
	
	
	==> describe nodes <==
	Name:               addons-467916
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=addons-467916
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=7ca96ba7bd97af6e0063398921096f1cca785d18
	                    minikube.k8s.io/name=addons-467916
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_09_13T23_27_32_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-467916
	Annotations:        csi.volume.kubernetes.io/nodeid: {"hostpath.csi.k8s.io":"addons-467916"}
	                    kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 13 Sep 2024 23:27:28 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-467916
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 13 Sep 2024 23:40:16 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 13 Sep 2024 23:40:06 +0000   Fri, 13 Sep 2024 23:27:25 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 13 Sep 2024 23:40:06 +0000   Fri, 13 Sep 2024 23:27:25 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 13 Sep 2024 23:40:06 +0000   Fri, 13 Sep 2024 23:27:25 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 13 Sep 2024 23:40:06 +0000   Fri, 13 Sep 2024 23:27:29 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    addons-467916
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022292Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022292Ki
	  pods:               110
	System Info:
	  Machine ID:                 f9aa28e588d64befb2a416a935cb05a8
	  System UUID:                af8b1151-b68f-4ca2-ad7a-4c4132769083
	  Boot ID:                    5a347e92-47ff-4e1d-8cee-4b20921cd8ac
	  Kernel Version:             5.15.0-1069-aws
	  OS Image:                   Ubuntu 22.04.4 LTS
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  docker://27.2.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (19 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m15s
	  default                     cloud-spanner-emulator-769b77f747-k6w6l     0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  gadget                      gadget-pfpfw                                0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  gcp-auth                    gcp-auth-89d5ffd79-5vxqg                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  ingress-nginx               ingress-nginx-controller-bc57996ff-prb6k    100m (5%)     0 (0%)      90Mi (1%)        0 (0%)         12m
	  kube-system                 coredns-7c65d6cfc9-7w5hb                    100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     12m
	  kube-system                 csi-hostpath-attacher-0                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 csi-hostpath-resizer-0                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 csi-hostpathplugin-fdq4p                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 etcd-addons-467916                          100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         12m
	  kube-system                 kube-apiserver-addons-467916                250m (12%)    0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-controller-manager-addons-467916       200m (10%)    0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-ingress-dns-minikube                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-proxy-hgw86                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-scheduler-addons-467916                100m (5%)     0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 metrics-server-84c5f94fbc-cx9cr             100m (5%)     0 (0%)      200Mi (2%)       0 (0%)         12m
	  kube-system                 snapshot-controller-56fcc65765-hfmpp        0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 snapshot-controller-56fcc65765-qt5qc        0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                950m (47%)  0 (0%)
	  memory             460Mi (5%)  170Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 12m                kube-proxy       
	  Normal   NodeHasSufficientMemory  12m (x8 over 12m)  kubelet          Node addons-467916 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    12m (x7 over 12m)  kubelet          Node addons-467916 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     12m (x7 over 12m)  kubelet          Node addons-467916 status is now: NodeHasSufficientPID
	  Normal   NodeAllocatableEnforced  12m                kubelet          Updated Node Allocatable limit across pods
	  Normal   Starting                 12m                kubelet          Starting kubelet.
	  Warning  CgroupV1                 12m                kubelet          Cgroup v1 support is in maintenance mode, please migrate to Cgroup v2.
	  Normal   NodeAllocatableEnforced  12m                kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory  12m                kubelet          Node addons-467916 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    12m                kubelet          Node addons-467916 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     12m                kubelet          Node addons-467916 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           12m                node-controller  Node addons-467916 event: Registered Node addons-467916 in Controller
	
	
	==> dmesg <==
	[Sep13 23:17] ACPI: SRAT not present
	[  +0.000000] ACPI: SRAT not present
	[  +0.000000] SPI driver altr_a10sr has no spi_device_id for altr,a10sr
	[  +0.014819] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.495879] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.745347] ena 0000:00:05.0: LLQ is not supported Fallback to host mode policy.
	[  +6.786510] kauditd_printk_skb: 36 callbacks suppressed
	
	
	==> etcd [55bb57813453] <==
	{"level":"info","ts":"2024-09-13T23:27:24.948895Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc switched to configuration voters=(12593026477526642892)"}
	{"level":"info","ts":"2024-09-13T23:27:24.948996Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"fa54960ea34d58be","local-member-id":"aec36adc501070cc","added-peer-id":"aec36adc501070cc","added-peer-peer-urls":["https://192.168.49.2:2380"]}
	{"level":"info","ts":"2024-09-13T23:27:25.320344Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc is starting a new election at term 1"}
	{"level":"info","ts":"2024-09-13T23:27:25.320467Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became pre-candidate at term 1"}
	{"level":"info","ts":"2024-09-13T23:27:25.320503Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc received MsgPreVoteResp from aec36adc501070cc at term 1"}
	{"level":"info","ts":"2024-09-13T23:27:25.320574Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became candidate at term 2"}
	{"level":"info","ts":"2024-09-13T23:27:25.320608Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc received MsgVoteResp from aec36adc501070cc at term 2"}
	{"level":"info","ts":"2024-09-13T23:27:25.320698Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became leader at term 2"}
	{"level":"info","ts":"2024-09-13T23:27:25.320773Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: aec36adc501070cc elected leader aec36adc501070cc at term 2"}
	{"level":"info","ts":"2024-09-13T23:27:25.324481Z","caller":"etcdserver/server.go:2118","msg":"published local member to cluster through raft","local-member-id":"aec36adc501070cc","local-member-attributes":"{Name:addons-467916 ClientURLs:[https://192.168.49.2:2379]}","request-path":"/0/members/aec36adc501070cc/attributes","cluster-id":"fa54960ea34d58be","publish-timeout":"7s"}
	{"level":"info","ts":"2024-09-13T23:27:25.324774Z","caller":"etcdserver/server.go:2629","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-13T23:27:25.325049Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-13T23:27:25.325587Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-13T23:27:25.326592Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-13T23:27:25.337118Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-13T23:27:25.344337Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-09-13T23:27:25.344577Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-09-13T23:27:25.345702Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.49.2:2379"}
	{"level":"info","ts":"2024-09-13T23:27:25.348470Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"fa54960ea34d58be","local-member-id":"aec36adc501070cc","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-13T23:27:25.348801Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-13T23:27:25.348951Z","caller":"etcdserver/server.go:2653","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-13T23:27:25.357331Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-09-13T23:37:26.049950Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1888}
	{"level":"info","ts":"2024-09-13T23:37:26.109143Z","caller":"mvcc/kvstore_compaction.go:69","msg":"finished scheduled compaction","compact-revision":1888,"took":"58.441643ms","hash":3804136292,"current-db-size-bytes":8863744,"current-db-size":"8.9 MB","current-db-size-in-use-bytes":4956160,"current-db-size-in-use":"5.0 MB"}
	{"level":"info","ts":"2024-09-13T23:37:26.109203Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":3804136292,"revision":1888,"compact-revision":-1}
	
	
	==> gcp-auth [33f3c35bcef6] <==
	2024/09/13 23:30:21 GCP Auth Webhook started!
	2024/09/13 23:30:39 Ready to marshal response ...
	2024/09/13 23:30:39 Ready to write response ...
	2024/09/13 23:30:39 Ready to marshal response ...
	2024/09/13 23:30:39 Ready to write response ...
	2024/09/13 23:31:03 Ready to marshal response ...
	2024/09/13 23:31:03 Ready to write response ...
	2024/09/13 23:31:04 Ready to marshal response ...
	2024/09/13 23:31:04 Ready to write response ...
	2024/09/13 23:31:04 Ready to marshal response ...
	2024/09/13 23:31:04 Ready to write response ...
	2024/09/13 23:39:17 Ready to marshal response ...
	2024/09/13 23:39:17 Ready to write response ...
	2024/09/13 23:39:25 Ready to marshal response ...
	2024/09/13 23:39:25 Ready to write response ...
	2024/09/13 23:39:26 Ready to marshal response ...
	2024/09/13 23:39:26 Ready to write response ...
	2024/09/13 23:39:35 Ready to marshal response ...
	2024/09/13 23:39:35 Ready to write response ...
	
	
	==> kernel <==
	 23:40:19 up 22 min,  0 users,  load average: 0.63, 0.59, 0.52
	Linux addons-467916 5.15.0-1069-aws #75~20.04.1-Ubuntu SMP Mon Aug 19 16:22:47 UTC 2024 aarch64 aarch64 aarch64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.4 LTS"
	
	
	==> kube-apiserver [8b0dcc71f91e] <==
	E0913 23:29:54.563116       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.107.10.105:443: connect: connection refused" logger="UnhandledError"
	W0913 23:29:54.621279       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.107.10.105:443: connect: connection refused
	E0913 23:29:54.621326       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.107.10.105:443: connect: connection refused" logger="UnhandledError"
	I0913 23:30:39.208585       1 controller.go:615] quota admission added evaluator for: jobs.batch.volcano.sh
	I0913 23:30:39.241009       1 controller.go:615] quota admission added evaluator for: podgroups.scheduling.volcano.sh
	I0913 23:30:53.979166       1 handler.go:286] Adding GroupVersion batch.volcano.sh v1alpha1 to ResourceManager
	I0913 23:30:54.079561       1 handler.go:286] Adding GroupVersion bus.volcano.sh v1alpha1 to ResourceManager
	I0913 23:30:54.433898       1 handler.go:286] Adding GroupVersion scheduling.volcano.sh v1beta1 to ResourceManager
	I0913 23:30:54.499309       1 handler.go:286] Adding GroupVersion scheduling.volcano.sh v1beta1 to ResourceManager
	I0913 23:30:54.516900       1 handler.go:286] Adding GroupVersion scheduling.volcano.sh v1beta1 to ResourceManager
	I0913 23:30:54.537496       1 handler.go:286] Adding GroupVersion nodeinfo.volcano.sh v1alpha1 to ResourceManager
	I0913 23:30:55.013833       1 handler.go:286] Adding GroupVersion flow.volcano.sh v1alpha1 to ResourceManager
	I0913 23:30:55.032982       1 handler.go:286] Adding GroupVersion flow.volcano.sh v1alpha1 to ResourceManager
	I0913 23:30:55.107283       1 handler.go:286] Adding GroupVersion flow.volcano.sh v1alpha1 to ResourceManager
	W0913 23:30:55.298358       1 cacher.go:171] Terminating all watchers from cacher commands.bus.volcano.sh
	W0913 23:30:55.518009       1 cacher.go:171] Terminating all watchers from cacher podgroups.scheduling.volcano.sh
	W0913 23:30:55.763033       1 cacher.go:171] Terminating all watchers from cacher numatopologies.nodeinfo.volcano.sh
	W0913 23:30:55.763033       1 cacher.go:171] Terminating all watchers from cacher jobs.batch.volcano.sh
	W0913 23:30:55.817816       1 cacher.go:171] Terminating all watchers from cacher queues.scheduling.volcano.sh
	W0913 23:30:56.108031       1 cacher.go:171] Terminating all watchers from cacher jobflows.flow.volcano.sh
	W0913 23:30:56.433190       1 cacher.go:171] Terminating all watchers from cacher jobtemplates.flow.volcano.sh
	E0913 23:39:36.660216       1 authentication.go:73] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"local-path-provisioner-service-account\" not found]"
	E0913 23:39:36.672145       1 authentication.go:73] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"local-path-provisioner-service-account\" not found]"
	E0913 23:39:36.687509       1 authentication.go:73] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"local-path-provisioner-service-account\" not found]"
	E0913 23:39:51.684378       1 authentication.go:73] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"local-path-provisioner-service-account\" not found]"
	
	
	==> kube-controller-manager [a11957d28d9a] <==
	I0913 23:39:23.953252       1 namespace_controller.go:187] "Namespace has been deleted" logger="namespace-controller" namespace="yakd-dashboard"
	W0913 23:39:25.446320       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0913 23:39:25.446372       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0913 23:39:27.955072       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0913 23:39:27.955112       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0913 23:39:29.503514       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0913 23:39:29.503558       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0913 23:39:29.998642       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0913 23:39:29.998693       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	I0913 23:39:35.987677       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="addons-467916"
	I0913 23:39:36.374245       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="local-path-storage/local-path-provisioner-86d989889c" duration="5.038µs"
	W0913 23:39:43.402769       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0913 23:39:43.402969       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0913 23:39:52.598891       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0913 23:39:52.598932       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0913 23:39:58.850107       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0913 23:39:58.850149       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0913 23:40:00.053132       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0913 23:40:00.053178       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0913 23:40:00.715255       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0913 23:40:00.715306       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0913 23:40:01.943428       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0913 23:40:01.943503       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	I0913 23:40:06.589829       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="addons-467916"
	I0913 23:40:18.087068       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/registry-66c9cd494c" duration="5.227µs"
	
	
	==> kube-proxy [937776f96a50] <==
	I0913 23:27:38.049608       1 server_linux.go:66] "Using iptables proxy"
	I0913 23:27:38.158981       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.49.2"]
	E0913 23:27:38.159043       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0913 23:27:38.206789       1 server.go:243] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0913 23:27:38.206845       1 server_linux.go:169] "Using iptables Proxier"
	I0913 23:27:38.209987       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0913 23:27:38.210256       1 server.go:483] "Version info" version="v1.31.1"
	I0913 23:27:38.210271       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0913 23:27:38.220707       1 config.go:199] "Starting service config controller"
	I0913 23:27:38.220748       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0913 23:27:38.220773       1 config.go:105] "Starting endpoint slice config controller"
	I0913 23:27:38.220778       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0913 23:27:38.221283       1 config.go:328] "Starting node config controller"
	I0913 23:27:38.221294       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0913 23:27:38.321015       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0913 23:27:38.321065       1 shared_informer.go:320] Caches are synced for service config
	I0913 23:27:38.321343       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [a6183f4a9c5f] <==
	I0913 23:27:27.807428       1 serving.go:386] Generated self-signed cert in-memory
	I0913 23:27:30.465087       1 server.go:167] "Starting Kubernetes Scheduler" version="v1.31.1"
	I0913 23:27:30.465487       1 server.go:169] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0913 23:27:30.474758       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0913 23:27:30.477232       1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController
	I0913 23:27:30.477500       1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController
	I0913 23:27:30.477656       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0913 23:27:30.481405       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0913 23:27:30.482475       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0913 23:27:30.481678       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I0913 23:27:30.482770       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file
	I0913 23:27:30.577707       1 shared_informer.go:320] Caches are synced for RequestHeaderAuthRequestController
	I0913 23:27:30.583113       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0913 23:27:30.583910       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file
	
	
	==> kubelet <==
	Sep 13 23:40:11 addons-467916 kubelet[2359]: I0913 23:40:11.541360    2359 scope.go:117] "RemoveContainer" containerID="d30927f55ca868fbba2daad992e3897eaec97ddcf7efb5ee31bd18ebcdfc60fd"
	Sep 13 23:40:11 addons-467916 kubelet[2359]: E0913 23:40:11.542054    2359 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"gadget\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=gadget pod=gadget-pfpfw_gadget(f1a9d96c-1db0-4cd8-90ec-9e7b378ddf3c)\"" pod="gadget/gadget-pfpfw" podUID="f1a9d96c-1db0-4cd8-90ec-9e7b378ddf3c"
	Sep 13 23:40:17 addons-467916 kubelet[2359]: I0913 23:40:17.736452    2359 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-478p6\" (UniqueName: \"kubernetes.io/projected/5bcfe2f0-9700-4683-bf69-c07a4d0cb9c2-kube-api-access-478p6\") pod \"5bcfe2f0-9700-4683-bf69-c07a4d0cb9c2\" (UID: \"5bcfe2f0-9700-4683-bf69-c07a4d0cb9c2\") "
	Sep 13 23:40:17 addons-467916 kubelet[2359]: I0913 23:40:17.736523    2359 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/5bcfe2f0-9700-4683-bf69-c07a4d0cb9c2-gcp-creds\") pod \"5bcfe2f0-9700-4683-bf69-c07a4d0cb9c2\" (UID: \"5bcfe2f0-9700-4683-bf69-c07a4d0cb9c2\") "
	Sep 13 23:40:17 addons-467916 kubelet[2359]: I0913 23:40:17.736643    2359 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5bcfe2f0-9700-4683-bf69-c07a4d0cb9c2-gcp-creds" (OuterVolumeSpecName: "gcp-creds") pod "5bcfe2f0-9700-4683-bf69-c07a4d0cb9c2" (UID: "5bcfe2f0-9700-4683-bf69-c07a4d0cb9c2"). InnerVolumeSpecName "gcp-creds". PluginName "kubernetes.io/host-path", VolumeGidValue ""
	Sep 13 23:40:17 addons-467916 kubelet[2359]: I0913 23:40:17.738489    2359 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5bcfe2f0-9700-4683-bf69-c07a4d0cb9c2-kube-api-access-478p6" (OuterVolumeSpecName: "kube-api-access-478p6") pod "5bcfe2f0-9700-4683-bf69-c07a4d0cb9c2" (UID: "5bcfe2f0-9700-4683-bf69-c07a4d0cb9c2"). InnerVolumeSpecName "kube-api-access-478p6". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Sep 13 23:40:17 addons-467916 kubelet[2359]: I0913 23:40:17.837539    2359 reconciler_common.go:288] "Volume detached for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/5bcfe2f0-9700-4683-bf69-c07a4d0cb9c2-gcp-creds\") on node \"addons-467916\" DevicePath \"\""
	Sep 13 23:40:17 addons-467916 kubelet[2359]: I0913 23:40:17.837596    2359 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-478p6\" (UniqueName: \"kubernetes.io/projected/5bcfe2f0-9700-4683-bf69-c07a4d0cb9c2-kube-api-access-478p6\") on node \"addons-467916\" DevicePath \"\""
	Sep 13 23:40:18 addons-467916 kubelet[2359]: I0913 23:40:18.543452    2359 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mv5pp\" (UniqueName: \"kubernetes.io/projected/7aa69fd1-c981-411c-bdd9-00cacd8b1736-kube-api-access-mv5pp\") pod \"7aa69fd1-c981-411c-bdd9-00cacd8b1736\" (UID: \"7aa69fd1-c981-411c-bdd9-00cacd8b1736\") "
	Sep 13 23:40:18 addons-467916 kubelet[2359]: I0913 23:40:18.555421    2359 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7aa69fd1-c981-411c-bdd9-00cacd8b1736-kube-api-access-mv5pp" (OuterVolumeSpecName: "kube-api-access-mv5pp") pod "7aa69fd1-c981-411c-bdd9-00cacd8b1736" (UID: "7aa69fd1-c981-411c-bdd9-00cacd8b1736"). InnerVolumeSpecName "kube-api-access-mv5pp". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Sep 13 23:40:18 addons-467916 kubelet[2359]: I0913 23:40:18.644784    2359 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cjnqc\" (UniqueName: \"kubernetes.io/projected/a5a96a3a-0fec-4005-b778-df5a7261b085-kube-api-access-cjnqc\") pod \"a5a96a3a-0fec-4005-b778-df5a7261b085\" (UID: \"a5a96a3a-0fec-4005-b778-df5a7261b085\") "
	Sep 13 23:40:18 addons-467916 kubelet[2359]: I0913 23:40:18.644902    2359 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-mv5pp\" (UniqueName: \"kubernetes.io/projected/7aa69fd1-c981-411c-bdd9-00cacd8b1736-kube-api-access-mv5pp\") on node \"addons-467916\" DevicePath \"\""
	Sep 13 23:40:18 addons-467916 kubelet[2359]: I0913 23:40:18.658554    2359 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a5a96a3a-0fec-4005-b778-df5a7261b085-kube-api-access-cjnqc" (OuterVolumeSpecName: "kube-api-access-cjnqc") pod "a5a96a3a-0fec-4005-b778-df5a7261b085" (UID: "a5a96a3a-0fec-4005-b778-df5a7261b085"). InnerVolumeSpecName "kube-api-access-cjnqc". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Sep 13 23:40:18 addons-467916 kubelet[2359]: I0913 23:40:18.746482    2359 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-cjnqc\" (UniqueName: \"kubernetes.io/projected/a5a96a3a-0fec-4005-b778-df5a7261b085-kube-api-access-cjnqc\") on node \"addons-467916\" DevicePath \"\""
	Sep 13 23:40:19 addons-467916 kubelet[2359]: I0913 23:40:19.030469    2359 scope.go:117] "RemoveContainer" containerID="4dcfffebb5f1779832cab591a1ece3f1b6ec3a5a72606f40fecf7587349e4875"
	Sep 13 23:40:19 addons-467916 kubelet[2359]: I0913 23:40:19.080064    2359 scope.go:117] "RemoveContainer" containerID="4dcfffebb5f1779832cab591a1ece3f1b6ec3a5a72606f40fecf7587349e4875"
	Sep 13 23:40:19 addons-467916 kubelet[2359]: E0913 23:40:19.084945    2359 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = Unknown desc = Error response from daemon: No such container: 4dcfffebb5f1779832cab591a1ece3f1b6ec3a5a72606f40fecf7587349e4875" containerID="4dcfffebb5f1779832cab591a1ece3f1b6ec3a5a72606f40fecf7587349e4875"
	Sep 13 23:40:19 addons-467916 kubelet[2359]: I0913 23:40:19.084986    2359 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"docker","ID":"4dcfffebb5f1779832cab591a1ece3f1b6ec3a5a72606f40fecf7587349e4875"} err="failed to get container status \"4dcfffebb5f1779832cab591a1ece3f1b6ec3a5a72606f40fecf7587349e4875\": rpc error: code = Unknown desc = Error response from daemon: No such container: 4dcfffebb5f1779832cab591a1ece3f1b6ec3a5a72606f40fecf7587349e4875"
	Sep 13 23:40:19 addons-467916 kubelet[2359]: I0913 23:40:19.085008    2359 scope.go:117] "RemoveContainer" containerID="da9a19e162a3dda6dc0d4665e03997491f6495ccd0feb59be5d02a31c8fc5935"
	Sep 13 23:40:19 addons-467916 kubelet[2359]: I0913 23:40:19.111367    2359 scope.go:117] "RemoveContainer" containerID="da9a19e162a3dda6dc0d4665e03997491f6495ccd0feb59be5d02a31c8fc5935"
	Sep 13 23:40:19 addons-467916 kubelet[2359]: E0913 23:40:19.112519    2359 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = Unknown desc = Error response from daemon: No such container: da9a19e162a3dda6dc0d4665e03997491f6495ccd0feb59be5d02a31c8fc5935" containerID="da9a19e162a3dda6dc0d4665e03997491f6495ccd0feb59be5d02a31c8fc5935"
	Sep 13 23:40:19 addons-467916 kubelet[2359]: I0913 23:40:19.112552    2359 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"docker","ID":"da9a19e162a3dda6dc0d4665e03997491f6495ccd0feb59be5d02a31c8fc5935"} err="failed to get container status \"da9a19e162a3dda6dc0d4665e03997491f6495ccd0feb59be5d02a31c8fc5935\": rpc error: code = Unknown desc = Error response from daemon: No such container: da9a19e162a3dda6dc0d4665e03997491f6495ccd0feb59be5d02a31c8fc5935"
	Sep 13 23:40:19 addons-467916 kubelet[2359]: I0913 23:40:19.552168    2359 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5bcfe2f0-9700-4683-bf69-c07a4d0cb9c2" path="/var/lib/kubelet/pods/5bcfe2f0-9700-4683-bf69-c07a4d0cb9c2/volumes"
	Sep 13 23:40:19 addons-467916 kubelet[2359]: I0913 23:40:19.552661    2359 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7aa69fd1-c981-411c-bdd9-00cacd8b1736" path="/var/lib/kubelet/pods/7aa69fd1-c981-411c-bdd9-00cacd8b1736/volumes"
	Sep 13 23:40:19 addons-467916 kubelet[2359]: I0913 23:40:19.553339    2359 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a5a96a3a-0fec-4005-b778-df5a7261b085" path="/var/lib/kubelet/pods/a5a96a3a-0fec-4005-b778-df5a7261b085/volumes"
	
	
	==> storage-provisioner [22405ec6ec81] <==
	I0913 23:27:43.325891       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0913 23:27:43.365135       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0913 23:27:43.365185       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0913 23:27:43.384125       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0913 23:27:43.384379       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"dd36235c-6978-47cd-9345-3198b4fd9f69", APIVersion:"v1", ResourceVersion:"597", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' addons-467916_26622722-f067-4b70-bf16-4657722839b7 became leader
	I0913 23:27:43.384931       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_addons-467916_26622722-f067-4b70-bf16-4657722839b7!
	I0913 23:27:43.485424       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_addons-467916_26622722-f067-4b70-bf16-4657722839b7!
	

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p addons-467916 -n addons-467916
helpers_test.go:261: (dbg) Run:  kubectl --context addons-467916 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: busybox ingress-nginx-admission-create-dlp7n ingress-nginx-admission-patch-q6ltk
helpers_test.go:274: ======> post-mortem[TestAddons/parallel/Registry]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context addons-467916 describe pod busybox ingress-nginx-admission-create-dlp7n ingress-nginx-admission-patch-q6ltk
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context addons-467916 describe pod busybox ingress-nginx-admission-create-dlp7n ingress-nginx-admission-patch-q6ltk: exit status 1 (109.263905ms)

-- stdout --
	Name:             busybox
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             addons-467916/192.168.49.2
	Start Time:       Fri, 13 Sep 2024 23:31:04 +0000
	Labels:           integration-test=busybox
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.27
	IPs:
	  IP:  10.244.0.27
	Containers:
	  busybox:
	    Container ID:  
	    Image:         gcr.io/k8s-minikube/busybox:1.28.4-glibc
	    Image ID:      
	    Port:          <none>
	    Host Port:     <none>
	    Command:
	      sleep
	      3600
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:
	      GOOGLE_APPLICATION_CREDENTIALS:  /google-app-creds.json
	      PROJECT_ID:                      this_is_fake
	      GCP_PROJECT:                     this_is_fake
	      GCLOUD_PROJECT:                  this_is_fake
	      GOOGLE_CLOUD_PROJECT:            this_is_fake
	      CLOUDSDK_CORE_PROJECT:           this_is_fake
	    Mounts:
	      /google-app-creds.json from gcp-creds (ro)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-tp6tx (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-tp6tx:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	  gcp-creds:
	    Type:          HostPath (bare host directory volume)
	    Path:          /var/lib/minikube/google_application_credentials.json
	    HostPathType:  File
	QoS Class:         BestEffort
	Node-Selectors:    <none>
	Tolerations:       node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                   node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                     From               Message
	  ----     ------     ----                    ----               -------
	  Normal   Scheduled  9m16s                   default-scheduler  Successfully assigned default/busybox to addons-467916
	  Normal   Pulling    8m (x4 over 9m16s)      kubelet            Pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"
	  Warning  Failed     8m (x4 over 9m16s)      kubelet            Failed to pull image "gcr.io/k8s-minikube/busybox:1.28.4-glibc": Error response from daemon: Head "https://gcr.io/v2/k8s-minikube/busybox/manifests/1.28.4-glibc": unauthorized: authentication failed
	  Warning  Failed     8m (x4 over 9m16s)      kubelet            Error: ErrImagePull
	  Warning  Failed     7m33s (x6 over 9m15s)   kubelet            Error: ImagePullBackOff
	  Normal   BackOff    4m15s (x20 over 9m15s)  kubelet            Back-off pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"

-- /stdout --
** stderr ** 
	Error from server (NotFound): pods "ingress-nginx-admission-create-dlp7n" not found
	Error from server (NotFound): pods "ingress-nginx-admission-patch-q6ltk" not found

** /stderr **
helpers_test.go:279: kubectl --context addons-467916 describe pod busybox ingress-nginx-admission-create-dlp7n ingress-nginx-admission-patch-q6ltk: exit status 1
--- FAIL: TestAddons/parallel/Registry (73.52s)


Test pass (318/343)

Order passed test Duration
3 TestDownloadOnly/v1.20.0/json-events 13.45
4 TestDownloadOnly/v1.20.0/preload-exists 0
8 TestDownloadOnly/v1.20.0/LogsDuration 0.07
9 TestDownloadOnly/v1.20.0/DeleteAll 0.2
10 TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds 0.13
12 TestDownloadOnly/v1.31.1/json-events 5.91
13 TestDownloadOnly/v1.31.1/preload-exists 0
17 TestDownloadOnly/v1.31.1/LogsDuration 0.07
18 TestDownloadOnly/v1.31.1/DeleteAll 0.2
19 TestDownloadOnly/v1.31.1/DeleteAlwaysSucceeds 0.13
21 TestBinaryMirror 0.57
22 TestOffline 86.68
25 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.08
26 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.07
27 TestAddons/Setup 223.14
29 TestAddons/serial/Volcano 41.07
31 TestAddons/serial/GCPAuth/Namespaces 0.18
34 TestAddons/parallel/Ingress 20.41
35 TestAddons/parallel/InspektorGadget 10.85
36 TestAddons/parallel/MetricsServer 6.68
39 TestAddons/parallel/CSI 63.88
40 TestAddons/parallel/Headlamp 17.66
41 TestAddons/parallel/CloudSpanner 5.64
42 TestAddons/parallel/LocalPath 53.65
43 TestAddons/parallel/NvidiaDevicePlugin 6.45
44 TestAddons/parallel/Yakd 11.93
45 TestAddons/StoppedEnableDisable 6.11
46 TestCertOptions 34.59
47 TestCertExpiration 250.76
48 TestDockerFlags 35.34
49 TestForceSystemdFlag 41.57
50 TestForceSystemdEnv 45.81
56 TestErrorSpam/setup 34.09
57 TestErrorSpam/start 0.73
58 TestErrorSpam/status 0.96
59 TestErrorSpam/pause 1.33
60 TestErrorSpam/unpause 1.46
61 TestErrorSpam/stop 10.95
64 TestFunctional/serial/CopySyncFile 0
65 TestFunctional/serial/StartWithProxy 43.61
66 TestFunctional/serial/AuditLog 0
67 TestFunctional/serial/SoftStart 37.58
68 TestFunctional/serial/KubeContext 0.07
69 TestFunctional/serial/KubectlGetPods 0.1
72 TestFunctional/serial/CacheCmd/cache/add_remote 3.46
73 TestFunctional/serial/CacheCmd/cache/add_local 0.96
74 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.05
75 TestFunctional/serial/CacheCmd/cache/list 0.05
76 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.33
77 TestFunctional/serial/CacheCmd/cache/cache_reload 1.63
78 TestFunctional/serial/CacheCmd/cache/delete 0.11
79 TestFunctional/serial/MinikubeKubectlCmd 0.15
80 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.17
81 TestFunctional/serial/ExtraConfig 39.17
82 TestFunctional/serial/ComponentHealth 0.12
83 TestFunctional/serial/LogsCmd 1.16
84 TestFunctional/serial/LogsFileCmd 1.24
85 TestFunctional/serial/InvalidService 4.59
87 TestFunctional/parallel/ConfigCmd 0.47
88 TestFunctional/parallel/DashboardCmd 10.7
89 TestFunctional/parallel/DryRun 0.43
90 TestFunctional/parallel/InternationalLanguage 0.19
91 TestFunctional/parallel/StatusCmd 1.17
95 TestFunctional/parallel/ServiceCmdConnect 11.67
96 TestFunctional/parallel/AddonsCmd 0.17
97 TestFunctional/parallel/PersistentVolumeClaim 25.37
99 TestFunctional/parallel/SSHCmd 0.7
100 TestFunctional/parallel/CpCmd 2.29
102 TestFunctional/parallel/FileSync 0.38
103 TestFunctional/parallel/CertSync 2.1
107 TestFunctional/parallel/NodeLabels 0.11
109 TestFunctional/parallel/NonActiveRuntimeDisabled 0.31
111 TestFunctional/parallel/License 0.28
113 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.58
114 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0
116 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 9.51
117 TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP 0.11
118 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0
122 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.11
123 TestFunctional/parallel/ServiceCmd/DeployApp 7.22
124 TestFunctional/parallel/ProfileCmd/profile_not_create 0.41
125 TestFunctional/parallel/ProfileCmd/profile_list 0.38
126 TestFunctional/parallel/ProfileCmd/profile_json_output 0.39
127 TestFunctional/parallel/MountCmd/any-port 8.33
128 TestFunctional/parallel/ServiceCmd/List 0.62
129 TestFunctional/parallel/ServiceCmd/JSONOutput 0.57
130 TestFunctional/parallel/ServiceCmd/HTTPS 0.38
131 TestFunctional/parallel/ServiceCmd/Format 0.36
132 TestFunctional/parallel/ServiceCmd/URL 0.39
133 TestFunctional/parallel/MountCmd/specific-port 2.16
134 TestFunctional/parallel/MountCmd/VerifyCleanup 2.76
135 TestFunctional/parallel/Version/short 0.09
136 TestFunctional/parallel/Version/components 1.13
137 TestFunctional/parallel/ImageCommands/ImageListShort 0.23
138 TestFunctional/parallel/ImageCommands/ImageListTable 0.23
139 TestFunctional/parallel/ImageCommands/ImageListJson 0.25
140 TestFunctional/parallel/ImageCommands/ImageListYaml 0.24
141 TestFunctional/parallel/ImageCommands/ImageBuild 3.28
142 TestFunctional/parallel/ImageCommands/Setup 0.77
143 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 1.22
144 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 0.85
145 TestFunctional/parallel/UpdateContextCmd/no_changes 0.23
146 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.2
147 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.17
148 TestFunctional/parallel/DockerEnv/bash 1.26
149 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 1.25
150 TestFunctional/parallel/ImageCommands/ImageSaveToFile 0.39
151 TestFunctional/parallel/ImageCommands/ImageRemove 0.52
152 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 0.77
153 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 0.48
154 TestFunctional/delete_echo-server_images 0.04
155 TestFunctional/delete_my-image_image 0.01
156 TestFunctional/delete_minikube_cached_images 0.02
160 TestMultiControlPlane/serial/StartCluster 122.42
161 TestMultiControlPlane/serial/DeployApp 7.69
162 TestMultiControlPlane/serial/PingHostFromPods 1.73
163 TestMultiControlPlane/serial/AddWorkerNode 25.26
164 TestMultiControlPlane/serial/NodeLabels 0.1
165 TestMultiControlPlane/serial/HAppyAfterClusterStart 0.76
166 TestMultiControlPlane/serial/CopyFile 19.15
167 TestMultiControlPlane/serial/StopSecondaryNode 11.8
168 TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop 0.54
169 TestMultiControlPlane/serial/RestartSecondaryNode 41.23
170 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart 4.29
171 TestMultiControlPlane/serial/RestartClusterKeepsNodes 172.14
172 TestMultiControlPlane/serial/DeleteSecondaryNode 11.58
173 TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete 0.53
174 TestMultiControlPlane/serial/StopCluster 32.95
175 TestMultiControlPlane/serial/RestartCluster 154.69
176 TestMultiControlPlane/serial/DegradedAfterClusterRestart 0.53
177 TestMultiControlPlane/serial/AddSecondaryNode 45.71
178 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd 0.82
181 TestImageBuild/serial/Setup 31.87
182 TestImageBuild/serial/NormalBuild 1.74
183 TestImageBuild/serial/BuildWithBuildArg 0.94
184 TestImageBuild/serial/BuildWithDockerIgnore 0.77
185 TestImageBuild/serial/BuildWithSpecifiedDockerfile 0.92
189 TestJSONOutput/start/Command 42.31
190 TestJSONOutput/start/Audit 0
192 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
193 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
195 TestJSONOutput/pause/Command 0.64
196 TestJSONOutput/pause/Audit 0
198 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
199 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
201 TestJSONOutput/unpause/Command 0.54
202 TestJSONOutput/unpause/Audit 0
204 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
205 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
207 TestJSONOutput/stop/Command 5.75
208 TestJSONOutput/stop/Audit 0
210 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
211 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
212 TestErrorJSONOutput 0.22
214 TestKicCustomNetwork/create_custom_network 32.22
215 TestKicCustomNetwork/use_default_bridge_network 35.18
216 TestKicExistingNetwork 34.62
217 TestKicCustomSubnet 33.21
218 TestKicStaticIP 34.07
219 TestMainNoArgs 0.07
220 TestMinikubeProfile 73.68
223 TestMountStart/serial/StartWithMountFirst 7.87
224 TestMountStart/serial/VerifyMountFirst 0.24
225 TestMountStart/serial/StartWithMountSecond 8.01
226 TestMountStart/serial/VerifyMountSecond 0.25
227 TestMountStart/serial/DeleteFirst 1.48
228 TestMountStart/serial/VerifyMountPostDelete 0.25
229 TestMountStart/serial/Stop 1.21
230 TestMountStart/serial/RestartStopped 8.29
231 TestMountStart/serial/VerifyMountPostStop 0.26
234 TestMultiNode/serial/FreshStart2Nodes 86.9
235 TestMultiNode/serial/DeployApp2Nodes 39
236 TestMultiNode/serial/PingHostFrom2Pods 1.04
237 TestMultiNode/serial/AddNode 17.46
238 TestMultiNode/serial/MultiNodeLabels 0.13
239 TestMultiNode/serial/ProfileList 0.37
240 TestMultiNode/serial/CopyFile 10.07
241 TestMultiNode/serial/StopNode 2.25
242 TestMultiNode/serial/StartAfterStop 11.02
243 TestMultiNode/serial/RestartKeepsNodes 96.95
244 TestMultiNode/serial/DeleteNode 5.71
245 TestMultiNode/serial/StopMultiNode 21.72
246 TestMultiNode/serial/RestartMultiNode 54.97
247 TestMultiNode/serial/ValidateNameConflict 37.62
252 TestPreload 137.83
254 TestScheduledStopUnix 106.39
255 TestSkaffold 120.31
257 TestInsufficientStorage 10.63
258 TestRunningBinaryUpgrade 95.93
260 TestKubernetesUpgrade 384.46
261 TestMissingContainerUpgrade 165.84
273 TestStoppedBinaryUpgrade/Setup 0.89
274 TestStoppedBinaryUpgrade/Upgrade 86.13
275 TestStoppedBinaryUpgrade/MinikubeLogs 1.64
284 TestPause/serial/Start 74.24
286 TestNoKubernetes/serial/StartNoK8sWithVersion 0.08
287 TestNoKubernetes/serial/StartWithK8s 37.69
288 TestPause/serial/SecondStartNoReconfiguration 34.88
289 TestNoKubernetes/serial/StartWithStopK8s 17.65
290 TestPause/serial/Pause 0.61
291 TestPause/serial/VerifyStatus 0.33
292 TestPause/serial/Unpause 0.56
293 TestPause/serial/PauseAgain 0.8
294 TestPause/serial/DeletePaused 2.12
295 TestPause/serial/VerifyDeletedResources 15.32
296 TestNoKubernetes/serial/Start 7.22
297 TestNoKubernetes/serial/VerifyK8sNotRunning 0.35
298 TestNoKubernetes/serial/ProfileList 0.81
299 TestNoKubernetes/serial/Stop 1.37
300 TestNoKubernetes/serial/StartNoArgs 7.98
301 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.34
302 TestNetworkPlugins/group/auto/Start 44.79
303 TestNetworkPlugins/group/auto/KubeletFlags 0.3
304 TestNetworkPlugins/group/auto/NetCatPod 9.29
305 TestNetworkPlugins/group/auto/DNS 0.22
306 TestNetworkPlugins/group/auto/Localhost 0.17
307 TestNetworkPlugins/group/auto/HairPin 0.17
308 TestNetworkPlugins/group/flannel/Start 57.85
309 TestNetworkPlugins/group/calico/Start 69.86
310 TestNetworkPlugins/group/flannel/ControllerPod 6.01
311 TestNetworkPlugins/group/flannel/KubeletFlags 0.61
312 TestNetworkPlugins/group/flannel/NetCatPod 12.3
313 TestNetworkPlugins/group/flannel/DNS 0.31
314 TestNetworkPlugins/group/flannel/Localhost 0.21
315 TestNetworkPlugins/group/flannel/HairPin 0.23
316 TestNetworkPlugins/group/custom-flannel/Start 56.7
317 TestNetworkPlugins/group/calico/ControllerPod 6.01
318 TestNetworkPlugins/group/calico/KubeletFlags 0.35
319 TestNetworkPlugins/group/calico/NetCatPod 11.37
320 TestNetworkPlugins/group/calico/DNS 0.62
321 TestNetworkPlugins/group/calico/Localhost 0.32
322 TestNetworkPlugins/group/calico/HairPin 0.27
323 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.39
324 TestNetworkPlugins/group/custom-flannel/NetCatPod 12.38
325 TestNetworkPlugins/group/false/Start 50.52
326 TestNetworkPlugins/group/custom-flannel/DNS 0.27
327 TestNetworkPlugins/group/custom-flannel/Localhost 0.28
328 TestNetworkPlugins/group/custom-flannel/HairPin 0.2
329 TestNetworkPlugins/group/kindnet/Start 63.31
330 TestNetworkPlugins/group/false/KubeletFlags 0.36
331 TestNetworkPlugins/group/false/NetCatPod 13.37
332 TestNetworkPlugins/group/false/DNS 0.2
333 TestNetworkPlugins/group/false/Localhost 0.22
334 TestNetworkPlugins/group/false/HairPin 0.25
335 TestNetworkPlugins/group/bridge/Start 77.55
336 TestNetworkPlugins/group/kindnet/ControllerPod 6.01
337 TestNetworkPlugins/group/kindnet/KubeletFlags 0.35
338 TestNetworkPlugins/group/kindnet/NetCatPod 11.34
339 TestNetworkPlugins/group/kindnet/DNS 0.34
340 TestNetworkPlugins/group/kindnet/Localhost 0.33
341 TestNetworkPlugins/group/kindnet/HairPin 0.21
342 TestNetworkPlugins/group/enable-default-cni/Start 74.44
343 TestNetworkPlugins/group/bridge/KubeletFlags 0.42
344 TestNetworkPlugins/group/bridge/NetCatPod 12.41
345 TestNetworkPlugins/group/bridge/DNS 0.2
346 TestNetworkPlugins/group/bridge/Localhost 0.16
347 TestNetworkPlugins/group/bridge/HairPin 0.18
348 TestNetworkPlugins/group/kubenet/Start 78.92
349 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.38
350 TestNetworkPlugins/group/enable-default-cni/NetCatPod 11.35
351 TestNetworkPlugins/group/enable-default-cni/DNS 0.23
352 TestNetworkPlugins/group/enable-default-cni/Localhost 0.32
353 TestNetworkPlugins/group/enable-default-cni/HairPin 0.27
355 TestStartStop/group/old-k8s-version/serial/FirstStart 153.42
356 TestNetworkPlugins/group/kubenet/KubeletFlags 0.36
357 TestNetworkPlugins/group/kubenet/NetCatPod 10.35
358 TestNetworkPlugins/group/kubenet/DNS 0.21
359 TestNetworkPlugins/group/kubenet/Localhost 0.2
360 TestNetworkPlugins/group/kubenet/HairPin 0.24
362 TestStartStop/group/no-preload/serial/FirstStart 55.81
363 TestStartStop/group/no-preload/serial/DeployApp 9.39
364 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 1.12
365 TestStartStop/group/no-preload/serial/Stop 11.09
366 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.22
367 TestStartStop/group/no-preload/serial/SecondStart 291.45
368 TestStartStop/group/old-k8s-version/serial/DeployApp 9.69
369 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 1.31
370 TestStartStop/group/old-k8s-version/serial/Stop 11.19
371 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.19
372 TestStartStop/group/old-k8s-version/serial/SecondStart 131.32
373 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 6
374 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 5.11
375 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.24
376 TestStartStop/group/old-k8s-version/serial/Pause 2.77
378 TestStartStop/group/embed-certs/serial/FirstStart 45.33
379 TestStartStop/group/embed-certs/serial/DeployApp 8.39
380 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 1.13
381 TestStartStop/group/embed-certs/serial/Stop 11.04
382 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.19
383 TestStartStop/group/embed-certs/serial/SecondStart 268.11
384 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 6.01
385 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 5.21
386 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.23
387 TestStartStop/group/no-preload/serial/Pause 2.91
389 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 78.26
390 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 8.4
391 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 1.07
392 TestStartStop/group/default-k8s-diff-port/serial/Stop 11.2
393 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.19
394 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 267.06
395 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 6.01
396 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 5.19
397 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.25
398 TestStartStop/group/embed-certs/serial/Pause 2.85
400 TestStartStop/group/newest-cni/serial/FirstStart 37.45
401 TestStartStop/group/newest-cni/serial/DeployApp 0
402 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 1.26
403 TestStartStop/group/newest-cni/serial/Stop 9.55
404 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.19
405 TestStartStop/group/newest-cni/serial/SecondStart 18.37
406 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
407 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
408 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.26
409 TestStartStop/group/newest-cni/serial/Pause 3.25
410 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 6
411 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 5.1
412 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.22
413 TestStartStop/group/default-k8s-diff-port/serial/Pause 2.67
TestDownloadOnly/v1.20.0/json-events (13.45s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-arm64 start -o=json --download-only -p download-only-385391 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=docker --driver=docker  --container-runtime=docker
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-arm64 start -o=json --download-only -p download-only-385391 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=docker --driver=docker  --container-runtime=docker: (13.447853625s)
--- PASS: TestDownloadOnly/v1.20.0/json-events (13.45s)

                                                
                                    
TestDownloadOnly/v1.20.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/preload-exists
--- PASS: TestDownloadOnly/v1.20.0/preload-exists (0.00s)

                                                
                                    
TestDownloadOnly/v1.20.0/LogsDuration (0.07s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-arm64 logs -p download-only-385391
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-arm64 logs -p download-only-385391: exit status 85 (69.850786ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-385391 | jenkins | v1.34.0 | 13 Sep 24 23:26 UTC |          |
	|         | -p download-only-385391        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |          |
	|         | --container-runtime=docker     |                      |         |         |                     |          |
	|         | --driver=docker                |                      |         |         |                     |          |
	|         | --container-runtime=docker     |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/13 23:26:17
	Running on machine: ip-172-31-30-239
	Binary: Built with gc go1.23.0 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0913 23:26:17.997674    7541 out.go:345] Setting OutFile to fd 1 ...
	I0913 23:26:17.997821    7541 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0913 23:26:17.997832    7541 out.go:358] Setting ErrFile to fd 2...
	I0913 23:26:17.997838    7541 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0913 23:26:17.998069    7541 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19640-2224/.minikube/bin
	W0913 23:26:17.998206    7541 root.go:314] Error reading config file at /home/jenkins/minikube-integration/19640-2224/.minikube/config/config.json: open /home/jenkins/minikube-integration/19640-2224/.minikube/config/config.json: no such file or directory
	I0913 23:26:17.998598    7541 out.go:352] Setting JSON to true
	I0913 23:26:17.999376    7541 start.go:129] hostinfo: {"hostname":"ip-172-31-30-239","uptime":526,"bootTime":1726269452,"procs":149,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1069-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I0913 23:26:17.999455    7541 start.go:139] virtualization:  
	I0913 23:26:18.004739    7541 out.go:97] [download-only-385391] minikube v1.34.0 on Ubuntu 20.04 (arm64)
	W0913 23:26:18.004991    7541 preload.go:293] Failed to list preload files: open /home/jenkins/minikube-integration/19640-2224/.minikube/cache/preloaded-tarball: no such file or directory
	I0913 23:26:18.005053    7541 notify.go:220] Checking for updates...
	I0913 23:26:18.012204    7541 out.go:169] MINIKUBE_LOCATION=19640
	I0913 23:26:18.014433    7541 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0913 23:26:18.016746    7541 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/19640-2224/kubeconfig
	I0913 23:26:18.018882    7541 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/19640-2224/.minikube
	I0913 23:26:18.020815    7541 out.go:169] MINIKUBE_BIN=out/minikube-linux-arm64
	W0913 23:26:18.024910    7541 out.go:321] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0913 23:26:18.025168    7541 driver.go:394] Setting default libvirt URI to qemu:///system
	I0913 23:26:18.053934    7541 docker.go:123] docker version: linux-27.2.1:Docker Engine - Community
	I0913 23:26:18.054055    7541 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0913 23:26:18.371245    7541 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:29 OomKillDisable:true NGoroutines:52 SystemTime:2024-09-13 23:26:18.361672373 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1069-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aar
ch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214827008 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:27.2.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErro
rs:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.2]] Warnings:<nil>}}
	I0913 23:26:18.371365    7541 docker.go:318] overlay module found
	I0913 23:26:18.373426    7541 out.go:97] Using the docker driver based on user configuration
	I0913 23:26:18.373458    7541 start.go:297] selected driver: docker
	I0913 23:26:18.373465    7541 start.go:901] validating driver "docker" against <nil>
	I0913 23:26:18.373565    7541 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0913 23:26:18.429303    7541 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:29 OomKillDisable:true NGoroutines:52 SystemTime:2024-09-13 23:26:18.420557591 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1069-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aar
ch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214827008 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:27.2.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErro
rs:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.2]] Warnings:<nil>}}
	I0913 23:26:18.429495    7541 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0913 23:26:18.429798    7541 start_flags.go:393] Using suggested 2200MB memory alloc based on sys=7834MB, container=7834MB
	I0913 23:26:18.429994    7541 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0913 23:26:18.432562    7541 out.go:169] Using Docker driver with root privileges
	I0913 23:26:18.434594    7541 cni.go:84] Creating CNI manager for ""
	I0913 23:26:18.434668    7541 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0913 23:26:18.434747    7541 start.go:340] cluster config:
	{Name:download-only-385391 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726243947-19640@sha256:bb1287c9c0ec51ba7d8272f0f8073d6e9758ad79ff87c787fdce1c3513743243 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:download-only-385391 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Co
ntainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0913 23:26:18.436852    7541 out.go:97] Starting "download-only-385391" primary control-plane node in "download-only-385391" cluster
	I0913 23:26:18.436892    7541 cache.go:121] Beginning downloading kic base image for docker with docker
	I0913 23:26:18.438861    7541 out.go:97] Pulling base image v0.0.45-1726243947-19640 ...
	I0913 23:26:18.438886    7541 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0913 23:26:18.439002    7541 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726243947-19640@sha256:bb1287c9c0ec51ba7d8272f0f8073d6e9758ad79ff87c787fdce1c3513743243 in local docker daemon
	I0913 23:26:18.454116    7541 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726243947-19640@sha256:bb1287c9c0ec51ba7d8272f0f8073d6e9758ad79ff87c787fdce1c3513743243 to local cache
	I0913 23:26:18.454299    7541 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726243947-19640@sha256:bb1287c9c0ec51ba7d8272f0f8073d6e9758ad79ff87c787fdce1c3513743243 in local cache directory
	I0913 23:26:18.454409    7541 image.go:148] Writing gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726243947-19640@sha256:bb1287c9c0ec51ba7d8272f0f8073d6e9758ad79ff87c787fdce1c3513743243 to local cache
	I0913 23:26:18.521523    7541 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
	I0913 23:26:18.521560    7541 cache.go:56] Caching tarball of preloaded images
	I0913 23:26:18.521725    7541 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0913 23:26:18.524142    7541 out.go:97] Downloading Kubernetes v1.20.0 preload ...
	I0913 23:26:18.524175    7541 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 ...
	I0913 23:26:18.612456    7541 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4?checksum=md5:1a3e8f9b29e6affec63d76d0d3000942 -> /home/jenkins/minikube-integration/19640-2224/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
	I0913 23:26:22.596861    7541 preload.go:247] saving checksum for preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 ...
	I0913 23:26:22.596993    7541 preload.go:254] verifying checksum of /home/jenkins/minikube-integration/19640-2224/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 ...
	I0913 23:26:23.637835    7541 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on docker
	I0913 23:26:23.638262    7541 profile.go:143] Saving config to /home/jenkins/minikube-integration/19640-2224/.minikube/profiles/download-only-385391/config.json ...
	I0913 23:26:23.638299    7541 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19640-2224/.minikube/profiles/download-only-385391/config.json: {Name:mk17581acae8b5817ba3b1ad24ae1229fb66ddb5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0913 23:26:23.638475    7541 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0913 23:26:23.638658    7541 download.go:107] Downloading: https://dl.k8s.io/release/v1.20.0/bin/linux/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/linux/arm64/kubectl.sha256 -> /home/jenkins/minikube-integration/19640-2224/.minikube/cache/linux/arm64/v1.20.0/kubectl
	
	
	* The control-plane node download-only-385391 host does not exist
	  To start a cluster, run: "minikube start -p download-only-385391"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.20.0/LogsDuration (0.07s)

                                                
                                    
TestDownloadOnly/v1.20.0/DeleteAll (0.2s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-arm64 delete --all
--- PASS: TestDownloadOnly/v1.20.0/DeleteAll (0.20s)

                                                
                                    
TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.13s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-arm64 delete -p download-only-385391
--- PASS: TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.13s)

                                                
                                    
TestDownloadOnly/v1.31.1/json-events (5.91s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.1/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-arm64 start -o=json --download-only -p download-only-915155 --force --alsologtostderr --kubernetes-version=v1.31.1 --container-runtime=docker --driver=docker  --container-runtime=docker
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-arm64 start -o=json --download-only -p download-only-915155 --force --alsologtostderr --kubernetes-version=v1.31.1 --container-runtime=docker --driver=docker  --container-runtime=docker: (5.907207142s)
--- PASS: TestDownloadOnly/v1.31.1/json-events (5.91s)

                                                
                                    
TestDownloadOnly/v1.31.1/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.1/preload-exists
--- PASS: TestDownloadOnly/v1.31.1/preload-exists (0.00s)

                                                
                                    
TestDownloadOnly/v1.31.1/LogsDuration (0.07s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.1/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-arm64 logs -p download-only-915155
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-arm64 logs -p download-only-915155: exit status 85 (67.813564ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only        | download-only-385391 | jenkins | v1.34.0 | 13 Sep 24 23:26 UTC |                     |
	|         | -p download-only-385391        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |                     |
	|         | --container-runtime=docker     |                      |         |         |                     |                     |
	|         | --driver=docker                |                      |         |         |                     |                     |
	|         | --container-runtime=docker     |                      |         |         |                     |                     |
	| delete  | --all                          | minikube             | jenkins | v1.34.0 | 13 Sep 24 23:26 UTC | 13 Sep 24 23:26 UTC |
	| delete  | -p download-only-385391        | download-only-385391 | jenkins | v1.34.0 | 13 Sep 24 23:26 UTC | 13 Sep 24 23:26 UTC |
	| start   | -o=json --download-only        | download-only-915155 | jenkins | v1.34.0 | 13 Sep 24 23:26 UTC |                     |
	|         | -p download-only-915155        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1   |                      |         |         |                     |                     |
	|         | --container-runtime=docker     |                      |         |         |                     |                     |
	|         | --driver=docker                |                      |         |         |                     |                     |
	|         | --container-runtime=docker     |                      |         |         |                     |                     |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/13 23:26:31
	Running on machine: ip-172-31-30-239
	Binary: Built with gc go1.23.0 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0913 23:26:31.842624    7740 out.go:345] Setting OutFile to fd 1 ...
	I0913 23:26:31.842745    7740 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0913 23:26:31.842753    7740 out.go:358] Setting ErrFile to fd 2...
	I0913 23:26:31.842759    7740 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0913 23:26:31.843023    7740 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19640-2224/.minikube/bin
	I0913 23:26:31.843430    7740 out.go:352] Setting JSON to true
	I0913 23:26:31.844167    7740 start.go:129] hostinfo: {"hostname":"ip-172-31-30-239","uptime":540,"bootTime":1726269452,"procs":146,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1069-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I0913 23:26:31.844235    7740 start.go:139] virtualization:  
	I0913 23:26:31.847358    7740 out.go:97] [download-only-915155] minikube v1.34.0 on Ubuntu 20.04 (arm64)
	I0913 23:26:31.847561    7740 notify.go:220] Checking for updates...
	I0913 23:26:31.850009    7740 out.go:169] MINIKUBE_LOCATION=19640
	I0913 23:26:31.852148    7740 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0913 23:26:31.854480    7740 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/19640-2224/kubeconfig
	I0913 23:26:31.856665    7740 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/19640-2224/.minikube
	I0913 23:26:31.858823    7740 out.go:169] MINIKUBE_BIN=out/minikube-linux-arm64
	W0913 23:26:31.862866    7740 out.go:321] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0913 23:26:31.863128    7740 driver.go:394] Setting default libvirt URI to qemu:///system
	I0913 23:26:31.893839    7740 docker.go:123] docker version: linux-27.2.1:Docker Engine - Community
	I0913 23:26:31.893958    7740 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0913 23:26:31.956708    7740 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:29 OomKillDisable:true NGoroutines:44 SystemTime:2024-09-13 23:26:31.946930096 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1069-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214827008 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:27.2.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.2]] Warnings:<nil>}}
	I0913 23:26:31.956818    7740 docker.go:318] overlay module found
	I0913 23:26:31.959698    7740 out.go:97] Using the docker driver based on user configuration
	I0913 23:26:31.959731    7740 start.go:297] selected driver: docker
	I0913 23:26:31.959739    7740 start.go:901] validating driver "docker" against <nil>
	I0913 23:26:31.959845    7740 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0913 23:26:32.017068    7740 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:29 OomKillDisable:true NGoroutines:44 SystemTime:2024-09-13 23:26:32.006797373 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1069-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214827008 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:27.2.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.2]] Warnings:<nil>}}
	I0913 23:26:32.017271    7740 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0913 23:26:32.017544    7740 start_flags.go:393] Using suggested 2200MB memory alloc based on sys=7834MB, container=7834MB
	I0913 23:26:32.017699    7740 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0913 23:26:32.020424    7740 out.go:169] Using Docker driver with root privileges
	I0913 23:26:32.022856    7740 cni.go:84] Creating CNI manager for ""
	I0913 23:26:32.022933    7740 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0913 23:26:32.022945    7740 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0913 23:26:32.023034    7740 start.go:340] cluster config:
	{Name:download-only-915155 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726243947-19640@sha256:bb1287c9c0ec51ba7d8272f0f8073d6e9758ad79ff87c787fdce1c3513743243 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:download-only-915155 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0913 23:26:32.025820    7740 out.go:97] Starting "download-only-915155" primary control-plane node in "download-only-915155" cluster
	I0913 23:26:32.025855    7740 cache.go:121] Beginning downloading kic base image for docker with docker
	I0913 23:26:32.028141    7740 out.go:97] Pulling base image v0.0.45-1726243947-19640 ...
	I0913 23:26:32.028174    7740 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0913 23:26:32.028329    7740 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726243947-19640@sha256:bb1287c9c0ec51ba7d8272f0f8073d6e9758ad79ff87c787fdce1c3513743243 in local docker daemon
	I0913 23:26:32.043506    7740 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726243947-19640@sha256:bb1287c9c0ec51ba7d8272f0f8073d6e9758ad79ff87c787fdce1c3513743243 to local cache
	I0913 23:26:32.043677    7740 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726243947-19640@sha256:bb1287c9c0ec51ba7d8272f0f8073d6e9758ad79ff87c787fdce1c3513743243 in local cache directory
	I0913 23:26:32.043697    7740 image.go:66] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726243947-19640@sha256:bb1287c9c0ec51ba7d8272f0f8073d6e9758ad79ff87c787fdce1c3513743243 in local cache directory, skipping pull
	I0913 23:26:32.043702    7740 image.go:135] gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726243947-19640@sha256:bb1287c9c0ec51ba7d8272f0f8073d6e9758ad79ff87c787fdce1c3513743243 exists in cache, skipping pull
	I0913 23:26:32.043710    7740 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726243947-19640@sha256:bb1287c9c0ec51ba7d8272f0f8073d6e9758ad79ff87c787fdce1c3513743243 as a tarball
	I0913 23:26:32.111412    7740 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.31.1/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I0913 23:26:32.111448    7740 cache.go:56] Caching tarball of preloaded images
	I0913 23:26:32.111632    7740 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0913 23:26:32.114059    7740 out.go:97] Downloading Kubernetes v1.31.1 preload ...
	I0913 23:26:32.114084    7740 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 ...
	I0913 23:26:32.200695    7740 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.31.1/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4?checksum=md5:402f69b5e09ccb1e1dbe401b4cdd104d -> /home/jenkins/minikube-integration/19640-2224/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I0913 23:26:36.299503    7740 preload.go:247] saving checksum for preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 ...
	I0913 23:26:36.299604    7740 preload.go:254] verifying checksum of /home/jenkins/minikube-integration/19640-2224/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 ...
	
	
	* The control-plane node download-only-915155 host does not exist
	  To start a cluster, run: "minikube start -p download-only-915155"

-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.31.1/LogsDuration (0.07s)

TestDownloadOnly/v1.31.1/DeleteAll (0.2s)

=== RUN   TestDownloadOnly/v1.31.1/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-arm64 delete --all
--- PASS: TestDownloadOnly/v1.31.1/DeleteAll (0.20s)

TestDownloadOnly/v1.31.1/DeleteAlwaysSucceeds (0.13s)

=== RUN   TestDownloadOnly/v1.31.1/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-arm64 delete -p download-only-915155
--- PASS: TestDownloadOnly/v1.31.1/DeleteAlwaysSucceeds (0.13s)

TestBinaryMirror (0.57s)

=== RUN   TestBinaryMirror
aaa_download_only_test.go:314: (dbg) Run:  out/minikube-linux-arm64 start --download-only -p binary-mirror-598186 --alsologtostderr --binary-mirror http://127.0.0.1:36811 --driver=docker  --container-runtime=docker
helpers_test.go:175: Cleaning up "binary-mirror-598186" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p binary-mirror-598186
--- PASS: TestBinaryMirror (0.57s)

TestOffline (86.68s)

=== RUN   TestOffline
=== PAUSE TestOffline

=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-linux-arm64 start -p offline-docker-973915 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=docker  --container-runtime=docker
aab_offline_test.go:55: (dbg) Done: out/minikube-linux-arm64 start -p offline-docker-973915 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=docker  --container-runtime=docker: (1m24.593432784s)
helpers_test.go:175: Cleaning up "offline-docker-973915" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p offline-docker-973915
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p offline-docker-973915: (2.087397048s)
--- PASS: TestOffline (86.68s)

TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.08s)

=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:1037: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p addons-467916
addons_test.go:1037: (dbg) Non-zero exit: out/minikube-linux-arm64 addons enable dashboard -p addons-467916: exit status 85 (78.608787ms)

-- stdout --
	* Profile "addons-467916" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-467916"

-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.08s)

TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.07s)

=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:1048: (dbg) Run:  out/minikube-linux-arm64 addons disable dashboard -p addons-467916
addons_test.go:1048: (dbg) Non-zero exit: out/minikube-linux-arm64 addons disable dashboard -p addons-467916: exit status 85 (71.250058ms)

-- stdout --
	* Profile "addons-467916" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-467916"

-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.07s)

TestAddons/Setup (223.14s)

=== RUN   TestAddons/Setup
addons_test.go:110: (dbg) Run:  out/minikube-linux-arm64 start -p addons-467916 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --addons=volcano --driver=docker  --container-runtime=docker --addons=ingress --addons=ingress-dns
addons_test.go:110: (dbg) Done: out/minikube-linux-arm64 start -p addons-467916 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --addons=volcano --driver=docker  --container-runtime=docker --addons=ingress --addons=ingress-dns: (3m43.136765986s)
--- PASS: TestAddons/Setup (223.14s)

TestAddons/serial/Volcano (41.07s)

=== RUN   TestAddons/serial/Volcano
addons_test.go:913: volcano-controller stabilized in 58.693412ms
addons_test.go:905: volcano-admission stabilized in 59.758288ms
addons_test.go:897: volcano-scheduler stabilized in 59.809693ms
addons_test.go:919: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-scheduler" in namespace "volcano-system" ...
helpers_test.go:344: "volcano-scheduler-576bc46687-bqxmx" [bfacc2b4-6fd5-4ee0-bc0d-f229f61de7fb] Running
addons_test.go:919: (dbg) TestAddons/serial/Volcano: app=volcano-scheduler healthy within 6.003337658s
addons_test.go:923: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-admission" in namespace "volcano-system" ...
helpers_test.go:344: "volcano-admission-77d7d48b68-s5jw4" [60f03ab8-224c-4533-9995-0c1b4bb88aee] Running
addons_test.go:923: (dbg) TestAddons/serial/Volcano: app=volcano-admission healthy within 5.003630777s
addons_test.go:927: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-controller" in namespace "volcano-system" ...
helpers_test.go:344: "volcano-controllers-56675bb4d5-fwdx2" [81922e7a-e839-4439-a7fb-79f7f0798b08] Running
addons_test.go:927: (dbg) TestAddons/serial/Volcano: app=volcano-controller healthy within 5.003203057s
addons_test.go:932: (dbg) Run:  kubectl --context addons-467916 delete -n volcano-system job volcano-admission-init
addons_test.go:938: (dbg) Run:  kubectl --context addons-467916 create -f testdata/vcjob.yaml
addons_test.go:946: (dbg) Run:  kubectl --context addons-467916 get vcjob -n my-volcano
addons_test.go:964: (dbg) TestAddons/serial/Volcano: waiting 3m0s for pods matching "volcano.sh/job-name=test-job" in namespace "my-volcano" ...
helpers_test.go:344: "test-job-nginx-0" [a3030f1d-e0ff-4ba0-9378-4feb20320472] Pending
helpers_test.go:344: "test-job-nginx-0" [a3030f1d-e0ff-4ba0-9378-4feb20320472] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "test-job-nginx-0" [a3030f1d-e0ff-4ba0-9378-4feb20320472] Running
addons_test.go:964: (dbg) TestAddons/serial/Volcano: volcano.sh/job-name=test-job healthy within 14.004498262s
addons_test.go:968: (dbg) Run:  out/minikube-linux-arm64 -p addons-467916 addons disable volcano --alsologtostderr -v=1
addons_test.go:968: (dbg) Done: out/minikube-linux-arm64 -p addons-467916 addons disable volcano --alsologtostderr -v=1: (10.409641072s)
--- PASS: TestAddons/serial/Volcano (41.07s)

TestAddons/serial/GCPAuth/Namespaces (0.18s)

=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:656: (dbg) Run:  kubectl --context addons-467916 create ns new-namespace
addons_test.go:670: (dbg) Run:  kubectl --context addons-467916 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.18s)

TestAddons/parallel/Ingress (20.41s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

=== CONT  TestAddons/parallel/Ingress
addons_test.go:209: (dbg) Run:  kubectl --context addons-467916 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:234: (dbg) Run:  kubectl --context addons-467916 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:247: (dbg) Run:  kubectl --context addons-467916 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [f9671b1f-1b54-474b-8733-4a9635c04d00] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [f9671b1f-1b54-474b-8733-4a9635c04d00] Running
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 9.004310449s
addons_test.go:264: (dbg) Run:  out/minikube-linux-arm64 -p addons-467916 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:288: (dbg) Run:  kubectl --context addons-467916 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:293: (dbg) Run:  out/minikube-linux-arm64 -p addons-467916 ip
addons_test.go:299: (dbg) Run:  nslookup hello-john.test 192.168.49.2
addons_test.go:308: (dbg) Run:  out/minikube-linux-arm64 -p addons-467916 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:308: (dbg) Done: out/minikube-linux-arm64 -p addons-467916 addons disable ingress-dns --alsologtostderr -v=1: (1.710439846s)
addons_test.go:313: (dbg) Run:  out/minikube-linux-arm64 -p addons-467916 addons disable ingress --alsologtostderr -v=1
addons_test.go:313: (dbg) Done: out/minikube-linux-arm64 -p addons-467916 addons disable ingress --alsologtostderr -v=1: (7.77289742s)
--- PASS: TestAddons/parallel/Ingress (20.41s)

TestAddons/parallel/InspektorGadget (10.85s)

=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:848: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:344: "gadget-pfpfw" [f1a9d96c-1db0-4cd8-90ec-9e7b378ddf3c] Running / Ready:ContainersNotReady (containers with unready status: [gadget]) / ContainersReady:ContainersNotReady (containers with unready status: [gadget])
addons_test.go:848: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 5.0100152s
addons_test.go:851: (dbg) Run:  out/minikube-linux-arm64 addons disable inspektor-gadget -p addons-467916
addons_test.go:851: (dbg) Done: out/minikube-linux-arm64 addons disable inspektor-gadget -p addons-467916: (5.838141129s)
--- PASS: TestAddons/parallel/InspektorGadget (10.85s)

TestAddons/parallel/MetricsServer (6.68s)

=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:409: metrics-server stabilized in 2.877366ms
addons_test.go:411: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:344: "metrics-server-84c5f94fbc-cx9cr" [c35bd9f6-56a5-41e8-a831-926ba6fe5266] Running
addons_test.go:411: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 6.004301232s
addons_test.go:417: (dbg) Run:  kubectl --context addons-467916 top pods -n kube-system
addons_test.go:434: (dbg) Run:  out/minikube-linux-arm64 -p addons-467916 addons disable metrics-server --alsologtostderr -v=1
--- PASS: TestAddons/parallel/MetricsServer (6.68s)

TestAddons/parallel/CSI (63.88s)

=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

=== CONT  TestAddons/parallel/CSI
addons_test.go:567: csi-hostpath-driver pods stabilized in 8.983597ms
addons_test.go:570: (dbg) Run:  kubectl --context addons-467916 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:575: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-467916 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-467916 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-467916 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-467916 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-467916 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-467916 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-467916 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-467916 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-467916 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-467916 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-467916 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-467916 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-467916 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-467916 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-467916 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-467916 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:580: (dbg) Run:  kubectl --context addons-467916 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:585: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:344: "task-pv-pod" [915ec74a-a792-4551-a5a5-e04dc72187d6] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod" [915ec74a-a792-4551-a5a5-e04dc72187d6] Running
addons_test.go:585: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 7.004262871s
addons_test.go:590: (dbg) Run:  kubectl --context addons-467916 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:595: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:419: (dbg) Run:  kubectl --context addons-467916 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Run:  kubectl --context addons-467916 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:600: (dbg) Run:  kubectl --context addons-467916 delete pod task-pv-pod
addons_test.go:600: (dbg) Done: kubectl --context addons-467916 delete pod task-pv-pod: (1.388482793s)
addons_test.go:606: (dbg) Run:  kubectl --context addons-467916 delete pvc hpvc
addons_test.go:612: (dbg) Run:  kubectl --context addons-467916 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:617: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-467916 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-467916 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-467916 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-467916 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-467916 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-467916 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-467916 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-467916 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-467916 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-467916 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-467916 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-467916 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-467916 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-467916 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-467916 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-467916 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-467916 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-467916 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-467916 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-467916 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-467916 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:622: (dbg) Run:  kubectl --context addons-467916 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:627: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:344: "task-pv-pod-restore" [9b63f526-bb83-40ef-a825-f187b1ddb4a1] Pending
helpers_test.go:344: "task-pv-pod-restore" [9b63f526-bb83-40ef-a825-f187b1ddb4a1] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod-restore" [9b63f526-bb83-40ef-a825-f187b1ddb4a1] Running
addons_test.go:627: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 9.003669768s
addons_test.go:632: (dbg) Run:  kubectl --context addons-467916 delete pod task-pv-pod-restore
addons_test.go:632: (dbg) Done: kubectl --context addons-467916 delete pod task-pv-pod-restore: (1.163441644s)
addons_test.go:636: (dbg) Run:  kubectl --context addons-467916 delete pvc hpvc-restore
addons_test.go:640: (dbg) Run:  kubectl --context addons-467916 delete volumesnapshot new-snapshot-demo
addons_test.go:644: (dbg) Run:  out/minikube-linux-arm64 -p addons-467916 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:644: (dbg) Done: out/minikube-linux-arm64 -p addons-467916 addons disable csi-hostpath-driver --alsologtostderr -v=1: (6.657574731s)
addons_test.go:648: (dbg) Run:  out/minikube-linux-arm64 -p addons-467916 addons disable volumesnapshots --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CSI (63.88s)

TestAddons/parallel/Headlamp (17.66s)

=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp
=== CONT  TestAddons/parallel/Headlamp
addons_test.go:830: (dbg) Run:  out/minikube-linux-arm64 addons enable headlamp -p addons-467916 --alsologtostderr -v=1
addons_test.go:835: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:344: "headlamp-57fb76fcdb-7ghjt" [1aca4027-5ea1-4df7-9ad5-f5ed0787941c] Pending
helpers_test.go:344: "headlamp-57fb76fcdb-7ghjt" [1aca4027-5ea1-4df7-9ad5-f5ed0787941c] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:344: "headlamp-57fb76fcdb-7ghjt" [1aca4027-5ea1-4df7-9ad5-f5ed0787941c] Running
addons_test.go:835: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 11.003747139s
addons_test.go:839: (dbg) Run:  out/minikube-linux-arm64 -p addons-467916 addons disable headlamp --alsologtostderr -v=1
addons_test.go:839: (dbg) Done: out/minikube-linux-arm64 -p addons-467916 addons disable headlamp --alsologtostderr -v=1: (5.761429037s)
--- PASS: TestAddons/parallel/Headlamp (17.66s)

TestAddons/parallel/CloudSpanner (5.64s)

=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner
=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:867: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:344: "cloud-spanner-emulator-769b77f747-k6w6l" [d0b6fbec-e03d-43ef-9219-63a0c9938da4] Running
addons_test.go:867: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 5.015456646s
addons_test.go:870: (dbg) Run:  out/minikube-linux-arm64 addons disable cloud-spanner -p addons-467916
--- PASS: TestAddons/parallel/CloudSpanner (5.64s)

TestAddons/parallel/LocalPath (53.65s)

=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath
=== CONT  TestAddons/parallel/LocalPath
addons_test.go:982: (dbg) Run:  kubectl --context addons-467916 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:988: (dbg) Run:  kubectl --context addons-467916 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:992: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-467916 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-467916 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-467916 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-467916 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-467916 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-467916 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:995: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:344: "test-local-path" [8f5e63c5-dcf3-4252-903b-7c94a595353b] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "test-local-path" [8f5e63c5-dcf3-4252-903b-7c94a595353b] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "test-local-path" [8f5e63c5-dcf3-4252-903b-7c94a595353b] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:995: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 4.00333096s
addons_test.go:1000: (dbg) Run:  kubectl --context addons-467916 get pvc test-pvc -o=json
addons_test.go:1009: (dbg) Run:  out/minikube-linux-arm64 -p addons-467916 ssh "cat /opt/local-path-provisioner/pvc-57a7b523-7db8-4825-ad64-698dbbbd6c68_default_test-pvc/file1"
addons_test.go:1021: (dbg) Run:  kubectl --context addons-467916 delete pod test-local-path
addons_test.go:1025: (dbg) Run:  kubectl --context addons-467916 delete pvc test-pvc
addons_test.go:1029: (dbg) Run:  out/minikube-linux-arm64 -p addons-467916 addons disable storage-provisioner-rancher --alsologtostderr -v=1
addons_test.go:1029: (dbg) Done: out/minikube-linux-arm64 -p addons-467916 addons disable storage-provisioner-rancher --alsologtostderr -v=1: (43.431892001s)
--- PASS: TestAddons/parallel/LocalPath (53.65s)

TestAddons/parallel/NvidiaDevicePlugin (6.45s)

=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin
=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:1061: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:344: "nvidia-device-plugin-daemonset-4zhpp" [425cd5c4-637d-474a-884b-c509d13eb5e5] Running
addons_test.go:1061: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 6.0036097s
addons_test.go:1064: (dbg) Run:  out/minikube-linux-arm64 addons disable nvidia-device-plugin -p addons-467916
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (6.45s)

TestAddons/parallel/Yakd (11.93s)

=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd
=== CONT  TestAddons/parallel/Yakd
addons_test.go:1072: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:344: "yakd-dashboard-67d98fc6b-4jv94" [828ea539-ba1a-49e9-b38b-2fa7d15d9595] Running
addons_test.go:1072: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 6.005149112s
addons_test.go:1076: (dbg) Run:  out/minikube-linux-arm64 -p addons-467916 addons disable yakd --alsologtostderr -v=1
addons_test.go:1076: (dbg) Done: out/minikube-linux-arm64 -p addons-467916 addons disable yakd --alsologtostderr -v=1: (5.921581949s)
--- PASS: TestAddons/parallel/Yakd (11.93s)

TestAddons/StoppedEnableDisable (6.11s)

=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:174: (dbg) Run:  out/minikube-linux-arm64 stop -p addons-467916
addons_test.go:174: (dbg) Done: out/minikube-linux-arm64 stop -p addons-467916: (5.844413491s)
addons_test.go:178: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p addons-467916
addons_test.go:182: (dbg) Run:  out/minikube-linux-arm64 addons disable dashboard -p addons-467916
addons_test.go:187: (dbg) Run:  out/minikube-linux-arm64 addons disable gvisor -p addons-467916
--- PASS: TestAddons/StoppedEnableDisable (6.11s)

TestCertOptions (34.59s)

=== RUN   TestCertOptions
=== PAUSE TestCertOptions
=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-arm64 start -p cert-options-156474 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=docker
cert_options_test.go:49: (dbg) Done: out/minikube-linux-arm64 start -p cert-options-156474 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=docker: (31.918207969s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-arm64 -p cert-options-156474 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-156474 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-arm64 ssh -p cert-options-156474 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:175: Cleaning up "cert-options-156474" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p cert-options-156474
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p cert-options-156474: (2.025710905s)
--- PASS: TestCertOptions (34.59s)

TestCertExpiration (250.76s)

=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration
=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-arm64 start -p cert-expiration-121215 --memory=2048 --cert-expiration=3m --driver=docker  --container-runtime=docker
cert_options_test.go:123: (dbg) Done: out/minikube-linux-arm64 start -p cert-expiration-121215 --memory=2048 --cert-expiration=3m --driver=docker  --container-runtime=docker: (37.849600494s)
E0914 00:25:04.082575    7536 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19640-2224/.minikube/profiles/skaffold-927614/client.crt: no such file or directory" logger="UnhandledError"
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-arm64 start -p cert-expiration-121215 --memory=2048 --cert-expiration=8760h --driver=docker  --container-runtime=docker
cert_options_test.go:131: (dbg) Done: out/minikube-linux-arm64 start -p cert-expiration-121215 --memory=2048 --cert-expiration=8760h --driver=docker  --container-runtime=docker: (30.355738316s)
helpers_test.go:175: Cleaning up "cert-expiration-121215" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p cert-expiration-121215
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p cert-expiration-121215: (2.554655628s)
--- PASS: TestCertExpiration (250.76s)

TestDockerFlags (35.34s)

=== RUN   TestDockerFlags
=== PAUSE TestDockerFlags
=== CONT  TestDockerFlags
docker_test.go:51: (dbg) Run:  out/minikube-linux-arm64 start -p docker-flags-533983 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=docker  --container-runtime=docker
E0914 00:25:22.711376    7536 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19640-2224/.minikube/profiles/addons-467916/client.crt: no such file or directory" logger="UnhandledError"
E0914 00:25:31.797885    7536 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19640-2224/.minikube/profiles/skaffold-927614/client.crt: no such file or directory" logger="UnhandledError"
docker_test.go:51: (dbg) Done: out/minikube-linux-arm64 start -p docker-flags-533983 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=docker  --container-runtime=docker: (32.539864545s)
docker_test.go:56: (dbg) Run:  out/minikube-linux-arm64 -p docker-flags-533983 ssh "sudo systemctl show docker --property=Environment --no-pager"
docker_test.go:67: (dbg) Run:  out/minikube-linux-arm64 -p docker-flags-533983 ssh "sudo systemctl show docker --property=ExecStart --no-pager"
helpers_test.go:175: Cleaning up "docker-flags-533983" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p docker-flags-533983
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p docker-flags-533983: (2.167954664s)
--- PASS: TestDockerFlags (35.34s)

TestForceSystemdFlag (41.57s)

=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag
=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-arm64 start -p force-systemd-flag-517616 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=docker
E0914 00:24:45.146613    7536 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19640-2224/.minikube/profiles/functional-657116/client.crt: no such file or directory" logger="UnhandledError"
docker_test.go:91: (dbg) Done: out/minikube-linux-arm64 start -p force-systemd-flag-517616 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=docker: (39.150419766s)
docker_test.go:110: (dbg) Run:  out/minikube-linux-arm64 -p force-systemd-flag-517616 ssh "docker info --format {{.CgroupDriver}}"
helpers_test.go:175: Cleaning up "force-systemd-flag-517616" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p force-systemd-flag-517616
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p force-systemd-flag-517616: (2.077384037s)
--- PASS: TestForceSystemdFlag (41.57s)

TestForceSystemdEnv (45.81s)

=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv
=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-arm64 start -p force-systemd-env-548675 --memory=2048 --alsologtostderr -v=5 --driver=docker  --container-runtime=docker
docker_test.go:155: (dbg) Done: out/minikube-linux-arm64 start -p force-systemd-env-548675 --memory=2048 --alsologtostderr -v=5 --driver=docker  --container-runtime=docker: (43.212923528s)
docker_test.go:110: (dbg) Run:  out/minikube-linux-arm64 -p force-systemd-env-548675 ssh "docker info --format {{.CgroupDriver}}"
helpers_test.go:175: Cleaning up "force-systemd-env-548675" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p force-systemd-env-548675
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p force-systemd-env-548675: (2.139951382s)
--- PASS: TestForceSystemdEnv (45.81s)

TestErrorSpam/setup (34.09s)

=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-arm64 start -p nospam-597171 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-597171 --driver=docker  --container-runtime=docker
error_spam_test.go:81: (dbg) Done: out/minikube-linux-arm64 start -p nospam-597171 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-597171 --driver=docker  --container-runtime=docker: (34.093746026s)
--- PASS: TestErrorSpam/setup (34.09s)

TestErrorSpam/start (0.73s)

=== RUN   TestErrorSpam/start
error_spam_test.go:216: Cleaning up 1 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-597171 --log_dir /tmp/nospam-597171 start --dry-run
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-597171 --log_dir /tmp/nospam-597171 start --dry-run
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-597171 --log_dir /tmp/nospam-597171 start --dry-run
--- PASS: TestErrorSpam/start (0.73s)

TestErrorSpam/status (0.96s)

=== RUN   TestErrorSpam/status
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-597171 --log_dir /tmp/nospam-597171 status
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-597171 --log_dir /tmp/nospam-597171 status
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-597171 --log_dir /tmp/nospam-597171 status
--- PASS: TestErrorSpam/status (0.96s)

TestErrorSpam/pause (1.33s)

=== RUN   TestErrorSpam/pause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-597171 --log_dir /tmp/nospam-597171 pause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-597171 --log_dir /tmp/nospam-597171 pause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-597171 --log_dir /tmp/nospam-597171 pause
--- PASS: TestErrorSpam/pause (1.33s)

TestErrorSpam/unpause (1.46s)

=== RUN   TestErrorSpam/unpause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-597171 --log_dir /tmp/nospam-597171 unpause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-597171 --log_dir /tmp/nospam-597171 unpause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-597171 --log_dir /tmp/nospam-597171 unpause
--- PASS: TestErrorSpam/unpause (1.46s)

TestErrorSpam/stop (10.95s)

=== RUN   TestErrorSpam/stop
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-597171 --log_dir /tmp/nospam-597171 stop
error_spam_test.go:159: (dbg) Done: out/minikube-linux-arm64 -p nospam-597171 --log_dir /tmp/nospam-597171 stop: (10.756551593s)
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-597171 --log_dir /tmp/nospam-597171 stop
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-597171 --log_dir /tmp/nospam-597171 stop
--- PASS: TestErrorSpam/stop (10.95s)

TestFunctional/serial/CopySyncFile (0s)

=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1855: local sync path: /home/jenkins/minikube-integration/19640-2224/.minikube/files/etc/test/nested/copy/7536/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

TestFunctional/serial/StartWithProxy (43.61s)

=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2234: (dbg) Run:  out/minikube-linux-arm64 start -p functional-657116 --memory=4000 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=docker
functional_test.go:2234: (dbg) Done: out/minikube-linux-arm64 start -p functional-657116 --memory=4000 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=docker: (43.606193336s)
--- PASS: TestFunctional/serial/StartWithProxy (43.61s)

TestFunctional/serial/AuditLog (0s)

=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

TestFunctional/serial/SoftStart (37.58s)

=== RUN   TestFunctional/serial/SoftStart
functional_test.go:659: (dbg) Run:  out/minikube-linux-arm64 start -p functional-657116 --alsologtostderr -v=8
functional_test.go:659: (dbg) Done: out/minikube-linux-arm64 start -p functional-657116 --alsologtostderr -v=8: (37.577797977s)
functional_test.go:663: soft start took 37.58232536s for "functional-657116" cluster.
--- PASS: TestFunctional/serial/SoftStart (37.58s)

TestFunctional/serial/KubeContext (0.07s)

=== RUN   TestFunctional/serial/KubeContext
functional_test.go:681: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.07s)

TestFunctional/serial/KubectlGetPods (0.1s)

=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:696: (dbg) Run:  kubectl --context functional-657116 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.10s)

TestFunctional/serial/CacheCmd/cache/add_remote (3.46s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1049: (dbg) Run:  out/minikube-linux-arm64 -p functional-657116 cache add registry.k8s.io/pause:3.1
functional_test.go:1049: (dbg) Done: out/minikube-linux-arm64 -p functional-657116 cache add registry.k8s.io/pause:3.1: (1.205275377s)
functional_test.go:1049: (dbg) Run:  out/minikube-linux-arm64 -p functional-657116 cache add registry.k8s.io/pause:3.3
functional_test.go:1049: (dbg) Done: out/minikube-linux-arm64 -p functional-657116 cache add registry.k8s.io/pause:3.3: (1.219797607s)
functional_test.go:1049: (dbg) Run:  out/minikube-linux-arm64 -p functional-657116 cache add registry.k8s.io/pause:latest
functional_test.go:1049: (dbg) Done: out/minikube-linux-arm64 -p functional-657116 cache add registry.k8s.io/pause:latest: (1.034937617s)
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (3.46s)

TestFunctional/serial/CacheCmd/cache/add_local (0.96s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1077: (dbg) Run:  docker build -t minikube-local-cache-test:functional-657116 /tmp/TestFunctionalserialCacheCmdcacheadd_local149224233/001
functional_test.go:1089: (dbg) Run:  out/minikube-linux-arm64 -p functional-657116 cache add minikube-local-cache-test:functional-657116
functional_test.go:1094: (dbg) Run:  out/minikube-linux-arm64 -p functional-657116 cache delete minikube-local-cache-test:functional-657116
functional_test.go:1083: (dbg) Run:  docker rmi minikube-local-cache-test:functional-657116
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (0.96s)

TestFunctional/serial/CacheCmd/cache/CacheDelete (0.05s)

=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1102: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.05s)

TestFunctional/serial/CacheCmd/cache/list (0.05s)

=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1110: (dbg) Run:  out/minikube-linux-arm64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.05s)

TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.33s)

=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1124: (dbg) Run:  out/minikube-linux-arm64 -p functional-657116 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.33s)

TestFunctional/serial/CacheCmd/cache/cache_reload (1.63s)

=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1147: (dbg) Run:  out/minikube-linux-arm64 -p functional-657116 ssh sudo docker rmi registry.k8s.io/pause:latest
functional_test.go:1153: (dbg) Run:  out/minikube-linux-arm64 -p functional-657116 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1153: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-657116 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (281.026993ms)

-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:1158: (dbg) Run:  out/minikube-linux-arm64 -p functional-657116 cache reload
functional_test.go:1163: (dbg) Run:  out/minikube-linux-arm64 -p functional-657116 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (1.63s)

TestFunctional/serial/CacheCmd/cache/delete (0.11s)

=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1172: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1172: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.11s)

TestFunctional/serial/MinikubeKubectlCmd (0.15s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:716: (dbg) Run:  out/minikube-linux-arm64 -p functional-657116 kubectl -- --context functional-657116 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.15s)

TestFunctional/serial/MinikubeKubectlCmdDirectly (0.17s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:741: (dbg) Run:  out/kubectl --context functional-657116 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.17s)

TestFunctional/serial/ExtraConfig (39.17s)

=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:757: (dbg) Run:  out/minikube-linux-arm64 start -p functional-657116 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
functional_test.go:757: (dbg) Done: out/minikube-linux-arm64 start -p functional-657116 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (39.170526457s)
functional_test.go:761: restart took 39.170626313s for "functional-657116" cluster.
--- PASS: TestFunctional/serial/ExtraConfig (39.17s)

TestFunctional/serial/ComponentHealth (0.12s)

=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:810: (dbg) Run:  kubectl --context functional-657116 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:825: etcd phase: Running
functional_test.go:835: etcd status: Ready
functional_test.go:825: kube-apiserver phase: Running
functional_test.go:835: kube-apiserver status: Ready
functional_test.go:825: kube-controller-manager phase: Running
functional_test.go:835: kube-controller-manager status: Ready
functional_test.go:825: kube-scheduler phase: Running
functional_test.go:835: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.12s)

TestFunctional/serial/LogsCmd (1.16s)

=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1236: (dbg) Run:  out/minikube-linux-arm64 -p functional-657116 logs
functional_test.go:1236: (dbg) Done: out/minikube-linux-arm64 -p functional-657116 logs: (1.163559555s)
--- PASS: TestFunctional/serial/LogsCmd (1.16s)
+
TestFunctional/serial/LogsFileCmd (1.24s)

=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1250: (dbg) Run:  out/minikube-linux-arm64 -p functional-657116 logs --file /tmp/TestFunctionalserialLogsFileCmd3646569305/001/logs.txt
functional_test.go:1250: (dbg) Done: out/minikube-linux-arm64 -p functional-657116 logs --file /tmp/TestFunctionalserialLogsFileCmd3646569305/001/logs.txt: (1.239887477s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.24s)

TestFunctional/serial/InvalidService (4.59s)

=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2321: (dbg) Run:  kubectl --context functional-657116 apply -f testdata/invalidsvc.yaml
functional_test.go:2335: (dbg) Run:  out/minikube-linux-arm64 service invalid-svc -p functional-657116
functional_test.go:2335: (dbg) Non-zero exit: out/minikube-linux-arm64 service invalid-svc -p functional-657116: exit status 115 (485.221919ms)

-- stdout --
	|-----------|-------------|-------------|---------------------------|
	| NAMESPACE |    NAME     | TARGET PORT |            URL            |
	|-----------|-------------|-------------|---------------------------|
	| default   | invalid-svc |          80 | http://192.168.49.2:30357 |
	|-----------|-------------|-------------|---------------------------|
	
	

-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
functional_test.go:2327: (dbg) Run:  kubectl --context functional-657116 delete -f testdata/invalidsvc.yaml
--- PASS: TestFunctional/serial/InvalidService (4.59s)

TestFunctional/parallel/ConfigCmd (0.47s)

=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1199: (dbg) Run:  out/minikube-linux-arm64 -p functional-657116 config unset cpus
functional_test.go:1199: (dbg) Run:  out/minikube-linux-arm64 -p functional-657116 config get cpus
functional_test.go:1199: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-657116 config get cpus: exit status 14 (83.993099ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
functional_test.go:1199: (dbg) Run:  out/minikube-linux-arm64 -p functional-657116 config set cpus 2
functional_test.go:1199: (dbg) Run:  out/minikube-linux-arm64 -p functional-657116 config get cpus
functional_test.go:1199: (dbg) Run:  out/minikube-linux-arm64 -p functional-657116 config unset cpus
functional_test.go:1199: (dbg) Run:  out/minikube-linux-arm64 -p functional-657116 config get cpus
functional_test.go:1199: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-657116 config get cpus: exit status 14 (74.479546ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.47s)
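The ConfigCmd run above checks a get/set/unset contract: `config get` on an unset key fails (minikube returns exit status 14 here), and succeeds only between a `set` and the following `unset`. A minimal stand-in sketch of that contract, using a plain temp file — the function names and storage are hypothetical illustrations, not minikube's real config code, and the stand-in signals failure with an ordinary non-zero exit rather than 14:

```shell
# Hypothetical key=value store mimicking the exit-code contract the test checks.
cfg=$(mktemp)
config_unset() { grep -v "^$1=" "$cfg" > "$cfg.tmp"; mv "$cfg.tmp" "$cfg"; }
config_set()   { config_unset "$1"; echo "$1=$2" >> "$cfg"; }
# Exits non-zero when the key is absent, like "config get cpus" in the log.
config_get()   { grep "^$1=" "$cfg" | cut -d= -f2 | grep . ; }

config_get cpus || echo "get on unset key fails"
config_set cpus 2
config_get cpus
config_unset cpus
config_get cpus || echo "get fails again after unset"
```

The test body is a round trip: two failing `get` calls bracket the window in which the key exists, which is exactly the sequence `functional_test.go:1199` drives.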

TestFunctional/parallel/DashboardCmd (10.7s)

=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:905: (dbg) daemon: [out/minikube-linux-arm64 dashboard --url --port 36195 -p functional-657116 --alsologtostderr -v=1]
functional_test.go:910: (dbg) stopping [out/minikube-linux-arm64 dashboard --url --port 36195 -p functional-657116 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to kill pid 48437: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (10.70s)

TestFunctional/parallel/DryRun (0.43s)

=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

=== CONT  TestFunctional/parallel/DryRun
functional_test.go:974: (dbg) Run:  out/minikube-linux-arm64 start -p functional-657116 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=docker
functional_test.go:974: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p functional-657116 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=docker: exit status 23 (172.776008ms)

-- stdout --
	* [functional-657116] minikube v1.34.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=19640
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19640-2224/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19640-2224/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	
	

-- /stdout --
** stderr ** 
	I0913 23:45:17.643797   48042 out.go:345] Setting OutFile to fd 1 ...
	I0913 23:45:17.644014   48042 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0913 23:45:17.644028   48042 out.go:358] Setting ErrFile to fd 2...
	I0913 23:45:17.644034   48042 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0913 23:45:17.644394   48042 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19640-2224/.minikube/bin
	I0913 23:45:17.644797   48042 out.go:352] Setting JSON to false
	I0913 23:45:17.645982   48042 start.go:129] hostinfo: {"hostname":"ip-172-31-30-239","uptime":1666,"bootTime":1726269452,"procs":230,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1069-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I0913 23:45:17.646056   48042 start.go:139] virtualization:  
	I0913 23:45:17.649494   48042 out.go:177] * [functional-657116] minikube v1.34.0 on Ubuntu 20.04 (arm64)
	I0913 23:45:17.652625   48042 out.go:177]   - MINIKUBE_LOCATION=19640
	I0913 23:45:17.652706   48042 notify.go:220] Checking for updates...
	I0913 23:45:17.657024   48042 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0913 23:45:17.658806   48042 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19640-2224/kubeconfig
	I0913 23:45:17.660818   48042 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19640-2224/.minikube
	I0913 23:45:17.662715   48042 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0913 23:45:17.664330   48042 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0913 23:45:17.666690   48042 config.go:182] Loaded profile config "functional-657116": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0913 23:45:17.667267   48042 driver.go:394] Setting default libvirt URI to qemu:///system
	I0913 23:45:17.692165   48042 docker.go:123] docker version: linux-27.2.1:Docker Engine - Community
	I0913 23:45:17.692323   48042 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0913 23:45:17.749856   48042 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:32 OomKillDisable:true NGoroutines:52 SystemTime:2024-09-13 23:45:17.739410224 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1069-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aar
ch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214827008 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:27.2.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErro
rs:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.2]] Warnings:<nil>}}
	I0913 23:45:17.750024   48042 docker.go:318] overlay module found
	I0913 23:45:17.754571   48042 out.go:177] * Using the docker driver based on existing profile
	I0913 23:45:17.757024   48042 start.go:297] selected driver: docker
	I0913 23:45:17.757045   48042 start.go:901] validating driver "docker" against &{Name:functional-657116 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726243947-19640@sha256:bb1287c9c0ec51ba7d8272f0f8073d6e9758ad79ff87c787fdce1c3513743243 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:functional-657116 Namespace:default APIServerHAVIP: APIServerNa
me:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker Mou
ntIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0913 23:45:17.757163   48042 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0913 23:45:17.760192   48042 out.go:201] 
	W0913 23:45:17.762279   48042 out.go:270] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I0913 23:45:17.764198   48042 out.go:201] 

** /stderr **
functional_test.go:991: (dbg) Run:  out/minikube-linux-arm64 start -p functional-657116 --dry-run --alsologtostderr -v=1 --driver=docker  --container-runtime=docker
--- PASS: TestFunctional/parallel/DryRun (0.43s)
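DryRun's failing invocation exercises minikube's requested-memory validation: 250MB is below the usable minimum of 1800MB, so `start` exits with status 23 (RSRC_INSUFFICIENT_REQ_MEMORY). A hedged sketch of that guard — the function and message wording are illustrative; only the threshold and the exit status come from the output above:

```shell
# Illustrative stand-in for the memory guard the dry-run hits; not minikube code.
# Threshold (1800MB) and exit status (23) are taken from the log above.
validate_memory() {
  req_mb=$1
  min_mb=1800
  if [ "$req_mb" -lt "$min_mb" ]; then
    echo "X RSRC_INSUFFICIENT_REQ_MEMORY: requested ${req_mb}MB < minimum ${min_mb}MB" >&2
    return 23
  fi
  echo "memory ok: ${req_mb}MB"
}

validate_memory 250 || echo "start would exit with status $?"
validate_memory 4000
```

Note the guard fires even with `--dry-run`: argument validation runs before any driver work, which is why the test can assert on exit status 23 without starting a cluster.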

TestFunctional/parallel/InternationalLanguage (0.19s)

=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1020: (dbg) Run:  out/minikube-linux-arm64 start -p functional-657116 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=docker
functional_test.go:1020: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p functional-657116 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=docker: exit status 23 (186.347429ms)

-- stdout --
	* [functional-657116] minikube v1.34.0 sur Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=19640
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19640-2224/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19640-2224/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote docker basé sur le profil existant
	
	

-- /stdout --
** stderr ** 
	I0913 23:45:17.465165   47999 out.go:345] Setting OutFile to fd 1 ...
	I0913 23:45:17.465346   47999 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0913 23:45:17.465378   47999 out.go:358] Setting ErrFile to fd 2...
	I0913 23:45:17.465403   47999 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0913 23:45:17.466928   47999 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19640-2224/.minikube/bin
	I0913 23:45:17.467511   47999 out.go:352] Setting JSON to false
	I0913 23:45:17.468649   47999 start.go:129] hostinfo: {"hostname":"ip-172-31-30-239","uptime":1665,"bootTime":1726269452,"procs":230,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1069-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I0913 23:45:17.468757   47999 start.go:139] virtualization:  
	I0913 23:45:17.471457   47999 out.go:177] * [functional-657116] minikube v1.34.0 sur Ubuntu 20.04 (arm64)
	I0913 23:45:17.473811   47999 notify.go:220] Checking for updates...
	I0913 23:45:17.476208   47999 out.go:177]   - MINIKUBE_LOCATION=19640
	I0913 23:45:17.478421   47999 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0913 23:45:17.480476   47999 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19640-2224/kubeconfig
	I0913 23:45:17.483081   47999 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19640-2224/.minikube
	I0913 23:45:17.485909   47999 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0913 23:45:17.488418   47999 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0913 23:45:17.491236   47999 config.go:182] Loaded profile config "functional-657116": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0913 23:45:17.491823   47999 driver.go:394] Setting default libvirt URI to qemu:///system
	I0913 23:45:17.517655   47999 docker.go:123] docker version: linux-27.2.1:Docker Engine - Community
	I0913 23:45:17.517821   47999 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0913 23:45:17.577601   47999 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:32 OomKillDisable:true NGoroutines:52 SystemTime:2024-09-13 23:45:17.56802377 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1069-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarc
h64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214827008 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:27.2.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerError
s:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.2]] Warnings:<nil>}}
	I0913 23:45:17.577719   47999 docker.go:318] overlay module found
	I0913 23:45:17.581572   47999 out.go:177] * Utilisation du pilote docker basé sur le profil existant
	I0913 23:45:17.583482   47999 start.go:297] selected driver: docker
	I0913 23:45:17.583498   47999 start.go:901] validating driver "docker" against &{Name:functional-657116 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726243947-19640@sha256:bb1287c9c0ec51ba7d8272f0f8073d6e9758ad79ff87c787fdce1c3513743243 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:functional-657116 Namespace:default APIServerHAVIP: APIServerNa
me:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker Mou
ntIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0913 23:45:17.583601   47999 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0913 23:45:17.586513   47999 out.go:201] 
	W0913 23:45:17.588500   47999 out.go:270] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I0913 23:45:17.590483   47999 out.go:201] 

** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.19s)

TestFunctional/parallel/StatusCmd (1.17s)

=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:854: (dbg) Run:  out/minikube-linux-arm64 -p functional-657116 status
functional_test.go:860: (dbg) Run:  out/minikube-linux-arm64 -p functional-657116 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:872: (dbg) Run:  out/minikube-linux-arm64 -p functional-657116 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (1.17s)

TestFunctional/parallel/ServiceCmdConnect (11.67s)

=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1627: (dbg) Run:  kubectl --context functional-657116 create deployment hello-node-connect --image=registry.k8s.io/echoserver-arm:1.8
functional_test.go:1635: (dbg) Run:  kubectl --context functional-657116 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1640: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:344: "hello-node-connect-65d86f57f4-skgld" [7634e510-ee01-474d-ad66-107176e9b772] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver-arm]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver-arm])
helpers_test.go:344: "hello-node-connect-65d86f57f4-skgld" [7634e510-ee01-474d-ad66-107176e9b772] Running
functional_test.go:1640: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 11.003009859s
functional_test.go:1649: (dbg) Run:  out/minikube-linux-arm64 -p functional-657116 service hello-node-connect --url
functional_test.go:1655: found endpoint for hello-node-connect: http://192.168.49.2:30117
functional_test.go:1675: http://192.168.49.2:30117: success! body:

Hostname: hello-node-connect-65d86f57f4-skgld

Pod Information:
	-no pod information available-

Server values:
	server_version=nginx: 1.13.3 - lua: 10008

Request Information:
	client_address=10.244.0.1
	method=GET
	real path=/
	query=
	request_version=1.1
	request_uri=http://192.168.49.2:8080/

Request Headers:
	accept-encoding=gzip
	host=192.168.49.2:30117
	user-agent=Go-http-client/1.1

Request Body:
	-no body in request-

--- PASS: TestFunctional/parallel/ServiceCmdConnect (11.67s)

TestFunctional/parallel/AddonsCmd (0.17s)

=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1690: (dbg) Run:  out/minikube-linux-arm64 -p functional-657116 addons list
functional_test.go:1702: (dbg) Run:  out/minikube-linux-arm64 -p functional-657116 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.17s)

TestFunctional/parallel/PersistentVolumeClaim (25.37s)

=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:344: "storage-provisioner" [d697ea91-305f-4158-b1ca-3ab40044e9ba] Running
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 5.003932278s
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context functional-657116 get storageclass -o=json
functional_test_pvc_test.go:69: (dbg) Run:  kubectl --context functional-657116 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-657116 get pvc myclaim -o=json
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-657116 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [f25e94d8-5a2a-4047-9c16-922a3c2ea605] Pending
helpers_test.go:344: "sp-pod" [f25e94d8-5a2a-4047-9c16-922a3c2ea605] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [f25e94d8-5a2a-4047-9c16-922a3c2ea605] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 11.004034533s
functional_test_pvc_test.go:100: (dbg) Run:  kubectl --context functional-657116 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-657116 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:106: (dbg) Done: kubectl --context functional-657116 delete -f testdata/storage-provisioner/pod.yaml: (1.370558177s)
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-657116 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [2f1bae10-5a66-464e-a700-7fb67dda649f] Pending
helpers_test.go:344: "sp-pod" [2f1bae10-5a66-464e-a700-7fb67dda649f] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [2f1bae10-5a66-464e-a700-7fb67dda649f] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 7.003816734s
functional_test_pvc_test.go:114: (dbg) Run:  kubectl --context functional-657116 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (25.37s)

TestFunctional/parallel/SSHCmd (0.7s)

=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1725: (dbg) Run:  out/minikube-linux-arm64 -p functional-657116 ssh "echo hello"
functional_test.go:1742: (dbg) Run:  out/minikube-linux-arm64 -p functional-657116 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.70s)

TestFunctional/parallel/CpCmd (2.29s)

=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p functional-657116 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p functional-657116 ssh -n functional-657116 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p functional-657116 cp functional-657116:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd433267583/001/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p functional-657116 ssh -n functional-657116 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p functional-657116 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p functional-657116 ssh -n functional-657116 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (2.29s)

TestFunctional/parallel/FileSync (0.38s)

=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1929: Checking for existence of /etc/test/nested/copy/7536/hosts within VM
functional_test.go:1931: (dbg) Run:  out/minikube-linux-arm64 -p functional-657116 ssh "sudo cat /etc/test/nested/copy/7536/hosts"
functional_test.go:1936: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.38s)

TestFunctional/parallel/CertSync (2.1s)

=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1972: Checking for existence of /etc/ssl/certs/7536.pem within VM
functional_test.go:1973: (dbg) Run:  out/minikube-linux-arm64 -p functional-657116 ssh "sudo cat /etc/ssl/certs/7536.pem"
functional_test.go:1972: Checking for existence of /usr/share/ca-certificates/7536.pem within VM
functional_test.go:1973: (dbg) Run:  out/minikube-linux-arm64 -p functional-657116 ssh "sudo cat /usr/share/ca-certificates/7536.pem"
functional_test.go:1972: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1973: (dbg) Run:  out/minikube-linux-arm64 -p functional-657116 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:1999: Checking for existence of /etc/ssl/certs/75362.pem within VM
functional_test.go:2000: (dbg) Run:  out/minikube-linux-arm64 -p functional-657116 ssh "sudo cat /etc/ssl/certs/75362.pem"
functional_test.go:1999: Checking for existence of /usr/share/ca-certificates/75362.pem within VM
functional_test.go:2000: (dbg) Run:  out/minikube-linux-arm64 -p functional-657116 ssh "sudo cat /usr/share/ca-certificates/75362.pem"
functional_test.go:1999: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:2000: (dbg) Run:  out/minikube-linux-arm64 -p functional-657116 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (2.10s)

TestFunctional/parallel/NodeLabels (0.11s)

=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:219: (dbg) Run:  kubectl --context functional-657116 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.11s)

TestFunctional/parallel/NonActiveRuntimeDisabled (0.31s)

=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2027: (dbg) Run:  out/minikube-linux-arm64 -p functional-657116 ssh "sudo systemctl is-active crio"
functional_test.go:2027: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-657116 ssh "sudo systemctl is-active crio": exit status 1 (312.137609ms)

-- stdout --
	inactive

-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.31s)

TestFunctional/parallel/License (0.28s)

=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

=== CONT  TestFunctional/parallel/License
functional_test.go:2288: (dbg) Run:  out/minikube-linux-arm64 license
--- PASS: TestFunctional/parallel/License (0.28s)

TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.58s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-arm64 -p functional-657116 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-arm64 -p functional-657116 tunnel --alsologtostderr]
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-arm64 -p functional-657116 tunnel --alsologtostderr] ...
helpers_test.go:508: unable to kill pid 45322: os: process already finished
helpers_test.go:502: unable to terminate pid 45137: os: process already finished
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-arm64 -p functional-657116 tunnel --alsologtostderr] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.58s)

TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-linux-arm64 -p functional-657116 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.00s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (9.51s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:212: (dbg) Run:  kubectl --context functional-657116 apply -f testdata/testsvc.yaml
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: waiting 4m0s for pods matching "run=nginx-svc" in namespace "default" ...
helpers_test.go:344: "nginx-svc" [6e3d2fe5-37d3-443a-8603-628d1d73d265] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx-svc" [6e3d2fe5-37d3-443a-8603-628d1d73d265] Running
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: run=nginx-svc healthy within 9.004486333s
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (9.51s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.11s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:234: (dbg) Run:  kubectl --context functional-657116 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.11s)

TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:299: tunnel at http://10.103.135.80 is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-linux-arm64 -p functional-657116 tunnel --alsologtostderr] ...
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

TestFunctional/parallel/ServiceCmd/DeployApp (7.22s)

=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1437: (dbg) Run:  kubectl --context functional-657116 create deployment hello-node --image=registry.k8s.io/echoserver-arm:1.8
functional_test.go:1445: (dbg) Run:  kubectl --context functional-657116 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1450: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:344: "hello-node-64b4f8f9ff-wg2mn" [b03a6221-e5a9-43d8-abd8-60482b7047bd] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver-arm]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver-arm])
helpers_test.go:344: "hello-node-64b4f8f9ff-wg2mn" [b03a6221-e5a9-43d8-abd8-60482b7047bd] Running
functional_test.go:1450: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 7.003923817s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (7.22s)

TestFunctional/parallel/ProfileCmd/profile_not_create (0.41s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1270: (dbg) Run:  out/minikube-linux-arm64 profile lis
functional_test.go:1275: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.41s)

TestFunctional/parallel/ProfileCmd/profile_list (0.38s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1310: (dbg) Run:  out/minikube-linux-arm64 profile list
functional_test.go:1315: Took "325.032128ms" to run "out/minikube-linux-arm64 profile list"
functional_test.go:1324: (dbg) Run:  out/minikube-linux-arm64 profile list -l
functional_test.go:1329: Took "53.637194ms" to run "out/minikube-linux-arm64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.38s)

TestFunctional/parallel/ProfileCmd/profile_json_output (0.39s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1361: (dbg) Run:  out/minikube-linux-arm64 profile list -o json
functional_test.go:1366: Took "326.607996ms" to run "out/minikube-linux-arm64 profile list -o json"
functional_test.go:1374: (dbg) Run:  out/minikube-linux-arm64 profile list -o json --light
functional_test.go:1379: Took "58.105572ms" to run "out/minikube-linux-arm64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.39s)

TestFunctional/parallel/MountCmd/any-port (8.33s)

=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-657116 /tmp/TestFunctionalparallelMountCmdany-port906232467/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1726271112221860240" to /tmp/TestFunctionalparallelMountCmdany-port906232467/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1726271112221860240" to /tmp/TestFunctionalparallelMountCmdany-port906232467/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1726271112221860240" to /tmp/TestFunctionalparallelMountCmdany-port906232467/001/test-1726271112221860240
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-arm64 -p functional-657116 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-657116 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (304.302246ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-arm64 -p functional-657116 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-arm64 -p functional-657116 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Sep 13 23:45 created-by-test
-rw-r--r-- 1 docker docker 24 Sep 13 23:45 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Sep 13 23:45 test-1726271112221860240
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-arm64 -p functional-657116 ssh cat /mount-9p/test-1726271112221860240
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-657116 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:344: "busybox-mount" [4f82caaf-ecdf-402d-9537-ae62489eb5e0] Pending
helpers_test.go:344: "busybox-mount" [4f82caaf-ecdf-402d-9537-ae62489eb5e0] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:344: "busybox-mount" [4f82caaf-ecdf-402d-9537-ae62489eb5e0] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "busybox-mount" [4f82caaf-ecdf-402d-9537-ae62489eb5e0] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 5.007352753s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-657116 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p functional-657116 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p functional-657116 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-arm64 -p functional-657116 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-657116 /tmp/TestFunctionalparallelMountCmdany-port906232467/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (8.33s)

TestFunctional/parallel/ServiceCmd/List (0.62s)

=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1459: (dbg) Run:  out/minikube-linux-arm64 -p functional-657116 service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.62s)

TestFunctional/parallel/ServiceCmd/JSONOutput (0.57s)

=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1489: (dbg) Run:  out/minikube-linux-arm64 -p functional-657116 service list -o json
functional_test.go:1494: Took "567.801545ms" to run "out/minikube-linux-arm64 -p functional-657116 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.57s)

TestFunctional/parallel/ServiceCmd/HTTPS (0.38s)

=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1509: (dbg) Run:  out/minikube-linux-arm64 -p functional-657116 service --namespace=default --https --url hello-node
functional_test.go:1522: found endpoint: https://192.168.49.2:30363
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.38s)

TestFunctional/parallel/ServiceCmd/Format (0.36s)

=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1540: (dbg) Run:  out/minikube-linux-arm64 -p functional-657116 service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.36s)

TestFunctional/parallel/ServiceCmd/URL (0.39s)

=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1559: (dbg) Run:  out/minikube-linux-arm64 -p functional-657116 service hello-node --url
functional_test.go:1565: found endpoint for hello-node: http://192.168.49.2:30363
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.39s)

TestFunctional/parallel/MountCmd/specific-port (2.16s)

=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-657116 /tmp/TestFunctionalparallelMountCmdspecific-port1776585513/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-arm64 -p functional-657116 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-657116 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (338.934714ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-arm64 -p functional-657116 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-arm64 -p functional-657116 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-657116 /tmp/TestFunctionalparallelMountCmdspecific-port1776585513/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-arm64 -p functional-657116 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-657116 ssh "sudo umount -f /mount-9p": exit status 1 (338.193668ms)

-- stdout --
	umount: /mount-9p: not mounted.

-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-arm64 -p functional-657116 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-657116 /tmp/TestFunctionalparallelMountCmdspecific-port1776585513/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (2.16s)
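The retried check above pipes `findmnt` through `grep`, so the exit status the test sees is grep's: "exit status 1" only means no 9p filesystem backs the path yet. A minimal stand-in using canned findmnt-style output (the real test assumes a live 9p mount at the hypothetical `/mount-9p`):

```shell
#!/bin/sh
# Stand-in for `findmnt -T /mount-9p | grep 9p`: the pipeline exits with
# grep's status, so 1 simply means "no 9p entry yet", not an ssh failure.
mounted='TARGET     SOURCE FSTYPE OPTIONS
/mount-9p  host   9p     rw'
unmounted='TARGET SOURCE FSTYPE OPTIONS'

up=$(printf '%s\n' "$mounted"   | grep -c 9p)   # 1 matching line: mount is up
down=$(printf '%s\n' "$unmounted" | grep -c 9p) # 0 matches, grep exits 1
echo "$up $down"
```

This is why the harness simply reruns the command until the mount daemon has attached.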

TestFunctional/parallel/MountCmd/VerifyCleanup (2.76s)

=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-657116 /tmp/TestFunctionalparallelMountCmdVerifyCleanup444664108/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-657116 /tmp/TestFunctionalparallelMountCmdVerifyCleanup444664108/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-657116 /tmp/TestFunctionalparallelMountCmdVerifyCleanup444664108/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-657116 ssh "findmnt -T" /mount1
E0913 23:45:22.712976    7536 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19640-2224/.minikube/profiles/addons-467916/client.crt: no such file or directory" logger="UnhandledError"
E0913 23:45:22.725445    7536 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19640-2224/.minikube/profiles/addons-467916/client.crt: no such file or directory" logger="UnhandledError"
E0913 23:45:22.737437    7536 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19640-2224/.minikube/profiles/addons-467916/client.crt: no such file or directory" logger="UnhandledError"
E0913 23:45:22.759966    7536 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19640-2224/.minikube/profiles/addons-467916/client.crt: no such file or directory" logger="UnhandledError"
E0913 23:45:22.802462    7536 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19640-2224/.minikube/profiles/addons-467916/client.crt: no such file or directory" logger="UnhandledError"
E0913 23:45:22.885886    7536 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19640-2224/.minikube/profiles/addons-467916/client.crt: no such file or directory" logger="UnhandledError"
E0913 23:45:23.050537    7536 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19640-2224/.minikube/profiles/addons-467916/client.crt: no such file or directory" logger="UnhandledError"
E0913 23:45:23.374420    7536 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19640-2224/.minikube/profiles/addons-467916/client.crt: no such file or directory" logger="UnhandledError"
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-657116 ssh "findmnt -T" /mount1: exit status 1 (984.476202ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
E0913 23:45:24.016798    7536 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19640-2224/.minikube/profiles/addons-467916/client.crt: no such file or directory" logger="UnhandledError"
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-657116 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-657116 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-657116 ssh "findmnt -T" /mount3
E0913 23:45:25.298934    7536 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19640-2224/.minikube/profiles/addons-467916/client.crt: no such file or directory" logger="UnhandledError"
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-arm64 mount -p functional-657116 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-657116 /tmp/TestFunctionalparallelMountCmdVerifyCleanup444664108/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-657116 /tmp/TestFunctionalparallelMountCmdVerifyCleanup444664108/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-657116 /tmp/TestFunctionalparallelMountCmdVerifyCleanup444664108/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (2.76s)
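The cleanup path above stops each backgrounded `minikube mount` daemon and treats an already-gone process as success ("unable to find parent, assuming dead"). The same stop-then-verify pattern, with a plain `sleep` standing in for the mount daemon:

```shell
#!/bin/sh
# Start a background process, kill it, and confirm it is gone -- the flow
# VerifyCleanup applies to its three mount daemons (sleep is a stand-in).
sleep 60 &
pid=$!
kill "$pid"
wait "$pid" 2>/dev/null    # reap it; a non-zero wait status is expected here
if kill -0 "$pid" 2>/dev/null; then state=alive; else state=dead; fi
echo "$state"
```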

TestFunctional/parallel/Version/short (0.09s)

=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2256: (dbg) Run:  out/minikube-linux-arm64 -p functional-657116 version --short
--- PASS: TestFunctional/parallel/Version/short (0.09s)

TestFunctional/parallel/Version/components (1.13s)

=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2270: (dbg) Run:  out/minikube-linux-arm64 -p functional-657116 version -o=json --components
functional_test.go:2270: (dbg) Done: out/minikube-linux-arm64 -p functional-657116 version -o=json --components: (1.129734556s)
--- PASS: TestFunctional/parallel/Version/components (1.13s)

TestFunctional/parallel/ImageCommands/ImageListShort (0.23s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:261: (dbg) Run:  out/minikube-linux-arm64 -p functional-657116 image ls --format short --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-arm64 -p functional-657116 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.10
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.31.1
registry.k8s.io/kube-proxy:v1.31.1
registry.k8s.io/kube-controller-manager:v1.31.1
registry.k8s.io/kube-apiserver:v1.31.1
registry.k8s.io/etcd:3.5.15-0
registry.k8s.io/echoserver-arm:1.8
registry.k8s.io/coredns/coredns:v1.11.3
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
docker.io/library/nginx:latest
docker.io/library/nginx:alpine
docker.io/library/minikube-local-cache-test:functional-657116
docker.io/kubernetesui/metrics-scraper:<none>
docker.io/kubernetesui/dashboard:<none>
docker.io/kicbase/echo-server:functional-657116
functional_test.go:269: (dbg) Stderr: out/minikube-linux-arm64 -p functional-657116 image ls --format short --alsologtostderr:
I0913 23:45:33.173573   51082 out.go:345] Setting OutFile to fd 1 ...
I0913 23:45:33.173815   51082 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0913 23:45:33.173843   51082 out.go:358] Setting ErrFile to fd 2...
I0913 23:45:33.173860   51082 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0913 23:45:33.174229   51082 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19640-2224/.minikube/bin
I0913 23:45:33.174995   51082 config.go:182] Loaded profile config "functional-657116": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I0913 23:45:33.175162   51082 config.go:182] Loaded profile config "functional-657116": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I0913 23:45:33.175674   51082 cli_runner.go:164] Run: docker container inspect functional-657116 --format={{.State.Status}}
I0913 23:45:33.196735   51082 ssh_runner.go:195] Run: systemctl --version
I0913 23:45:33.196798   51082 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-657116
I0913 23:45:33.220843   51082 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32779 SSHKeyPath:/home/jenkins/minikube-integration/19640-2224/.minikube/machines/functional-657116/id_rsa Username:docker}
I0913 23:45:33.308987   51082 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.23s)

TestFunctional/parallel/ImageCommands/ImageListTable (0.23s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:261: (dbg) Run:  out/minikube-linux-arm64 -p functional-657116 image ls --format table --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-arm64 -p functional-657116 image ls --format table --alsologtostderr:
|---------------------------------------------|-------------------|---------------|--------|
|                    Image                    |        Tag        |   Image ID    |  Size  |
|---------------------------------------------|-------------------|---------------|--------|
| docker.io/library/minikube-local-cache-test | functional-657116 | c63ba8a1c4aed | 30B    |
| registry.k8s.io/kube-proxy                  | v1.31.1           | 24a140c548c07 | 94.7MB |
| registry.k8s.io/coredns/coredns             | v1.11.3           | 2f6c962e7b831 | 60.2MB |
| registry.k8s.io/pause                       | 3.10              | afb61768ce381 | 514kB  |
| gcr.io/k8s-minikube/storage-provisioner     | v5                | ba04bb24b9575 | 29MB   |
| gcr.io/k8s-minikube/busybox                 | 1.28.4-glibc      | 1611cd07b61d5 | 3.55MB |
| registry.k8s.io/echoserver-arm              | 1.8               | 72565bf5bbedf | 85MB   |
| registry.k8s.io/kube-controller-manager     | v1.31.1           | 279f381cb3736 | 85.9MB |
| docker.io/library/nginx                     | latest            | 195245f0c7927 | 193MB  |
| docker.io/kubernetesui/metrics-scraper      | <none>            | a422e0e982356 | 42.3MB |
| registry.k8s.io/pause                       | latest            | 8cb2091f603e7 | 240kB  |
| registry.k8s.io/kube-scheduler              | v1.31.1           | 7f8aa378bb47d | 66MB   |
| docker.io/library/nginx                     | alpine            | b887aca7aed61 | 47MB   |
| docker.io/kubernetesui/dashboard            | <none>            | 20b332c9a70d8 | 244MB  |
| registry.k8s.io/kube-apiserver              | v1.31.1           | d3f53a98c0a9d | 91.6MB |
| registry.k8s.io/etcd                        | 3.5.15-0          | 27e3830e14027 | 139MB  |
| docker.io/kicbase/echo-server               | functional-657116 | ce2d2cda2d858 | 4.78MB |
| registry.k8s.io/pause                       | 3.3               | 3d18732f8686c | 484kB  |
| registry.k8s.io/pause                       | 3.1               | 8057e0500773a | 525kB  |
|---------------------------------------------|-------------------|---------------|--------|
functional_test.go:269: (dbg) Stderr: out/minikube-linux-arm64 -p functional-657116 image ls --format table --alsologtostderr:
I0913 23:45:34.148597   51357 out.go:345] Setting OutFile to fd 1 ...
I0913 23:45:34.148825   51357 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0913 23:45:34.148853   51357 out.go:358] Setting ErrFile to fd 2...
I0913 23:45:34.148873   51357 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0913 23:45:34.149165   51357 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19640-2224/.minikube/bin
I0913 23:45:34.149840   51357 config.go:182] Loaded profile config "functional-657116": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I0913 23:45:34.150025   51357 config.go:182] Loaded profile config "functional-657116": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I0913 23:45:34.150791   51357 cli_runner.go:164] Run: docker container inspect functional-657116 --format={{.State.Status}}
I0913 23:45:34.187557   51357 ssh_runner.go:195] Run: systemctl --version
I0913 23:45:34.187605   51357 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-657116
I0913 23:45:34.205543   51357 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32779 SSHKeyPath:/home/jenkins/minikube-integration/19640-2224/.minikube/machines/functional-657116/id_rsa Username:docker}
I0913 23:45:34.293035   51357 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.23s)

TestFunctional/parallel/ImageCommands/ImageListJson (0.25s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:261: (dbg) Run:  out/minikube-linux-arm64 -p functional-657116 image ls --format json --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-arm64 -p functional-657116 image ls --format json --alsologtostderr:
[{"id":"2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4","repoDigests":[],"repoTags":["registry.k8s.io/coredns/coredns:v1.11.3"],"size":"60200000"},{"id":"ce2d2cda2d858fdaea84129deb86d18e5dbf1c548f230b79fdca74cc91729d17","repoDigests":[],"repoTags":["docker.io/kicbase/echo-server:functional-657116"],"size":"4780000"},{"id":"8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.1"],"size":"525000"},{"id":"8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a","repoDigests":[],"repoTags":["registry.k8s.io/pause:latest"],"size":"240000"},{"id":"279f381cb37365bbbcd133c9531fba9c2beb0f38dbbe6ddfcd0b1b1643d3450e","repoDigests":[],"repoTags":["registry.k8s.io/kube-controller-manager:v1.31.1"],"size":"85900000"},{"id":"24a140c548c075e487e45d0ee73b1aa89f8bfb40c08a57e05975559728822b1d","repoDigests":[],"repoTags":["registry.k8s.io/kube-proxy:v1.31.1"],"size":"94700000"},{"id":"195245f0c79279e8b8e012efa02c91dad4cf7d0e44c0f4382fea68cd93088e6c","repoDigests":[],"repoTags":["docker.io/library/nginx:latest"],"size":"193000000"},{"id":"27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da","repoDigests":[],"repoTags":["registry.k8s.io/etcd:3.5.15-0"],"size":"139000000"},{"id":"3d18732f8686cc3c878055d99a05fa80289502fa496b36b6a0fe0f77206a7300","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.3"],"size":"484000"},{"id":"ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6","repoDigests":[],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"29000000"},{"id":"1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c","repoDigests":[],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"3550000"},{"id":"c63ba8a1c4aedb09cc96b216c7fdb7e4cb66c8364b2de83d8b354e606e1f3303","repoDigests":[],"repoTags":["docker.io/library/minikube-local-cache-test:functional-657116"],"size":"30"},{"id":"7f8aa378bb47dffcf430f3a601abe39137e88aee0238e23ed8530fdd18dab82d","repoDigests":[],"repoTags":["registry.k8s.io/kube-scheduler:v1.31.1"],"size":"66000000"},{"id":"afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.10"],"size":"514000"},{"id":"20b332c9a70d8516d849d1ac23eff5800cbb2f263d379f0ec11ee908db6b25a8","repoDigests":[],"repoTags":["docker.io/kubernetesui/dashboard:\u003cnone\u003e"],"size":"244000000"},{"id":"a422e0e982356f6c1cf0e5bb7b733363caae3992a07c99951fbcc73e58ed656a","repoDigests":[],"repoTags":["docker.io/kubernetesui/metrics-scraper:\u003cnone\u003e"],"size":"42300000"},{"id":"d3f53a98c0a9d9163c4848bcf34b2d2f5e1e3691b79f3d1dd6d0206809e02853","repoDigests":[],"repoTags":["registry.k8s.io/kube-apiserver:v1.31.1"],"size":"91600000"},{"id":"b887aca7aed6134b029401507d27ac9c8fbfc5a6cf510d254bdf4ac841cf1552","repoDigests":[],"repoTags":["docker.io/library/nginx:alpine"],"size":"47000000"},{"id":"72565bf5bbedfb62e9d21afa2b1221b2c7a5e05b746dae33430bc550d3f87beb","repoDigests":[],"repoTags":["registry.k8s.io/echoserver-arm:1.8"],"size":"85000000"}]
functional_test.go:269: (dbg) Stderr: out/minikube-linux-arm64 -p functional-657116 image ls --format json --alsologtostderr:
I0913 23:45:33.904233   51256 out.go:345] Setting OutFile to fd 1 ...
I0913 23:45:33.904405   51256 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0913 23:45:33.904415   51256 out.go:358] Setting ErrFile to fd 2...
I0913 23:45:33.904421   51256 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0913 23:45:33.904799   51256 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19640-2224/.minikube/bin
I0913 23:45:33.905777   51256 config.go:182] Loaded profile config "functional-657116": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I0913 23:45:33.905909   51256 config.go:182] Loaded profile config "functional-657116": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I0913 23:45:33.906614   51256 cli_runner.go:164] Run: docker container inspect functional-657116 --format={{.State.Status}}
I0913 23:45:33.930061   51256 ssh_runner.go:195] Run: systemctl --version
I0913 23:45:33.930116   51256 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-657116
I0913 23:45:33.950985   51256 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32779 SSHKeyPath:/home/jenkins/minikube-integration/19640-2224/.minikube/machines/functional-657116/id_rsa Username:docker}
I0913 23:45:34.053756   51256 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.25s)
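The JSON listing above is a single flat array, which makes it the machine-friendly variant of `image ls`. A rough jq-free sketch of pulling the first `repoTags` entry per image out of that shape (shown over a shortened two-image sample, not the full output; a real consumer should prefer jq):

```shell
#!/bin/sh
# Extract the first repoTags value from `image ls --format json`-style
# output (assumes the flat, single-line array shape shown above).
json='[{"id":"afb6","repoTags":["registry.k8s.io/pause:3.10"],"size":"514000"},{"id":"c63b","repoTags":["docker.io/library/minikube-local-cache-test:functional-657116"],"size":"30"}]'
tags=$(printf '%s\n' "$json" | tr '{' '\n' | sed -n 's/.*"repoTags":\["\([^"]*\)".*/\1/p')
printf '%s\n' "$tags"
```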

TestFunctional/parallel/ImageCommands/ImageListYaml (0.24s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:261: (dbg) Run:  out/minikube-linux-arm64 -p functional-657116 image ls --format yaml --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-arm64 -p functional-657116 image ls --format yaml --alsologtostderr:
- id: 1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c
repoDigests: []
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "3550000"
- id: 7f8aa378bb47dffcf430f3a601abe39137e88aee0238e23ed8530fdd18dab82d
repoDigests: []
repoTags:
- registry.k8s.io/kube-scheduler:v1.31.1
size: "66000000"
- id: 279f381cb37365bbbcd133c9531fba9c2beb0f38dbbe6ddfcd0b1b1643d3450e
repoDigests: []
repoTags:
- registry.k8s.io/kube-controller-manager:v1.31.1
size: "85900000"
- id: 20b332c9a70d8516d849d1ac23eff5800cbb2f263d379f0ec11ee908db6b25a8
repoDigests: []
repoTags:
- docker.io/kubernetesui/dashboard:<none>
size: "244000000"
- id: a422e0e982356f6c1cf0e5bb7b733363caae3992a07c99951fbcc73e58ed656a
repoDigests: []
repoTags:
- docker.io/kubernetesui/metrics-scraper:<none>
size: "42300000"
- id: 8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.1
size: "525000"
- id: b887aca7aed6134b029401507d27ac9c8fbfc5a6cf510d254bdf4ac841cf1552
repoDigests: []
repoTags:
- docker.io/library/nginx:alpine
size: "47000000"
- id: 27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da
repoDigests: []
repoTags:
- registry.k8s.io/etcd:3.5.15-0
size: "139000000"
- id: afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.10
size: "514000"
- id: 8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a
repoDigests: []
repoTags:
- registry.k8s.io/pause:latest
size: "240000"
- id: c63ba8a1c4aedb09cc96b216c7fdb7e4cb66c8364b2de83d8b354e606e1f3303
repoDigests: []
repoTags:
- docker.io/library/minikube-local-cache-test:functional-657116
size: "30"
- id: 2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4
repoDigests: []
repoTags:
- registry.k8s.io/coredns/coredns:v1.11.3
size: "60200000"
- id: ce2d2cda2d858fdaea84129deb86d18e5dbf1c548f230b79fdca74cc91729d17
repoDigests: []
repoTags:
- docker.io/kicbase/echo-server:functional-657116
size: "4780000"
- id: ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6
repoDigests: []
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "29000000"
- id: 3d18732f8686cc3c878055d99a05fa80289502fa496b36b6a0fe0f77206a7300
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.3
size: "484000"
- id: 72565bf5bbedfb62e9d21afa2b1221b2c7a5e05b746dae33430bc550d3f87beb
repoDigests: []
repoTags:
- registry.k8s.io/echoserver-arm:1.8
size: "85000000"
- id: d3f53a98c0a9d9163c4848bcf34b2d2f5e1e3691b79f3d1dd6d0206809e02853
repoDigests: []
repoTags:
- registry.k8s.io/kube-apiserver:v1.31.1
size: "91600000"
- id: 24a140c548c075e487e45d0ee73b1aa89f8bfb40c08a57e05975559728822b1d
repoDigests: []
repoTags:
- registry.k8s.io/kube-proxy:v1.31.1
size: "94700000"
- id: 195245f0c79279e8b8e012efa02c91dad4cf7d0e44c0f4382fea68cd93088e6c
repoDigests: []
repoTags:
- docker.io/library/nginx:latest
size: "193000000"

functional_test.go:269: (dbg) Stderr: out/minikube-linux-arm64 -p functional-657116 image ls --format yaml --alsologtostderr:
I0913 23:45:33.664867   51195 out.go:345] Setting OutFile to fd 1 ...
I0913 23:45:33.665088   51195 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0913 23:45:33.665118   51195 out.go:358] Setting ErrFile to fd 2...
I0913 23:45:33.665138   51195 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0913 23:45:33.665393   51195 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19640-2224/.minikube/bin
I0913 23:45:33.666074   51195 config.go:182] Loaded profile config "functional-657116": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I0913 23:45:33.666237   51195 config.go:182] Loaded profile config "functional-657116": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I0913 23:45:33.666766   51195 cli_runner.go:164] Run: docker container inspect functional-657116 --format={{.State.Status}}
I0913 23:45:33.702047   51195 ssh_runner.go:195] Run: systemctl --version
I0913 23:45:33.702100   51195 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-657116
I0913 23:45:33.721474   51195 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32779 SSHKeyPath:/home/jenkins/minikube-integration/19640-2224/.minikube/machines/functional-657116/id_rsa Username:docker}
I0913 23:45:33.808849   51195 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.24s)

TestFunctional/parallel/ImageCommands/ImageBuild (3.28s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:308: (dbg) Run:  out/minikube-linux-arm64 -p functional-657116 ssh pgrep buildkitd
functional_test.go:308: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-657116 ssh pgrep buildkitd: exit status 1 (285.412431ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:315: (dbg) Run:  out/minikube-linux-arm64 -p functional-657116 image build -t localhost/my-image:functional-657116 testdata/build --alsologtostderr
functional_test.go:315: (dbg) Done: out/minikube-linux-arm64 -p functional-657116 image build -t localhost/my-image:functional-657116 testdata/build --alsologtostderr: (2.794419505s)
functional_test.go:323: (dbg) Stderr: out/minikube-linux-arm64 -p functional-657116 image build -t localhost/my-image:functional-657116 testdata/build --alsologtostderr:
I0913 23:45:33.694422   51201 out.go:345] Setting OutFile to fd 1 ...
I0913 23:45:33.694707   51201 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0913 23:45:33.694720   51201 out.go:358] Setting ErrFile to fd 2...
I0913 23:45:33.694726   51201 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0913 23:45:33.695222   51201 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19640-2224/.minikube/bin
I0913 23:45:33.696666   51201 config.go:182] Loaded profile config "functional-657116": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I0913 23:45:33.699038   51201 config.go:182] Loaded profile config "functional-657116": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I0913 23:45:33.699607   51201 cli_runner.go:164] Run: docker container inspect functional-657116 --format={{.State.Status}}
I0913 23:45:33.719874   51201 ssh_runner.go:195] Run: systemctl --version
I0913 23:45:33.719949   51201 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-657116
I0913 23:45:33.745612   51201 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32779 SSHKeyPath:/home/jenkins/minikube-integration/19640-2224/.minikube/machines/functional-657116/id_rsa Username:docker}
I0913 23:45:33.835457   51201 build_images.go:161] Building image from path: /tmp/build.1933086686.tar
I0913 23:45:33.835522   51201 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I0913 23:45:33.852805   51201 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.1933086686.tar
I0913 23:45:33.867158   51201 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.1933086686.tar: stat -c "%s %y" /var/lib/minikube/build/build.1933086686.tar: Process exited with status 1
stdout:

stderr:
stat: cannot statx '/var/lib/minikube/build/build.1933086686.tar': No such file or directory
I0913 23:45:33.867188   51201 ssh_runner.go:362] scp /tmp/build.1933086686.tar --> /var/lib/minikube/build/build.1933086686.tar (3072 bytes)
I0913 23:45:33.901610   51201 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.1933086686
I0913 23:45:33.911281   51201 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.1933086686 -xf /var/lib/minikube/build/build.1933086686.tar
I0913 23:45:33.922168   51201 docker.go:360] Building image: /var/lib/minikube/build/build.1933086686
I0913 23:45:33.922244   51201 ssh_runner.go:195] Run: docker build -t localhost/my-image:functional-657116 /var/lib/minikube/build/build.1933086686
#0 building with "default" instance using docker driver

#1 [internal] load build definition from Dockerfile
#1 transferring dockerfile: 97B done
#1 DONE 0.0s

#2 [internal] load metadata for gcr.io/k8s-minikube/busybox:latest
#2 DONE 1.2s

#3 [internal] load .dockerignore
#3 transferring context: 2B done
#3 DONE 0.0s

#4 [internal] load build context
#4 transferring context: 62B done
#4 DONE 0.0s

#5 [1/3] FROM gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b
#5 resolve gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b 0.0s done
#5 sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34 0B / 828.50kB 0.1s
#5 sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b 770B / 770B done
#5 sha256:a77fe109c026308f149d36484d795b42efe0fd29b332be9071f63e1634c36ac9 527B / 527B done
#5 sha256:71a676dd070f4b701c3272e566d84951362f1326ea07d5bbad119d1c4f6b3d02 1.47kB / 1.47kB done
#5 sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34 828.50kB / 828.50kB 0.5s
#5 sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34 828.50kB / 828.50kB 0.5s done
#5 extracting sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34 0.0s done
#5 DONE 0.6s

                                                
                                                
#6 [2/3] RUN true
#6 DONE 0.2s

                                                
                                                
#7 [3/3] ADD content.txt /
#7 DONE 0.0s

                                                
                                                
#8 exporting to image
#8 exporting layers 0.0s done
#8 writing image sha256:5394db521b1003b7cac0cdcb75d2f67cc46269712270d8dd2f38e970afaa8b41 done
#8 naming to localhost/my-image:functional-657116 done
#8 DONE 0.1s
I0913 23:45:36.395022   51201 ssh_runner.go:235] Completed: docker build -t localhost/my-image:functional-657116 /var/lib/minikube/build/build.1933086686: (2.472756249s)
I0913 23:45:36.395084   51201 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.1933086686
I0913 23:45:36.404447   51201 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.1933086686.tar
I0913 23:45:36.413141   51201 build_images.go:217] Built localhost/my-image:functional-657116 from /tmp/build.1933086686.tar
I0913 23:45:36.413170   51201 build_images.go:133] succeeded building to: functional-657116
I0913 23:45:36.413176   51201 build_images.go:134] failed building to: 
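Judging from build steps #1 through #8 above (a 97B Dockerfile, a busybox base, a `RUN true` layer, and an `ADD content.txt /` layer), the Dockerfile under test is presumably a minimal three-instruction file along these lines (reconstructed from the log; the actual file contents are not shown in the report):

```dockerfile
# Reconstruction from the BuildKit step log above, not the verbatim testdata file.
FROM gcr.io/k8s-minikube/busybox
RUN true
ADD content.txt /
```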
functional_test.go:451: (dbg) Run:  out/minikube-linux-arm64 -p functional-657116 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (3.28s)

TestFunctional/parallel/ImageCommands/Setup (0.77s)

=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:342: (dbg) Run:  docker pull kicbase/echo-server:1.0
functional_test.go:347: (dbg) Run:  docker tag kicbase/echo-server:1.0 kicbase/echo-server:functional-657116
--- PASS: TestFunctional/parallel/ImageCommands/Setup (0.77s)

TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.22s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:355: (dbg) Run:  out/minikube-linux-arm64 -p functional-657116 image load --daemon kicbase/echo-server:functional-657116 --alsologtostderr
E0913 23:45:27.860837    7536 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19640-2224/.minikube/profiles/addons-467916/client.crt: no such file or directory" logger="UnhandledError"
functional_test.go:451: (dbg) Run:  out/minikube-linux-arm64 -p functional-657116 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.22s)

TestFunctional/parallel/ImageCommands/ImageReloadDaemon (0.85s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:365: (dbg) Run:  out/minikube-linux-arm64 -p functional-657116 image load --daemon kicbase/echo-server:functional-657116 --alsologtostderr
2024/09/13 23:45:28 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
functional_test.go:451: (dbg) Run:  out/minikube-linux-arm64 -p functional-657116 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (0.85s)

TestFunctional/parallel/UpdateContextCmd/no_changes (0.23s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2119: (dbg) Run:  out/minikube-linux-arm64 -p functional-657116 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.23s)

TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.2s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2119: (dbg) Run:  out/minikube-linux-arm64 -p functional-657116 update-context --alsologtostderr -v=2
E0913 23:45:32.983640    7536 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19640-2224/.minikube/profiles/addons-467916/client.crt: no such file or directory" logger="UnhandledError"
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.20s)

TestFunctional/parallel/UpdateContextCmd/no_clusters (0.17s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2119: (dbg) Run:  out/minikube-linux-arm64 -p functional-657116 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.17s)

TestFunctional/parallel/DockerEnv/bash (1.26s)

=== RUN   TestFunctional/parallel/DockerEnv/bash
functional_test.go:499: (dbg) Run:  /bin/bash -c "eval $(out/minikube-linux-arm64 -p functional-657116 docker-env) && out/minikube-linux-arm64 status -p functional-657116"
functional_test.go:522: (dbg) Run:  /bin/bash -c "eval $(out/minikube-linux-arm64 -p functional-657116 docker-env) && docker images"
--- PASS: TestFunctional/parallel/DockerEnv/bash (1.26s)

TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.25s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:235: (dbg) Run:  docker pull kicbase/echo-server:latest
functional_test.go:240: (dbg) Run:  docker tag kicbase/echo-server:latest kicbase/echo-server:functional-657116
functional_test.go:245: (dbg) Run:  out/minikube-linux-arm64 -p functional-657116 image load --daemon kicbase/echo-server:functional-657116 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-linux-arm64 -p functional-657116 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.25s)

TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.39s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:380: (dbg) Run:  out/minikube-linux-arm64 -p functional-657116 image save kicbase/echo-server:functional-657116 /home/jenkins/workspace/Docker_Linux_docker_arm64/echo-server-save.tar --alsologtostderr
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.39s)

TestFunctional/parallel/ImageCommands/ImageRemove (0.52s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:392: (dbg) Run:  out/minikube-linux-arm64 -p functional-657116 image rm kicbase/echo-server:functional-657116 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-linux-arm64 -p functional-657116 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.52s)

TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.77s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:409: (dbg) Run:  out/minikube-linux-arm64 -p functional-657116 image load /home/jenkins/workspace/Docker_Linux_docker_arm64/echo-server-save.tar --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-linux-arm64 -p functional-657116 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.77s)

TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.48s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:419: (dbg) Run:  docker rmi kicbase/echo-server:functional-657116
functional_test.go:424: (dbg) Run:  out/minikube-linux-arm64 -p functional-657116 image save --daemon kicbase/echo-server:functional-657116 --alsologtostderr
functional_test.go:432: (dbg) Run:  docker image inspect kicbase/echo-server:functional-657116
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.48s)

TestFunctional/delete_echo-server_images (0.04s)

=== RUN   TestFunctional/delete_echo-server_images
functional_test.go:190: (dbg) Run:  docker rmi -f kicbase/echo-server:1.0
functional_test.go:190: (dbg) Run:  docker rmi -f kicbase/echo-server:functional-657116
--- PASS: TestFunctional/delete_echo-server_images (0.04s)

TestFunctional/delete_my-image_image (0.01s)

=== RUN   TestFunctional/delete_my-image_image
functional_test.go:198: (dbg) Run:  docker rmi -f localhost/my-image:functional-657116
--- PASS: TestFunctional/delete_my-image_image (0.01s)

TestFunctional/delete_minikube_cached_images (0.02s)

=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:206: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-657116
--- PASS: TestFunctional/delete_minikube_cached_images (0.02s)

TestMultiControlPlane/serial/StartCluster (122.42s)

=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-linux-arm64 start -p ha-298075 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=docker  --container-runtime=docker
E0913 23:45:43.224976    7536 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19640-2224/.minikube/profiles/addons-467916/client.crt: no such file or directory" logger="UnhandledError"
E0913 23:46:03.706518    7536 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19640-2224/.minikube/profiles/addons-467916/client.crt: no such file or directory" logger="UnhandledError"
E0913 23:46:44.668457    7536 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19640-2224/.minikube/profiles/addons-467916/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:101: (dbg) Done: out/minikube-linux-arm64 start -p ha-298075 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=docker  --container-runtime=docker: (2m1.602583907s)
ha_test.go:107: (dbg) Run:  out/minikube-linux-arm64 -p ha-298075 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/StartCluster (122.42s)

TestMultiControlPlane/serial/DeployApp (7.69s)

=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-298075 -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:133: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-298075 -- rollout status deployment/busybox
ha_test.go:133: (dbg) Done: out/minikube-linux-arm64 kubectl -p ha-298075 -- rollout status deployment/busybox: (4.537749629s)
ha_test.go:140: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-298075 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:163: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-298075 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:171: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-298075 -- exec busybox-7dff88458-96wpq -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-298075 -- exec busybox-7dff88458-mdpkb -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-298075 -- exec busybox-7dff88458-t7s6h -- nslookup kubernetes.io
ha_test.go:181: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-298075 -- exec busybox-7dff88458-96wpq -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-298075 -- exec busybox-7dff88458-mdpkb -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-298075 -- exec busybox-7dff88458-t7s6h -- nslookup kubernetes.default
ha_test.go:189: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-298075 -- exec busybox-7dff88458-96wpq -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-298075 -- exec busybox-7dff88458-mdpkb -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-298075 -- exec busybox-7dff88458-t7s6h -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiControlPlane/serial/DeployApp (7.69s)

TestMultiControlPlane/serial/PingHostFromPods (1.73s)

=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-298075 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:207: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-298075 -- exec busybox-7dff88458-96wpq -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-298075 -- exec busybox-7dff88458-96wpq -- sh -c "ping -c 1 192.168.49.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-298075 -- exec busybox-7dff88458-mdpkb -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-298075 -- exec busybox-7dff88458-mdpkb -- sh -c "ping -c 1 192.168.49.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-298075 -- exec busybox-7dff88458-t7s6h -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-298075 -- exec busybox-7dff88458-t7s6h -- sh -c "ping -c 1 192.168.49.1"
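The shell pipeline in the exec commands above, `nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3`, pulls the resolved host IP out of nslookup output (the third space-separated field of the fifth line), which the following `ping -c 1 192.168.49.1` then exercises. A minimal sketch of that extraction, assuming the classic busybox `nslookup` output layout (the SAMPLE text is an illustrative assumption, not captured from this run):

```python
# Sketch of the field extraction performed by:
#   nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3
# SAMPLE is an assumed example of busybox-style nslookup output.
SAMPLE = (
    "Server:    10.96.0.10\n"
    "Address 1: 10.96.0.10 kube-dns.kube-system.svc.cluster.local\n"
    "\n"
    "Name:      host.minikube.internal\n"
    "Address 1: 192.168.49.1 host.minikube.internal\n"
)

def host_ip(nslookup_output: str) -> str:
    line5 = nslookup_output.splitlines()[4]  # awk 'NR==5': take the 5th line
    return line5.split(" ")[2]               # cut -d' ' -f3: take the 3rd field

print(host_ip(SAMPLE))  # -> 192.168.49.1
```

Note the extraction is position-sensitive: it only works when the resolver answer lands on line 5 in exactly this layout.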
--- PASS: TestMultiControlPlane/serial/PingHostFromPods (1.73s)

TestMultiControlPlane/serial/AddWorkerNode (25.26s)

=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-linux-arm64 node add -p ha-298075 -v=7 --alsologtostderr
E0913 23:48:06.590424    7536 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19640-2224/.minikube/profiles/addons-467916/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:228: (dbg) Done: out/minikube-linux-arm64 node add -p ha-298075 -v=7 --alsologtostderr: (24.253298341s)
ha_test.go:234: (dbg) Run:  out/minikube-linux-arm64 -p ha-298075 status -v=7 --alsologtostderr
ha_test.go:234: (dbg) Done: out/minikube-linux-arm64 -p ha-298075 status -v=7 --alsologtostderr: (1.009361467s)
--- PASS: TestMultiControlPlane/serial/AddWorkerNode (25.26s)

TestMultiControlPlane/serial/NodeLabels (0.1s)

=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-298075 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiControlPlane/serial/NodeLabels (0.10s)

TestMultiControlPlane/serial/HAppyAfterClusterStart (0.76s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterClusterStart (0.76s)

TestMultiControlPlane/serial/CopyFile (19.15s)

=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:326: (dbg) Run:  out/minikube-linux-arm64 -p ha-298075 status --output json -v=7 --alsologtostderr
ha_test.go:326: (dbg) Done: out/minikube-linux-arm64 -p ha-298075 status --output json -v=7 --alsologtostderr: (1.020862053s)
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-298075 cp testdata/cp-test.txt ha-298075:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-298075 ssh -n ha-298075 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-298075 cp ha-298075:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile3525127915/001/cp-test_ha-298075.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-298075 ssh -n ha-298075 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-298075 cp ha-298075:/home/docker/cp-test.txt ha-298075-m02:/home/docker/cp-test_ha-298075_ha-298075-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-298075 ssh -n ha-298075 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-298075 ssh -n ha-298075-m02 "sudo cat /home/docker/cp-test_ha-298075_ha-298075-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-298075 cp ha-298075:/home/docker/cp-test.txt ha-298075-m03:/home/docker/cp-test_ha-298075_ha-298075-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-298075 ssh -n ha-298075 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-298075 ssh -n ha-298075-m03 "sudo cat /home/docker/cp-test_ha-298075_ha-298075-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-298075 cp ha-298075:/home/docker/cp-test.txt ha-298075-m04:/home/docker/cp-test_ha-298075_ha-298075-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-298075 ssh -n ha-298075 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-298075 ssh -n ha-298075-m04 "sudo cat /home/docker/cp-test_ha-298075_ha-298075-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-298075 cp testdata/cp-test.txt ha-298075-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-298075 ssh -n ha-298075-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-298075 cp ha-298075-m02:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile3525127915/001/cp-test_ha-298075-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-298075 ssh -n ha-298075-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-298075 cp ha-298075-m02:/home/docker/cp-test.txt ha-298075:/home/docker/cp-test_ha-298075-m02_ha-298075.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-298075 ssh -n ha-298075-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-298075 ssh -n ha-298075 "sudo cat /home/docker/cp-test_ha-298075-m02_ha-298075.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-298075 cp ha-298075-m02:/home/docker/cp-test.txt ha-298075-m03:/home/docker/cp-test_ha-298075-m02_ha-298075-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-298075 ssh -n ha-298075-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-298075 ssh -n ha-298075-m03 "sudo cat /home/docker/cp-test_ha-298075-m02_ha-298075-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-298075 cp ha-298075-m02:/home/docker/cp-test.txt ha-298075-m04:/home/docker/cp-test_ha-298075-m02_ha-298075-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-298075 ssh -n ha-298075-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-298075 ssh -n ha-298075-m04 "sudo cat /home/docker/cp-test_ha-298075-m02_ha-298075-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-298075 cp testdata/cp-test.txt ha-298075-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-298075 ssh -n ha-298075-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-298075 cp ha-298075-m03:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile3525127915/001/cp-test_ha-298075-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-298075 ssh -n ha-298075-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-298075 cp ha-298075-m03:/home/docker/cp-test.txt ha-298075:/home/docker/cp-test_ha-298075-m03_ha-298075.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-298075 ssh -n ha-298075-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-298075 ssh -n ha-298075 "sudo cat /home/docker/cp-test_ha-298075-m03_ha-298075.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-298075 cp ha-298075-m03:/home/docker/cp-test.txt ha-298075-m02:/home/docker/cp-test_ha-298075-m03_ha-298075-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-298075 ssh -n ha-298075-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-298075 ssh -n ha-298075-m02 "sudo cat /home/docker/cp-test_ha-298075-m03_ha-298075-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-298075 cp ha-298075-m03:/home/docker/cp-test.txt ha-298075-m04:/home/docker/cp-test_ha-298075-m03_ha-298075-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-298075 ssh -n ha-298075-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-298075 ssh -n ha-298075-m04 "sudo cat /home/docker/cp-test_ha-298075-m03_ha-298075-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-298075 cp testdata/cp-test.txt ha-298075-m04:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-298075 ssh -n ha-298075-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-298075 cp ha-298075-m04:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile3525127915/001/cp-test_ha-298075-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-298075 ssh -n ha-298075-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-298075 cp ha-298075-m04:/home/docker/cp-test.txt ha-298075:/home/docker/cp-test_ha-298075-m04_ha-298075.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-298075 ssh -n ha-298075-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-298075 ssh -n ha-298075 "sudo cat /home/docker/cp-test_ha-298075-m04_ha-298075.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-298075 cp ha-298075-m04:/home/docker/cp-test.txt ha-298075-m02:/home/docker/cp-test_ha-298075-m04_ha-298075-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-298075 ssh -n ha-298075-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-298075 ssh -n ha-298075-m02 "sudo cat /home/docker/cp-test_ha-298075-m04_ha-298075-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-298075 cp ha-298075-m04:/home/docker/cp-test.txt ha-298075-m03:/home/docker/cp-test_ha-298075-m04_ha-298075-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-298075 ssh -n ha-298075-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-298075 ssh -n ha-298075-m03 "sudo cat /home/docker/cp-test_ha-298075-m04_ha-298075-m03.txt"
--- PASS: TestMultiControlPlane/serial/CopyFile (19.15s)

TestMultiControlPlane/serial/StopSecondaryNode (11.8s)

=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:363: (dbg) Run:  out/minikube-linux-arm64 -p ha-298075 node stop m02 -v=7 --alsologtostderr
ha_test.go:363: (dbg) Done: out/minikube-linux-arm64 -p ha-298075 node stop m02 -v=7 --alsologtostderr: (11.034247664s)
ha_test.go:369: (dbg) Run:  out/minikube-linux-arm64 -p ha-298075 status -v=7 --alsologtostderr
ha_test.go:369: (dbg) Non-zero exit: out/minikube-linux-arm64 -p ha-298075 status -v=7 --alsologtostderr: exit status 7 (770.204381ms)

-- stdout --
	ha-298075
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-298075-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-298075-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-298075-m04
	type: Worker
	host: Running
	kubelet: Running
	
-- /stdout --
** stderr ** 
	I0913 23:48:47.468259   73520 out.go:345] Setting OutFile to fd 1 ...
	I0913 23:48:47.472021   73520 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0913 23:48:47.472070   73520 out.go:358] Setting ErrFile to fd 2...
	I0913 23:48:47.472120   73520 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0913 23:48:47.472516   73520 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19640-2224/.minikube/bin
	I0913 23:48:47.472764   73520 out.go:352] Setting JSON to false
	I0913 23:48:47.472823   73520 mustload.go:65] Loading cluster: ha-298075
	I0913 23:48:47.472862   73520 notify.go:220] Checking for updates...
	I0913 23:48:47.473405   73520 config.go:182] Loaded profile config "ha-298075": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0913 23:48:47.473438   73520 status.go:255] checking status of ha-298075 ...
	I0913 23:48:47.474193   73520 cli_runner.go:164] Run: docker container inspect ha-298075 --format={{.State.Status}}
	I0913 23:48:47.495012   73520 status.go:330] ha-298075 host status = "Running" (err=<nil>)
	I0913 23:48:47.495034   73520 host.go:66] Checking if "ha-298075" exists ...
	I0913 23:48:47.495405   73520 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-298075
	I0913 23:48:47.527468   73520 host.go:66] Checking if "ha-298075" exists ...
	I0913 23:48:47.527800   73520 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0913 23:48:47.527856   73520 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-298075
	I0913 23:48:47.560688   73520 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32785 SSHKeyPath:/home/jenkins/minikube-integration/19640-2224/.minikube/machines/ha-298075/id_rsa Username:docker}
	I0913 23:48:47.649839   73520 ssh_runner.go:195] Run: systemctl --version
	I0913 23:48:47.654571   73520 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0913 23:48:47.667668   73520 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0913 23:48:47.730046   73520 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:52 OomKillDisable:true NGoroutines:71 SystemTime:2024-09-13 23:48:47.719799221 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1069-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214827008 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:27.2.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.2]] Warnings:<nil>}}
	I0913 23:48:47.730651   73520 kubeconfig.go:125] found "ha-298075" server: "https://192.168.49.254:8443"
	I0913 23:48:47.730685   73520 api_server.go:166] Checking apiserver status ...
	I0913 23:48:47.730734   73520 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 23:48:47.743585   73520 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/2241/cgroup
	I0913 23:48:47.753680   73520 api_server.go:182] apiserver freezer: "6:freezer:/docker/03014e36cbb510d617afae2f119ba4065ecd344ad71ee474fa30965ccaa90232/kubepods/burstable/pod9da70495fe158e16479ebbd8f490727b/f28a9e77b03ebcac2c10d963c65ebcdf1fcdf11bdb208d17300cfb2dafea0d49"
	I0913 23:48:47.753780   73520 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/03014e36cbb510d617afae2f119ba4065ecd344ad71ee474fa30965ccaa90232/kubepods/burstable/pod9da70495fe158e16479ebbd8f490727b/f28a9e77b03ebcac2c10d963c65ebcdf1fcdf11bdb208d17300cfb2dafea0d49/freezer.state
	I0913 23:48:47.762483   73520 api_server.go:204] freezer state: "THAWED"
	I0913 23:48:47.762523   73520 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I0913 23:48:47.770474   73520 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I0913 23:48:47.770498   73520 status.go:422] ha-298075 apiserver status = Running (err=<nil>)
	I0913 23:48:47.770509   73520 status.go:257] ha-298075 status: &{Name:ha-298075 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0913 23:48:47.770525   73520 status.go:255] checking status of ha-298075-m02 ...
	I0913 23:48:47.770865   73520 cli_runner.go:164] Run: docker container inspect ha-298075-m02 --format={{.State.Status}}
	I0913 23:48:47.788691   73520 status.go:330] ha-298075-m02 host status = "Stopped" (err=<nil>)
	I0913 23:48:47.788713   73520 status.go:343] host is not running, skipping remaining checks
	I0913 23:48:47.788720   73520 status.go:257] ha-298075-m02 status: &{Name:ha-298075-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0913 23:48:47.788741   73520 status.go:255] checking status of ha-298075-m03 ...
	I0913 23:48:47.789067   73520 cli_runner.go:164] Run: docker container inspect ha-298075-m03 --format={{.State.Status}}
	I0913 23:48:47.805870   73520 status.go:330] ha-298075-m03 host status = "Running" (err=<nil>)
	I0913 23:48:47.805898   73520 host.go:66] Checking if "ha-298075-m03" exists ...
	I0913 23:48:47.806201   73520 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-298075-m03
	I0913 23:48:47.822956   73520 host.go:66] Checking if "ha-298075-m03" exists ...
	I0913 23:48:47.823286   73520 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0913 23:48:47.823334   73520 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-298075-m03
	I0913 23:48:47.851915   73520 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32796 SSHKeyPath:/home/jenkins/minikube-integration/19640-2224/.minikube/machines/ha-298075-m03/id_rsa Username:docker}
	I0913 23:48:47.945885   73520 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0913 23:48:47.959987   73520 kubeconfig.go:125] found "ha-298075" server: "https://192.168.49.254:8443"
	I0913 23:48:47.960059   73520 api_server.go:166] Checking apiserver status ...
	I0913 23:48:47.960181   73520 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0913 23:48:47.972385   73520 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/2197/cgroup
	I0913 23:48:47.992130   73520 api_server.go:182] apiserver freezer: "6:freezer:/docker/4a7da5dd1850f6990c0edaf893a342a0370798e19895543ac6d86e93f1c8e863/kubepods/burstable/pode6ee0cbf48abe8eb6426835cf252d40f/f3395924c0d837d1c63c746e5b36f8b312748b260104c6fdd6df77f34b49d4c9"
	I0913 23:48:47.992206   73520 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/4a7da5dd1850f6990c0edaf893a342a0370798e19895543ac6d86e93f1c8e863/kubepods/burstable/pode6ee0cbf48abe8eb6426835cf252d40f/f3395924c0d837d1c63c746e5b36f8b312748b260104c6fdd6df77f34b49d4c9/freezer.state
	I0913 23:48:48.011238   73520 api_server.go:204] freezer state: "THAWED"
	I0913 23:48:48.011267   73520 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I0913 23:48:48.019513   73520 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I0913 23:48:48.019543   73520 status.go:422] ha-298075-m03 apiserver status = Running (err=<nil>)
	I0913 23:48:48.019554   73520 status.go:257] ha-298075-m03 status: &{Name:ha-298075-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0913 23:48:48.019571   73520 status.go:255] checking status of ha-298075-m04 ...
	I0913 23:48:48.019931   73520 cli_runner.go:164] Run: docker container inspect ha-298075-m04 --format={{.State.Status}}
	I0913 23:48:48.040535   73520 status.go:330] ha-298075-m04 host status = "Running" (err=<nil>)
	I0913 23:48:48.040560   73520 host.go:66] Checking if "ha-298075-m04" exists ...
	I0913 23:48:48.040856   73520 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-298075-m04
	I0913 23:48:48.069240   73520 host.go:66] Checking if "ha-298075-m04" exists ...
	I0913 23:48:48.069576   73520 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0913 23:48:48.069622   73520 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-298075-m04
	I0913 23:48:48.089340   73520 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32801 SSHKeyPath:/home/jenkins/minikube-integration/19640-2224/.minikube/machines/ha-298075-m04/id_rsa Username:docker}
	I0913 23:48:48.178621   73520 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0913 23:48:48.191915   73520 status.go:257] ha-298075-m04 status: &{Name:ha-298075-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiControlPlane/serial/StopSecondaryNode (11.80s)

TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.54s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:390: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.54s)

TestMultiControlPlane/serial/RestartSecondaryNode (41.23s)

=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:420: (dbg) Run:  out/minikube-linux-arm64 -p ha-298075 node start m02 -v=7 --alsologtostderr
ha_test.go:420: (dbg) Done: out/minikube-linux-arm64 -p ha-298075 node start m02 -v=7 --alsologtostderr: (39.74584104s)
ha_test.go:428: (dbg) Run:  out/minikube-linux-arm64 -p ha-298075 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Done: out/minikube-linux-arm64 -p ha-298075 status -v=7 --alsologtostderr: (1.35068602s)
ha_test.go:448: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiControlPlane/serial/RestartSecondaryNode (41.23s)

TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (4.29s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
ha_test.go:281: (dbg) Done: out/minikube-linux-arm64 profile list --output json: (4.286747864s)
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (4.29s)

TestMultiControlPlane/serial/RestartClusterKeepsNodes (172.14s)

=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:456: (dbg) Run:  out/minikube-linux-arm64 node list -p ha-298075 -v=7 --alsologtostderr
ha_test.go:462: (dbg) Run:  out/minikube-linux-arm64 stop -p ha-298075 -v=7 --alsologtostderr
E0913 23:49:45.145831    7536 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19640-2224/.minikube/profiles/functional-657116/client.crt: no such file or directory" logger="UnhandledError"
E0913 23:49:45.152357    7536 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19640-2224/.minikube/profiles/functional-657116/client.crt: no such file or directory" logger="UnhandledError"
E0913 23:49:45.170330    7536 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19640-2224/.minikube/profiles/functional-657116/client.crt: no such file or directory" logger="UnhandledError"
E0913 23:49:45.201509    7536 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19640-2224/.minikube/profiles/functional-657116/client.crt: no such file or directory" logger="UnhandledError"
E0913 23:49:45.243913    7536 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19640-2224/.minikube/profiles/functional-657116/client.crt: no such file or directory" logger="UnhandledError"
E0913 23:49:45.325633    7536 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19640-2224/.minikube/profiles/functional-657116/client.crt: no such file or directory" logger="UnhandledError"
E0913 23:49:45.487003    7536 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19640-2224/.minikube/profiles/functional-657116/client.crt: no such file or directory" logger="UnhandledError"
E0913 23:49:45.808684    7536 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19640-2224/.minikube/profiles/functional-657116/client.crt: no such file or directory" logger="UnhandledError"
E0913 23:49:46.450446    7536 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19640-2224/.minikube/profiles/functional-657116/client.crt: no such file or directory" logger="UnhandledError"
E0913 23:49:47.736406    7536 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19640-2224/.minikube/profiles/functional-657116/client.crt: no such file or directory" logger="UnhandledError"
E0913 23:49:50.299110    7536 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19640-2224/.minikube/profiles/functional-657116/client.crt: no such file or directory" logger="UnhandledError"
E0913 23:49:55.420933    7536 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19640-2224/.minikube/profiles/functional-657116/client.crt: no such file or directory" logger="UnhandledError"
E0913 23:50:05.662845    7536 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19640-2224/.minikube/profiles/functional-657116/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:462: (dbg) Done: out/minikube-linux-arm64 stop -p ha-298075 -v=7 --alsologtostderr: (34.22055895s)
ha_test.go:467: (dbg) Run:  out/minikube-linux-arm64 start -p ha-298075 --wait=true -v=7 --alsologtostderr
E0913 23:50:22.711481    7536 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19640-2224/.minikube/profiles/addons-467916/client.crt: no such file or directory" logger="UnhandledError"
E0913 23:50:26.144400    7536 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19640-2224/.minikube/profiles/functional-657116/client.crt: no such file or directory" logger="UnhandledError"
E0913 23:50:50.432347    7536 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19640-2224/.minikube/profiles/addons-467916/client.crt: no such file or directory" logger="UnhandledError"
E0913 23:51:07.106178    7536 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19640-2224/.minikube/profiles/functional-657116/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:467: (dbg) Done: out/minikube-linux-arm64 start -p ha-298075 --wait=true -v=7 --alsologtostderr: (2m17.789641237s)
ha_test.go:472: (dbg) Run:  out/minikube-linux-arm64 node list -p ha-298075
--- PASS: TestMultiControlPlane/serial/RestartClusterKeepsNodes (172.14s)

TestMultiControlPlane/serial/DeleteSecondaryNode (11.58s)

=== RUN   TestMultiControlPlane/serial/DeleteSecondaryNode
ha_test.go:487: (dbg) Run:  out/minikube-linux-arm64 -p ha-298075 node delete m03 -v=7 --alsologtostderr
E0913 23:52:29.028731    7536 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19640-2224/.minikube/profiles/functional-657116/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:487: (dbg) Done: out/minikube-linux-arm64 -p ha-298075 node delete m03 -v=7 --alsologtostderr: (10.568200548s)
ha_test.go:493: (dbg) Run:  out/minikube-linux-arm64 -p ha-298075 status -v=7 --alsologtostderr
ha_test.go:511: (dbg) Run:  kubectl get nodes
ha_test.go:519: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/DeleteSecondaryNode (11.58s)

TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.53s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:390: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.53s)

TestMultiControlPlane/serial/StopCluster (32.95s)

=== RUN   TestMultiControlPlane/serial/StopCluster
ha_test.go:531: (dbg) Run:  out/minikube-linux-arm64 -p ha-298075 stop -v=7 --alsologtostderr
ha_test.go:531: (dbg) Done: out/minikube-linux-arm64 -p ha-298075 stop -v=7 --alsologtostderr: (32.84311973s)
ha_test.go:537: (dbg) Run:  out/minikube-linux-arm64 -p ha-298075 status -v=7 --alsologtostderr
ha_test.go:537: (dbg) Non-zero exit: out/minikube-linux-arm64 -p ha-298075 status -v=7 --alsologtostderr: exit status 7 (111.24355ms)

-- stdout --
	ha-298075
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-298075-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-298075-m04
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I0913 23:53:11.391351   99619 out.go:345] Setting OutFile to fd 1 ...
	I0913 23:53:11.391514   99619 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0913 23:53:11.391526   99619 out.go:358] Setting ErrFile to fd 2...
	I0913 23:53:11.391531   99619 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0913 23:53:11.391779   99619 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19640-2224/.minikube/bin
	I0913 23:53:11.391967   99619 out.go:352] Setting JSON to false
	I0913 23:53:11.392004   99619 mustload.go:65] Loading cluster: ha-298075
	I0913 23:53:11.392082   99619 notify.go:220] Checking for updates...
	I0913 23:53:11.393095   99619 config.go:182] Loaded profile config "ha-298075": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0913 23:53:11.393121   99619 status.go:255] checking status of ha-298075 ...
	I0913 23:53:11.393657   99619 cli_runner.go:164] Run: docker container inspect ha-298075 --format={{.State.Status}}
	I0913 23:53:11.410978   99619 status.go:330] ha-298075 host status = "Stopped" (err=<nil>)
	I0913 23:53:11.411005   99619 status.go:343] host is not running, skipping remaining checks
	I0913 23:53:11.411013   99619 status.go:257] ha-298075 status: &{Name:ha-298075 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0913 23:53:11.411047   99619 status.go:255] checking status of ha-298075-m02 ...
	I0913 23:53:11.411377   99619 cli_runner.go:164] Run: docker container inspect ha-298075-m02 --format={{.State.Status}}
	I0913 23:53:11.429898   99619 status.go:330] ha-298075-m02 host status = "Stopped" (err=<nil>)
	I0913 23:53:11.429920   99619 status.go:343] host is not running, skipping remaining checks
	I0913 23:53:11.429927   99619 status.go:257] ha-298075-m02 status: &{Name:ha-298075-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0913 23:53:11.429946   99619 status.go:255] checking status of ha-298075-m04 ...
	I0913 23:53:11.430229   99619 cli_runner.go:164] Run: docker container inspect ha-298075-m04 --format={{.State.Status}}
	I0913 23:53:11.456087   99619 status.go:330] ha-298075-m04 host status = "Stopped" (err=<nil>)
	I0913 23:53:11.456119   99619 status.go:343] host is not running, skipping remaining checks
	I0913 23:53:11.456127   99619 status.go:257] ha-298075-m04 status: &{Name:ha-298075-m04 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiControlPlane/serial/StopCluster (32.95s)

TestMultiControlPlane/serial/RestartCluster (154.69s)

=== RUN   TestMultiControlPlane/serial/RestartCluster
ha_test.go:560: (dbg) Run:  out/minikube-linux-arm64 start -p ha-298075 --wait=true -v=7 --alsologtostderr --driver=docker  --container-runtime=docker
E0913 23:54:45.146859    7536 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19640-2224/.minikube/profiles/functional-657116/client.crt: no such file or directory" logger="UnhandledError"
E0913 23:55:12.870139    7536 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19640-2224/.minikube/profiles/functional-657116/client.crt: no such file or directory" logger="UnhandledError"
E0913 23:55:22.711232    7536 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19640-2224/.minikube/profiles/addons-467916/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:560: (dbg) Done: out/minikube-linux-arm64 start -p ha-298075 --wait=true -v=7 --alsologtostderr --driver=docker  --container-runtime=docker: (2m33.649267067s)
ha_test.go:566: (dbg) Run:  out/minikube-linux-arm64 -p ha-298075 status -v=7 --alsologtostderr
ha_test.go:584: (dbg) Run:  kubectl get nodes
ha_test.go:592: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/RestartCluster (154.69s)

TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.53s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterClusterRestart
ha_test.go:390: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.53s)

TestMultiControlPlane/serial/AddSecondaryNode (45.71s)

=== RUN   TestMultiControlPlane/serial/AddSecondaryNode
ha_test.go:605: (dbg) Run:  out/minikube-linux-arm64 node add -p ha-298075 --control-plane -v=7 --alsologtostderr
ha_test.go:605: (dbg) Done: out/minikube-linux-arm64 node add -p ha-298075 --control-plane -v=7 --alsologtostderr: (44.73347186s)
ha_test.go:611: (dbg) Run:  out/minikube-linux-arm64 -p ha-298075 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/AddSecondaryNode (45.71s)

TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.82s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd
ha_test.go:281: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.82s)

TestImageBuild/serial/Setup (31.87s)

=== RUN   TestImageBuild/serial/Setup
image_test.go:69: (dbg) Run:  out/minikube-linux-arm64 start -p image-819974 --driver=docker  --container-runtime=docker
image_test.go:69: (dbg) Done: out/minikube-linux-arm64 start -p image-819974 --driver=docker  --container-runtime=docker: (31.866506907s)
--- PASS: TestImageBuild/serial/Setup (31.87s)

TestImageBuild/serial/NormalBuild (1.74s)

=== RUN   TestImageBuild/serial/NormalBuild
image_test.go:78: (dbg) Run:  out/minikube-linux-arm64 image build -t aaa:latest ./testdata/image-build/test-normal -p image-819974
image_test.go:78: (dbg) Done: out/minikube-linux-arm64 image build -t aaa:latest ./testdata/image-build/test-normal -p image-819974: (1.739853826s)
--- PASS: TestImageBuild/serial/NormalBuild (1.74s)

TestImageBuild/serial/BuildWithBuildArg (0.94s)

=== RUN   TestImageBuild/serial/BuildWithBuildArg
image_test.go:99: (dbg) Run:  out/minikube-linux-arm64 image build -t aaa:latest --build-opt=build-arg=ENV_A=test_env_str --build-opt=no-cache ./testdata/image-build/test-arg -p image-819974
--- PASS: TestImageBuild/serial/BuildWithBuildArg (0.94s)

TestImageBuild/serial/BuildWithDockerIgnore (0.77s)

=== RUN   TestImageBuild/serial/BuildWithDockerIgnore
image_test.go:133: (dbg) Run:  out/minikube-linux-arm64 image build -t aaa:latest ./testdata/image-build/test-normal --build-opt=no-cache -p image-819974
--- PASS: TestImageBuild/serial/BuildWithDockerIgnore (0.77s)

TestImageBuild/serial/BuildWithSpecifiedDockerfile (0.92s)

=== RUN   TestImageBuild/serial/BuildWithSpecifiedDockerfile
image_test.go:88: (dbg) Run:  out/minikube-linux-arm64 image build -t aaa:latest -f inner/Dockerfile ./testdata/image-build/test-f -p image-819974
--- PASS: TestImageBuild/serial/BuildWithSpecifiedDockerfile (0.92s)

TestJSONOutput/start/Command (42.31s)

=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 start -p json-output-599759 --output=json --user=testUser --memory=2200 --wait=true --driver=docker  --container-runtime=docker
json_output_test.go:63: (dbg) Done: out/minikube-linux-arm64 start -p json-output-599759 --output=json --user=testUser --memory=2200 --wait=true --driver=docker  --container-runtime=docker: (42.303353902s)
--- PASS: TestJSONOutput/start/Command (42.31s)

TestJSONOutput/start/Audit (0s)

=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/pause/Command (0.64s)

=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 pause -p json-output-599759 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.64s)

TestJSONOutput/pause/Audit (0s)

=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/unpause/Command (0.54s)

=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 unpause -p json-output-599759 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.54s)

TestJSONOutput/unpause/Audit (0s)

=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)
=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)
=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/stop/Command (5.75s)
=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 stop -p json-output-599759 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-linux-arm64 stop -p json-output-599759 --output=json --user=testUser: (5.753173058s)
--- PASS: TestJSONOutput/stop/Command (5.75s)

TestJSONOutput/stop/Audit (0s)
=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)
=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)
=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

TestErrorJSONOutput (0.22s)
=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-arm64 start -p json-output-error-486070 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p json-output-error-486070 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (84.101644ms)
-- stdout --
	{"specversion":"1.0","id":"4e3b27fd-6283-4fa7-b535-37d4ee24e48a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-486070] minikube v1.34.0 on Ubuntu 20.04 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"d02b402c-2112-4015-9a2d-737d0f406763","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=19640"}}
	{"specversion":"1.0","id":"88161422-662e-4f0e-ac23-8df2a65de337","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"4f05260d-fb70-4d6f-9502-f441291b5c47","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/19640-2224/kubeconfig"}}
	{"specversion":"1.0","id":"ce577a10-a721-4776-be8e-f4ba781e1f75","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/19640-2224/.minikube"}}
	{"specversion":"1.0","id":"4cf9b057-9134-438b-ac25-d00a16032049","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-arm64"}}
	{"specversion":"1.0","id":"7297e712-bc7e-4f4d-9e59-7656db969d35","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"dc8550ce-1533-44fa-b6c8-10fec4ef4262","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/arm64","name":"DRV_UNSUPPORTED_OS","url":""}}
-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-486070" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p json-output-error-486070
--- PASS: TestErrorJSONOutput (0.22s)

TestKicCustomNetwork/create_custom_network (32.22s)
=== RUN   TestKicCustomNetwork/create_custom_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-arm64 start -p docker-network-109080 --network=
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-arm64 start -p docker-network-109080 --network=: (30.463264163s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-109080" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p docker-network-109080
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p docker-network-109080: (1.733259822s)
--- PASS: TestKicCustomNetwork/create_custom_network (32.22s)

TestKicCustomNetwork/use_default_bridge_network (35.18s)
=== RUN   TestKicCustomNetwork/use_default_bridge_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-arm64 start -p docker-network-345473 --network=bridge
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-arm64 start -p docker-network-345473 --network=bridge: (33.147502121s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-345473" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p docker-network-345473
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p docker-network-345473: (2.013529673s)
--- PASS: TestKicCustomNetwork/use_default_bridge_network (35.18s)

TestKicExistingNetwork (34.62s)
=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
kic_custom_network_test.go:93: (dbg) Run:  out/minikube-linux-arm64 start -p existing-network-121440 --network=existing-network
E0913 23:59:45.147188    7536 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19640-2224/.minikube/profiles/functional-657116/client.crt: no such file or directory" logger="UnhandledError"
kic_custom_network_test.go:93: (dbg) Done: out/minikube-linux-arm64 start -p existing-network-121440 --network=existing-network: (32.414457428s)
helpers_test.go:175: Cleaning up "existing-network-121440" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p existing-network-121440
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p existing-network-121440: (2.052891043s)
--- PASS: TestKicExistingNetwork (34.62s)

TestKicCustomSubnet (33.21s)
=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p custom-subnet-071996 --subnet=192.168.60.0/24
E0914 00:00:22.710748    7536 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19640-2224/.minikube/profiles/addons-467916/client.crt: no such file or directory" logger="UnhandledError"
kic_custom_network_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p custom-subnet-071996 --subnet=192.168.60.0/24: (31.096464567s)
kic_custom_network_test.go:161: (dbg) Run:  docker network inspect custom-subnet-071996 --format "{{(index .IPAM.Config 0).Subnet}}"
helpers_test.go:175: Cleaning up "custom-subnet-071996" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p custom-subnet-071996
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p custom-subnet-071996: (2.080135667s)
--- PASS: TestKicCustomSubnet (33.21s)

TestKicStaticIP (34.07s)
=== RUN   TestKicStaticIP
kic_custom_network_test.go:132: (dbg) Run:  out/minikube-linux-arm64 start -p static-ip-559198 --static-ip=192.168.200.200
kic_custom_network_test.go:132: (dbg) Done: out/minikube-linux-arm64 start -p static-ip-559198 --static-ip=192.168.200.200: (31.547486864s)
kic_custom_network_test.go:138: (dbg) Run:  out/minikube-linux-arm64 -p static-ip-559198 ip
helpers_test.go:175: Cleaning up "static-ip-559198" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p static-ip-559198
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p static-ip-559198: (2.370090517s)
--- PASS: TestKicStaticIP (34.07s)

TestMainNoArgs (0.07s)
=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-linux-arm64
--- PASS: TestMainNoArgs (0.07s)

TestMinikubeProfile (73.68s)
=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-arm64 start -p first-334102 --driver=docker  --container-runtime=docker
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-arm64 start -p first-334102 --driver=docker  --container-runtime=docker: (34.206982163s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-arm64 start -p second-337236 --driver=docker  --container-runtime=docker
E0914 00:01:45.793756    7536 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19640-2224/.minikube/profiles/addons-467916/client.crt: no such file or directory" logger="UnhandledError"
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-arm64 start -p second-337236 --driver=docker  --container-runtime=docker: (33.909615833s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-arm64 profile first-334102
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-arm64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-arm64 profile second-337236
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-arm64 profile list -ojson
helpers_test.go:175: Cleaning up "second-337236" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p second-337236
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p second-337236: (2.151774798s)
helpers_test.go:175: Cleaning up "first-334102" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p first-334102
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p first-334102: (2.196703984s)
--- PASS: TestMinikubeProfile (73.68s)

TestMountStart/serial/StartWithMountFirst (7.87s)
=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-1-404966 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=docker
mount_start_test.go:98: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-1-404966 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=docker: (6.873240756s)
--- PASS: TestMountStart/serial/StartWithMountFirst (7.87s)

TestMountStart/serial/VerifyMountFirst (0.24s)
=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-1-404966 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountFirst (0.24s)

TestMountStart/serial/StartWithMountSecond (8.01s)
=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-2-406722 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=docker
mount_start_test.go:98: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-2-406722 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=docker: (7.006538889s)
--- PASS: TestMountStart/serial/StartWithMountSecond (8.01s)

TestMountStart/serial/VerifyMountSecond (0.25s)
=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-406722 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountSecond (0.25s)

TestMountStart/serial/DeleteFirst (1.48s)
=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-arm64 delete -p mount-start-1-404966 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-arm64 delete -p mount-start-1-404966 --alsologtostderr -v=5: (1.481320147s)
--- PASS: TestMountStart/serial/DeleteFirst (1.48s)

TestMountStart/serial/VerifyMountPostDelete (0.25s)
=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-406722 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.25s)

TestMountStart/serial/Stop (1.21s)
=== RUN   TestMountStart/serial/Stop
mount_start_test.go:155: (dbg) Run:  out/minikube-linux-arm64 stop -p mount-start-2-406722
mount_start_test.go:155: (dbg) Done: out/minikube-linux-arm64 stop -p mount-start-2-406722: (1.207841312s)
--- PASS: TestMountStart/serial/Stop (1.21s)

TestMountStart/serial/RestartStopped (8.29s)
=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:166: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-2-406722
mount_start_test.go:166: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-2-406722: (7.287972455s)
--- PASS: TestMountStart/serial/RestartStopped (8.29s)

TestMountStart/serial/VerifyMountPostStop (0.26s)
=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-406722 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.26s)

TestMultiNode/serial/FreshStart2Nodes (86.9s)
=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-241101 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker  --container-runtime=docker
multinode_test.go:96: (dbg) Done: out/minikube-linux-arm64 start -p multinode-241101 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker  --container-runtime=docker: (1m26.334672346s)
multinode_test.go:102: (dbg) Run:  out/minikube-linux-arm64 -p multinode-241101 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (86.90s)

TestMultiNode/serial/DeployApp2Nodes (39s)
=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-241101 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:498: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-241101 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Done: out/minikube-linux-arm64 kubectl -p multinode-241101 -- rollout status deployment/busybox: (4.831518082s)
multinode_test.go:505: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-241101 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:514: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.3'\n\n-- /stdout --"
multinode_test.go:505: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-241101 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:514: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.3'\n\n-- /stdout --"
multinode_test.go:505: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-241101 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:514: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.3'\n\n-- /stdout --"
multinode_test.go:505: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-241101 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:514: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.3'\n\n-- /stdout --"
multinode_test.go:505: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-241101 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:514: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.3'\n\n-- /stdout --"
multinode_test.go:505: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-241101 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:514: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.3'\n\n-- /stdout --"
multinode_test.go:505: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-241101 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:514: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.3'\n\n-- /stdout --"
E0914 00:04:45.146955    7536 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19640-2224/.minikube/profiles/functional-657116/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:505: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-241101 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:528: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-241101 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:536: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-241101 -- exec busybox-7dff88458-hcspz -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-241101 -- exec busybox-7dff88458-xcpg7 -- nslookup kubernetes.io
multinode_test.go:546: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-241101 -- exec busybox-7dff88458-hcspz -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-241101 -- exec busybox-7dff88458-xcpg7 -- nslookup kubernetes.default
multinode_test.go:554: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-241101 -- exec busybox-7dff88458-hcspz -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-241101 -- exec busybox-7dff88458-xcpg7 -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (39.00s)

TestMultiNode/serial/PingHostFrom2Pods (1.04s)
=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-241101 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:572: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-241101 -- exec busybox-7dff88458-hcspz -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-241101 -- exec busybox-7dff88458-hcspz -- sh -c "ping -c 1 192.168.67.1"
multinode_test.go:572: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-241101 -- exec busybox-7dff88458-xcpg7 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-241101 -- exec busybox-7dff88458-xcpg7 -- sh -c "ping -c 1 192.168.67.1"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (1.04s)

TestMultiNode/serial/AddNode (17.46s)
=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-linux-arm64 node add -p multinode-241101 -v 3 --alsologtostderr
multinode_test.go:121: (dbg) Done: out/minikube-linux-arm64 node add -p multinode-241101 -v 3 --alsologtostderr: (16.615049841s)
multinode_test.go:127: (dbg) Run:  out/minikube-linux-arm64 -p multinode-241101 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (17.46s)

TestMultiNode/serial/MultiNodeLabels (0.13s)
=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-241101 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.13s)

TestMultiNode/serial/ProfileList (0.37s)
=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.37s)

TestMultiNode/serial/CopyFile (10.07s)
=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-linux-arm64 -p multinode-241101 status --output json --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-241101 cp testdata/cp-test.txt multinode-241101:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-241101 ssh -n multinode-241101 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-241101 cp multinode-241101:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile1664034876/001/cp-test_multinode-241101.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-241101 ssh -n multinode-241101 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-241101 cp multinode-241101:/home/docker/cp-test.txt multinode-241101-m02:/home/docker/cp-test_multinode-241101_multinode-241101-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-241101 ssh -n multinode-241101 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-241101 ssh -n multinode-241101-m02 "sudo cat /home/docker/cp-test_multinode-241101_multinode-241101-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-241101 cp multinode-241101:/home/docker/cp-test.txt multinode-241101-m03:/home/docker/cp-test_multinode-241101_multinode-241101-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-241101 ssh -n multinode-241101 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-241101 ssh -n multinode-241101-m03 "sudo cat /home/docker/cp-test_multinode-241101_multinode-241101-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-241101 cp testdata/cp-test.txt multinode-241101-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-241101 ssh -n multinode-241101-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-241101 cp multinode-241101-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile1664034876/001/cp-test_multinode-241101-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-241101 ssh -n multinode-241101-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-241101 cp multinode-241101-m02:/home/docker/cp-test.txt multinode-241101:/home/docker/cp-test_multinode-241101-m02_multinode-241101.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-241101 ssh -n multinode-241101-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-241101 ssh -n multinode-241101 "sudo cat /home/docker/cp-test_multinode-241101-m02_multinode-241101.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-241101 cp multinode-241101-m02:/home/docker/cp-test.txt multinode-241101-m03:/home/docker/cp-test_multinode-241101-m02_multinode-241101-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-241101 ssh -n multinode-241101-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-241101 ssh -n multinode-241101-m03 "sudo cat /home/docker/cp-test_multinode-241101-m02_multinode-241101-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-241101 cp testdata/cp-test.txt multinode-241101-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-241101 ssh -n multinode-241101-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-241101 cp multinode-241101-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile1664034876/001/cp-test_multinode-241101-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-241101 ssh -n multinode-241101-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-241101 cp multinode-241101-m03:/home/docker/cp-test.txt multinode-241101:/home/docker/cp-test_multinode-241101-m03_multinode-241101.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-241101 ssh -n multinode-241101-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-241101 ssh -n multinode-241101 "sudo cat /home/docker/cp-test_multinode-241101-m03_multinode-241101.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-241101 cp multinode-241101-m03:/home/docker/cp-test.txt multinode-241101-m02:/home/docker/cp-test_multinode-241101-m03_multinode-241101-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-241101 ssh -n multinode-241101-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-241101 ssh -n multinode-241101-m02 "sudo cat /home/docker/cp-test_multinode-241101-m03_multinode-241101-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (10.07s)

TestMultiNode/serial/StopNode (2.25s)

=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-linux-arm64 -p multinode-241101 node stop m03
multinode_test.go:248: (dbg) Done: out/minikube-linux-arm64 -p multinode-241101 node stop m03: (1.221715168s)
multinode_test.go:254: (dbg) Run:  out/minikube-linux-arm64 -p multinode-241101 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-241101 status: exit status 7 (515.371439ms)

-- stdout --
	multinode-241101
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-241101-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-241101-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-linux-arm64 -p multinode-241101 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-241101 status --alsologtostderr: exit status 7 (513.24926ms)

-- stdout --
	multinode-241101
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-241101-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-241101-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I0914 00:05:21.165357  175349 out.go:345] Setting OutFile to fd 1 ...
	I0914 00:05:21.165523  175349 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0914 00:05:21.165550  175349 out.go:358] Setting ErrFile to fd 2...
	I0914 00:05:21.165571  175349 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0914 00:05:21.165821  175349 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19640-2224/.minikube/bin
	I0914 00:05:21.166027  175349 out.go:352] Setting JSON to false
	I0914 00:05:21.166087  175349 mustload.go:65] Loading cluster: multinode-241101
	I0914 00:05:21.166151  175349 notify.go:220] Checking for updates...
	I0914 00:05:21.166578  175349 config.go:182] Loaded profile config "multinode-241101": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0914 00:05:21.166613  175349 status.go:255] checking status of multinode-241101 ...
	I0914 00:05:21.167176  175349 cli_runner.go:164] Run: docker container inspect multinode-241101 --format={{.State.Status}}
	I0914 00:05:21.186882  175349 status.go:330] multinode-241101 host status = "Running" (err=<nil>)
	I0914 00:05:21.186906  175349 host.go:66] Checking if "multinode-241101" exists ...
	I0914 00:05:21.187215  175349 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-241101
	I0914 00:05:21.219387  175349 host.go:66] Checking if "multinode-241101" exists ...
	I0914 00:05:21.219711  175349 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0914 00:05:21.219757  175349 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-241101
	I0914 00:05:21.245838  175349 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32913 SSHKeyPath:/home/jenkins/minikube-integration/19640-2224/.minikube/machines/multinode-241101/id_rsa Username:docker}
	I0914 00:05:21.334476  175349 ssh_runner.go:195] Run: systemctl --version
	I0914 00:05:21.339058  175349 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0914 00:05:21.350769  175349 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0914 00:05:21.404566  175349 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:41 OomKillDisable:true NGoroutines:61 SystemTime:2024-09-14 00:05:21.394519361 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1069-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214827008 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:27.2.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.2]] Warnings:<nil>}}
	I0914 00:05:21.405174  175349 kubeconfig.go:125] found "multinode-241101" server: "https://192.168.67.2:8443"
	I0914 00:05:21.405210  175349 api_server.go:166] Checking apiserver status ...
	I0914 00:05:21.405255  175349 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0914 00:05:21.417007  175349 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/2170/cgroup
	I0914 00:05:21.426670  175349 api_server.go:182] apiserver freezer: "6:freezer:/docker/710009444ff1f362e42419ba2fed2bc347523c19e87d7c3fe8418ed8088ae7e9/kubepods/burstable/pod6bba2da0e0f96335699bc1b486f2b785/60875937cf1fdeab7c1bb774c395c039e93e0efc4ac91dcd4bb623d7dba222de"
	I0914 00:05:21.426741  175349 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/710009444ff1f362e42419ba2fed2bc347523c19e87d7c3fe8418ed8088ae7e9/kubepods/burstable/pod6bba2da0e0f96335699bc1b486f2b785/60875937cf1fdeab7c1bb774c395c039e93e0efc4ac91dcd4bb623d7dba222de/freezer.state
	I0914 00:05:21.435990  175349 api_server.go:204] freezer state: "THAWED"
	I0914 00:05:21.436022  175349 api_server.go:253] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
	I0914 00:05:21.443710  175349 api_server.go:279] https://192.168.67.2:8443/healthz returned 200:
	ok
	I0914 00:05:21.443739  175349 status.go:422] multinode-241101 apiserver status = Running (err=<nil>)
	I0914 00:05:21.443750  175349 status.go:257] multinode-241101 status: &{Name:multinode-241101 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0914 00:05:21.443768  175349 status.go:255] checking status of multinode-241101-m02 ...
	I0914 00:05:21.444122  175349 cli_runner.go:164] Run: docker container inspect multinode-241101-m02 --format={{.State.Status}}
	I0914 00:05:21.461453  175349 status.go:330] multinode-241101-m02 host status = "Running" (err=<nil>)
	I0914 00:05:21.461499  175349 host.go:66] Checking if "multinode-241101-m02" exists ...
	I0914 00:05:21.461818  175349 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-241101-m02
	I0914 00:05:21.480741  175349 host.go:66] Checking if "multinode-241101-m02" exists ...
	I0914 00:05:21.481175  175349 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0914 00:05:21.481236  175349 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-241101-m02
	I0914 00:05:21.498168  175349 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32918 SSHKeyPath:/home/jenkins/minikube-integration/19640-2224/.minikube/machines/multinode-241101-m02/id_rsa Username:docker}
	I0914 00:05:21.589986  175349 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0914 00:05:21.602430  175349 status.go:257] multinode-241101-m02 status: &{Name:multinode-241101-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I0914 00:05:21.602467  175349 status.go:255] checking status of multinode-241101-m03 ...
	I0914 00:05:21.602791  175349 cli_runner.go:164] Run: docker container inspect multinode-241101-m03 --format={{.State.Status}}
	I0914 00:05:21.621459  175349 status.go:330] multinode-241101-m03 host status = "Stopped" (err=<nil>)
	I0914 00:05:21.621484  175349 status.go:343] host is not running, skipping remaining checks
	I0914 00:05:21.621493  175349 status.go:257] multinode-241101-m03 status: &{Name:multinode-241101-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.25s)

TestMultiNode/serial/StartAfterStop (11.02s)

=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-linux-arm64 -p multinode-241101 node start m03 -v=7 --alsologtostderr
E0914 00:05:22.711418    7536 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19640-2224/.minikube/profiles/addons-467916/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:282: (dbg) Done: out/minikube-linux-arm64 -p multinode-241101 node start m03 -v=7 --alsologtostderr: (10.2643642s)
multinode_test.go:290: (dbg) Run:  out/minikube-linux-arm64 -p multinode-241101 status -v=7 --alsologtostderr
multinode_test.go:306: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (11.02s)

TestMultiNode/serial/RestartKeepsNodes (96.95s)

=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-241101
multinode_test.go:321: (dbg) Run:  out/minikube-linux-arm64 stop -p multinode-241101
multinode_test.go:321: (dbg) Done: out/minikube-linux-arm64 stop -p multinode-241101: (22.703964266s)
multinode_test.go:326: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-241101 --wait=true -v=8 --alsologtostderr
E0914 00:06:08.231512    7536 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19640-2224/.minikube/profiles/functional-657116/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:326: (dbg) Done: out/minikube-linux-arm64 start -p multinode-241101 --wait=true -v=8 --alsologtostderr: (1m14.101653851s)
multinode_test.go:331: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-241101
--- PASS: TestMultiNode/serial/RestartKeepsNodes (96.95s)

TestMultiNode/serial/DeleteNode (5.71s)

=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-linux-arm64 -p multinode-241101 node delete m03
multinode_test.go:416: (dbg) Done: out/minikube-linux-arm64 -p multinode-241101 node delete m03: (5.020997635s)
multinode_test.go:422: (dbg) Run:  out/minikube-linux-arm64 -p multinode-241101 status --alsologtostderr
multinode_test.go:436: (dbg) Run:  kubectl get nodes
multinode_test.go:444: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (5.71s)

TestMultiNode/serial/StopMultiNode (21.72s)

=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-linux-arm64 -p multinode-241101 stop
multinode_test.go:345: (dbg) Done: out/minikube-linux-arm64 -p multinode-241101 stop: (21.535171766s)
multinode_test.go:351: (dbg) Run:  out/minikube-linux-arm64 -p multinode-241101 status
multinode_test.go:351: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-241101 status: exit status 7 (86.9138ms)

-- stdout --
	multinode-241101
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-241101-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
multinode_test.go:358: (dbg) Run:  out/minikube-linux-arm64 -p multinode-241101 status --alsologtostderr
multinode_test.go:358: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-241101 status --alsologtostderr: exit status 7 (98.292476ms)

-- stdout --
	multinode-241101
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-241101-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I0914 00:07:36.974552  188730 out.go:345] Setting OutFile to fd 1 ...
	I0914 00:07:36.974695  188730 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0914 00:07:36.974705  188730 out.go:358] Setting ErrFile to fd 2...
	I0914 00:07:36.974711  188730 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0914 00:07:36.974932  188730 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19640-2224/.minikube/bin
	I0914 00:07:36.975114  188730 out.go:352] Setting JSON to false
	I0914 00:07:36.975144  188730 mustload.go:65] Loading cluster: multinode-241101
	I0914 00:07:36.975255  188730 notify.go:220] Checking for updates...
	I0914 00:07:36.975554  188730 config.go:182] Loaded profile config "multinode-241101": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0914 00:07:36.975573  188730 status.go:255] checking status of multinode-241101 ...
	I0914 00:07:36.976169  188730 cli_runner.go:164] Run: docker container inspect multinode-241101 --format={{.State.Status}}
	I0914 00:07:36.994459  188730 status.go:330] multinode-241101 host status = "Stopped" (err=<nil>)
	I0914 00:07:36.994482  188730 status.go:343] host is not running, skipping remaining checks
	I0914 00:07:36.994489  188730 status.go:257] multinode-241101 status: &{Name:multinode-241101 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0914 00:07:36.994512  188730 status.go:255] checking status of multinode-241101-m02 ...
	I0914 00:07:36.994834  188730 cli_runner.go:164] Run: docker container inspect multinode-241101-m02 --format={{.State.Status}}
	I0914 00:07:37.027722  188730 status.go:330] multinode-241101-m02 host status = "Stopped" (err=<nil>)
	I0914 00:07:37.027763  188730 status.go:343] host is not running, skipping remaining checks
	I0914 00:07:37.027772  188730 status.go:257] multinode-241101-m02 status: &{Name:multinode-241101-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (21.72s)

TestMultiNode/serial/RestartMultiNode (54.97s)

=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-241101 --wait=true -v=8 --alsologtostderr --driver=docker  --container-runtime=docker
multinode_test.go:376: (dbg) Done: out/minikube-linux-arm64 start -p multinode-241101 --wait=true -v=8 --alsologtostderr --driver=docker  --container-runtime=docker: (54.308668271s)
multinode_test.go:382: (dbg) Run:  out/minikube-linux-arm64 -p multinode-241101 status --alsologtostderr
multinode_test.go:396: (dbg) Run:  kubectl get nodes
multinode_test.go:404: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (54.97s)

TestMultiNode/serial/ValidateNameConflict (37.62s)

=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-241101
multinode_test.go:464: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-241101-m02 --driver=docker  --container-runtime=docker
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p multinode-241101-m02 --driver=docker  --container-runtime=docker: exit status 14 (81.15139ms)

-- stdout --
	* [multinode-241101-m02] minikube v1.34.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=19640
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19640-2224/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19640-2224/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

-- /stdout --
** stderr ** 
	! Profile name 'multinode-241101-m02' is duplicated with machine name 'multinode-241101-m02' in profile 'multinode-241101'
	X Exiting due to MK_USAGE: Profile name should be unique

** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-241101-m03 --driver=docker  --container-runtime=docker
multinode_test.go:472: (dbg) Done: out/minikube-linux-arm64 start -p multinode-241101-m03 --driver=docker  --container-runtime=docker: (35.042077542s)
multinode_test.go:479: (dbg) Run:  out/minikube-linux-arm64 node add -p multinode-241101
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-linux-arm64 node add -p multinode-241101: exit status 80 (389.961214ms)

-- stdout --
	* Adding node m03 to cluster multinode-241101 as [worker]
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-241101-m03 already exists in multinode-241101-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
multinode_test.go:484: (dbg) Run:  out/minikube-linux-arm64 delete -p multinode-241101-m03
multinode_test.go:484: (dbg) Done: out/minikube-linux-arm64 delete -p multinode-241101-m03: (2.041773902s)
--- PASS: TestMultiNode/serial/ValidateNameConflict (37.62s)

TestPreload (137.83s)

=== RUN   TestPreload
preload_test.go:44: (dbg) Run:  out/minikube-linux-arm64 start -p test-preload-346312 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.24.4
E0914 00:09:45.146354    7536 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19640-2224/.minikube/profiles/functional-657116/client.crt: no such file or directory" logger="UnhandledError"
E0914 00:10:22.710734    7536 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19640-2224/.minikube/profiles/addons-467916/client.crt: no such file or directory" logger="UnhandledError"
preload_test.go:44: (dbg) Done: out/minikube-linux-arm64 start -p test-preload-346312 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.24.4: (1m41.137192737s)
preload_test.go:52: (dbg) Run:  out/minikube-linux-arm64 -p test-preload-346312 image pull gcr.io/k8s-minikube/busybox
preload_test.go:52: (dbg) Done: out/minikube-linux-arm64 -p test-preload-346312 image pull gcr.io/k8s-minikube/busybox: (2.081523164s)
preload_test.go:58: (dbg) Run:  out/minikube-linux-arm64 stop -p test-preload-346312
preload_test.go:58: (dbg) Done: out/minikube-linux-arm64 stop -p test-preload-346312: (10.766830637s)
preload_test.go:66: (dbg) Run:  out/minikube-linux-arm64 start -p test-preload-346312 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=docker
preload_test.go:66: (dbg) Done: out/minikube-linux-arm64 start -p test-preload-346312 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=docker: (21.24992509s)
preload_test.go:71: (dbg) Run:  out/minikube-linux-arm64 -p test-preload-346312 image list
helpers_test.go:175: Cleaning up "test-preload-346312" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p test-preload-346312
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p test-preload-346312: (2.27263746s)
--- PASS: TestPreload (137.83s)

TestScheduledStopUnix (106.39s)

=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-arm64 start -p scheduled-stop-260082 --memory=2048 --driver=docker  --container-runtime=docker
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-arm64 start -p scheduled-stop-260082 --memory=2048 --driver=docker  --container-runtime=docker: (33.253173822s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-260082 --schedule 5m
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-linux-arm64 status --format={{.TimeToStop}} -p scheduled-stop-260082 -n scheduled-stop-260082
scheduled_stop_test.go:169: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-260082 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-260082 --cancel-scheduled
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-260082 -n scheduled-stop-260082
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-arm64 status -p scheduled-stop-260082
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-260082 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-arm64 status -p scheduled-stop-260082
scheduled_stop_test.go:205: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p scheduled-stop-260082: exit status 7 (69.969645ms)

-- stdout --
	scheduled-stop-260082
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-260082 -n scheduled-stop-260082
scheduled_stop_test.go:176: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-260082 -n scheduled-stop-260082: exit status 7 (67.44187ms)

-- stdout --
	Stopped

-- /stdout --
scheduled_stop_test.go:176: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-260082" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p scheduled-stop-260082
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p scheduled-stop-260082: (1.625050383s)
--- PASS: TestScheduledStopUnix (106.39s)

TestSkaffold (120.31s)

=== RUN   TestSkaffold
skaffold_test.go:59: (dbg) Run:  /tmp/skaffold.exe1531130112 version
skaffold_test.go:63: skaffold version: v2.13.2
skaffold_test.go:66: (dbg) Run:  out/minikube-linux-arm64 start -p skaffold-927614 --memory=2600 --driver=docker  --container-runtime=docker
skaffold_test.go:66: (dbg) Done: out/minikube-linux-arm64 start -p skaffold-927614 --memory=2600 --driver=docker  --container-runtime=docker: (33.882776002s)
skaffold_test.go:86: copying out/minikube-linux-arm64 to /home/jenkins/workspace/Docker_Linux_docker_arm64/out/minikube
skaffold_test.go:105: (dbg) Run:  /tmp/skaffold.exe1531130112 run --minikube-profile skaffold-927614 --kube-context skaffold-927614 --status-check=true --port-forward=false --interactive=false
E0914 00:14:45.146042    7536 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19640-2224/.minikube/profiles/functional-657116/client.crt: no such file or directory" logger="UnhandledError"
skaffold_test.go:105: (dbg) Done: /tmp/skaffold.exe1531130112 run --minikube-profile skaffold-927614 --kube-context skaffold-927614 --status-check=true --port-forward=false --interactive=false: (1m11.081020452s)
skaffold_test.go:111: (dbg) TestSkaffold: waiting 1m0s for pods matching "app=leeroy-app" in namespace "default" ...
helpers_test.go:344: "leeroy-app-5db458b867-8pd64" [a35bfa8a-abbe-4d17-a037-47c5c81cc076] Running
skaffold_test.go:111: (dbg) TestSkaffold: app=leeroy-app healthy within 6.003626292s
skaffold_test.go:114: (dbg) TestSkaffold: waiting 1m0s for pods matching "app=leeroy-web" in namespace "default" ...
helpers_test.go:344: "leeroy-web-c878b7f48-cm4gg" [00d5d451-f8c0-44e9-bcc4-13f9145f346c] Running
skaffold_test.go:114: (dbg) TestSkaffold: app=leeroy-web healthy within 5.004211376s
helpers_test.go:175: Cleaning up "skaffold-927614" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p skaffold-927614
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p skaffold-927614: (2.921876637s)
--- PASS: TestSkaffold (120.31s)

TestInsufficientStorage (10.63s)

=== RUN   TestInsufficientStorage
status_test.go:50: (dbg) Run:  out/minikube-linux-arm64 start -p insufficient-storage-260737 --memory=2048 --output=json --wait=true --driver=docker  --container-runtime=docker
E0914 00:15:22.710789    7536 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19640-2224/.minikube/profiles/addons-467916/client.crt: no such file or directory" logger="UnhandledError"
status_test.go:50: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p insufficient-storage-260737 --memory=2048 --output=json --wait=true --driver=docker  --container-runtime=docker: exit status 26 (8.405598518s)

-- stdout --
	{"specversion":"1.0","id":"0d374263-535f-4ea3-afb6-0d132cf84f09","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[insufficient-storage-260737] minikube v1.34.0 on Ubuntu 20.04 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"0d73c9c3-e30c-44f8-8e1f-c0dfca8427f1","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=19640"}}
	{"specversion":"1.0","id":"557ff633-757f-4fc9-9c96-80c0ac8d3726","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"df278010-c8ea-4aa7-b931-10fb24d62e67","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/19640-2224/kubeconfig"}}
	{"specversion":"1.0","id":"6bdf0515-0ca8-4fcf-ba53-80acee9851d3","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/19640-2224/.minikube"}}
	{"specversion":"1.0","id":"0464fcca-defc-4635-bc92-f7568db099e0","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-arm64"}}
	{"specversion":"1.0","id":"b60b8496-47d8-43ec-aa23-2fbeadf4a677","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"520b2b7f-7ba7-4ac0-b4d1-a81336f7b492","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_STORAGE_CAPACITY=100"}}
	{"specversion":"1.0","id":"77d27b93-b4cb-4674-a105-2974d7e5927c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_AVAILABLE_STORAGE=19"}}
	{"specversion":"1.0","id":"c5ed6dc7-b3ba-4886-8d37-633deaac8164","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the docker driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"eb2b340e-e9ed-4e65-a532-934043111122","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Using Docker driver with root privileges"}}
	{"specversion":"1.0","id":"d0404837-ffb2-400e-8c30-428b026937d4","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting \"insufficient-storage-260737\" primary control-plane node in \"insufficient-storage-260737\" cluster","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"7c59ece8-140f-4673-a9f9-4a629cbc7e00","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"5","message":"Pulling base image v0.0.45-1726243947-19640 ...","name":"Pulling Base Image","totalsteps":"19"}}
	{"specversion":"1.0","id":"2178f475-2ca7-4acb-93ef-fa96219779f2","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"8","message":"Creating docker container (CPUs=2, Memory=2048MB) ...","name":"Creating Container","totalsteps":"19"}}
	{"specversion":"1.0","id":"266f4f9d-f165-4a27-b5bb-15fbd1a7d089","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"Try one or more of the following to free up space on the device:\n\t\n\t\t\t1. Run \"docker system prune\" to remove unused Docker data (optionally with \"-a\")\n\t\t\t2. Increase the storage allocated to Docker for Desktop by clicking on:\n\t\t\t\tDocker icon \u003e Preferences \u003e Resources \u003e Disk Image Size\n\t\t\t3. Run \"minikube ssh -- docker system prune\" if using the Docker container runtime","exitcode":"26","issues":"https://github.com/kubernetes/minikube/issues/9024","message":"Docker is out of disk space! (/var is at 100% of capacity). You can pass '--force' to skip this check.","name":"RSRC_DOCKER_STORAGE","url":""}}

-- /stdout --
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p insufficient-storage-260737 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p insufficient-storage-260737 --output=json --layout=cluster: exit status 7 (269.006472ms)

-- stdout --
	{"Name":"insufficient-storage-260737","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","Step":"Creating Container","StepDetail":"Creating docker container (CPUs=2, Memory=2048MB) ...","BinaryVersion":"v1.34.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-260737","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

-- /stdout --
** stderr ** 
	E0914 00:15:26.690653  223266 status.go:417] kubeconfig endpoint: get endpoint: "insufficient-storage-260737" does not appear in /home/jenkins/minikube-integration/19640-2224/kubeconfig

** /stderr **
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p insufficient-storage-260737 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p insufficient-storage-260737 --output=json --layout=cluster: exit status 7 (267.384ms)

-- stdout --
	{"Name":"insufficient-storage-260737","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","BinaryVersion":"v1.34.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-260737","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

-- /stdout --
** stderr ** 
	E0914 00:15:26.957167  223325 status.go:417] kubeconfig endpoint: get endpoint: "insufficient-storage-260737" does not appear in /home/jenkins/minikube-integration/19640-2224/kubeconfig
	E0914 00:15:26.967285  223325 status.go:560] unable to read event log: stat: stat /home/jenkins/minikube-integration/19640-2224/.minikube/profiles/insufficient-storage-260737/events.json: no such file or directory

** /stderr **
helpers_test.go:175: Cleaning up "insufficient-storage-260737" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p insufficient-storage-260737
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p insufficient-storage-260737: (1.690304448s)
--- PASS: TestInsufficientStorage (10.63s)

TestRunningBinaryUpgrade (95.93s)

=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /tmp/minikube-v1.26.0.2485137691 start -p running-upgrade-894208 --memory=2200 --vm-driver=docker  --container-runtime=docker
E0914 00:20:45.060513    7536 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19640-2224/.minikube/profiles/skaffold-927614/client.crt: no such file or directory" logger="UnhandledError"
version_upgrade_test.go:120: (dbg) Done: /tmp/minikube-v1.26.0.2485137691 start -p running-upgrade-894208 --memory=2200 --vm-driver=docker  --container-runtime=docker: (37.936755801s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-linux-arm64 start -p running-upgrade-894208 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker
E0914 00:21:26.033473    7536 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19640-2224/.minikube/profiles/skaffold-927614/client.crt: no such file or directory" logger="UnhandledError"
version_upgrade_test.go:130: (dbg) Done: out/minikube-linux-arm64 start -p running-upgrade-894208 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker: (54.831236041s)
helpers_test.go:175: Cleaning up "running-upgrade-894208" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p running-upgrade-894208
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p running-upgrade-894208: (2.286541838s)
--- PASS: TestRunningBinaryUpgrade (95.93s)

TestKubernetesUpgrade (384.46s)

=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-035649 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker
version_upgrade_test.go:222: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-035649 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker: (1m3.046357741s)
version_upgrade_test.go:227: (dbg) Run:  out/minikube-linux-arm64 stop -p kubernetes-upgrade-035649
version_upgrade_test.go:227: (dbg) Done: out/minikube-linux-arm64 stop -p kubernetes-upgrade-035649: (10.808908774s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-linux-arm64 -p kubernetes-upgrade-035649 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-linux-arm64 -p kubernetes-upgrade-035649 status --format={{.Host}}: exit status 7 (67.37296ms)

-- stdout --
	Stopped

-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
version_upgrade_test.go:243: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-035649 --memory=2200 --kubernetes-version=v1.31.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker
E0914 00:18:25.795081    7536 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19640-2224/.minikube/profiles/addons-467916/client.crt: no such file or directory" logger="UnhandledError"
version_upgrade_test.go:243: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-035649 --memory=2200 --kubernetes-version=v1.31.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker: (4m42.674934501s)
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-035649 version --output=json
version_upgrade_test.go:267: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:269: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-035649 --memory=2200 --kubernetes-version=v1.20.0 --driver=docker  --container-runtime=docker
version_upgrade_test.go:269: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p kubernetes-upgrade-035649 --memory=2200 --kubernetes-version=v1.20.0 --driver=docker  --container-runtime=docker: exit status 106 (100.475948ms)

-- stdout --
	* [kubernetes-upgrade-035649] minikube v1.34.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=19640
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19640-2224/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19640-2224/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.31.1 cluster to v1.20.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.20.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-035649
	    minikube start -p kubernetes-upgrade-035649 --kubernetes-version=v1.20.0
	    
	    2) Create a second cluster with Kubernetes 1.20.0, by running:
	    
	    minikube start -p kubernetes-upgrade-0356492 --kubernetes-version=v1.20.0
	    
	    3) Use the existing cluster at version Kubernetes 1.31.1, by running:
	    
	    minikube start -p kubernetes-upgrade-035649 --kubernetes-version=v1.31.1
	    

** /stderr **
version_upgrade_test.go:273: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:275: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-035649 --memory=2200 --kubernetes-version=v1.31.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker
version_upgrade_test.go:275: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-035649 --memory=2200 --kubernetes-version=v1.31.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker: (25.211819916s)
helpers_test.go:175: Cleaning up "kubernetes-upgrade-035649" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p kubernetes-upgrade-035649
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p kubernetes-upgrade-035649: (2.450249952s)
--- PASS: TestKubernetesUpgrade (384.46s)

TestMissingContainerUpgrade (165.84s)

=== RUN   TestMissingContainerUpgrade
=== PAUSE TestMissingContainerUpgrade

=== CONT  TestMissingContainerUpgrade
version_upgrade_test.go:309: (dbg) Run:  /tmp/minikube-v1.26.0.1653166283 start -p missing-upgrade-544465 --memory=2200 --driver=docker  --container-runtime=docker
version_upgrade_test.go:309: (dbg) Done: /tmp/minikube-v1.26.0.1653166283 start -p missing-upgrade-544465 --memory=2200 --driver=docker  --container-runtime=docker: (1m28.653990325s)
version_upgrade_test.go:318: (dbg) Run:  docker stop missing-upgrade-544465
version_upgrade_test.go:318: (dbg) Done: docker stop missing-upgrade-544465: (10.360665234s)
version_upgrade_test.go:323: (dbg) Run:  docker rm missing-upgrade-544465
version_upgrade_test.go:329: (dbg) Run:  out/minikube-linux-arm64 start -p missing-upgrade-544465 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker
version_upgrade_test.go:329: (dbg) Done: out/minikube-linux-arm64 start -p missing-upgrade-544465 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker: (1m3.600385157s)
helpers_test.go:175: Cleaning up "missing-upgrade-544465" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p missing-upgrade-544465
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p missing-upgrade-544465: (2.434542934s)
--- PASS: TestMissingContainerUpgrade (165.84s)

TestStoppedBinaryUpgrade/Setup (0.89s)

=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (0.89s)

TestStoppedBinaryUpgrade/Upgrade (86.13s)

=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /tmp/minikube-v1.26.0.1883063609 start -p stopped-upgrade-358907 --memory=2200 --vm-driver=docker  --container-runtime=docker
E0914 00:19:45.166402    7536 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19640-2224/.minikube/profiles/functional-657116/client.crt: no such file or directory" logger="UnhandledError"
version_upgrade_test.go:183: (dbg) Done: /tmp/minikube-v1.26.0.1883063609 start -p stopped-upgrade-358907 --memory=2200 --vm-driver=docker  --container-runtime=docker: (41.60885917s)
version_upgrade_test.go:192: (dbg) Run:  /tmp/minikube-v1.26.0.1883063609 -p stopped-upgrade-358907 stop
version_upgrade_test.go:192: (dbg) Done: /tmp/minikube-v1.26.0.1883063609 -p stopped-upgrade-358907 stop: (10.97928594s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-linux-arm64 start -p stopped-upgrade-358907 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker
E0914 00:20:04.083552    7536 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19640-2224/.minikube/profiles/skaffold-927614/client.crt: no such file or directory" logger="UnhandledError"
E0914 00:20:04.089947    7536 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19640-2224/.minikube/profiles/skaffold-927614/client.crt: no such file or directory" logger="UnhandledError"
E0914 00:20:04.101331    7536 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19640-2224/.minikube/profiles/skaffold-927614/client.crt: no such file or directory" logger="UnhandledError"
E0914 00:20:04.122745    7536 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19640-2224/.minikube/profiles/skaffold-927614/client.crt: no such file or directory" logger="UnhandledError"
E0914 00:20:04.164153    7536 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19640-2224/.minikube/profiles/skaffold-927614/client.crt: no such file or directory" logger="UnhandledError"
E0914 00:20:04.245412    7536 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19640-2224/.minikube/profiles/skaffold-927614/client.crt: no such file or directory" logger="UnhandledError"
E0914 00:20:04.406893    7536 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19640-2224/.minikube/profiles/skaffold-927614/client.crt: no such file or directory" logger="UnhandledError"
E0914 00:20:04.728491    7536 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19640-2224/.minikube/profiles/skaffold-927614/client.crt: no such file or directory" logger="UnhandledError"
E0914 00:20:05.370668    7536 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19640-2224/.minikube/profiles/skaffold-927614/client.crt: no such file or directory" logger="UnhandledError"
E0914 00:20:06.652724    7536 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19640-2224/.minikube/profiles/skaffold-927614/client.crt: no such file or directory" logger="UnhandledError"
E0914 00:20:09.214731    7536 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19640-2224/.minikube/profiles/skaffold-927614/client.crt: no such file or directory" logger="UnhandledError"
E0914 00:20:14.336224    7536 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19640-2224/.minikube/profiles/skaffold-927614/client.crt: no such file or directory" logger="UnhandledError"
E0914 00:20:22.711222    7536 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19640-2224/.minikube/profiles/addons-467916/client.crt: no such file or directory" logger="UnhandledError"
E0914 00:20:24.578109    7536 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19640-2224/.minikube/profiles/skaffold-927614/client.crt: no such file or directory" logger="UnhandledError"
version_upgrade_test.go:198: (dbg) Done: out/minikube-linux-arm64 start -p stopped-upgrade-358907 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker: (33.540499591s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (86.13s)

TestStoppedBinaryUpgrade/MinikubeLogs (1.64s)

=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-linux-arm64 logs -p stopped-upgrade-358907
version_upgrade_test.go:206: (dbg) Done: out/minikube-linux-arm64 logs -p stopped-upgrade-358907: (1.64396621s)
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (1.64s)

TestPause/serial/Start (74.24s)

=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-arm64 start -p pause-206154 --memory=2048 --install-addons=false --wait=all --driver=docker  --container-runtime=docker
E0914 00:22:47.955610    7536 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19640-2224/.minikube/profiles/skaffold-927614/client.crt: no such file or directory" logger="UnhandledError"
E0914 00:22:48.232989    7536 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19640-2224/.minikube/profiles/functional-657116/client.crt: no such file or directory" logger="UnhandledError"
pause_test.go:80: (dbg) Done: out/minikube-linux-arm64 start -p pause-206154 --memory=2048 --install-addons=false --wait=all --driver=docker  --container-runtime=docker: (1m14.235137779s)
--- PASS: TestPause/serial/Start (74.24s)

TestNoKubernetes/serial/StartNoK8sWithVersion (0.08s)

=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-708776 --no-kubernetes --kubernetes-version=1.20 --driver=docker  --container-runtime=docker
no_kubernetes_test.go:83: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p NoKubernetes-708776 --no-kubernetes --kubernetes-version=1.20 --driver=docker  --container-runtime=docker: exit status 14 (81.645077ms)

-- stdout --
	* [NoKubernetes-708776] minikube v1.34.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=19640
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19640-2224/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19640-2224/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.08s)

TestNoKubernetes/serial/StartWithK8s (37.69s)

=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-708776 --driver=docker  --container-runtime=docker
no_kubernetes_test.go:95: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-708776 --driver=docker  --container-runtime=docker: (37.294142528s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-arm64 -p NoKubernetes-708776 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (37.69s)

TestPause/serial/SecondStartNoReconfiguration (34.88s)

=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-arm64 start -p pause-206154 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker
pause_test.go:92: (dbg) Done: out/minikube-linux-arm64 start -p pause-206154 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker: (34.86513351s)
--- PASS: TestPause/serial/SecondStartNoReconfiguration (34.88s)

TestNoKubernetes/serial/StartWithStopK8s (17.65s)

=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-708776 --no-kubernetes --driver=docker  --container-runtime=docker
no_kubernetes_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-708776 --no-kubernetes --driver=docker  --container-runtime=docker: (15.619902071s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-arm64 -p NoKubernetes-708776 status -o json
no_kubernetes_test.go:200: (dbg) Non-zero exit: out/minikube-linux-arm64 -p NoKubernetes-708776 status -o json: exit status 2 (288.269101ms)

-- stdout --
	{"Name":"NoKubernetes-708776","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

-- /stdout --
no_kubernetes_test.go:124: (dbg) Run:  out/minikube-linux-arm64 delete -p NoKubernetes-708776
no_kubernetes_test.go:124: (dbg) Done: out/minikube-linux-arm64 delete -p NoKubernetes-708776: (1.742442954s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (17.65s)

TestPause/serial/Pause (0.61s)

=== RUN   TestPause/serial/Pause
pause_test.go:110: (dbg) Run:  out/minikube-linux-arm64 pause -p pause-206154 --alsologtostderr -v=5
--- PASS: TestPause/serial/Pause (0.61s)

TestPause/serial/VerifyStatus (0.33s)

=== RUN   TestPause/serial/VerifyStatus
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p pause-206154 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p pause-206154 --output=json --layout=cluster: exit status 2 (328.788587ms)

-- stdout --
	{"Name":"pause-206154","StatusCode":418,"StatusName":"Paused","Step":"Done","StepDetail":"* Paused 12 containers in: kube-system, kubernetes-dashboard, storage-gluster, istio-operator","BinaryVersion":"v1.34.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":200,"StatusName":"OK"}},"Nodes":[{"Name":"pause-206154","StatusCode":200,"StatusName":"OK","Components":{"apiserver":{"Name":"apiserver","StatusCode":418,"StatusName":"Paused"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

-- /stdout --
--- PASS: TestPause/serial/VerifyStatus (0.33s)

TestPause/serial/Unpause (0.56s)

=== RUN   TestPause/serial/Unpause
pause_test.go:121: (dbg) Run:  out/minikube-linux-arm64 unpause -p pause-206154 --alsologtostderr -v=5
--- PASS: TestPause/serial/Unpause (0.56s)

TestPause/serial/PauseAgain (0.8s)

=== RUN   TestPause/serial/PauseAgain
pause_test.go:110: (dbg) Run:  out/minikube-linux-arm64 pause -p pause-206154 --alsologtostderr -v=5
--- PASS: TestPause/serial/PauseAgain (0.80s)

TestPause/serial/DeletePaused (2.12s)

=== RUN   TestPause/serial/DeletePaused
pause_test.go:132: (dbg) Run:  out/minikube-linux-arm64 delete -p pause-206154 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-arm64 delete -p pause-206154 --alsologtostderr -v=5: (2.124354524s)
--- PASS: TestPause/serial/DeletePaused (2.12s)

TestPause/serial/VerifyDeletedResources (15.32s)

=== RUN   TestPause/serial/VerifyDeletedResources
pause_test.go:142: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
pause_test.go:142: (dbg) Done: out/minikube-linux-arm64 profile list --output json: (15.252760216s)
pause_test.go:168: (dbg) Run:  docker ps -a
pause_test.go:173: (dbg) Run:  docker volume inspect pause-206154
pause_test.go:173: (dbg) Non-zero exit: docker volume inspect pause-206154: exit status 1 (19.170678ms)

-- stdout --
	[]

-- /stdout --
** stderr ** 
	Error response from daemon: get pause-206154: no such volume

                                                
                                                
** /stderr **
pause_test.go:178: (dbg) Run:  docker network ls
--- PASS: TestPause/serial/VerifyDeletedResources (15.32s)
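The non-zero `docker volume inspect` exit above is the point of this subtest: after `delete -p pause-206154`, the profile's volume must no longer exist, so the inspect call is expected to fail. A minimal sketch of that check, runnable without a Docker daemon (the helper name and the `sh -c 'exit 1'` stand-in for the failing inspect call are illustrative, not the test's actual code):

```shell
# Sketch only: the cleanup check passes when `docker volume inspect <name>`
# exits non-zero (volume gone). `sh -c 'exit 1'` simulates the missing-volume
# failure so this runs anywhere.
verify_volume_deleted() {
  name="$1"
  # stand-in for: docker volume inspect "$name"
  if sh -c 'exit 1'; then
    echo "volume $name still exists - cleanup FAILED"
  else
    echo "volume $name gone - cleanup verified"
  fi
}
verify_volume_deleted pause-206154
```

The real test additionally sweeps `docker ps -a` and `docker network ls`, as the log lines above show.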

                                                
                                    
TestNoKubernetes/serial/Start (7.22s)

=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:136: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-708776 --no-kubernetes --driver=docker  --container-runtime=docker
no_kubernetes_test.go:136: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-708776 --no-kubernetes --driver=docker  --container-runtime=docker: (7.216376916s)
--- PASS: TestNoKubernetes/serial/Start (7.22s)

                                                
                                    
TestNoKubernetes/serial/VerifyK8sNotRunning (0.35s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-arm64 ssh -p NoKubernetes-708776 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-arm64 ssh -p NoKubernetes-708776 "sudo systemctl is-active --quiet service kubelet": exit status 1 (349.10808ms)

                                                
                                                
** stderr **
	ssh: Process exited with status 3
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.35s)
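`systemctl is-active --quiet` exits 0 when the unit is active and non-zero otherwise (3 conventionally meaning inactive), so the `ssh: Process exited with status 3` above is the passing case for a `--no-kubernetes` node. A minimal sketch of reading that exit code the way the test does, with `false` standing in for the ssh'd systemctl probe (helper name is hypothetical):

```shell
# Sketch only: interpret the probe's exit status as the test does.
# `false` simulates `systemctl is-active --quiet kubelet` on a node
# where kubelet is not running (non-zero exit = inactive = PASS here).
expect_not_running() {
  if "$@"; then
    echo "kubelet active - unexpected"
  else
    status=$?
    echo "kubelet inactive (exit $status) - as expected"
  fi
}
expect_not_running false
```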

                                                
                                    
TestNoKubernetes/serial/ProfileList (0.81s)

=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:169: (dbg) Run:  out/minikube-linux-arm64 profile list
no_kubernetes_test.go:179: (dbg) Run:  out/minikube-linux-arm64 profile list --output=json
--- PASS: TestNoKubernetes/serial/ProfileList (0.81s)

                                                
                                    
TestNoKubernetes/serial/Stop (1.37s)

=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:158: (dbg) Run:  out/minikube-linux-arm64 stop -p NoKubernetes-708776
no_kubernetes_test.go:158: (dbg) Done: out/minikube-linux-arm64 stop -p NoKubernetes-708776: (1.371170452s)
--- PASS: TestNoKubernetes/serial/Stop (1.37s)

                                                
                                    
TestNoKubernetes/serial/StartNoArgs (7.98s)

=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:191: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-708776 --driver=docker  --container-runtime=docker
no_kubernetes_test.go:191: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-708776 --driver=docker  --container-runtime=docker: (7.981376335s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (7.98s)

                                                
                                    
TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.34s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-arm64 ssh -p NoKubernetes-708776 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-arm64 ssh -p NoKubernetes-708776 "sudo systemctl is-active --quiet service kubelet": exit status 1 (342.158854ms)

                                                
                                                
** stderr **
	ssh: Process exited with status 3
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.34s)

                                                
                                    
TestNetworkPlugins/group/auto/Start (44.79s)

=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p auto-524176 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=docker
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p auto-524176 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=docker: (44.792924925s)
--- PASS: TestNetworkPlugins/group/auto/Start (44.79s)

                                                
                                    
TestNetworkPlugins/group/auto/KubeletFlags (0.3s)

=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p auto-524176 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.30s)
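The KubeletFlags subtests ssh in and run `pgrep -a kubelet`, which prints `<pid> <full command line>` so the harness can assert on kubelet's startup flags. A sketch of the same lookup against a throwaway background process (the function name and the `sleep` stand-in are illustrative; assumes the procps `pgrep` is available):

```shell
# Sketch only: `pgrep -a -f PATTERN` prints "<pid> <cmdline>" for matching
# processes, which is how the test reads kubelet's flags. A background
# `sleep` stands in for the kubelet process.
show_process_cmdline() {
  pat="$1"
  line=$(pgrep -a -f "$pat" | head -n 1)
  if [ -n "$line" ]; then
    echo "found: $line"        # pid followed by the full command line
  else
    echo "not found: $pat"
  fi
}
sleep 5 &
pid=$!
show_process_cmdline 'sleep 5'
kill "$pid" 2>/dev/null || true
```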

                                                
                                    
TestNetworkPlugins/group/auto/NetCatPod (9.29s)

=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-524176 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-96c9r" [54b25398-78c6-4df3-9b12-6c4a7a2d663d] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-96c9r" [54b25398-78c6-4df3-9b12-6c4a7a2d663d] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 9.004241473s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (9.29s)

                                                
                                    
TestNetworkPlugins/group/auto/DNS (0.22s)

=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-524176 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.22s)

                                                
                                    
TestNetworkPlugins/group/auto/Localhost (0.17s)

=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-524176 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.17s)

                                                
                                    
TestNetworkPlugins/group/auto/HairPin (0.17s)

=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-524176 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.17s)

                                                
                                    
TestNetworkPlugins/group/flannel/Start (57.85s)

=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p flannel-524176 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=docker
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p flannel-524176 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=docker: (57.847724704s)
--- PASS: TestNetworkPlugins/group/flannel/Start (57.85s)

                                                
                                    
TestNetworkPlugins/group/calico/Start (69.86s)

=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p calico-524176 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=docker
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p calico-524176 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=docker: (1m9.860247693s)
--- PASS: TestNetworkPlugins/group/calico/Start (69.86s)

                                                
                                    
TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:344: "kube-flannel-ds-c25gz" [88d1a26b-6f51-4ba8-af30-b3bea2da87cb] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 6.005056286s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

                                                
                                    
TestNetworkPlugins/group/flannel/KubeletFlags (0.61s)

=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p flannel-524176 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.61s)

                                                
                                    
TestNetworkPlugins/group/flannel/NetCatPod (12.3s)

=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-524176 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-mfj54" [dd92b452-dc41-41bb-b31f-697d4cec13a4] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-mfj54" [dd92b452-dc41-41bb-b31f-697d4cec13a4] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 12.003798967s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (12.30s)

                                                
                                    
TestNetworkPlugins/group/flannel/DNS (0.31s)

=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context flannel-524176 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.31s)

                                                
                                    
TestNetworkPlugins/group/flannel/Localhost (0.21s)

=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context flannel-524176 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.21s)

                                                
                                    
TestNetworkPlugins/group/flannel/HairPin (0.23s)

=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context flannel-524176 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.23s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/Start (56.7s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p custom-flannel-524176 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=docker
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p custom-flannel-524176 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=docker: (56.703310721s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (56.70s)

                                                
                                    
TestNetworkPlugins/group/calico/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:344: "calico-node-krkq5" [6b1f0439-8b25-43f8-bde0-8514215d97ad] Running
E0914 00:29:45.147732    7536 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19640-2224/.minikube/profiles/functional-657116/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 6.005496354s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (6.01s)

                                                
                                    
TestNetworkPlugins/group/calico/KubeletFlags (0.35s)

=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p calico-524176 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.35s)

                                                
                                    
TestNetworkPlugins/group/calico/NetCatPod (11.37s)

=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context calico-524176 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-d9z9g" [b34edd92-d64e-4f6d-9782-52ba9a041232] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-d9z9g" [b34edd92-d64e-4f6d-9782-52ba9a041232] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 11.023743048s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (11.37s)

                                                
                                    
TestNetworkPlugins/group/calico/DNS (0.62s)

=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:175: (dbg) Run:  kubectl --context calico-524176 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.62s)

                                                
                                    
TestNetworkPlugins/group/calico/Localhost (0.32s)

=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:194: (dbg) Run:  kubectl --context calico-524176 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.32s)

                                                
                                    
TestNetworkPlugins/group/calico/HairPin (0.27s)

=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:264: (dbg) Run:  kubectl --context calico-524176 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.27s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.39s)

=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p custom-flannel-524176 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.39s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/NetCatPod (12.38s)

=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-524176 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-4mphr" [47269ac4-5cbd-44e3-9e57-a9657a637eac] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-4mphr" [47269ac4-5cbd-44e3-9e57-a9657a637eac] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 12.004225873s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (12.38s)

                                                
                                    
TestNetworkPlugins/group/false/Start (50.52s)

=== RUN   TestNetworkPlugins/group/false/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p false-524176 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=false --driver=docker  --container-runtime=docker
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p false-524176 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=false --driver=docker  --container-runtime=docker: (50.517785196s)
--- PASS: TestNetworkPlugins/group/false/Start (50.52s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/DNS (0.27s)

=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-524176 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.27s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/Localhost (0.28s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-524176 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.28s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/HairPin (0.2s)

=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-524176 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.20s)

                                                
                                    
TestNetworkPlugins/group/kindnet/Start (63.31s)

=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p kindnet-524176 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=docker
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p kindnet-524176 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=docker: (1m3.305263342s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (63.31s)

                                                
                                    
TestNetworkPlugins/group/false/KubeletFlags (0.36s)

=== RUN   TestNetworkPlugins/group/false/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p false-524176 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/false/KubeletFlags (0.36s)

                                                
                                    
TestNetworkPlugins/group/false/NetCatPod (13.37s)

=== RUN   TestNetworkPlugins/group/false/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context false-524176 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/false/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-cvqmj" [122e499e-e7b1-4496-9010-5e44844093d6] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-cvqmj" [122e499e-e7b1-4496-9010-5e44844093d6] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/false/NetCatPod: app=netcat healthy within 13.004372066s
--- PASS: TestNetworkPlugins/group/false/NetCatPod (13.37s)

                                                
                                    
TestNetworkPlugins/group/false/DNS (0.2s)

=== RUN   TestNetworkPlugins/group/false/DNS
net_test.go:175: (dbg) Run:  kubectl --context false-524176 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/false/DNS (0.20s)

                                                
                                    
TestNetworkPlugins/group/false/Localhost (0.22s)

=== RUN   TestNetworkPlugins/group/false/Localhost
net_test.go:194: (dbg) Run:  kubectl --context false-524176 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/false/Localhost (0.22s)

                                                
                                    
TestNetworkPlugins/group/false/HairPin (0.25s)

=== RUN   TestNetworkPlugins/group/false/HairPin
net_test.go:264: (dbg) Run:  kubectl --context false-524176 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/false/HairPin (0.25s)

                                                
                                    
TestNetworkPlugins/group/bridge/Start (77.55s)

=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p bridge-524176 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=docker
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p bridge-524176 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=docker: (1m17.552621997s)
--- PASS: TestNetworkPlugins/group/bridge/Start (77.55s)

                                                
                                    
TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:344: "kindnet-vjh9r" [3328c7e1-1f83-4ded-b822-f1d7a20a8520] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 6.004639854s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

                                                
                                    
TestNetworkPlugins/group/kindnet/KubeletFlags (0.35s)

=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p kindnet-524176 "pgrep -a kubelet"
E0914 00:32:11.787764    7536 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19640-2224/.minikube/profiles/auto-524176/client.crt: no such file or directory" logger="UnhandledError"
E0914 00:32:11.796940    7536 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19640-2224/.minikube/profiles/auto-524176/client.crt: no such file or directory" logger="UnhandledError"
E0914 00:32:11.808247    7536 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19640-2224/.minikube/profiles/auto-524176/client.crt: no such file or directory" logger="UnhandledError"
E0914 00:32:11.832593    7536 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19640-2224/.minikube/profiles/auto-524176/client.crt: no such file or directory" logger="UnhandledError"
E0914 00:32:11.878557    7536 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19640-2224/.minikube/profiles/auto-524176/client.crt: no such file or directory" logger="UnhandledError"
E0914 00:32:11.964412    7536 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19640-2224/.minikube/profiles/auto-524176/client.crt: no such file or directory" logger="UnhandledError"
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.35s)

                                                
                                    
TestNetworkPlugins/group/kindnet/NetCatPod (11.34s)

=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-524176 replace --force -f testdata/netcat-deployment.yaml
E0914 00:32:12.126657    7536 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19640-2224/.minikube/profiles/auto-524176/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-g84kw" [5917a3ca-4305-4a81-abbe-0f4d446465d5] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E0914 00:32:12.448508    7536 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19640-2224/.minikube/profiles/auto-524176/client.crt: no such file or directory" logger="UnhandledError"
E0914 00:32:13.089995    7536 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19640-2224/.minikube/profiles/auto-524176/client.crt: no such file or directory" logger="UnhandledError"
E0914 00:32:14.371407    7536 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19640-2224/.minikube/profiles/auto-524176/client.crt: no such file or directory" logger="UnhandledError"
E0914 00:32:16.933042    7536 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19640-2224/.minikube/profiles/auto-524176/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:344: "netcat-6fc964789b-g84kw" [5917a3ca-4305-4a81-abbe-0f4d446465d5] Running
E0914 00:32:22.054463    7536 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19640-2224/.minikube/profiles/auto-524176/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 11.004715268s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (11.34s)

                                                
                                    
TestNetworkPlugins/group/kindnet/DNS (0.34s)

=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kindnet-524176 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.34s)

                                                
                                    
TestNetworkPlugins/group/kindnet/Localhost (0.33s)

=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kindnet-524176 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.33s)

TestNetworkPlugins/group/kindnet/HairPin (0.21s)
=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kindnet-524176 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.21s)
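The hairpin probe above is `nc -w 5 -i 5 -z netcat 8080` run inside a pod backing the `netcat` service itself: success means traffic sent to the service can loop back ("hairpin") to the originating pod. Only nc's exit status matters to the test; a sketch of that interpretation:

```shell
# Interpret the exit code of the `nc -z` hairpin probe.
# Exit 0 = the port was reachable through the service, i.e. hairpin works.
hairpin_result() {
  if [ "$1" -eq 0 ]; then
    echo "hairpin ok"
  else
    echo "hairpin broken (exit $1)"
  fi
}

hairpin_result 0   # prints "hairpin ok"
```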

TestNetworkPlugins/group/enable-default-cni/Start (74.44s)
=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p enable-default-cni-524176 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=docker
E0914 00:32:52.778400    7536 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19640-2224/.minikube/profiles/auto-524176/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p enable-default-cni-524176 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=docker: (1m14.437805301s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (74.44s)

TestNetworkPlugins/group/bridge/KubeletFlags (0.42s)
=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p bridge-524176 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.42s)

TestNetworkPlugins/group/bridge/NetCatPod (12.41s)
=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context bridge-524176 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-fkl49" [43f535fd-93ef-4417-b667-6125737b4b90] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-fkl49" [43f535fd-93ef-4417-b667-6125737b4b90] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 12.003258509s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (12.41s)
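NetCatPod waits until a pod matching `app=netcat` reports Running; the helper lines above show the pod moving from `Pending / Ready:ContainersNotReady` to `Running`. A sketch of that readiness decision on a reported status string (the classification below is an assumption for illustration, not the test's exact logic):

```shell
# Classify a pod status string as the wait loop would treat it.
pod_healthy() {
  case "$1" in
    Running)                        echo "healthy" ;;
    Pending*|*ContainersNotReady*)  echo "waiting" ;;
    *)                              echo "unknown: $1" ;;
  esac
}

pod_healthy "Running"                              # prints "healthy"
pod_healthy "Pending / Ready:ContainersNotReady"   # prints "waiting"
```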

TestNetworkPlugins/group/bridge/DNS (0.2s)
=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:175: (dbg) Run:  kubectl --context bridge-524176 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.20s)

TestNetworkPlugins/group/bridge/Localhost (0.16s)
=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:194: (dbg) Run:  kubectl --context bridge-524176 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.16s)

TestNetworkPlugins/group/bridge/HairPin (0.18s)
=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:264: (dbg) Run:  kubectl --context bridge-524176 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.18s)

TestNetworkPlugins/group/kubenet/Start (78.92s)
=== RUN   TestNetworkPlugins/group/kubenet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p kubenet-524176 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --network-plugin=kubenet --driver=docker  --container-runtime=docker
E0914 00:33:49.180492    7536 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19640-2224/.minikube/profiles/flannel-524176/client.crt: no such file or directory" logger="UnhandledError"
E0914 00:33:59.421959    7536 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19640-2224/.minikube/profiles/flannel-524176/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p kubenet-524176 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --network-plugin=kubenet --driver=docker  --container-runtime=docker: (1m18.916275656s)
--- PASS: TestNetworkPlugins/group/kubenet/Start (78.92s)

TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.38s)
=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p enable-default-cni-524176 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.38s)

TestNetworkPlugins/group/enable-default-cni/NetCatPod (11.35s)
=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-524176 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-s9px9" [ae3f8e4e-9d22-4c49-b9bf-1e713f9cc3a0] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-s9px9" [ae3f8e4e-9d22-4c49-b9bf-1e713f9cc3a0] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 11.004145837s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (11.35s)

TestNetworkPlugins/group/enable-default-cni/DNS (0.23s)
=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-524176 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.23s)

TestNetworkPlugins/group/enable-default-cni/Localhost (0.32s)
=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:194: (dbg) Run:  kubectl --context enable-default-cni-524176 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.32s)

TestNetworkPlugins/group/enable-default-cni/HairPin (0.27s)
=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:264: (dbg) Run:  kubectl --context enable-default-cni-524176 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.27s)

TestStartStop/group/old-k8s-version/serial/FirstStart (153.42s)
=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p old-k8s-version-110538 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.20.0
E0914 00:34:42.517115    7536 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19640-2224/.minikube/profiles/calico-524176/client.crt: no such file or directory" logger="UnhandledError"
E0914 00:34:42.523563    7536 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19640-2224/.minikube/profiles/calico-524176/client.crt: no such file or directory" logger="UnhandledError"
E0914 00:34:42.546332    7536 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19640-2224/.minikube/profiles/calico-524176/client.crt: no such file or directory" logger="UnhandledError"
E0914 00:34:42.567871    7536 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19640-2224/.minikube/profiles/calico-524176/client.crt: no such file or directory" logger="UnhandledError"
E0914 00:34:42.609258    7536 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19640-2224/.minikube/profiles/calico-524176/client.crt: no such file or directory" logger="UnhandledError"
E0914 00:34:42.690700    7536 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19640-2224/.minikube/profiles/calico-524176/client.crt: no such file or directory" logger="UnhandledError"
E0914 00:34:42.852884    7536 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19640-2224/.minikube/profiles/calico-524176/client.crt: no such file or directory" logger="UnhandledError"
E0914 00:34:43.175046    7536 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19640-2224/.minikube/profiles/calico-524176/client.crt: no such file or directory" logger="UnhandledError"
E0914 00:34:43.816401    7536 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19640-2224/.minikube/profiles/calico-524176/client.crt: no such file or directory" logger="UnhandledError"
E0914 00:34:45.103069    7536 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19640-2224/.minikube/profiles/calico-524176/client.crt: no such file or directory" logger="UnhandledError"
E0914 00:34:45.146101    7536 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19640-2224/.minikube/profiles/functional-657116/client.crt: no such file or directory" logger="UnhandledError"
E0914 00:34:47.665166    7536 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19640-2224/.minikube/profiles/calico-524176/client.crt: no such file or directory" logger="UnhandledError"
E0914 00:34:52.787375    7536 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19640-2224/.minikube/profiles/calico-524176/client.crt: no such file or directory" logger="UnhandledError"
E0914 00:34:55.661777    7536 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19640-2224/.minikube/profiles/auto-524176/client.crt: no such file or directory" logger="UnhandledError"
E0914 00:35:00.865540    7536 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19640-2224/.minikube/profiles/flannel-524176/client.crt: no such file or directory" logger="UnhandledError"
E0914 00:35:03.028922    7536 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19640-2224/.minikube/profiles/calico-524176/client.crt: no such file or directory" logger="UnhandledError"
E0914 00:35:04.082765    7536 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19640-2224/.minikube/profiles/skaffold-927614/client.crt: no such file or directory" logger="UnhandledError"
E0914 00:35:05.797027    7536 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19640-2224/.minikube/profiles/addons-467916/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p old-k8s-version-110538 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.20.0: (2m33.417016003s)
--- PASS: TestStartStop/group/old-k8s-version/serial/FirstStart (153.42s)

TestNetworkPlugins/group/kubenet/KubeletFlags (0.36s)
=== RUN   TestNetworkPlugins/group/kubenet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p kubenet-524176 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/kubenet/KubeletFlags (0.36s)
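KubeletFlags runs `pgrep -a kubelet` over `minikube ssh` and inspects the kubelet command line. A sketch of checking for one flag in such output; the sample command line below is illustrative, not from this run:

```shell
# Check whether a pgrep -a line for kubelet contains a given flag.
has_kubelet_flag() {
  # $1: pgrep -a output line; $2: flag to look for
  case "$1" in
    *"$2"*) echo "present" ;;
    *)      echo "absent" ;;
  esac
}

line='1234 /var/lib/minikube/binaries/kubelet --network-plugin=kubenet --hostname-override=kubenet-524176'
has_kubelet_flag "$line" '--network-plugin=kubenet'   # prints "present"
```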

TestNetworkPlugins/group/kubenet/NetCatPod (10.35s)
=== RUN   TestNetworkPlugins/group/kubenet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kubenet-524176 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kubenet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-vt5ct" [c73434e8-822f-46a0-bad0-24d675bcc98f] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-vt5ct" [c73434e8-822f-46a0-bad0-24d675bcc98f] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/kubenet/NetCatPod: app=netcat healthy within 10.003837319s
--- PASS: TestNetworkPlugins/group/kubenet/NetCatPod (10.35s)

TestNetworkPlugins/group/kubenet/DNS (0.21s)
=== RUN   TestNetworkPlugins/group/kubenet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kubenet-524176 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kubenet/DNS (0.21s)

TestNetworkPlugins/group/kubenet/Localhost (0.2s)
=== RUN   TestNetworkPlugins/group/kubenet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kubenet-524176 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kubenet/Localhost (0.20s)

TestNetworkPlugins/group/kubenet/HairPin (0.24s)
=== RUN   TestNetworkPlugins/group/kubenet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kubenet-524176 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kubenet/HairPin (0.24s)
E0914 00:47:05.714754    7536 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19640-2224/.minikube/profiles/kindnet-524176/client.crt: no such file or directory" logger="UnhandledError"
E0914 00:47:09.826031    7536 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19640-2224/.minikube/profiles/old-k8s-version-110538/client.crt: no such file or directory" logger="UnhandledError"
E0914 00:47:11.788268    7536 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19640-2224/.minikube/profiles/auto-524176/client.crt: no such file or directory" logger="UnhandledError"
E0914 00:47:17.198523    7536 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19640-2224/.minikube/profiles/no-preload-940462/client.crt: no such file or directory" logger="UnhandledError"
E0914 00:47:37.526850    7536 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19640-2224/.minikube/profiles/old-k8s-version-110538/client.crt: no such file or directory" logger="UnhandledError"
E0914 00:47:58.160940    7536 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19640-2224/.minikube/profiles/no-preload-940462/client.crt: no such file or directory" logger="UnhandledError"

TestStartStop/group/no-preload/serial/FirstStart (55.81s)
=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p no-preload-940462 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.31.1
E0914 00:35:44.298778    7536 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19640-2224/.minikube/profiles/custom-flannel-524176/client.crt: no such file or directory" logger="UnhandledError"
E0914 00:36:04.472521    7536 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19640-2224/.minikube/profiles/calico-524176/client.crt: no such file or directory" logger="UnhandledError"
E0914 00:36:04.780673    7536 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19640-2224/.minikube/profiles/custom-flannel-524176/client.crt: no such file or directory" logger="UnhandledError"
E0914 00:36:19.144465    7536 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19640-2224/.minikube/profiles/false-524176/client.crt: no such file or directory" logger="UnhandledError"
E0914 00:36:19.150822    7536 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19640-2224/.minikube/profiles/false-524176/client.crt: no such file or directory" logger="UnhandledError"
E0914 00:36:19.162165    7536 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19640-2224/.minikube/profiles/false-524176/client.crt: no such file or directory" logger="UnhandledError"
E0914 00:36:19.183441    7536 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19640-2224/.minikube/profiles/false-524176/client.crt: no such file or directory" logger="UnhandledError"
E0914 00:36:19.224804    7536 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19640-2224/.minikube/profiles/false-524176/client.crt: no such file or directory" logger="UnhandledError"
E0914 00:36:19.306408    7536 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19640-2224/.minikube/profiles/false-524176/client.crt: no such file or directory" logger="UnhandledError"
E0914 00:36:19.468000    7536 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19640-2224/.minikube/profiles/false-524176/client.crt: no such file or directory" logger="UnhandledError"
E0914 00:36:19.789495    7536 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19640-2224/.minikube/profiles/false-524176/client.crt: no such file or directory" logger="UnhandledError"
E0914 00:36:20.431152    7536 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19640-2224/.minikube/profiles/false-524176/client.crt: no such file or directory" logger="UnhandledError"
E0914 00:36:21.712874    7536 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19640-2224/.minikube/profiles/false-524176/client.crt: no such file or directory" logger="UnhandledError"
E0914 00:36:22.787093    7536 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19640-2224/.minikube/profiles/flannel-524176/client.crt: no such file or directory" logger="UnhandledError"
E0914 00:36:24.274178    7536 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19640-2224/.minikube/profiles/false-524176/client.crt: no such file or directory" logger="UnhandledError"
E0914 00:36:27.159245    7536 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19640-2224/.minikube/profiles/skaffold-927614/client.crt: no such file or directory" logger="UnhandledError"
E0914 00:36:29.395839    7536 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19640-2224/.minikube/profiles/false-524176/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p no-preload-940462 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.31.1: (55.805654631s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (55.81s)

TestStartStop/group/no-preload/serial/DeployApp (9.39s)
=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-940462 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [24da97a9-8ffb-4ed0-b3af-65f64231bfc6] Pending
helpers_test.go:344: "busybox" [24da97a9-8ffb-4ed0-b3af-65f64231bfc6] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [24da97a9-8ffb-4ed0-b3af-65f64231bfc6] Running
E0914 00:36:39.637170    7536 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19640-2224/.minikube/profiles/false-524176/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 9.008933352s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-940462 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (9.39s)
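DeployApp's last step runs `ulimit -n` inside the busybox pod to record the container's open-file-descriptor limit. A sketch of a sanity check on such a value; the 1024 floor is an assumption for illustration, since the test itself only logs the number:

```shell
# Sanity-check an `ulimit -n` value reported from inside a container.
# The 1024 threshold is a hypothetical floor, not part of the test.
fd_limit_ok() {
  case "$1" in
    unlimited) echo "ok" ;;
    *) [ "$1" -ge 1024 ] && echo "ok" || echo "too low" ;;
  esac
}

fd_limit_ok 1048576   # prints "ok"
fd_limit_ok 256       # prints "too low"
```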

TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.12s)
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p no-preload-940462 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
E0914 00:36:45.742174    7536 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19640-2224/.minikube/profiles/custom-flannel-524176/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context no-preload-940462 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.12s)

TestStartStop/group/no-preload/serial/Stop (11.09s)
=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p no-preload-940462 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p no-preload-940462 --alsologtostderr -v=3: (11.089882004s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (11.09s)

TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.22s)
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-940462 -n no-preload-940462
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-940462 -n no-preload-940462: exit status 7 (101.766279ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p no-preload-940462 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.22s)
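After `minikube stop`, `minikube status` exits non-zero; the test above tolerates exit status 7 alongside the "Stopped" host state ("status error: exit status 7 (may be ok)"). A sketch of that tolerance; treating only 0 and 7 as acceptable here is inferred from this run's output, not from documented minikube exit codes:

```shell
# Decide whether a `minikube status` exit code is acceptable after a stop.
# 0 = everything running, 7 = host stopped (observed in this run's log).
status_acceptable_after_stop() {
  case "$1" in
    0|7) echo "ok" ;;
    *)   echo "unexpected exit $1" ;;
  esac
}

status_acceptable_after_stop 7   # prints "ok"
```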

TestStartStop/group/no-preload/serial/SecondStart (291.45s)
=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p no-preload-940462 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.31.1
E0914 00:37:00.123473    7536 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19640-2224/.minikube/profiles/false-524176/client.crt: no such file or directory" logger="UnhandledError"
E0914 00:37:05.714383    7536 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19640-2224/.minikube/profiles/kindnet-524176/client.crt: no such file or directory" logger="UnhandledError"
E0914 00:37:05.720705    7536 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19640-2224/.minikube/profiles/kindnet-524176/client.crt: no such file or directory" logger="UnhandledError"
E0914 00:37:05.732152    7536 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19640-2224/.minikube/profiles/kindnet-524176/client.crt: no such file or directory" logger="UnhandledError"
E0914 00:37:05.754190    7536 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19640-2224/.minikube/profiles/kindnet-524176/client.crt: no such file or directory" logger="UnhandledError"
E0914 00:37:05.795890    7536 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19640-2224/.minikube/profiles/kindnet-524176/client.crt: no such file or directory" logger="UnhandledError"
E0914 00:37:05.877529    7536 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19640-2224/.minikube/profiles/kindnet-524176/client.crt: no such file or directory" logger="UnhandledError"
E0914 00:37:06.038936    7536 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19640-2224/.minikube/profiles/kindnet-524176/client.crt: no such file or directory" logger="UnhandledError"
E0914 00:37:06.360501    7536 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19640-2224/.minikube/profiles/kindnet-524176/client.crt: no such file or directory" logger="UnhandledError"
E0914 00:37:07.003132    7536 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19640-2224/.minikube/profiles/kindnet-524176/client.crt: no such file or directory" logger="UnhandledError"
E0914 00:37:08.284719    7536 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19640-2224/.minikube/profiles/kindnet-524176/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-arm64 start -p no-preload-940462 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.31.1: (4m51.101690409s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-940462 -n no-preload-940462
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (291.45s)

TestStartStop/group/old-k8s-version/serial/DeployApp (9.69s)
=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-110538 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [4d284d19-e53d-462d-acfe-1cb6b75f4b9e] Pending
helpers_test.go:344: "busybox" [4d284d19-e53d-462d-acfe-1cb6b75f4b9e] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
E0914 00:37:10.846299    7536 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19640-2224/.minikube/profiles/kindnet-524176/client.crt: no such file or directory" logger="UnhandledError"
E0914 00:37:11.788316    7536 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19640-2224/.minikube/profiles/auto-524176/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:344: "busybox" [4d284d19-e53d-462d-acfe-1cb6b75f4b9e] Running
E0914 00:37:15.968170    7536 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19640-2224/.minikube/profiles/kindnet-524176/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:196: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: integration-test=busybox healthy within 9.004604088s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-110538 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/old-k8s-version/serial/DeployApp (9.69s)

TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (1.31s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p old-k8s-version-110538 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p old-k8s-version-110538 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.150139221s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context old-k8s-version-110538 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (1.31s)

TestStartStop/group/old-k8s-version/serial/Stop (11.19s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p old-k8s-version-110538 --alsologtostderr -v=3
E0914 00:37:26.209763    7536 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19640-2224/.minikube/profiles/kindnet-524176/client.crt: no such file or directory" logger="UnhandledError"
E0914 00:37:26.394596    7536 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19640-2224/.minikube/profiles/calico-524176/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p old-k8s-version-110538 --alsologtostderr -v=3: (11.187121931s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (11.19s)

TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.19s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-110538 -n old-k8s-version-110538
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-110538 -n old-k8s-version-110538: exit status 7 (75.237156ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p old-k8s-version-110538 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.19s)

TestStartStop/group/old-k8s-version/serial/SecondStart (131.32s)

=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p old-k8s-version-110538 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.20.0
E0914 00:37:39.503375    7536 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19640-2224/.minikube/profiles/auto-524176/client.crt: no such file or directory" logger="UnhandledError"
E0914 00:37:41.085476    7536 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19640-2224/.minikube/profiles/false-524176/client.crt: no such file or directory" logger="UnhandledError"
E0914 00:37:46.692264    7536 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19640-2224/.minikube/profiles/kindnet-524176/client.crt: no such file or directory" logger="UnhandledError"
E0914 00:38:07.663514    7536 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19640-2224/.minikube/profiles/custom-flannel-524176/client.crt: no such file or directory" logger="UnhandledError"
E0914 00:38:15.201680    7536 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19640-2224/.minikube/profiles/bridge-524176/client.crt: no such file or directory" logger="UnhandledError"
E0914 00:38:15.208000    7536 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19640-2224/.minikube/profiles/bridge-524176/client.crt: no such file or directory" logger="UnhandledError"
E0914 00:38:15.219323    7536 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19640-2224/.minikube/profiles/bridge-524176/client.crt: no such file or directory" logger="UnhandledError"
E0914 00:38:15.240655    7536 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19640-2224/.minikube/profiles/bridge-524176/client.crt: no such file or directory" logger="UnhandledError"
E0914 00:38:15.281991    7536 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19640-2224/.minikube/profiles/bridge-524176/client.crt: no such file or directory" logger="UnhandledError"
E0914 00:38:15.363367    7536 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19640-2224/.minikube/profiles/bridge-524176/client.crt: no such file or directory" logger="UnhandledError"
E0914 00:38:15.524776    7536 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19640-2224/.minikube/profiles/bridge-524176/client.crt: no such file or directory" logger="UnhandledError"
E0914 00:38:15.846363    7536 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19640-2224/.minikube/profiles/bridge-524176/client.crt: no such file or directory" logger="UnhandledError"
E0914 00:38:16.488485    7536 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19640-2224/.minikube/profiles/bridge-524176/client.crt: no such file or directory" logger="UnhandledError"
E0914 00:38:17.770604    7536 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19640-2224/.minikube/profiles/bridge-524176/client.crt: no such file or directory" logger="UnhandledError"
E0914 00:38:20.332514    7536 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19640-2224/.minikube/profiles/bridge-524176/client.crt: no such file or directory" logger="UnhandledError"
E0914 00:38:25.454879    7536 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19640-2224/.minikube/profiles/bridge-524176/client.crt: no such file or directory" logger="UnhandledError"
E0914 00:38:27.654132    7536 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19640-2224/.minikube/profiles/kindnet-524176/client.crt: no such file or directory" logger="UnhandledError"
E0914 00:38:35.696421    7536 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19640-2224/.minikube/profiles/bridge-524176/client.crt: no such file or directory" logger="UnhandledError"
E0914 00:38:38.922581    7536 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19640-2224/.minikube/profiles/flannel-524176/client.crt: no such file or directory" logger="UnhandledError"
E0914 00:38:56.178166    7536 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19640-2224/.minikube/profiles/bridge-524176/client.crt: no such file or directory" logger="UnhandledError"
E0914 00:39:02.029293    7536 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19640-2224/.minikube/profiles/enable-default-cni-524176/client.crt: no such file or directory" logger="UnhandledError"
E0914 00:39:02.035814    7536 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19640-2224/.minikube/profiles/enable-default-cni-524176/client.crt: no such file or directory" logger="UnhandledError"
E0914 00:39:02.047216    7536 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19640-2224/.minikube/profiles/enable-default-cni-524176/client.crt: no such file or directory" logger="UnhandledError"
E0914 00:39:02.068659    7536 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19640-2224/.minikube/profiles/enable-default-cni-524176/client.crt: no such file or directory" logger="UnhandledError"
E0914 00:39:02.110139    7536 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19640-2224/.minikube/profiles/enable-default-cni-524176/client.crt: no such file or directory" logger="UnhandledError"
E0914 00:39:02.191613    7536 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19640-2224/.minikube/profiles/enable-default-cni-524176/client.crt: no such file or directory" logger="UnhandledError"
E0914 00:39:02.353171    7536 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19640-2224/.minikube/profiles/enable-default-cni-524176/client.crt: no such file or directory" logger="UnhandledError"
E0914 00:39:02.674749    7536 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19640-2224/.minikube/profiles/enable-default-cni-524176/client.crt: no such file or directory" logger="UnhandledError"
E0914 00:39:03.007805    7536 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19640-2224/.minikube/profiles/false-524176/client.crt: no such file or directory" logger="UnhandledError"
E0914 00:39:03.316775    7536 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19640-2224/.minikube/profiles/enable-default-cni-524176/client.crt: no such file or directory" logger="UnhandledError"
E0914 00:39:04.598789    7536 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19640-2224/.minikube/profiles/enable-default-cni-524176/client.crt: no such file or directory" logger="UnhandledError"
E0914 00:39:06.628818    7536 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19640-2224/.minikube/profiles/flannel-524176/client.crt: no such file or directory" logger="UnhandledError"
E0914 00:39:07.160464    7536 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19640-2224/.minikube/profiles/enable-default-cni-524176/client.crt: no such file or directory" logger="UnhandledError"
E0914 00:39:12.282548    7536 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19640-2224/.minikube/profiles/enable-default-cni-524176/client.crt: no such file or directory" logger="UnhandledError"
E0914 00:39:22.524196    7536 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19640-2224/.minikube/profiles/enable-default-cni-524176/client.crt: no such file or directory" logger="UnhandledError"
E0914 00:39:28.234726    7536 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19640-2224/.minikube/profiles/functional-657116/client.crt: no such file or directory" logger="UnhandledError"
E0914 00:39:37.139527    7536 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19640-2224/.minikube/profiles/bridge-524176/client.crt: no such file or directory" logger="UnhandledError"
E0914 00:39:42.516959    7536 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19640-2224/.minikube/profiles/calico-524176/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-arm64 start -p old-k8s-version-110538 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.20.0: (2m10.911095029s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-110538 -n old-k8s-version-110538
E0914 00:39:43.008050    7536 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19640-2224/.minikube/profiles/enable-default-cni-524176/client.crt: no such file or directory" logger="UnhandledError"
--- PASS: TestStartStop/group/old-k8s-version/serial/SecondStart (131.32s)

TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6.00s)

=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-cd95d586-4gxl8" [6897c8ac-ca1e-4d9b-a108-5063515c53b7] Running
E0914 00:39:45.146303    7536 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19640-2224/.minikube/profiles/functional-657116/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003440408s
--- PASS: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6.00s)

TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.11s)

=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-cd95d586-4gxl8" [6897c8ac-ca1e-4d9b-a108-5063515c53b7] Running
E0914 00:39:49.575688    7536 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19640-2224/.minikube/profiles/kindnet-524176/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.00354369s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context old-k8s-version-110538 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.11s)

TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.24s)

=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 -p old-k8s-version-110538 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.24s)

TestStartStop/group/old-k8s-version/serial/Pause (2.77s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p old-k8s-version-110538 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-110538 -n old-k8s-version-110538
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-110538 -n old-k8s-version-110538: exit status 2 (316.580824ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p old-k8s-version-110538 -n old-k8s-version-110538
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p old-k8s-version-110538 -n old-k8s-version-110538: exit status 2 (316.857582ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p old-k8s-version-110538 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-110538 -n old-k8s-version-110538
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p old-k8s-version-110538 -n old-k8s-version-110538
--- PASS: TestStartStop/group/old-k8s-version/serial/Pause (2.77s)

TestStartStop/group/embed-certs/serial/FirstStart (45.33s)

=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p embed-certs-810882 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=docker --kubernetes-version=v1.31.1
E0914 00:40:04.083580    7536 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19640-2224/.minikube/profiles/skaffold-927614/client.crt: no such file or directory" logger="UnhandledError"
E0914 00:40:07.870144    7536 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19640-2224/.minikube/profiles/kubenet-524176/client.crt: no such file or directory" logger="UnhandledError"
E0914 00:40:07.876471    7536 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19640-2224/.minikube/profiles/kubenet-524176/client.crt: no such file or directory" logger="UnhandledError"
E0914 00:40:07.887822    7536 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19640-2224/.minikube/profiles/kubenet-524176/client.crt: no such file or directory" logger="UnhandledError"
E0914 00:40:07.909153    7536 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19640-2224/.minikube/profiles/kubenet-524176/client.crt: no such file or directory" logger="UnhandledError"
E0914 00:40:07.951236    7536 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19640-2224/.minikube/profiles/kubenet-524176/client.crt: no such file or directory" logger="UnhandledError"
E0914 00:40:08.032563    7536 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19640-2224/.minikube/profiles/kubenet-524176/client.crt: no such file or directory" logger="UnhandledError"
E0914 00:40:08.193813    7536 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19640-2224/.minikube/profiles/kubenet-524176/client.crt: no such file or directory" logger="UnhandledError"
E0914 00:40:08.521102    7536 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19640-2224/.minikube/profiles/kubenet-524176/client.crt: no such file or directory" logger="UnhandledError"
E0914 00:40:09.162784    7536 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19640-2224/.minikube/profiles/kubenet-524176/client.crt: no such file or directory" logger="UnhandledError"
E0914 00:40:10.236323    7536 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19640-2224/.minikube/profiles/calico-524176/client.crt: no such file or directory" logger="UnhandledError"
E0914 00:40:10.444269    7536 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19640-2224/.minikube/profiles/kubenet-524176/client.crt: no such file or directory" logger="UnhandledError"
E0914 00:40:13.005950    7536 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19640-2224/.minikube/profiles/kubenet-524176/client.crt: no such file or directory" logger="UnhandledError"
E0914 00:40:18.128501    7536 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19640-2224/.minikube/profiles/kubenet-524176/client.crt: no such file or directory" logger="UnhandledError"
E0914 00:40:22.711512    7536 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19640-2224/.minikube/profiles/addons-467916/client.crt: no such file or directory" logger="UnhandledError"
E0914 00:40:23.804417    7536 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19640-2224/.minikube/profiles/custom-flannel-524176/client.crt: no such file or directory" logger="UnhandledError"
E0914 00:40:23.969760    7536 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19640-2224/.minikube/profiles/enable-default-cni-524176/client.crt: no such file or directory" logger="UnhandledError"
E0914 00:40:28.369839    7536 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19640-2224/.minikube/profiles/kubenet-524176/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p embed-certs-810882 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=docker --kubernetes-version=v1.31.1: (45.330315744s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (45.33s)

TestStartStop/group/embed-certs/serial/DeployApp (8.39s)

=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-810882 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [2bf111bf-efc3-4e02-b297-861cd6a2bd44] Pending
helpers_test.go:344: "busybox" [2bf111bf-efc3-4e02-b297-861cd6a2bd44] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [2bf111bf-efc3-4e02-b297-861cd6a2bd44] Running
E0914 00:40:48.851338    7536 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19640-2224/.minikube/profiles/kubenet-524176/client.crt: no such file or directory" logger="UnhandledError"
E0914 00:40:51.505532    7536 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19640-2224/.minikube/profiles/custom-flannel-524176/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 8.003545309s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-810882 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (8.39s)

TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.13s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p embed-certs-810882 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context embed-certs-810882 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.13s)

TestStartStop/group/embed-certs/serial/Stop (11.04s)

=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p embed-certs-810882 --alsologtostderr -v=3
E0914 00:40:59.060852    7536 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19640-2224/.minikube/profiles/bridge-524176/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p embed-certs-810882 --alsologtostderr -v=3: (11.043160985s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (11.04s)

TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.19s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-810882 -n embed-certs-810882
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-810882 -n embed-certs-810882: exit status 7 (72.152229ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p embed-certs-810882 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.19s)

TestStartStop/group/embed-certs/serial/SecondStart (268.11s)

=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p embed-certs-810882 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=docker --kubernetes-version=v1.31.1
E0914 00:41:19.145069    7536 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19640-2224/.minikube/profiles/false-524176/client.crt: no such file or directory" logger="UnhandledError"
E0914 00:41:29.813439    7536 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19640-2224/.minikube/profiles/kubenet-524176/client.crt: no such file or directory" logger="UnhandledError"
E0914 00:41:45.891222    7536 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19640-2224/.minikube/profiles/enable-default-cni-524176/client.crt: no such file or directory" logger="UnhandledError"
E0914 00:41:46.849921    7536 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19640-2224/.minikube/profiles/false-524176/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-arm64 start -p embed-certs-810882 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=docker --kubernetes-version=v1.31.1: (4m27.718857887s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-810882 -n embed-certs-810882
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (268.11s)

TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6.01s)

=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-v7hx2" [f47026ac-c7d5-47a9-82c0-c3f34e981c81] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.004348121s
--- PASS: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6.01s)

TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.21s)

=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-v7hx2" [f47026ac-c7d5-47a9-82c0-c3f34e981c81] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.019262197s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context no-preload-940462 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.21s)

TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.23s)

=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 -p no-preload-940462 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.23s)

TestStartStop/group/no-preload/serial/Pause (2.91s)

=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p no-preload-940462 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-940462 -n no-preload-940462
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-940462 -n no-preload-940462: exit status 2 (340.571351ms)

-- stdout --
	Paused
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p no-preload-940462 -n no-preload-940462
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p no-preload-940462 -n no-preload-940462: exit status 2 (322.22682ms)

-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p no-preload-940462 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-940462 -n no-preload-940462
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p no-preload-940462 -n no-preload-940462
--- PASS: TestStartStop/group/no-preload/serial/Pause (2.91s)

TestStartStop/group/default-k8s-diff-port/serial/FirstStart (78.26s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p default-k8s-diff-port-083413 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=docker --kubernetes-version=v1.31.1
E0914 00:42:09.825492    7536 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19640-2224/.minikube/profiles/old-k8s-version-110538/client.crt: no such file or directory" logger="UnhandledError"
E0914 00:42:09.831829    7536 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19640-2224/.minikube/profiles/old-k8s-version-110538/client.crt: no such file or directory" logger="UnhandledError"
E0914 00:42:09.843180    7536 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19640-2224/.minikube/profiles/old-k8s-version-110538/client.crt: no such file or directory" logger="UnhandledError"
E0914 00:42:09.864677    7536 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19640-2224/.minikube/profiles/old-k8s-version-110538/client.crt: no such file or directory" logger="UnhandledError"
E0914 00:42:09.906072    7536 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19640-2224/.minikube/profiles/old-k8s-version-110538/client.crt: no such file or directory" logger="UnhandledError"
E0914 00:42:09.987717    7536 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19640-2224/.minikube/profiles/old-k8s-version-110538/client.crt: no such file or directory" logger="UnhandledError"
E0914 00:42:10.149290    7536 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19640-2224/.minikube/profiles/old-k8s-version-110538/client.crt: no such file or directory" logger="UnhandledError"
E0914 00:42:10.471194    7536 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19640-2224/.minikube/profiles/old-k8s-version-110538/client.crt: no such file or directory" logger="UnhandledError"
E0914 00:42:11.113246    7536 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19640-2224/.minikube/profiles/old-k8s-version-110538/client.crt: no such file or directory" logger="UnhandledError"
E0914 00:42:11.787724    7536 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19640-2224/.minikube/profiles/auto-524176/client.crt: no such file or directory" logger="UnhandledError"
E0914 00:42:12.394676    7536 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19640-2224/.minikube/profiles/old-k8s-version-110538/client.crt: no such file or directory" logger="UnhandledError"
E0914 00:42:14.957412    7536 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19640-2224/.minikube/profiles/old-k8s-version-110538/client.crt: no such file or directory" logger="UnhandledError"
E0914 00:42:20.079098    7536 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19640-2224/.minikube/profiles/old-k8s-version-110538/client.crt: no such file or directory" logger="UnhandledError"
E0914 00:42:30.320572    7536 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19640-2224/.minikube/profiles/old-k8s-version-110538/client.crt: no such file or directory" logger="UnhandledError"
E0914 00:42:33.417049    7536 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19640-2224/.minikube/profiles/kindnet-524176/client.crt: no such file or directory" logger="UnhandledError"
E0914 00:42:50.802332    7536 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19640-2224/.minikube/profiles/old-k8s-version-110538/client.crt: no such file or directory" logger="UnhandledError"
E0914 00:42:51.735239    7536 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19640-2224/.minikube/profiles/kubenet-524176/client.crt: no such file or directory" logger="UnhandledError"
E0914 00:43:15.201082    7536 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19640-2224/.minikube/profiles/bridge-524176/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p default-k8s-diff-port-083413 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=docker --kubernetes-version=v1.31.1: (1m18.255486771s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (78.26s)

TestStartStop/group/default-k8s-diff-port/serial/DeployApp (8.40s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-083413 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [75ec3c9d-592d-482e-bb76-f55af5651f61] Pending
helpers_test.go:344: "busybox" [75ec3c9d-592d-482e-bb76-f55af5651f61] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [75ec3c9d-592d-482e-bb76-f55af5651f61] Running
E0914 00:43:31.763931    7536 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19640-2224/.minikube/profiles/old-k8s-version-110538/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 8.004557911s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-083413 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (8.40s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.07s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p default-k8s-diff-port-083413 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context default-k8s-diff-port-083413 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.07s)

TestStartStop/group/default-k8s-diff-port/serial/Stop (11.20s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p default-k8s-diff-port-083413 --alsologtostderr -v=3
E0914 00:43:38.923078    7536 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19640-2224/.minikube/profiles/flannel-524176/client.crt: no such file or directory" logger="UnhandledError"
E0914 00:43:42.902711    7536 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19640-2224/.minikube/profiles/bridge-524176/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p default-k8s-diff-port-083413 --alsologtostderr -v=3: (11.198645509s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (11.20s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.19s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-083413 -n default-k8s-diff-port-083413
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-083413 -n default-k8s-diff-port-083413: exit status 7 (73.119052ms)

-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p default-k8s-diff-port-083413 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.19s)

TestStartStop/group/default-k8s-diff-port/serial/SecondStart (267.06s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p default-k8s-diff-port-083413 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=docker --kubernetes-version=v1.31.1
E0914 00:44:02.028880    7536 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19640-2224/.minikube/profiles/enable-default-cni-524176/client.crt: no such file or directory" logger="UnhandledError"
E0914 00:44:29.733276    7536 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19640-2224/.minikube/profiles/enable-default-cni-524176/client.crt: no such file or directory" logger="UnhandledError"
E0914 00:44:42.517403    7536 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19640-2224/.minikube/profiles/calico-524176/client.crt: no such file or directory" logger="UnhandledError"
E0914 00:44:45.145986    7536 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19640-2224/.minikube/profiles/functional-657116/client.crt: no such file or directory" logger="UnhandledError"
E0914 00:44:53.685387    7536 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19640-2224/.minikube/profiles/old-k8s-version-110538/client.crt: no such file or directory" logger="UnhandledError"
E0914 00:45:04.082619    7536 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19640-2224/.minikube/profiles/skaffold-927614/client.crt: no such file or directory" logger="UnhandledError"
E0914 00:45:07.869885    7536 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19640-2224/.minikube/profiles/kubenet-524176/client.crt: no such file or directory" logger="UnhandledError"
E0914 00:45:22.710552    7536 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19640-2224/.minikube/profiles/addons-467916/client.crt: no such file or directory" logger="UnhandledError"
E0914 00:45:23.804470    7536 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19640-2224/.minikube/profiles/custom-flannel-524176/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-arm64 start -p default-k8s-diff-port-083413 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=docker --kubernetes-version=v1.31.1: (4m26.701083159s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-083413 -n default-k8s-diff-port-083413
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (267.06s)

TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6.01s)

=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-wscd5" [43f0bf33-ac88-44c5-904b-5fa7fa83f98b] Running
E0914 00:45:35.577185    7536 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19640-2224/.minikube/profiles/kubenet-524176/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.004464291s
--- PASS: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6.01s)

TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.19s)

=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-wscd5" [43f0bf33-ac88-44c5-904b-5fa7fa83f98b] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.008269204s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context embed-certs-810882 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.19s)

TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.25s)

=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 -p embed-certs-810882 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.25s)

TestStartStop/group/embed-certs/serial/Pause (2.85s)

=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p embed-certs-810882 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-810882 -n embed-certs-810882
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-810882 -n embed-certs-810882: exit status 2 (328.496112ms)

-- stdout --
	Paused
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p embed-certs-810882 -n embed-certs-810882
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p embed-certs-810882 -n embed-certs-810882: exit status 2 (311.57074ms)

-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p embed-certs-810882 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-810882 -n embed-certs-810882
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p embed-certs-810882 -n embed-certs-810882
--- PASS: TestStartStop/group/embed-certs/serial/Pause (2.85s)

TestStartStop/group/newest-cni/serial/FirstStart (37.45s)

=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p newest-cni-620117 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=docker --kubernetes-version=v1.31.1
E0914 00:46:19.144197    7536 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19640-2224/.minikube/profiles/false-524176/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p newest-cni-620117 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=docker --kubernetes-version=v1.31.1: (37.453178443s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (37.45s)

TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.26s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p newest-cni-620117 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p newest-cni-620117 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.256970203s)
start_stop_delete_test.go:211: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.26s)

TestStartStop/group/newest-cni/serial/Stop (9.55s)

=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p newest-cni-620117 --alsologtostderr -v=3
E0914 00:46:36.222487    7536 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19640-2224/.minikube/profiles/no-preload-940462/client.crt: no such file or directory" logger="UnhandledError"
E0914 00:46:36.228966    7536 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19640-2224/.minikube/profiles/no-preload-940462/client.crt: no such file or directory" logger="UnhandledError"
E0914 00:46:36.240375    7536 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19640-2224/.minikube/profiles/no-preload-940462/client.crt: no such file or directory" logger="UnhandledError"
E0914 00:46:36.261868    7536 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19640-2224/.minikube/profiles/no-preload-940462/client.crt: no such file or directory" logger="UnhandledError"
E0914 00:46:36.303388    7536 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19640-2224/.minikube/profiles/no-preload-940462/client.crt: no such file or directory" logger="UnhandledError"
E0914 00:46:36.384933    7536 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19640-2224/.minikube/profiles/no-preload-940462/client.crt: no such file or directory" logger="UnhandledError"
E0914 00:46:36.546804    7536 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19640-2224/.minikube/profiles/no-preload-940462/client.crt: no such file or directory" logger="UnhandledError"
E0914 00:46:36.868733    7536 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19640-2224/.minikube/profiles/no-preload-940462/client.crt: no such file or directory" logger="UnhandledError"
E0914 00:46:37.510571    7536 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19640-2224/.minikube/profiles/no-preload-940462/client.crt: no such file or directory" logger="UnhandledError"
E0914 00:46:38.792771    7536 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19640-2224/.minikube/profiles/no-preload-940462/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p newest-cni-620117 --alsologtostderr -v=3: (9.554386894s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (9.55s)

TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.19s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-620117 -n newest-cni-620117
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-620117 -n newest-cni-620117: exit status 7 (69.728457ms)

-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p newest-cni-620117 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.19s)

TestStartStop/group/newest-cni/serial/SecondStart (18.37s)

=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p newest-cni-620117 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=docker --kubernetes-version=v1.31.1
E0914 00:46:41.354393    7536 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19640-2224/.minikube/profiles/no-preload-940462/client.crt: no such file or directory" logger="UnhandledError"
E0914 00:46:46.475892    7536 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19640-2224/.minikube/profiles/no-preload-940462/client.crt: no such file or directory" logger="UnhandledError"
E0914 00:46:56.717207    7536 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19640-2224/.minikube/profiles/no-preload-940462/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-arm64 start -p newest-cni-620117 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=docker --kubernetes-version=v1.31.1: (17.874591573s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-620117 -n newest-cni-620117
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (18.37s)

                                                
                                    
TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:273: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

                                                
                                    
TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:284: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

                                                
                                    
TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.26s)

=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 -p newest-cni-620117 image list --format=json
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.26s)

                                                
                                    
TestStartStop/group/newest-cni/serial/Pause (3.25s)

=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p newest-cni-620117 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-620117 -n newest-cni-620117
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-620117 -n newest-cni-620117: exit status 2 (330.446413ms)

                                                
                                                
-- stdout --
	Paused
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p newest-cni-620117 -n newest-cni-620117
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p newest-cni-620117 -n newest-cni-620117: exit status 2 (324.765144ms)

                                                
                                                
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p newest-cni-620117 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-620117 -n newest-cni-620117
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p newest-cni-620117 -n newest-cni-620117
--- PASS: TestStartStop/group/newest-cni/serial/Pause (3.25s)
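The Pause test above tolerates a non-zero exit from `minikube status` ("status error: exit status 2 (may be ok)"), because a paused API server or stopped kubelet legitimately makes `status` return code 2. A rough sketch of that tolerance, with `fake_status` as a hypothetical stub standing in for the real `minikube status` call so the snippet runs anywhere:

```shell
# fake_status stands in for `minikube status --format={{.APIServer}} -p <profile>`;
# it is a made-up stub, not a real minikube invocation.
fake_status() { echo "Paused"; return 2; }

# Run a status-style command, but treat exit code 2 (paused/stopped
# components) as acceptable, mirroring "exit status 2 (may be ok)".
tolerant_status() {
  local out rc
  out=$("$@")
  rc=$?
  if [ "$rc" -ne 0 ] && [ "$rc" -ne 2 ]; then
    echo "status failed: exit $rc" >&2
    return "$rc"
  fi
  echo "$out"
}

tolerant_status fake_status   # prints "Paused"; the exit-2 is swallowed
```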

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-5fxx6" [b04c08de-9c88-4436-b110-31b17463e99b] Running
E0914 00:48:15.201504    7536 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19640-2224/.minikube/profiles/bridge-524176/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003785726s
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6.00s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.1s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-5fxx6" [b04c08de-9c88-4436-b110-31b17463e99b] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.003325347s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context default-k8s-diff-port-083413 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.10s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.22s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 -p default-k8s-diff-port-083413 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.22s)
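The VerifyKubernetesImages step lists the cluster's images as JSON and flags anything outside the expected Kubernetes registries, hence the "Found non-minikube image" note above. A minimal sketch of that filtering; the JSON shape and the allow-list prefix below are illustrative assumptions, not the real schema of `minikube image list --format=json`:

```python
# Sketch of the "non-minikube image" check reported in the log above.
# `sample` is a made-up payload, not real minikube output.
import json

sample = """[
  {"repoTags": ["registry.k8s.io/kube-apiserver:v1.31.1"]},
  {"repoTags": ["gcr.io/k8s-minikube/busybox:1.28.4-glibc"]}
]"""

EXPECTED_PREFIXES = ("registry.k8s.io/",)  # hypothetical allow-list

def non_minikube_images(raw: str) -> list[str]:
    """Return every image tag that is not from an expected registry."""
    found = []
    for image in json.loads(raw):
        for tag in image.get("repoTags", []):
            if not tag.startswith(EXPECTED_PREFIXES):
                found.append(tag)
    return found

print(non_minikube_images(sample))  # ['gcr.io/k8s-minikube/busybox:1.28.4-glibc']
```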

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/Pause (2.67s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p default-k8s-diff-port-083413 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-083413 -n default-k8s-diff-port-083413
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-083413 -n default-k8s-diff-port-083413: exit status 2 (315.626801ms)

                                                
                                                
-- stdout --
	Paused
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p default-k8s-diff-port-083413 -n default-k8s-diff-port-083413
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p default-k8s-diff-port-083413 -n default-k8s-diff-port-083413: exit status 2 (312.729412ms)

                                                
                                                
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p default-k8s-diff-port-083413 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-083413 -n default-k8s-diff-port-083413
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p default-k8s-diff-port-083413 -n default-k8s-diff-port-083413
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Pause (2.67s)

                                                
                                    

Test skip (24/343)

TestDownloadOnly/v1.20.0/cached-images (0s)

=== RUN   TestDownloadOnly/v1.20.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.20.0/cached-images (0.00s)

                                                
                                    
TestDownloadOnly/v1.20.0/binaries (0s)

=== RUN   TestDownloadOnly/v1.20.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.20.0/binaries (0.00s)

                                                
                                    
TestDownloadOnly/v1.20.0/kubectl (0s)

=== RUN   TestDownloadOnly/v1.20.0/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.20.0/kubectl (0.00s)

                                                
                                    
TestDownloadOnly/v1.31.1/cached-images (0s)

=== RUN   TestDownloadOnly/v1.31.1/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.31.1/cached-images (0.00s)

                                                
                                    
TestDownloadOnly/v1.31.1/binaries (0s)

=== RUN   TestDownloadOnly/v1.31.1/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.31.1/binaries (0.00s)

                                                
                                    
TestDownloadOnly/v1.31.1/kubectl (0s)

=== RUN   TestDownloadOnly/v1.31.1/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.31.1/kubectl (0.00s)

                                                
                                    
TestDownloadOnlyKic (0.51s)

=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:232: (dbg) Run:  out/minikube-linux-arm64 start --download-only -p download-docker-757156 --alsologtostderr --driver=docker  --container-runtime=docker
aaa_download_only_test.go:244: Skip for arm64 platform. See https://github.com/kubernetes/minikube/issues/10144
helpers_test.go:175: Cleaning up "download-docker-757156" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p download-docker-757156
--- SKIP: TestDownloadOnlyKic (0.51s)

                                                
                                    
TestAddons/parallel/HelmTiller (0s)

=== RUN   TestAddons/parallel/HelmTiller
=== PAUSE TestAddons/parallel/HelmTiller

=== CONT  TestAddons/parallel/HelmTiller
addons_test.go:446: skip Helm test on arm64
--- SKIP: TestAddons/parallel/HelmTiller (0.00s)

                                                
                                    
TestAddons/parallel/Olm (0s)

=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

=== CONT  TestAddons/parallel/Olm
addons_test.go:500: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

                                                
                                    
TestDockerEnvContainerd (0s)

=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with docker true linux arm64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

                                                
                                    
TestKVMDriverInstallOrUpdate (0s)

=== RUN   TestKVMDriverInstallOrUpdate
driver_install_or_update_test.go:45: Skip if arm64. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestKVMDriverInstallOrUpdate (0.00s)

                                                
                                    
TestHyperKitDriverInstallOrUpdate (0s)

=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:105: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

                                                
                                    
TestHyperkitDriverSkipUpgrade (0s)

=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:169: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

                                                
                                    
TestFunctional/parallel/MySQL (0s)

=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1787: arm64 is not supported by mysql. Skip the test. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestFunctional/parallel/MySQL (0.00s)

                                                
                                    
TestFunctional/parallel/PodmanEnv (0s)

=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:550: only validate podman env with docker container runtime, currently testing docker
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.00s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.00s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.00s)

                                                
                                    
TestGvisorAddon (0s)

=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

                                                
                                    
TestImageBuild/serial/validateImageBuildWithBuildEnv (0s)

=== RUN   TestImageBuild/serial/validateImageBuildWithBuildEnv
image_test.go:114: skipping due to https://github.com/kubernetes/minikube/issues/12431
--- SKIP: TestImageBuild/serial/validateImageBuildWithBuildEnv (0.00s)

                                                
                                    
TestChangeNoneUser (0s)

=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

                                                
                                    
TestScheduledStopWindows (0s)

=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

                                                
                                    
TestNetworkPlugins/group/cilium (5.46s)

=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:629: 
----------------------- debugLogs start: cilium-524176 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-524176

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-524176

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-524176

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-524176

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-524176

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-524176

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-524176

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-524176

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-524176

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-524176

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "cilium-524176" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-524176"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "cilium-524176" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-524176"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "cilium-524176" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-524176"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-524176

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "cilium-524176" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-524176"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "cilium-524176" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-524176"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "cilium-524176" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "cilium-524176" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "cilium-524176" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "cilium-524176" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "cilium-524176" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "cilium-524176" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "cilium-524176" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "cilium-524176" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "cilium-524176" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-524176"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "cilium-524176" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-524176"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "cilium-524176" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-524176"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "cilium-524176" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-524176"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "cilium-524176" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-524176"

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-524176

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-524176

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-524176" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-524176" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-524176

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-524176

                                                
                                                

                                                
                                                
>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-524176" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-524176" does not exist

>>> k8s: describe kube-proxy daemon set:
error: context "cilium-524176" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "cilium-524176" does not exist

>>> k8s: kube-proxy logs:
error: context "cilium-524176" does not exist

>>> host: kubelet daemon status:
* Profile "cilium-524176" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-524176"

>>> host: kubelet daemon config:
* Profile "cilium-524176" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-524176"

>>> k8s: kubelet logs:
* Profile "cilium-524176" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-524176"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-524176" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-524176"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-524176" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-524176"

>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/19640-2224/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Sat, 14 Sep 2024 00:16:17 UTC
        provider: minikube.sigs.k8s.io
        version: v1.34.0
      name: cluster_info
    server: https://192.168.85.2:8443
  name: offline-docker-973915
contexts:
- context:
    cluster: offline-docker-973915
    extensions:
    - extension:
        last-update: Sat, 14 Sep 2024 00:16:17 UTC
        provider: minikube.sigs.k8s.io
        version: v1.34.0
      name: context_info
    namespace: default
    user: offline-docker-973915
  name: offline-docker-973915
current-context: offline-docker-973915
kind: Config
preferences: {}
users:
- name: offline-docker-973915
  user:
    client-certificate: /home/jenkins/minikube-integration/19640-2224/.minikube/profiles/offline-docker-973915/client.crt
    client-key: /home/jenkins/minikube-integration/19640-2224/.minikube/profiles/offline-docker-973915/client.key

>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-524176

>>> host: docker daemon status:
* Profile "cilium-524176" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-524176"

>>> host: docker daemon config:
* Profile "cilium-524176" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-524176"

>>> host: /etc/docker/daemon.json:
* Profile "cilium-524176" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-524176"

>>> host: docker system info:
* Profile "cilium-524176" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-524176"

>>> host: cri-docker daemon status:
* Profile "cilium-524176" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-524176"

>>> host: cri-docker daemon config:
* Profile "cilium-524176" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-524176"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-524176" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-524176"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-524176" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-524176"

>>> host: cri-dockerd version:
* Profile "cilium-524176" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-524176"

>>> host: containerd daemon status:
* Profile "cilium-524176" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-524176"

>>> host: containerd daemon config:
* Profile "cilium-524176" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-524176"

>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-524176" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-524176"

>>> host: /etc/containerd/config.toml:
* Profile "cilium-524176" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-524176"

>>> host: containerd config dump:
* Profile "cilium-524176" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-524176"

>>> host: crio daemon status:
* Profile "cilium-524176" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-524176"

>>> host: crio daemon config:
* Profile "cilium-524176" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-524176"

>>> host: /etc/crio:
* Profile "cilium-524176" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-524176"

>>> host: crio config:
* Profile "cilium-524176" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-524176"

----------------------- debugLogs end: cilium-524176 [took: 5.24294718s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-524176" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p cilium-524176
--- SKIP: TestNetworkPlugins/group/cilium (5.46s)

TestStartStop/group/disable-driver-mounts (0.16s)

=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:103: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-474290" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p disable-driver-mounts-474290
--- SKIP: TestStartStop/group/disable-driver-mounts (0.16s)