Test Report: Docker_Linux_docker_arm64 19734

                    
795b96072c2ea51545c2bdfc984dcdf8fe273799:2024-09-30:36435
                    
                

Test failures (1/342)

Order  Failed test                   Duration
   33  TestAddons/parallel/Registry  75.49s
TestAddons/parallel/Registry (75.49s)

=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

=== CONT  TestAddons/parallel/Registry
addons_test.go:328: registry stabilized in 3.282719ms
addons_test.go:330: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-66c9cd494c-rdvzj" [1071ed50-a346-48af-bd60-fb6e526e1d58] Running
addons_test.go:330: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 6.005386312s
addons_test.go:333: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-proxy-ggxvp" [a0c7860c-3f6b-40f2-9761-cd6466b5e812] Running
addons_test.go:333: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 6.003671686s
addons_test.go:338: (dbg) Run:  kubectl --context addons-703944 delete po -l run=registry-test --now
addons_test.go:343: (dbg) Run:  kubectl --context addons-703944 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:343: (dbg) Non-zero exit: kubectl --context addons-703944 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": exit status 1 (1m0.11131623s)

-- stdout --
	pod "registry-test" deleted

-- /stdout --
** stderr ** 
	error: timed out waiting for the condition

** /stderr **
addons_test.go:345: failed to hit registry.kube-system.svc.cluster.local. args "kubectl --context addons-703944 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c \"wget --spider -S http://registry.kube-system.svc.cluster.local\"" failed: exit status 1
addons_test.go:349: expected curl response be "HTTP/1.1 200", but got *pod "registry-test" deleted
*
addons_test.go:357: (dbg) Run:  out/minikube-linux-arm64 -p addons-703944 ip
2024/09/30 10:34:15 [DEBUG] GET http://192.168.49.2:5000
addons_test.go:386: (dbg) Run:  out/minikube-linux-arm64 -p addons-703944 addons disable registry --alsologtostderr -v=1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestAddons/parallel/Registry]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect addons-703944
helpers_test.go:235: (dbg) docker inspect addons-703944:

-- stdout --
	[
	    {
	        "Id": "6ba2c206eb4f2d043dea83743b508d555e3d81e2d941c966c85e28de78318fe3",
	        "Created": "2024-09-30T10:21:01.950380753Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 8882,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2024-09-30T10:21:02.101402153Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:62002f6a97ad1f6cd4117c29b1c488a6bf3b6255c8231f0d600b1bc7ba1bcfd6",
	        "ResolvConfPath": "/var/lib/docker/containers/6ba2c206eb4f2d043dea83743b508d555e3d81e2d941c966c85e28de78318fe3/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/6ba2c206eb4f2d043dea83743b508d555e3d81e2d941c966c85e28de78318fe3/hostname",
	        "HostsPath": "/var/lib/docker/containers/6ba2c206eb4f2d043dea83743b508d555e3d81e2d941c966c85e28de78318fe3/hosts",
	        "LogPath": "/var/lib/docker/containers/6ba2c206eb4f2d043dea83743b508d555e3d81e2d941c966c85e28de78318fe3/6ba2c206eb4f2d043dea83743b508d555e3d81e2d941c966c85e28de78318fe3-json.log",
	        "Name": "/addons-703944",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "addons-703944:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "addons-703944",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4194304000,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8388608000,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/48d713368b11c5b833536f4955fb2777b0b4f3c415103a3489b8c507b01e3341-init/diff:/var/lib/docker/overlay2/617a358269990fa6af831f14aa0a1cf249355fc559e21616870630a688e89f21/diff",
	                "MergedDir": "/var/lib/docker/overlay2/48d713368b11c5b833536f4955fb2777b0b4f3c415103a3489b8c507b01e3341/merged",
	                "UpperDir": "/var/lib/docker/overlay2/48d713368b11c5b833536f4955fb2777b0b4f3c415103a3489b8c507b01e3341/diff",
	                "WorkDir": "/var/lib/docker/overlay2/48d713368b11c5b833536f4955fb2777b0b4f3c415103a3489b8c507b01e3341/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "addons-703944",
	                "Source": "/var/lib/docker/volumes/addons-703944/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "addons-703944",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "addons-703944",
	                "name.minikube.sigs.k8s.io": "addons-703944",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "85d364dc746bc1ce06d8b03501ac5a967ba05830aa47aff44bcf1bc33f7e0da3",
	            "SandboxKey": "/var/run/docker/netns/85d364dc746b",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32768"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32769"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32772"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32770"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32771"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "addons-703944": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "DriverOpts": null,
	                    "NetworkID": "f903481ea82c7fe80c306cb66548f367f308b7e33d8f02c92e4a74c877559ea7",
	                    "EndpointID": "dbea6e478a8d244a877708b0f077cd418ec819855b0b951a50fe93ad9f76343c",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "addons-703944",
	                        "6ba2c206eb4f"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p addons-703944 -n addons-703944
helpers_test.go:244: <<< TestAddons/parallel/Registry FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestAddons/parallel/Registry]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 -p addons-703944 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-arm64 -p addons-703944 logs -n 25: (1.157087331s)
helpers_test.go:252: TestAddons/parallel/Registry logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------------------------------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	| Command |                                            Args                                             |        Profile         |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------------------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only                                                                     | download-only-464574   | jenkins | v1.34.0 | 30 Sep 24 10:20 UTC |                     |
	|         | -p download-only-464574                                                                     |                        |         |         |                     |                     |
	|         | --force --alsologtostderr                                                                   |                        |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                                                                |                        |         |         |                     |                     |
	|         | --container-runtime=docker                                                                  |                        |         |         |                     |                     |
	|         | --driver=docker                                                                             |                        |         |         |                     |                     |
	|         | --container-runtime=docker                                                                  |                        |         |         |                     |                     |
	| delete  | --all                                                                                       | minikube               | jenkins | v1.34.0 | 30 Sep 24 10:20 UTC | 30 Sep 24 10:20 UTC |
	| delete  | -p download-only-464574                                                                     | download-only-464574   | jenkins | v1.34.0 | 30 Sep 24 10:20 UTC | 30 Sep 24 10:20 UTC |
	| start   | -o=json --download-only                                                                     | download-only-328857   | jenkins | v1.34.0 | 30 Sep 24 10:20 UTC |                     |
	|         | -p download-only-328857                                                                     |                        |         |         |                     |                     |
	|         | --force --alsologtostderr                                                                   |                        |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                                                                |                        |         |         |                     |                     |
	|         | --container-runtime=docker                                                                  |                        |         |         |                     |                     |
	|         | --driver=docker                                                                             |                        |         |         |                     |                     |
	|         | --container-runtime=docker                                                                  |                        |         |         |                     |                     |
	| delete  | --all                                                                                       | minikube               | jenkins | v1.34.0 | 30 Sep 24 10:20 UTC | 30 Sep 24 10:20 UTC |
	| delete  | -p download-only-328857                                                                     | download-only-328857   | jenkins | v1.34.0 | 30 Sep 24 10:20 UTC | 30 Sep 24 10:20 UTC |
	| delete  | -p download-only-464574                                                                     | download-only-464574   | jenkins | v1.34.0 | 30 Sep 24 10:20 UTC | 30 Sep 24 10:20 UTC |
	| delete  | -p download-only-328857                                                                     | download-only-328857   | jenkins | v1.34.0 | 30 Sep 24 10:20 UTC | 30 Sep 24 10:20 UTC |
	| start   | --download-only -p                                                                          | download-docker-398252 | jenkins | v1.34.0 | 30 Sep 24 10:20 UTC |                     |
	|         | download-docker-398252                                                                      |                        |         |         |                     |                     |
	|         | --alsologtostderr                                                                           |                        |         |         |                     |                     |
	|         | --driver=docker                                                                             |                        |         |         |                     |                     |
	|         | --container-runtime=docker                                                                  |                        |         |         |                     |                     |
	| delete  | -p download-docker-398252                                                                   | download-docker-398252 | jenkins | v1.34.0 | 30 Sep 24 10:20 UTC | 30 Sep 24 10:20 UTC |
	| start   | --download-only -p                                                                          | binary-mirror-159609   | jenkins | v1.34.0 | 30 Sep 24 10:20 UTC |                     |
	|         | binary-mirror-159609                                                                        |                        |         |         |                     |                     |
	|         | --alsologtostderr                                                                           |                        |         |         |                     |                     |
	|         | --binary-mirror                                                                             |                        |         |         |                     |                     |
	|         | http://127.0.0.1:34175                                                                      |                        |         |         |                     |                     |
	|         | --driver=docker                                                                             |                        |         |         |                     |                     |
	|         | --container-runtime=docker                                                                  |                        |         |         |                     |                     |
	| delete  | -p binary-mirror-159609                                                                     | binary-mirror-159609   | jenkins | v1.34.0 | 30 Sep 24 10:20 UTC | 30 Sep 24 10:20 UTC |
	| addons  | disable dashboard -p                                                                        | addons-703944          | jenkins | v1.34.0 | 30 Sep 24 10:20 UTC |                     |
	|         | addons-703944                                                                               |                        |         |         |                     |                     |
	| addons  | enable dashboard -p                                                                         | addons-703944          | jenkins | v1.34.0 | 30 Sep 24 10:20 UTC |                     |
	|         | addons-703944                                                                               |                        |         |         |                     |                     |
	| start   | -p addons-703944 --wait=true                                                                | addons-703944          | jenkins | v1.34.0 | 30 Sep 24 10:20 UTC | 30 Sep 24 10:24 UTC |
	|         | --memory=4000 --alsologtostderr                                                             |                        |         |         |                     |                     |
	|         | --addons=registry                                                                           |                        |         |         |                     |                     |
	|         | --addons=metrics-server                                                                     |                        |         |         |                     |                     |
	|         | --addons=volumesnapshots                                                                    |                        |         |         |                     |                     |
	|         | --addons=csi-hostpath-driver                                                                |                        |         |         |                     |                     |
	|         | --addons=gcp-auth                                                                           |                        |         |         |                     |                     |
	|         | --addons=cloud-spanner                                                                      |                        |         |         |                     |                     |
	|         | --addons=inspektor-gadget                                                                   |                        |         |         |                     |                     |
	|         | --addons=storage-provisioner-rancher                                                        |                        |         |         |                     |                     |
	|         | --addons=nvidia-device-plugin                                                               |                        |         |         |                     |                     |
	|         | --addons=yakd --addons=volcano                                                              |                        |         |         |                     |                     |
	|         | --driver=docker                                                                             |                        |         |         |                     |                     |
	|         | --container-runtime=docker                                                                  |                        |         |         |                     |                     |
	|         | --addons=ingress                                                                            |                        |         |         |                     |                     |
	|         | --addons=ingress-dns                                                                        |                        |         |         |                     |                     |
	| addons  | addons-703944 addons disable                                                                | addons-703944          | jenkins | v1.34.0 | 30 Sep 24 10:24 UTC | 30 Sep 24 10:24 UTC |
	|         | volcano --alsologtostderr -v=1                                                              |                        |         |         |                     |                     |
	| addons  | addons-703944 addons disable                                                                | addons-703944          | jenkins | v1.34.0 | 30 Sep 24 10:33 UTC | 30 Sep 24 10:33 UTC |
	|         | yakd --alsologtostderr -v=1                                                                 |                        |         |         |                     |                     |
	| addons  | addons-703944 addons                                                                        | addons-703944          | jenkins | v1.34.0 | 30 Sep 24 10:33 UTC | 30 Sep 24 10:33 UTC |
	|         | disable csi-hostpath-driver                                                                 |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| addons  | addons-703944 addons                                                                        | addons-703944          | jenkins | v1.34.0 | 30 Sep 24 10:33 UTC | 30 Sep 24 10:33 UTC |
	|         | disable volumesnapshots                                                                     |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| addons  | disable nvidia-device-plugin                                                                | addons-703944          | jenkins | v1.34.0 | 30 Sep 24 10:33 UTC | 30 Sep 24 10:33 UTC |
	|         | -p addons-703944                                                                            |                        |         |         |                     |                     |
	| ssh     | addons-703944 ssh cat                                                                       | addons-703944          | jenkins | v1.34.0 | 30 Sep 24 10:34 UTC | 30 Sep 24 10:34 UTC |
	|         | /opt/local-path-provisioner/pvc-c80c0af4-a393-4a05-9c4d-cc7ecf4f0af4_default_test-pvc/file1 |                        |         |         |                     |                     |
	| addons  | addons-703944 addons disable                                                                | addons-703944          | jenkins | v1.34.0 | 30 Sep 24 10:34 UTC |                     |
	|         | storage-provisioner-rancher                                                                 |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| ip      | addons-703944 ip                                                                            | addons-703944          | jenkins | v1.34.0 | 30 Sep 24 10:34 UTC | 30 Sep 24 10:34 UTC |
	| addons  | addons-703944 addons disable                                                                | addons-703944          | jenkins | v1.34.0 | 30 Sep 24 10:34 UTC | 30 Sep 24 10:34 UTC |
	|         | registry --alsologtostderr                                                                  |                        |         |         |                     |                     |
	|         | -v=1                                                                                        |                        |         |         |                     |                     |
	|---------|---------------------------------------------------------------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/30 10:20:38
	Running on machine: ip-172-31-30-239
	Binary: Built with gc go1.23.0 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0930 10:20:38.157760    8372 out.go:345] Setting OutFile to fd 1 ...
	I0930 10:20:38.157902    8372 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0930 10:20:38.157934    8372 out.go:358] Setting ErrFile to fd 2...
	I0930 10:20:38.157953    8372 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0930 10:20:38.158680    8372 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19734-2285/.minikube/bin
	I0930 10:20:38.159214    8372 out.go:352] Setting JSON to false
	I0930 10:20:38.160048    8372 start.go:129] hostinfo: {"hostname":"ip-172-31-30-239","uptime":187,"bootTime":1727691452,"procs":151,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1070-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I0930 10:20:38.160118    8372 start.go:139] virtualization:  
	I0930 10:20:38.165835    8372 out.go:177] * [addons-703944] minikube v1.34.0 on Ubuntu 20.04 (arm64)
	I0930 10:20:38.177032    8372 notify.go:220] Checking for updates...
	I0930 10:20:38.199157    8372 out.go:177]   - MINIKUBE_LOCATION=19734
	I0930 10:20:38.222607    8372 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0930 10:20:38.239899    8372 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19734-2285/kubeconfig
	I0930 10:20:38.255011    8372 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19734-2285/.minikube
	I0930 10:20:38.266973    8372 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0930 10:20:38.277170    8372 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0930 10:20:38.286468    8372 driver.go:394] Setting default libvirt URI to qemu:///system
	I0930 10:20:38.306533    8372 docker.go:123] docker version: linux-27.3.1:Docker Engine - Community
	I0930 10:20:38.306691    8372 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0930 10:20:38.363805    8372 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:25 OomKillDisable:true NGoroutines:44 SystemTime:2024-09-30 10:20:38.354707309 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1070-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:27.3.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: bridge-nf-call-iptables is disabled WARNING: bridge-nf-call-ip6tables is disabled] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.17.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.7]] Warnings:<nil>}}
	I0930 10:20:38.363917    8372 docker.go:318] overlay module found
	I0930 10:20:38.393744    8372 out.go:177] * Using the docker driver based on user configuration
	I0930 10:20:38.421072    8372 start.go:297] selected driver: docker
	I0930 10:20:38.421097    8372 start.go:901] validating driver "docker" against <nil>
	I0930 10:20:38.421112    8372 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0930 10:20:38.421739    8372 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0930 10:20:38.481156    8372 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:25 OomKillDisable:true NGoroutines:44 SystemTime:2024-09-30 10:20:38.472339623 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1070-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:27.3.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: bridge-nf-call-iptables is disabled WARNING: bridge-nf-call-ip6tables is disabled] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.17.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.7]] Warnings:<nil>}}
	I0930 10:20:38.481370    8372 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0930 10:20:38.481604    8372 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0930 10:20:38.489171    8372 out.go:177] * Using Docker driver with root privileges
	I0930 10:20:38.500672    8372 cni.go:84] Creating CNI manager for ""
	I0930 10:20:38.500756    8372 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0930 10:20:38.500776    8372 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0930 10:20:38.500861    8372 start.go:340] cluster config:
	{Name:addons-703944 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-703944 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0930 10:20:38.513247    8372 out.go:177] * Starting "addons-703944" primary control-plane node in "addons-703944" cluster
	I0930 10:20:38.521393    8372 cache.go:121] Beginning downloading kic base image for docker with docker
	I0930 10:20:38.529772    8372 out.go:177] * Pulling base image v0.0.45-1727108449-19696 ...
	I0930 10:20:38.538499    8372 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0930 10:20:38.538551    8372 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19734-2285/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
	I0930 10:20:38.538563    8372 cache.go:56] Caching tarball of preloaded images
	I0930 10:20:38.538593    8372 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 in local docker daemon
	I0930 10:20:38.538643    8372 preload.go:172] Found /home/jenkins/minikube-integration/19734-2285/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0930 10:20:38.538653    8372 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on docker
	I0930 10:20:38.539014    8372 profile.go:143] Saving config to /home/jenkins/minikube-integration/19734-2285/.minikube/profiles/addons-703944/config.json ...
	I0930 10:20:38.539094    8372 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19734-2285/.minikube/profiles/addons-703944/config.json: {Name:mk3b2c38eac4f5deeba0c330b8da3185b9a33420 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0930 10:20:38.554140    8372 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 to local cache
	I0930 10:20:38.554242    8372 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 in local cache directory
	I0930 10:20:38.554259    8372 image.go:66] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 in local cache directory, skipping pull
	I0930 10:20:38.554263    8372 image.go:135] gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 exists in cache, skipping pull
	I0930 10:20:38.554270    8372 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 as a tarball
	I0930 10:20:38.554275    8372 cache.go:162] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 from local cache
	I0930 10:20:54.942038    8372 cache.go:164] successfully loaded and using gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 from cached tarball
	I0930 10:20:54.942078    8372 cache.go:194] Successfully downloaded all kic artifacts
	I0930 10:20:54.942116    8372 start.go:360] acquireMachinesLock for addons-703944: {Name:mk960c67440ef6a65350b6922242ffb4f2c250f7 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0930 10:20:54.942234    8372 start.go:364] duration metric: took 97.852µs to acquireMachinesLock for "addons-703944"
	I0930 10:20:54.942277    8372 start.go:93] Provisioning new machine with config: &{Name:addons-703944 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-703944 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0930 10:20:54.942348    8372 start.go:125] createHost starting for "" (driver="docker")
	I0930 10:20:54.944863    8372 out.go:235] * Creating docker container (CPUs=2, Memory=4000MB) ...
	I0930 10:20:54.945093    8372 start.go:159] libmachine.API.Create for "addons-703944" (driver="docker")
	I0930 10:20:54.945129    8372 client.go:168] LocalClient.Create starting
	I0930 10:20:54.945254    8372 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/19734-2285/.minikube/certs/ca.pem
	I0930 10:20:55.810310    8372 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/19734-2285/.minikube/certs/cert.pem
	I0930 10:20:55.977860    8372 cli_runner.go:164] Run: docker network inspect addons-703944 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0930 10:20:55.993083    8372 cli_runner.go:211] docker network inspect addons-703944 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0930 10:20:55.993171    8372 network_create.go:284] running [docker network inspect addons-703944] to gather additional debugging logs...
	I0930 10:20:55.993192    8372 cli_runner.go:164] Run: docker network inspect addons-703944
	W0930 10:20:56.007308    8372 cli_runner.go:211] docker network inspect addons-703944 returned with exit code 1
	I0930 10:20:56.007343    8372 network_create.go:287] error running [docker network inspect addons-703944]: docker network inspect addons-703944: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network addons-703944 not found
	I0930 10:20:56.007356    8372 network_create.go:289] output of [docker network inspect addons-703944]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network addons-703944 not found
	
	** /stderr **
	I0930 10:20:56.007475    8372 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0930 10:20:56.024062    8372 network.go:206] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x40017d0fe0}
	I0930 10:20:56.024114    8372 network_create.go:124] attempt to create docker network addons-703944 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I0930 10:20:56.024168    8372 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=addons-703944 addons-703944
	I0930 10:20:56.089395    8372 network_create.go:108] docker network addons-703944 192.168.49.0/24 created
	I0930 10:20:56.089428    8372 kic.go:121] calculated static IP "192.168.49.2" for the "addons-703944" container
	I0930 10:20:56.089497    8372 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0930 10:20:56.103963    8372 cli_runner.go:164] Run: docker volume create addons-703944 --label name.minikube.sigs.k8s.io=addons-703944 --label created_by.minikube.sigs.k8s.io=true
	I0930 10:20:56.121739    8372 oci.go:103] Successfully created a docker volume addons-703944
	I0930 10:20:56.121829    8372 cli_runner.go:164] Run: docker run --rm --name addons-703944-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-703944 --entrypoint /usr/bin/test -v addons-703944:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 -d /var/lib
	I0930 10:20:58.244441    8372 cli_runner.go:217] Completed: docker run --rm --name addons-703944-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-703944 --entrypoint /usr/bin/test -v addons-703944:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 -d /var/lib: (2.122562309s)
	I0930 10:20:58.244468    8372 oci.go:107] Successfully prepared a docker volume addons-703944
	I0930 10:20:58.244490    8372 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0930 10:20:58.244511    8372 kic.go:194] Starting extracting preloaded images to volume ...
	I0930 10:20:58.244586    8372 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/19734-2285/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v addons-703944:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 -I lz4 -xf /preloaded.tar -C /extractDir
	I0930 10:21:01.890711    8372 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/19734-2285/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v addons-703944:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 -I lz4 -xf /preloaded.tar -C /extractDir: (3.646088019s)
	I0930 10:21:01.890739    8372 kic.go:203] duration metric: took 3.646226191s to extract preloaded images to volume ...
	W0930 10:21:01.890884    8372 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I0930 10:21:01.891005    8372 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0930 10:21:01.936298    8372 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname addons-703944 --name addons-703944 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-703944 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=addons-703944 --network addons-703944 --ip 192.168.49.2 --volume addons-703944:/var --security-opt apparmor=unconfined --memory=4000mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21
	I0930 10:21:02.265370    8372 cli_runner.go:164] Run: docker container inspect addons-703944 --format={{.State.Running}}
	I0930 10:21:02.292978    8372 cli_runner.go:164] Run: docker container inspect addons-703944 --format={{.State.Status}}
	I0930 10:21:02.315213    8372 cli_runner.go:164] Run: docker exec addons-703944 stat /var/lib/dpkg/alternatives/iptables
	I0930 10:21:02.377492    8372 oci.go:144] the created container "addons-703944" has a running status.
	I0930 10:21:02.377520    8372 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/19734-2285/.minikube/machines/addons-703944/id_rsa...
	I0930 10:21:03.309591    8372 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/19734-2285/.minikube/machines/addons-703944/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0930 10:21:03.342119    8372 cli_runner.go:164] Run: docker container inspect addons-703944 --format={{.State.Status}}
	I0930 10:21:03.358416    8372 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0930 10:21:03.358434    8372 kic_runner.go:114] Args: [docker exec --privileged addons-703944 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0930 10:21:03.409034    8372 cli_runner.go:164] Run: docker container inspect addons-703944 --format={{.State.Status}}
	I0930 10:21:03.425135    8372 machine.go:93] provisionDockerMachine start ...
	I0930 10:21:03.425219    8372 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-703944
	I0930 10:21:03.441464    8372 main.go:141] libmachine: Using SSH client type: native
	I0930 10:21:03.441744    8372 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x413650] 0x415e90 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I0930 10:21:03.441754    8372 main.go:141] libmachine: About to run SSH command:
	hostname
	I0930 10:21:03.566472    8372 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-703944
	
	I0930 10:21:03.566496    8372 ubuntu.go:169] provisioning hostname "addons-703944"
	I0930 10:21:03.566558    8372 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-703944
	I0930 10:21:03.583023    8372 main.go:141] libmachine: Using SSH client type: native
	I0930 10:21:03.583250    8372 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x413650] 0x415e90 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I0930 10:21:03.583269    8372 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-703944 && echo "addons-703944" | sudo tee /etc/hostname
	I0930 10:21:03.722513    8372 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-703944
	
	I0930 10:21:03.722665    8372 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-703944
	I0930 10:21:03.740624    8372 main.go:141] libmachine: Using SSH client type: native
	I0930 10:21:03.740863    8372 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x413650] 0x415e90 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I0930 10:21:03.740887    8372 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-703944' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-703944/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-703944' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0930 10:21:03.871108    8372 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0930 10:21:03.871132    8372 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/19734-2285/.minikube CaCertPath:/home/jenkins/minikube-integration/19734-2285/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19734-2285/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19734-2285/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19734-2285/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19734-2285/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19734-2285/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19734-2285/.minikube}
	I0930 10:21:03.871160    8372 ubuntu.go:177] setting up certificates
	I0930 10:21:03.871172    8372 provision.go:84] configureAuth start
	I0930 10:21:03.871235    8372 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-703944
	I0930 10:21:03.887464    8372 provision.go:143] copyHostCerts
	I0930 10:21:03.887567    8372 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19734-2285/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19734-2285/.minikube/ca.pem (1082 bytes)
	I0930 10:21:03.887703    8372 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19734-2285/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19734-2285/.minikube/cert.pem (1123 bytes)
	I0930 10:21:03.887764    8372 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19734-2285/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19734-2285/.minikube/key.pem (1679 bytes)
	I0930 10:21:03.887816    8372 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19734-2285/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19734-2285/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19734-2285/.minikube/certs/ca-key.pem org=jenkins.addons-703944 san=[127.0.0.1 192.168.49.2 addons-703944 localhost minikube]
	I0930 10:21:04.203465    8372 provision.go:177] copyRemoteCerts
	I0930 10:21:04.203529    8372 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0930 10:21:04.203602    8372 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-703944
	I0930 10:21:04.219315    8372 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19734-2285/.minikube/machines/addons-703944/id_rsa Username:docker}
	I0930 10:21:04.311934    8372 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19734-2285/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0930 10:21:04.334878    8372 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19734-2285/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0930 10:21:04.357437    8372 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19734-2285/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0930 10:21:04.380013    8372 provision.go:87] duration metric: took 508.828698ms to configureAuth
	I0930 10:21:04.380041    8372 ubuntu.go:193] setting minikube options for container-runtime
	I0930 10:21:04.380227    8372 config.go:182] Loaded profile config "addons-703944": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0930 10:21:04.380283    8372 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-703944
	I0930 10:21:04.396140    8372 main.go:141] libmachine: Using SSH client type: native
	I0930 10:21:04.396380    8372 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x413650] 0x415e90 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I0930 10:21:04.396398    8372 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0930 10:21:04.523628    8372 main.go:141] libmachine: SSH cmd err, output: <nil>: overlay
	
	I0930 10:21:04.523691    8372 ubuntu.go:71] root file system type: overlay
	I0930 10:21:04.523826    8372 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0930 10:21:04.523894    8372 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-703944
	I0930 10:21:04.540175    8372 main.go:141] libmachine: Using SSH client type: native
	I0930 10:21:04.540416    8372 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x413650] 0x415e90 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I0930 10:21:04.540498    8372 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0930 10:21:04.678175    8372 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
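The comment block in the generated unit above hinges on systemd's reset rule: for non-oneshot services, a bare `ExecStart=` must clear the command inherited from the base configuration before a replacement is declared. A minimal sketch of that override pattern against a throwaway file (service name and flag are hypothetical; a real drop-in would live under `/etc/systemd/system/<name>.service.d/` and be followed by `systemctl daemon-reload`):

```shell
dir=$(mktemp -d)
cat > "$dir/override.conf" <<'EOF'
[Service]
# The empty directive clears ExecStart inherited from the base unit; without it,
# systemd rejects the unit: "more than one ExecStart= setting, which is only
# allowed for Type=oneshot services".
ExecStart=
ExecStart=/usr/bin/myservice --flag=value
EOF
grep -c '^ExecStart=' "$dir/override.conf"   # → 2 (the reset line plus the new command)
```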
	I0930 10:21:04.678267    8372 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-703944
	I0930 10:21:04.695200    8372 main.go:141] libmachine: Using SSH client type: native
	I0930 10:21:04.695446    8372 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x413650] 0x415e90 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I0930 10:21:04.695472    8372 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0930 10:21:05.431890    8372 main.go:141] libmachine: SSH cmd err, output: <nil>: --- /lib/systemd/system/docker.service	2024-09-20 11:39:18.000000000 +0000
	+++ /lib/systemd/system/docker.service.new	2024-09-30 10:21:04.672933741 +0000
	@@ -1,46 +1,49 @@
	 [Unit]
	 Description=Docker Application Container Engine
	 Documentation=https://docs.docker.com
	-After=network-online.target docker.socket firewalld.service containerd.service time-set.target
	-Wants=network-online.target containerd.service
	+BindsTo=containerd.service
	+After=network-online.target firewalld.service containerd.service
	+Wants=network-online.target
	 Requires=docker.socket
	+StartLimitBurst=3
	+StartLimitIntervalSec=60
	 
	 [Service]
	 Type=notify
	-# the default is not to use systemd for cgroups because the delegate issues still
	-# exists and systemd currently does not support the cgroup feature set required
	-# for containers run by docker
	-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
	-ExecReload=/bin/kill -s HUP $MAINPID
	-TimeoutStartSec=0
	-RestartSec=2
	-Restart=always
	+Restart=on-failure
	 
	-# Note that StartLimit* options were moved from "Service" to "Unit" in systemd 229.
	-# Both the old, and new location are accepted by systemd 229 and up, so using the old location
	-# to make them work for either version of systemd.
	-StartLimitBurst=3
	 
	-# Note that StartLimitInterval was renamed to StartLimitIntervalSec in systemd 230.
	-# Both the old, and new name are accepted by systemd 230 and up, so using the old name to make
	-# this option work for either version of systemd.
	-StartLimitInterval=60s
	+
	+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	+# The base configuration already specifies an 'ExecStart=...' command. The first directive
	+# here is to clear out that command inherited from the base configuration. Without this,
	+# the command from the base configuration and the command specified here are treated as
	+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	+# will catch this invalid input and refuse to start the service with an error like:
	+#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	+
	+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	+ExecStart=
	+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	+ExecReload=/bin/kill -s HUP $MAINPID
	 
	 # Having non-zero Limit*s causes performance problems due to accounting overhead
	 # in the kernel. We recommend using cgroups to do container-local accounting.
	+LimitNOFILE=infinity
	 LimitNPROC=infinity
	 LimitCORE=infinity
	 
	-# Comment TasksMax if your systemd version does not support it.
	-# Only systemd 226 and above support this option.
	+# Uncomment TasksMax if your systemd version supports it.
	+# Only systemd 226 and above support this version.
	 TasksMax=infinity
	+TimeoutStartSec=0
	 
	 # set delegate yes so that systemd does not reset the cgroups of docker containers
	 Delegate=yes
	 
	 # kill only the docker process, not all processes in the cgroup
	 KillMode=process
	-OOMScoreAdjust=-500
	 
	 [Install]
	 WantedBy=multi-user.target
	Synchronizing state of docker.service with SysV service script with /lib/systemd/systemd-sysv-install.
	Executing: /lib/systemd/systemd-sysv-install enable docker
	
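The provision step at 10:21:05 uses a diff-guarded install: the new unit is only moved into place, and docker reloaded and restarted, when it differs from the file already on disk, so re-provisioning an unchanged host is a no-op. The same idiom replayed against throwaway files (file names hypothetical):

```shell
# Idempotent "install only if changed" pattern.
dir=$(mktemp -d)
printf 'old\n' > "$dir/app.conf"
printf 'new\n' > "$dir/app.conf.new"

# diff exits 0 when the files match; on mismatch, the replacement branch runs.
diff -u "$dir/app.conf" "$dir/app.conf.new" >/dev/null || {
  mv "$dir/app.conf.new" "$dir/app.conf"
  echo "replaced"   # the real runner swaps in the unit and does daemon-reload/restart here
}
cat "$dir/app.conf"   # → new
```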
	I0930 10:21:05.431926    8372 machine.go:96] duration metric: took 2.006772955s to provisionDockerMachine
	I0930 10:21:05.431955    8372 client.go:171] duration metric: took 10.486795644s to LocalClient.Create
	I0930 10:21:05.431977    8372 start.go:167] duration metric: took 10.48688462s to libmachine.API.Create "addons-703944"
	I0930 10:21:05.431989    8372 start.go:293] postStartSetup for "addons-703944" (driver="docker")
	I0930 10:21:05.431999    8372 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0930 10:21:05.432073    8372 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0930 10:21:05.432117    8372 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-703944
	I0930 10:21:05.448707    8372 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19734-2285/.minikube/machines/addons-703944/id_rsa Username:docker}
	I0930 10:21:05.540917    8372 ssh_runner.go:195] Run: cat /etc/os-release
	I0930 10:21:05.544101    8372 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0930 10:21:05.544136    8372 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0930 10:21:05.544147    8372 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0930 10:21:05.544154    8372 info.go:137] Remote host: Ubuntu 22.04.5 LTS
	I0930 10:21:05.544165    8372 filesync.go:126] Scanning /home/jenkins/minikube-integration/19734-2285/.minikube/addons for local assets ...
	I0930 10:21:05.544235    8372 filesync.go:126] Scanning /home/jenkins/minikube-integration/19734-2285/.minikube/files for local assets ...
	I0930 10:21:05.544257    8372 start.go:296] duration metric: took 112.262994ms for postStartSetup
	I0930 10:21:05.544570    8372 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-703944
	I0930 10:21:05.562225    8372 profile.go:143] Saving config to /home/jenkins/minikube-integration/19734-2285/.minikube/profiles/addons-703944/config.json ...
	I0930 10:21:05.562514    8372 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0930 10:21:05.562556    8372 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-703944
	I0930 10:21:05.579501    8372 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19734-2285/.minikube/machines/addons-703944/id_rsa Username:docker}
	I0930 10:21:05.668448    8372 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0930 10:21:05.673159    8372 start.go:128] duration metric: took 10.730792488s to createHost
	I0930 10:21:05.673186    8372 start.go:83] releasing machines lock for "addons-703944", held for 10.730936813s
	I0930 10:21:05.673275    8372 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-703944
	I0930 10:21:05.690651    8372 ssh_runner.go:195] Run: cat /version.json
	I0930 10:21:05.690673    8372 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0930 10:21:05.690702    8372 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-703944
	I0930 10:21:05.690743    8372 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-703944
	I0930 10:21:05.714372    8372 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19734-2285/.minikube/machines/addons-703944/id_rsa Username:docker}
	I0930 10:21:05.715110    8372 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19734-2285/.minikube/machines/addons-703944/id_rsa Username:docker}
	I0930 10:21:05.928828    8372 ssh_runner.go:195] Run: systemctl --version
	I0930 10:21:05.933048    8372 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0930 10:21:05.937102    8372 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
	I0930 10:21:05.961718    8372 cni.go:230] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
	I0930 10:21:05.961796    8372 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0930 10:21:05.989449    8372 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
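The two `find ... -exec` runs above patch any loopback CNI config in place, inserting a `"name"` key when it is missing and pinning `cniVersion` to 1.0.0, while bridge/podman configs are renamed aside. The same sed edits against a local copy (the file content is a hypothetical minimal loopback config; the `sed -i` and one-line `i` syntax assume GNU sed):

```shell
dir=$(mktemp -d)
cat > "$dir/200-loopback.conf" <<'EOF'
{
    "cniVersion": "0.3.1",
    "type": "loopback"
}
EOF

# Insert a "name" key only if absent, then pin the CNI version.
grep -q '"name"' "$dir/200-loopback.conf" || \
  sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' "$dir/200-loopback.conf"
sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' "$dir/200-loopback.conf"
cat "$dir/200-loopback.conf"
```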
	I0930 10:21:05.989475    8372 start.go:495] detecting cgroup driver to use...
	I0930 10:21:05.989531    8372 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I0930 10:21:05.989646    8372 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0930 10:21:06.005791    8372 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
	I0930 10:21:06.015917    8372 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0930 10:21:06.026026    8372 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0930 10:21:06.026099    8372 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0930 10:21:06.036080    8372 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0930 10:21:06.045847    8372 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0930 10:21:06.055760    8372 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0930 10:21:06.065113    8372 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0930 10:21:06.074082    8372 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0930 10:21:06.083606    8372 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0930 10:21:06.092916    8372 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
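Since the host reports the "cgroupfs" cgroup driver, the runner rewrites `/etc/containerd/config.toml` with in-place `sed`, among other things flipping `SystemdCgroup` off and pinning the sandbox image. The two key substitutions from the log, replayed against a local copy (the sample TOML content is hypothetical; GNU sed assumed):

```shell
dir=$(mktemp -d)
cat > "$dir/config.toml" <<'EOF'
[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
  SystemdCgroup = true
[plugins."io.containerd.grpc.v1.cri"]
  sandbox_image = "registry.k8s.io/pause:3.9"
EOF

# Same substitutions the provisioner runs over /etc/containerd/config.toml.
sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' "$dir/config.toml"
sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' "$dir/config.toml"
grep -E 'SystemdCgroup|sandbox_image' "$dir/config.toml"
```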
	I0930 10:21:06.102335    8372 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0930 10:21:06.110555    8372 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0930 10:21:06.110617    8372 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0930 10:21:06.123769    8372 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0930 10:21:06.133124    8372 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0930 10:21:06.213493    8372 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0930 10:21:06.310878    8372 start.go:495] detecting cgroup driver to use...
	I0930 10:21:06.310970    8372 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I0930 10:21:06.311038    8372 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0930 10:21:06.323620    8372 cruntime.go:279] skipping containerd shutdown because we are bound to it
	I0930 10:21:06.323743    8372 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0930 10:21:06.336518    8372 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0930 10:21:06.352700    8372 ssh_runner.go:195] Run: which cri-dockerd
	I0930 10:21:06.356641    8372 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0930 10:21:06.370675    8372 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (190 bytes)
	I0930 10:21:06.393790    8372 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0930 10:21:06.495078    8372 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0930 10:21:06.592988    8372 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0930 10:21:06.593192    8372 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0930 10:21:06.612228    8372 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0930 10:21:06.702672    8372 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0930 10:21:06.966149    8372 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0930 10:21:06.978367    8372 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0930 10:21:06.990181    8372 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0930 10:21:07.087304    8372 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0930 10:21:07.174889    8372 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0930 10:21:07.264638    8372 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0930 10:21:07.278180    8372 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0930 10:21:07.289029    8372 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0930 10:21:07.374836    8372 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0930 10:21:07.440462    8372 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0930 10:21:07.440610    8372 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0930 10:21:07.444403    8372 start.go:563] Will wait 60s for crictl version
	I0930 10:21:07.444498    8372 ssh_runner.go:195] Run: which crictl
	I0930 10:21:07.447529    8372 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0930 10:21:07.487388    8372 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  27.3.1
	RuntimeApiVersion:  v1
	I0930 10:21:07.487500    8372 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0930 10:21:07.510098    8372 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0930 10:21:07.535171    8372 out.go:235] * Preparing Kubernetes v1.31.1 on Docker 27.3.1 ...
	I0930 10:21:07.535262    8372 cli_runner.go:164] Run: docker network inspect addons-703944 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0930 10:21:07.550126    8372 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I0930 10:21:07.554591    8372 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
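The /etc/hosts update above is a filter-then-append rewrite: any existing line for the pinned name is dropped before the desired mapping is re-added, so the entry ends up present exactly once regardless of prior state. The pattern against a scratch file (path and hostname stand in for `/etc/hosts` and `host.minikube.internal`; the `$'\t'` quoting requires bash):

```shell
hosts=$(mktemp)
printf '127.0.0.1\tlocalhost\n10.0.0.9\texample.internal\n' > "$hosts"

# Drop any existing line for the name, then append the desired mapping.
{ grep -v $'\texample.internal$' "$hosts"; printf '192.168.49.1\texample.internal\n'; } > "$hosts.new"
mv "$hosts.new" "$hosts"   # the real runner copies the result back onto /etc/hosts with sudo

grep -c 'example.internal' "$hosts"   # → 1
```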
	I0930 10:21:07.564545    8372 kubeadm.go:883] updating cluster {Name:addons-703944 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-703944 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0930 10:21:07.564653    8372 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
	I0930 10:21:07.564712    8372 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0930 10:21:07.581874    8372 docker.go:685] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.31.1
	registry.k8s.io/kube-controller-manager:v1.31.1
	registry.k8s.io/kube-scheduler:v1.31.1
	registry.k8s.io/kube-proxy:v1.31.1
	registry.k8s.io/coredns/coredns:v1.11.3
	registry.k8s.io/etcd:3.5.15-0
	registry.k8s.io/pause:3.10
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0930 10:21:07.581896    8372 docker.go:615] Images already preloaded, skipping extraction
	I0930 10:21:07.581957    8372 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0930 10:21:07.597772    8372 docker.go:685] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.31.1
	registry.k8s.io/kube-scheduler:v1.31.1
	registry.k8s.io/kube-controller-manager:v1.31.1
	registry.k8s.io/kube-proxy:v1.31.1
	registry.k8s.io/coredns/coredns:v1.11.3
	registry.k8s.io/etcd:3.5.15-0
	registry.k8s.io/pause:3.10
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0930 10:21:07.597796    8372 cache_images.go:84] Images are preloaded, skipping loading
	I0930 10:21:07.597806    8372 kubeadm.go:934] updating node { 192.168.49.2 8443 v1.31.1 docker true true} ...
	I0930 10:21:07.597894    8372 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=addons-703944 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:addons-703944 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0930 10:21:07.597965    8372 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0930 10:21:07.637730    8372 cni.go:84] Creating CNI manager for ""
	I0930 10:21:07.637754    8372 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0930 10:21:07.637767    8372 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0930 10:21:07.637785    8372 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-703944 NodeName:addons-703944 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0930 10:21:07.637918    8372 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "addons-703944"
	  kubeletExtraArgs:
	    node-ip: 192.168.49.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0930 10:21:07.637981    8372 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0930 10:21:07.646415    8372 binaries.go:44] Found k8s binaries, skipping transfer
	I0930 10:21:07.646481    8372 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0930 10:21:07.654646    8372 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (312 bytes)
	I0930 10:21:07.671757    8372 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0930 10:21:07.688999    8372 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2155 bytes)
	I0930 10:21:07.706483    8372 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I0930 10:21:07.709896    8372 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0930 10:21:07.720493    8372 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0930 10:21:07.806936    8372 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0930 10:21:07.821569    8372 certs.go:68] Setting up /home/jenkins/minikube-integration/19734-2285/.minikube/profiles/addons-703944 for IP: 192.168.49.2
	I0930 10:21:07.821593    8372 certs.go:194] generating shared ca certs ...
	I0930 10:21:07.821608    8372 certs.go:226] acquiring lock for ca certs: {Name:mkc88472a42ce604780a44bea1d376b9310242a4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0930 10:21:07.821794    8372 certs.go:240] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/19734-2285/.minikube/ca.key
	I0930 10:21:08.354917    8372 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19734-2285/.minikube/ca.crt ...
	I0930 10:21:08.354948    8372 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19734-2285/.minikube/ca.crt: {Name:mk0122201555ccaf3ca9f01ed4cca7b90ae5dd97 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0930 10:21:08.355149    8372 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19734-2285/.minikube/ca.key ...
	I0930 10:21:08.355163    8372 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19734-2285/.minikube/ca.key: {Name:mk2840bc2a90336af3902da6afe8ca59e0524fbc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0930 10:21:08.355246    8372 certs.go:240] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19734-2285/.minikube/proxy-client-ca.key
	I0930 10:21:08.976758    8372 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19734-2285/.minikube/proxy-client-ca.crt ...
	I0930 10:21:08.976790    8372 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19734-2285/.minikube/proxy-client-ca.crt: {Name:mkf7e924be88e949e3d1ab2bf1b7abc89be2b043 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0930 10:21:08.976968    8372 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19734-2285/.minikube/proxy-client-ca.key ...
	I0930 10:21:08.976982    8372 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19734-2285/.minikube/proxy-client-ca.key: {Name:mk62a332fb42a376154b13d7505da29694ef318f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0930 10:21:08.977065    8372 certs.go:256] generating profile certs ...
	I0930 10:21:08.977132    8372 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19734-2285/.minikube/profiles/addons-703944/client.key
	I0930 10:21:08.977150    8372 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19734-2285/.minikube/profiles/addons-703944/client.crt with IP's: []
	I0930 10:21:09.383494    8372 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19734-2285/.minikube/profiles/addons-703944/client.crt ...
	I0930 10:21:09.383524    8372 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19734-2285/.minikube/profiles/addons-703944/client.crt: {Name:mk9569155d0419e7620e5d2199494fc166cba673 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0930 10:21:09.383713    8372 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19734-2285/.minikube/profiles/addons-703944/client.key ...
	I0930 10:21:09.383725    8372 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19734-2285/.minikube/profiles/addons-703944/client.key: {Name:mk3fffa1cbe5623ed803ed09d54abece76021bb1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0930 10:21:09.383805    8372 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19734-2285/.minikube/profiles/addons-703944/apiserver.key.2f424ac3
	I0930 10:21:09.383825    8372 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19734-2285/.minikube/profiles/addons-703944/apiserver.crt.2f424ac3 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2]
	I0930 10:21:09.873500    8372 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19734-2285/.minikube/profiles/addons-703944/apiserver.crt.2f424ac3 ...
	I0930 10:21:09.873532    8372 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19734-2285/.minikube/profiles/addons-703944/apiserver.crt.2f424ac3: {Name:mk80d304c28e346d7d2e04279240ca0c4b77a39d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0930 10:21:09.873703    8372 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19734-2285/.minikube/profiles/addons-703944/apiserver.key.2f424ac3 ...
	I0930 10:21:09.873720    8372 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19734-2285/.minikube/profiles/addons-703944/apiserver.key.2f424ac3: {Name:mk3f80cc3300ee08b20cc8b5409dde06169ea865 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0930 10:21:09.873802    8372 certs.go:381] copying /home/jenkins/minikube-integration/19734-2285/.minikube/profiles/addons-703944/apiserver.crt.2f424ac3 -> /home/jenkins/minikube-integration/19734-2285/.minikube/profiles/addons-703944/apiserver.crt
	I0930 10:21:09.873882    8372 certs.go:385] copying /home/jenkins/minikube-integration/19734-2285/.minikube/profiles/addons-703944/apiserver.key.2f424ac3 -> /home/jenkins/minikube-integration/19734-2285/.minikube/profiles/addons-703944/apiserver.key
	I0930 10:21:09.873939    8372 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19734-2285/.minikube/profiles/addons-703944/proxy-client.key
	I0930 10:21:09.873959    8372 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19734-2285/.minikube/profiles/addons-703944/proxy-client.crt with IP's: []
	I0930 10:21:10.129164    8372 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19734-2285/.minikube/profiles/addons-703944/proxy-client.crt ...
	I0930 10:21:10.129194    8372 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19734-2285/.minikube/profiles/addons-703944/proxy-client.crt: {Name:mk6e3c176c4ef0ca48e94a7ac5538637829aba39 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0930 10:21:10.129369    8372 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19734-2285/.minikube/profiles/addons-703944/proxy-client.key ...
	I0930 10:21:10.129381    8372 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19734-2285/.minikube/profiles/addons-703944/proxy-client.key: {Name:mk11d8a42c8c8407475e87c9983b10099aac5b48 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0930 10:21:10.129572    8372 certs.go:484] found cert: /home/jenkins/minikube-integration/19734-2285/.minikube/certs/ca-key.pem (1679 bytes)
	I0930 10:21:10.129615    8372 certs.go:484] found cert: /home/jenkins/minikube-integration/19734-2285/.minikube/certs/ca.pem (1082 bytes)
	I0930 10:21:10.129646    8372 certs.go:484] found cert: /home/jenkins/minikube-integration/19734-2285/.minikube/certs/cert.pem (1123 bytes)
	I0930 10:21:10.129675    8372 certs.go:484] found cert: /home/jenkins/minikube-integration/19734-2285/.minikube/certs/key.pem (1679 bytes)
	I0930 10:21:10.130263    8372 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19734-2285/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0930 10:21:10.155189    8372 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19734-2285/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0930 10:21:10.179371    8372 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19734-2285/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0930 10:21:10.203231    8372 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19734-2285/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0930 10:21:10.225986    8372 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19734-2285/.minikube/profiles/addons-703944/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0930 10:21:10.249634    8372 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19734-2285/.minikube/profiles/addons-703944/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0930 10:21:10.271747    8372 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19734-2285/.minikube/profiles/addons-703944/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0930 10:21:10.294496    8372 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19734-2285/.minikube/profiles/addons-703944/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0930 10:21:10.316969    8372 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19734-2285/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0930 10:21:10.340181    8372 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0930 10:21:10.356666    8372 ssh_runner.go:195] Run: openssl version
	I0930 10:21:10.361834    8372 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0930 10:21:10.371169    8372 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0930 10:21:10.374489    8372 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 30 10:21 /usr/share/ca-certificates/minikubeCA.pem
	I0930 10:21:10.374566    8372 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0930 10:21:10.380955    8372 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0930 10:21:10.389832    8372 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0930 10:21:10.392773    8372 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0930 10:21:10.392818    8372 kubeadm.go:392] StartCluster: {Name:addons-703944 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-703944 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0930 10:21:10.392941    8372 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0930 10:21:10.409630    8372 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0930 10:21:10.417798    8372 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0930 10:21:10.425727    8372 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I0930 10:21:10.425788    8372 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0930 10:21:10.434141    8372 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0930 10:21:10.434161    8372 kubeadm.go:157] found existing configuration files:
	
	I0930 10:21:10.434211    8372 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0930 10:21:10.442852    8372 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0930 10:21:10.442913    8372 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0930 10:21:10.451147    8372 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0930 10:21:10.459727    8372 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0930 10:21:10.459811    8372 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0930 10:21:10.468160    8372 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0930 10:21:10.476841    8372 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0930 10:21:10.476925    8372 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0930 10:21:10.484925    8372 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0930 10:21:10.493068    8372 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0930 10:21:10.493180    8372 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0930 10:21:10.500516    8372 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0930 10:21:10.539350    8372 kubeadm.go:310] W0930 10:21:10.538657    1817 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0930 10:21:10.540496    8372 kubeadm.go:310] W0930 10:21:10.539942    1817 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0930 10:21:10.563129    8372 kubeadm.go:310] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1070-aws\n", err: exit status 1
	I0930 10:21:10.622791    8372 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0930 10:21:26.483725    8372 kubeadm.go:310] [init] Using Kubernetes version: v1.31.1
	I0930 10:21:26.483781    8372 kubeadm.go:310] [preflight] Running pre-flight checks
	I0930 10:21:26.483871    8372 kubeadm.go:310] [preflight] The system verification failed. Printing the output from the verification:
	I0930 10:21:26.483928    8372 kubeadm.go:310] KERNEL_VERSION: 5.15.0-1070-aws
	I0930 10:21:26.483966    8372 kubeadm.go:310] OS: Linux
	I0930 10:21:26.484015    8372 kubeadm.go:310] CGROUPS_CPU: enabled
	I0930 10:21:26.484065    8372 kubeadm.go:310] CGROUPS_CPUACCT: enabled
	I0930 10:21:26.484115    8372 kubeadm.go:310] CGROUPS_CPUSET: enabled
	I0930 10:21:26.484168    8372 kubeadm.go:310] CGROUPS_DEVICES: enabled
	I0930 10:21:26.484217    8372 kubeadm.go:310] CGROUPS_FREEZER: enabled
	I0930 10:21:26.484270    8372 kubeadm.go:310] CGROUPS_MEMORY: enabled
	I0930 10:21:26.484325    8372 kubeadm.go:310] CGROUPS_PIDS: enabled
	I0930 10:21:26.484377    8372 kubeadm.go:310] CGROUPS_HUGETLB: enabled
	I0930 10:21:26.484426    8372 kubeadm.go:310] CGROUPS_BLKIO: enabled
	I0930 10:21:26.484498    8372 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0930 10:21:26.484597    8372 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0930 10:21:26.484687    8372 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0930 10:21:26.484751    8372 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0930 10:21:26.486786    8372 out.go:235]   - Generating certificates and keys ...
	I0930 10:21:26.486876    8372 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0930 10:21:26.486969    8372 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0930 10:21:26.487045    8372 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0930 10:21:26.487121    8372 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0930 10:21:26.487203    8372 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0930 10:21:26.487257    8372 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0930 10:21:26.487312    8372 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0930 10:21:26.487438    8372 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [addons-703944 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0930 10:21:26.487507    8372 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0930 10:21:26.487671    8372 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [addons-703944 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0930 10:21:26.487757    8372 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0930 10:21:26.487834    8372 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0930 10:21:26.487884    8372 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0930 10:21:26.487964    8372 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0930 10:21:26.488045    8372 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0930 10:21:26.488134    8372 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0930 10:21:26.488212    8372 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0930 10:21:26.488304    8372 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0930 10:21:26.488383    8372 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0930 10:21:26.488489    8372 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0930 10:21:26.488587    8372 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0930 10:21:26.490439    8372 out.go:235]   - Booting up control plane ...
	I0930 10:21:26.490588    8372 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0930 10:21:26.490682    8372 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0930 10:21:26.490755    8372 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0930 10:21:26.490858    8372 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0930 10:21:26.490940    8372 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0930 10:21:26.490978    8372 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0930 10:21:26.491109    8372 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0930 10:21:26.491210    8372 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0930 10:21:26.491266    8372 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.001640709s
	I0930 10:21:26.491337    8372 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0930 10:21:26.491392    8372 kubeadm.go:310] [api-check] The API server is healthy after 7.001261086s
	I0930 10:21:26.491495    8372 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0930 10:21:26.491643    8372 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0930 10:21:26.491701    8372 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0930 10:21:26.491877    8372 kubeadm.go:310] [mark-control-plane] Marking the node addons-703944 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0930 10:21:26.491932    8372 kubeadm.go:310] [bootstrap-token] Using token: lsod6c.4t1f64okr2pfpgmx
	I0930 10:21:26.494068    8372 out.go:235]   - Configuring RBAC rules ...
	I0930 10:21:26.494248    8372 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0930 10:21:26.494379    8372 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0930 10:21:26.494573    8372 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0930 10:21:26.494724    8372 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0930 10:21:26.494849    8372 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0930 10:21:26.494944    8372 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0930 10:21:26.495073    8372 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0930 10:21:26.495122    8372 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0930 10:21:26.495173    8372 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0930 10:21:26.495180    8372 kubeadm.go:310] 
	I0930 10:21:26.495244    8372 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0930 10:21:26.495253    8372 kubeadm.go:310] 
	I0930 10:21:26.495333    8372 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0930 10:21:26.495340    8372 kubeadm.go:310] 
	I0930 10:21:26.495367    8372 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0930 10:21:26.495436    8372 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0930 10:21:26.495492    8372 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0930 10:21:26.495499    8372 kubeadm.go:310] 
	I0930 10:21:26.495579    8372 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0930 10:21:26.495589    8372 kubeadm.go:310] 
	I0930 10:21:26.495642    8372 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0930 10:21:26.495650    8372 kubeadm.go:310] 
	I0930 10:21:26.495705    8372 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0930 10:21:26.495788    8372 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0930 10:21:26.495864    8372 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0930 10:21:26.495872    8372 kubeadm.go:310] 
	I0930 10:21:26.495961    8372 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0930 10:21:26.496046    8372 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0930 10:21:26.496053    8372 kubeadm.go:310] 
	I0930 10:21:26.496142    8372 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token lsod6c.4t1f64okr2pfpgmx \
	I0930 10:21:26.496254    8372 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:745106e23a5f99f7b5cf3f70fc5b7fa08e737936aedd27a5a99b20714a4f1180 \
	I0930 10:21:26.496279    8372 kubeadm.go:310] 	--control-plane 
	I0930 10:21:26.496286    8372 kubeadm.go:310] 
	I0930 10:21:26.496376    8372 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0930 10:21:26.496384    8372 kubeadm.go:310] 
	I0930 10:21:26.496471    8372 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token lsod6c.4t1f64okr2pfpgmx \
	I0930 10:21:26.496594    8372 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:745106e23a5f99f7b5cf3f70fc5b7fa08e737936aedd27a5a99b20714a4f1180 
	I0930 10:21:26.496606    8372 cni.go:84] Creating CNI manager for ""
	I0930 10:21:26.496619    8372 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0930 10:21:26.498891    8372 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0930 10:21:26.500878    8372 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0930 10:21:26.509550    8372 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0930 10:21:26.528465    8372 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0930 10:21:26.528552    8372 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0930 10:21:26.528592    8372 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-703944 minikube.k8s.io/updated_at=2024_09_30T10_21_26_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=b68b4b088317c82ffa16da1c47933e77f0f5d128 minikube.k8s.io/name=addons-703944 minikube.k8s.io/primary=true
	I0930 10:21:26.544102    8372 ops.go:34] apiserver oom_adj: -16
	I0930 10:21:26.673171    8372 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0930 10:21:27.173851    8372 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0930 10:21:27.674211    8372 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0930 10:21:28.174030    8372 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0930 10:21:28.674124    8372 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0930 10:21:29.173214    8372 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0930 10:21:29.674085    8372 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0930 10:21:30.173949    8372 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0930 10:21:30.673776    8372 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0930 10:21:31.173362    8372 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0930 10:21:31.298798    8372 kubeadm.go:1113] duration metric: took 4.770307554s to wait for elevateKubeSystemPrivileges
	I0930 10:21:31.298839    8372 kubeadm.go:394] duration metric: took 20.906025265s to StartCluster
	I0930 10:21:31.298856    8372 settings.go:142] acquiring lock: {Name:mkcf2de35d43f3b73031cab05addbe76685d61d4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0930 10:21:31.298979    8372 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19734-2285/kubeconfig
	I0930 10:21:31.299357    8372 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19734-2285/kubeconfig: {Name:mk4ffb7b34cf58f060bd905874f12e785542fb79 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0930 10:21:31.299599    8372 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0930 10:21:31.299750    8372 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0930 10:21:31.300001    8372 config.go:182] Loaded profile config "addons-703944": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0930 10:21:31.300036    8372 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
	I0930 10:21:31.300111    8372 addons.go:69] Setting yakd=true in profile "addons-703944"
	I0930 10:21:31.300128    8372 addons.go:234] Setting addon yakd=true in "addons-703944"
	I0930 10:21:31.300151    8372 host.go:66] Checking if "addons-703944" exists ...
	I0930 10:21:31.300640    8372 cli_runner.go:164] Run: docker container inspect addons-703944 --format={{.State.Status}}
	I0930 10:21:31.301045    8372 addons.go:69] Setting inspektor-gadget=true in profile "addons-703944"
	I0930 10:21:31.301066    8372 addons.go:234] Setting addon inspektor-gadget=true in "addons-703944"
	I0930 10:21:31.301091    8372 host.go:66] Checking if "addons-703944" exists ...
	I0930 10:21:31.301539    8372 cli_runner.go:164] Run: docker container inspect addons-703944 --format={{.State.Status}}
	I0930 10:21:31.301695    8372 addons.go:69] Setting metrics-server=true in profile "addons-703944"
	I0930 10:21:31.301721    8372 addons.go:234] Setting addon metrics-server=true in "addons-703944"
	I0930 10:21:31.301814    8372 host.go:66] Checking if "addons-703944" exists ...
	I0930 10:21:31.302252    8372 cli_runner.go:164] Run: docker container inspect addons-703944 --format={{.State.Status}}
	I0930 10:21:31.304914    8372 addons.go:69] Setting cloud-spanner=true in profile "addons-703944"
	I0930 10:21:31.304943    8372 addons.go:234] Setting addon cloud-spanner=true in "addons-703944"
	I0930 10:21:31.304970    8372 host.go:66] Checking if "addons-703944" exists ...
	I0930 10:21:31.305036    8372 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-703944"
	I0930 10:21:31.305054    8372 addons.go:234] Setting addon nvidia-device-plugin=true in "addons-703944"
	I0930 10:21:31.305076    8372 host.go:66] Checking if "addons-703944" exists ...
	I0930 10:21:31.305413    8372 cli_runner.go:164] Run: docker container inspect addons-703944 --format={{.State.Status}}
	I0930 10:21:31.305488    8372 cli_runner.go:164] Run: docker container inspect addons-703944 --format={{.State.Status}}
	I0930 10:21:31.306014    8372 addons.go:69] Setting registry=true in profile "addons-703944"
	I0930 10:21:31.306037    8372 addons.go:234] Setting addon registry=true in "addons-703944"
	I0930 10:21:31.306064    8372 host.go:66] Checking if "addons-703944" exists ...
	I0930 10:21:31.306482    8372 cli_runner.go:164] Run: docker container inspect addons-703944 --format={{.State.Status}}
	I0930 10:21:31.312882    8372 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-703944"
	I0930 10:21:31.312950    8372 addons.go:234] Setting addon csi-hostpath-driver=true in "addons-703944"
	I0930 10:21:31.312983    8372 host.go:66] Checking if "addons-703944" exists ...
	I0930 10:21:31.313450    8372 cli_runner.go:164] Run: docker container inspect addons-703944 --format={{.State.Status}}
	I0930 10:21:31.318745    8372 addons.go:69] Setting storage-provisioner=true in profile "addons-703944"
	I0930 10:21:31.318833    8372 addons.go:234] Setting addon storage-provisioner=true in "addons-703944"
	I0930 10:21:31.318895    8372 host.go:66] Checking if "addons-703944" exists ...
	I0930 10:21:31.320024    8372 cli_runner.go:164] Run: docker container inspect addons-703944 --format={{.State.Status}}
	I0930 10:21:31.330320    8372 addons.go:69] Setting default-storageclass=true in profile "addons-703944"
	I0930 10:21:31.330359    8372 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-703944"
	I0930 10:21:31.330785    8372 cli_runner.go:164] Run: docker container inspect addons-703944 --format={{.State.Status}}
	I0930 10:21:31.343868    8372 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-703944"
	I0930 10:21:31.343964    8372 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-703944"
	I0930 10:21:31.345969    8372 cli_runner.go:164] Run: docker container inspect addons-703944 --format={{.State.Status}}
	I0930 10:21:31.350237    8372 addons.go:69] Setting gcp-auth=true in profile "addons-703944"
	I0930 10:21:31.382515    8372 mustload.go:65] Loading cluster: addons-703944
	I0930 10:21:31.382766    8372 config.go:182] Loaded profile config "addons-703944": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0930 10:21:31.383047    8372 cli_runner.go:164] Run: docker container inspect addons-703944 --format={{.State.Status}}
	I0930 10:21:31.366578    8372 addons.go:69] Setting volcano=true in profile "addons-703944"
	I0930 10:21:31.385775    8372 addons.go:234] Setting addon volcano=true in "addons-703944"
	I0930 10:21:31.385817    8372 host.go:66] Checking if "addons-703944" exists ...
	I0930 10:21:31.366592    8372 addons.go:69] Setting volumesnapshots=true in profile "addons-703944"
	I0930 10:21:31.366735    8372 out.go:177] * Verifying Kubernetes components...
	I0930 10:21:31.367840    8372 addons.go:69] Setting ingress=true in profile "addons-703944"
	I0930 10:21:31.398536    8372 addons.go:234] Setting addon ingress=true in "addons-703944"
	I0930 10:21:31.398588    8372 host.go:66] Checking if "addons-703944" exists ...
	I0930 10:21:31.399115    8372 cli_runner.go:164] Run: docker container inspect addons-703944 --format={{.State.Status}}
	I0930 10:21:31.403134    8372 cli_runner.go:164] Run: docker container inspect addons-703944 --format={{.State.Status}}
	I0930 10:21:31.367852    8372 addons.go:69] Setting ingress-dns=true in profile "addons-703944"
	I0930 10:21:31.405421    8372 addons.go:234] Setting addon ingress-dns=true in "addons-703944"
	I0930 10:21:31.405517    8372 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0930 10:21:31.405985    8372 out.go:177]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.32.0
	I0930 10:21:31.407975    8372 addons.go:234] Setting addon volumesnapshots=true in "addons-703944"
	I0930 10:21:31.408172    8372 host.go:66] Checking if "addons-703944" exists ...
	I0930 10:21:31.408732    8372 cli_runner.go:164] Run: docker container inspect addons-703944 --format={{.State.Status}}
	I0930 10:21:31.415145    8372 host.go:66] Checking if "addons-703944" exists ...
	I0930 10:21:31.421720    8372 cli_runner.go:164] Run: docker container inspect addons-703944 --format={{.State.Status}}
	I0930 10:21:31.408063    8372 out.go:177]   - Using image docker.io/marcnuri/yakd:0.0.5
	I0930 10:21:31.425317    8372 addons.go:431] installing /etc/kubernetes/addons/ig-namespace.yaml
	I0930 10:21:31.425338    8372 ssh_runner.go:362] scp inspektor-gadget/ig-namespace.yaml --> /etc/kubernetes/addons/ig-namespace.yaml (55 bytes)
	I0930 10:21:31.425445    8372 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-703944
	I0930 10:21:31.446626    8372 addons.go:431] installing /etc/kubernetes/addons/yakd-ns.yaml
	I0930 10:21:31.446657    8372 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I0930 10:21:31.446772    8372 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-703944
	I0930 10:21:31.474558    8372 out.go:177]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.7
	I0930 10:21:31.474932    8372 out.go:177]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.16.2
	I0930 10:21:31.475167    8372 out.go:177]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.7.2
	I0930 10:21:31.484832    8372 addons.go:431] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0930 10:21:31.484895    8372 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I0930 10:21:31.484987    8372 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-703944
	I0930 10:21:31.502194    8372 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0930 10:21:31.502214    8372 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0930 10:21:31.502274    8372 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-703944
	I0930 10:21:31.503789    8372 out.go:177]   - Using image docker.io/registry:2.8.3
	I0930 10:21:31.503965    8372 out.go:177]   - Using image docker.io/volcanosh/vc-webhook-manager:v1.10.0
	I0930 10:21:31.505428    8372 addons.go:234] Setting addon default-storageclass=true in "addons-703944"
	I0930 10:21:31.505458    8372 host.go:66] Checking if "addons-703944" exists ...
	I0930 10:21:31.505868    8372 cli_runner.go:164] Run: docker container inspect addons-703944 --format={{.State.Status}}
	I0930 10:21:31.527674    8372 addons.go:431] installing /etc/kubernetes/addons/registry-rc.yaml
	I0930 10:21:31.527710    8372 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I0930 10:21:31.527778    8372 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-703944
	I0930 10:21:31.555321    8372 host.go:66] Checking if "addons-703944" exists ...
	I0930 10:21:31.565756    8372 out.go:177]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I0930 10:21:31.565843    8372 out.go:177]   - Using image docker.io/volcanosh/vc-controller-manager:v1.10.0
	I0930 10:21:31.565908    8372 out.go:177]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.24
	I0930 10:21:31.566321    8372 out.go:177]   - Using image registry.k8s.io/ingress-nginx/controller:v1.11.2
	I0930 10:21:31.566326    8372 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0930 10:21:31.583829    8372 addons.go:431] installing /etc/kubernetes/addons/deployment.yaml
	I0930 10:21:31.584222    8372 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I0930 10:21:31.584287    8372 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-703944
	I0930 10:21:31.588565    8372 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0930 10:21:31.588591    8372 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0930 10:21:31.588653    8372 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-703944
	I0930 10:21:31.620895    8372 out.go:177]   - Using image docker.io/volcanosh/vc-scheduler:v1.10.0
	I0930 10:21:31.627002    8372 addons.go:431] installing /etc/kubernetes/addons/volcano-deployment.yaml
	I0930 10:21:31.627033    8372 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volcano-deployment.yaml (471825 bytes)
	I0930 10:21:31.627116    8372 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-703944
	I0930 10:21:31.644368    8372 out.go:177]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I0930 10:21:31.655758    8372 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I0930 10:21:31.659779    8372 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I0930 10:21:31.660469    8372 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I0930 10:21:31.663614    8372 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I0930 10:21:31.665797    8372 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I0930 10:21:31.669554    8372 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I0930 10:21:31.670782    8372 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I0930 10:21:31.671236    8372 addons.go:431] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I0930 10:21:31.671698    8372 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I0930 10:21:31.671819    8372 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-703944
	I0930 10:21:31.693114    8372 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I0930 10:21:31.696548    8372 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I0930 10:21:31.697114    8372 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I0930 10:21:31.697204    8372 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-703944
	I0930 10:21:31.671269    8372 out.go:177]   - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.3
	I0930 10:21:31.704274    8372 addons.go:431] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0930 10:21:31.704295    8372 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2442 bytes)
	I0930 10:21:31.704429    8372 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-703944
	I0930 10:21:31.717088    8372 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I0930 10:21:31.719027    8372 addons.go:431] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I0930 10:21:31.719049    8372 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I0930 10:21:31.719132    8372 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-703944
	I0930 10:21:31.725191    8372 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19734-2285/.minikube/machines/addons-703944/id_rsa Username:docker}
	I0930 10:21:31.727275    8372 addons.go:234] Setting addon storage-provisioner-rancher=true in "addons-703944"
	I0930 10:21:31.727316    8372 host.go:66] Checking if "addons-703944" exists ...
	I0930 10:21:31.728473    8372 cli_runner.go:164] Run: docker container inspect addons-703944 --format={{.State.Status}}
	I0930 10:21:31.751664    8372 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19734-2285/.minikube/machines/addons-703944/id_rsa Username:docker}
	I0930 10:21:31.764668    8372 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19734-2285/.minikube/machines/addons-703944/id_rsa Username:docker}
	I0930 10:21:31.766406    8372 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19734-2285/.minikube/machines/addons-703944/id_rsa Username:docker}
	I0930 10:21:31.766970    8372 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0930 10:21:31.767041    8372 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0930 10:21:31.767111    8372 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-703944
	I0930 10:21:31.792312    8372 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19734-2285/.minikube/machines/addons-703944/id_rsa Username:docker}
	I0930 10:21:31.844010    8372 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19734-2285/.minikube/machines/addons-703944/id_rsa Username:docker}
	I0930 10:21:31.845937    8372 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19734-2285/.minikube/machines/addons-703944/id_rsa Username:docker}
	I0930 10:21:31.850552    8372 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19734-2285/.minikube/machines/addons-703944/id_rsa Username:docker}
	I0930 10:21:31.896841    8372 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19734-2285/.minikube/machines/addons-703944/id_rsa Username:docker}
	I0930 10:21:31.901318    8372 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19734-2285/.minikube/machines/addons-703944/id_rsa Username:docker}
	I0930 10:21:31.909904    8372 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19734-2285/.minikube/machines/addons-703944/id_rsa Username:docker}
	I0930 10:21:31.913702    8372 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19734-2285/.minikube/machines/addons-703944/id_rsa Username:docker}
	I0930 10:21:31.917842    8372 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19734-2285/.minikube/machines/addons-703944/id_rsa Username:docker}
	I0930 10:21:31.919677    8372 out.go:177]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I0930 10:21:31.921500    8372 out.go:177]   - Using image docker.io/busybox:stable
	I0930 10:21:31.923622    8372 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0930 10:21:31.923642    8372 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I0930 10:21:31.923708    8372 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-703944
	I0930 10:21:31.959169    8372 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19734-2285/.minikube/machines/addons-703944/id_rsa Username:docker}
	I0930 10:21:32.398712    8372 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml": (1.098929353s)
	I0930 10:21:32.398847    8372 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0930 10:21:32.399007    8372 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0930 10:21:32.657942    8372 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I0930 10:21:32.657970    8372 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I0930 10:21:32.659636    8372 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0930 10:21:32.659658    8372 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I0930 10:21:32.758143    8372 addons.go:431] installing /etc/kubernetes/addons/ig-serviceaccount.yaml
	I0930 10:21:32.758167    8372 ssh_runner.go:362] scp inspektor-gadget/ig-serviceaccount.yaml --> /etc/kubernetes/addons/ig-serviceaccount.yaml (80 bytes)
	I0930 10:21:32.871192    8372 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/volcano-deployment.yaml
	I0930 10:21:32.883632    8372 addons.go:431] installing /etc/kubernetes/addons/yakd-sa.yaml
	I0930 10:21:32.883656    8372 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I0930 10:21:32.894807    8372 addons.go:431] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I0930 10:21:32.894845    8372 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I0930 10:21:32.901370    8372 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I0930 10:21:32.920056    8372 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0930 10:21:32.969056    8372 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I0930 10:21:32.977060    8372 addons.go:431] installing /etc/kubernetes/addons/registry-svc.yaml
	I0930 10:21:32.977084    8372 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I0930 10:21:32.981476    8372 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0930 10:21:32.987439    8372 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0930 10:21:33.057364    8372 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0930 10:21:33.057389    8372 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0930 10:21:33.065849    8372 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0930 10:21:33.069139    8372 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0930 10:21:33.091688    8372 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I0930 10:21:33.091713    8372 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I0930 10:21:33.095095    8372 addons.go:431] installing /etc/kubernetes/addons/ig-role.yaml
	I0930 10:21:33.095126    8372 ssh_runner.go:362] scp inspektor-gadget/ig-role.yaml --> /etc/kubernetes/addons/ig-role.yaml (210 bytes)
	I0930 10:21:33.152667    8372 addons.go:431] installing /etc/kubernetes/addons/registry-proxy.yaml
	I0930 10:21:33.152692    8372 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I0930 10:21:33.159271    8372 addons.go:431] installing /etc/kubernetes/addons/yakd-crb.yaml
	I0930 10:21:33.159294    8372 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I0930 10:21:33.188320    8372 addons.go:431] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I0930 10:21:33.188345    8372 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I0930 10:21:33.284080    8372 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0930 10:21:33.284105    8372 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0930 10:21:33.291865    8372 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I0930 10:21:33.291893    8372 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I0930 10:21:33.382333    8372 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I0930 10:21:33.384878    8372 addons.go:431] installing /etc/kubernetes/addons/ig-rolebinding.yaml
	I0930 10:21:33.384933    8372 ssh_runner.go:362] scp inspektor-gadget/ig-rolebinding.yaml --> /etc/kubernetes/addons/ig-rolebinding.yaml (244 bytes)
	I0930 10:21:33.388552    8372 addons.go:431] installing /etc/kubernetes/addons/yakd-svc.yaml
	I0930 10:21:33.388600    8372 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I0930 10:21:33.524691    8372 addons.go:431] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I0930 10:21:33.524753    8372 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I0930 10:21:33.618633    8372 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrole.yaml
	I0930 10:21:33.618709    8372 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrole.yaml --> /etc/kubernetes/addons/ig-clusterrole.yaml (1485 bytes)
	I0930 10:21:33.663230    8372 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0930 10:21:33.665128    8372 addons.go:431] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I0930 10:21:33.665197    8372 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I0930 10:21:33.697269    8372 addons.go:431] installing /etc/kubernetes/addons/yakd-dp.yaml
	I0930 10:21:33.697345    8372 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I0930 10:21:33.761523    8372 addons.go:431] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I0930 10:21:33.761611    8372 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I0930 10:21:33.913196    8372 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I0930 10:21:33.982595    8372 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrolebinding.yaml
	I0930 10:21:33.982623    8372 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrolebinding.yaml --> /etc/kubernetes/addons/ig-clusterrolebinding.yaml (274 bytes)
	I0930 10:21:34.023134    8372 addons.go:431] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0930 10:21:34.023158    8372 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I0930 10:21:34.075580    8372 addons.go:431] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I0930 10:21:34.075605    8372 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I0930 10:21:34.218980    8372 addons.go:431] installing /etc/kubernetes/addons/ig-configmap.yaml
	I0930 10:21:34.219004    8372 ssh_runner.go:362] scp inspektor-gadget/ig-configmap.yaml --> /etc/kubernetes/addons/ig-configmap.yaml (754 bytes)
	I0930 10:21:34.408594    8372 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I0930 10:21:34.408617    8372 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I0930 10:21:34.439059    8372 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0930 10:21:34.654541    8372 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (2.255481607s)
	I0930 10:21:34.654571    8372 start.go:971] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS's ConfigMap
	I0930 10:21:34.654631    8372 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (2.25568396s)
	I0930 10:21:34.655505    8372 node_ready.go:35] waiting up to 6m0s for node "addons-703944" to be "Ready" ...
	I0930 10:21:34.659936    8372 node_ready.go:49] node "addons-703944" has status "Ready":"True"
	I0930 10:21:34.659961    8372 node_ready.go:38] duration metric: took 4.425472ms for node "addons-703944" to be "Ready" ...
	I0930 10:21:34.659973    8372 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0930 10:21:34.673234    8372 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-h47vt" in "kube-system" namespace to be "Ready" ...
	I0930 10:21:34.820538    8372 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I0930 10:21:34.820564    8372 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I0930 10:21:34.866838    8372 addons.go:431] installing /etc/kubernetes/addons/ig-crd.yaml
	I0930 10:21:34.866866    8372 ssh_runner.go:362] scp inspektor-gadget/ig-crd.yaml --> /etc/kubernetes/addons/ig-crd.yaml (5216 bytes)
	I0930 10:21:35.159508    8372 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-703944" context rescaled to 1 replicas
	I0930 10:21:35.171056    8372 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I0930 10:21:35.171137    8372 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I0930 10:21:35.255473    8372 addons.go:431] installing /etc/kubernetes/addons/ig-daemonset.yaml
	I0930 10:21:35.255563    8372 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-daemonset.yaml (8196 bytes)
	I0930 10:21:35.426546    8372 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I0930 10:21:35.426613    8372 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I0930 10:21:35.491685    8372 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-configmap.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml
	I0930 10:21:35.605009    8372 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0930 10:21:35.605080    8372 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I0930 10:21:36.489253    8372 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0930 10:21:36.699285    8372 pod_ready.go:103] pod "coredns-7c65d6cfc9-h47vt" in "kube-system" namespace has status "Ready":"False"
	I0930 10:21:38.576505    8372 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I0930 10:21:38.576617    8372 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-703944
	I0930 10:21:38.603925    8372 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19734-2285/.minikube/machines/addons-703944/id_rsa Username:docker}
	I0930 10:21:39.179284    8372 pod_ready.go:103] pod "coredns-7c65d6cfc9-h47vt" in "kube-system" namespace has status "Ready":"False"
	I0930 10:21:39.531293    8372 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I0930 10:21:39.773675    8372 addons.go:234] Setting addon gcp-auth=true in "addons-703944"
	I0930 10:21:39.773770    8372 host.go:66] Checking if "addons-703944" exists ...
	I0930 10:21:39.774246    8372 cli_runner.go:164] Run: docker container inspect addons-703944 --format={{.State.Status}}
	I0930 10:21:39.801168    8372 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I0930 10:21:39.801224    8372 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-703944
	I0930 10:21:39.826415    8372 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19734-2285/.minikube/machines/addons-703944/id_rsa Username:docker}
	I0930 10:21:41.176006    8372 pod_ready.go:98] error getting pod "coredns-7c65d6cfc9-h47vt" in "kube-system" namespace (skipping!): pods "coredns-7c65d6cfc9-h47vt" not found
	I0930 10:21:41.176079    8372 pod_ready.go:82] duration metric: took 6.502809994s for pod "coredns-7c65d6cfc9-h47vt" in "kube-system" namespace to be "Ready" ...
	E0930 10:21:41.176104    8372 pod_ready.go:67] WaitExtra: waitPodCondition: error getting pod "coredns-7c65d6cfc9-h47vt" in "kube-system" namespace (skipping!): pods "coredns-7c65d6cfc9-h47vt" not found
	I0930 10:21:41.176202    8372 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-whncm" in "kube-system" namespace to be "Ready" ...
	I0930 10:21:41.180696    8372 pod_ready.go:93] pod "coredns-7c65d6cfc9-whncm" in "kube-system" namespace has status "Ready":"True"
	I0930 10:21:41.180786    8372 pod_ready.go:82] duration metric: took 4.553988ms for pod "coredns-7c65d6cfc9-whncm" in "kube-system" namespace to be "Ready" ...
	I0930 10:21:41.180818    8372 pod_ready.go:79] waiting up to 6m0s for pod "etcd-addons-703944" in "kube-system" namespace to be "Ready" ...
	I0930 10:21:41.184950    8372 pod_ready.go:93] pod "etcd-addons-703944" in "kube-system" namespace has status "Ready":"True"
	I0930 10:21:41.185013    8372 pod_ready.go:82] duration metric: took 4.158753ms for pod "etcd-addons-703944" in "kube-system" namespace to be "Ready" ...
	I0930 10:21:41.185037    8372 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-addons-703944" in "kube-system" namespace to be "Ready" ...
	I0930 10:21:41.192708    8372 pod_ready.go:93] pod "kube-apiserver-addons-703944" in "kube-system" namespace has status "Ready":"True"
	I0930 10:21:41.192775    8372 pod_ready.go:82] duration metric: took 7.717973ms for pod "kube-apiserver-addons-703944" in "kube-system" namespace to be "Ready" ...
	I0930 10:21:41.192806    8372 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-addons-703944" in "kube-system" namespace to be "Ready" ...
	I0930 10:21:41.197260    8372 pod_ready.go:93] pod "kube-controller-manager-addons-703944" in "kube-system" namespace has status "Ready":"True"
	I0930 10:21:41.197284    8372 pod_ready.go:82] duration metric: took 4.446289ms for pod "kube-controller-manager-addons-703944" in "kube-system" namespace to be "Ready" ...
	I0930 10:21:41.197294    8372 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-xl4mj" in "kube-system" namespace to be "Ready" ...
	I0930 10:21:41.394941    8372 pod_ready.go:93] pod "kube-proxy-xl4mj" in "kube-system" namespace has status "Ready":"True"
	I0930 10:21:41.395009    8372 pod_ready.go:82] duration metric: took 197.707672ms for pod "kube-proxy-xl4mj" in "kube-system" namespace to be "Ready" ...
	I0930 10:21:41.395037    8372 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-addons-703944" in "kube-system" namespace to be "Ready" ...
	I0930 10:21:41.777931    8372 pod_ready.go:93] pod "kube-scheduler-addons-703944" in "kube-system" namespace has status "Ready":"True"
	I0930 10:21:41.777996    8372 pod_ready.go:82] duration metric: took 382.937765ms for pod "kube-scheduler-addons-703944" in "kube-system" namespace to be "Ready" ...
	I0930 10:21:41.778031    8372 pod_ready.go:79] waiting up to 6m0s for pod "nvidia-device-plugin-daemonset-ftwnl" in "kube-system" namespace to be "Ready" ...
	I0930 10:21:43.813374    8372 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/volcano-deployment.yaml: (10.942145722s)
	I0930 10:21:43.813562    8372 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (10.912172546s)
	I0930 10:21:43.813577    8372 addons.go:475] Verifying addon ingress=true in "addons-703944"
	I0930 10:21:43.813614    8372 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (10.893519068s)
	I0930 10:21:43.813662    8372 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (10.844583889s)
	I0930 10:21:43.813712    8372 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (10.832216838s)
	I0930 10:21:43.813902    8372 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (10.826442044s)
	I0930 10:21:43.814014    8372 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (10.748141301s)
	I0930 10:21:43.814050    8372 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (10.744891426s)
	I0930 10:21:43.814082    8372 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (10.431691049s)
	I0930 10:21:43.814091    8372 addons.go:475] Verifying addon registry=true in "addons-703944"
	I0930 10:21:43.814414    8372 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (10.151092694s)
	I0930 10:21:43.814445    8372 addons.go:475] Verifying addon metrics-server=true in "addons-703944"
	I0930 10:21:43.814486    8372 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (9.901264855s)
	I0930 10:21:43.814765    8372 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (9.375677644s)
	W0930 10:21:43.814805    8372 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0930 10:21:43.814827    8372 retry.go:31] will retry after 166.069544ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0930 10:21:43.814907    8372 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-configmap.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml: (8.323148215s)
	I0930 10:21:43.816237    8372 out.go:177] * Verifying ingress addon...
	I0930 10:21:43.817137    8372 out.go:177] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-703944 service yakd-dashboard -n yakd-dashboard
	
	I0930 10:21:43.817141    8372 out.go:177] * Verifying registry addon...
	I0930 10:21:43.820257    8372 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I0930 10:21:43.821244    8372 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I0930 10:21:43.849106    8372 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I0930 10:21:43.849183    8372 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 10:21:43.850206    8372 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I0930 10:21:43.854698    8372 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W0930 10:21:43.867076    8372 out.go:270] ! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error while marking storage class local-path as non-default: Operation cannot be fulfilled on storageclasses.storage.k8s.io "local-path": the object has been modified; please apply your changes to the latest version and try again]
	I0930 10:21:43.886706    8372 pod_ready.go:103] pod "nvidia-device-plugin-daemonset-ftwnl" in "kube-system" namespace has status "Ready":"False"
	I0930 10:21:43.981460    8372 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0930 10:21:44.363424    8372 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 10:21:44.364717    8372 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:21:44.855224    8372 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 10:21:44.855943    8372 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:21:44.929927    8372 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (8.440582076s)
	I0930 10:21:44.929957    8372 addons.go:475] Verifying addon csi-hostpath-driver=true in "addons-703944"
	I0930 10:21:44.929968    8372 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (5.128775032s)
	I0930 10:21:44.932752    8372 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I0930 10:21:44.932870    8372 out.go:177] * Verifying csi-hostpath-driver addon...
	I0930 10:21:44.935869    8372 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I0930 10:21:44.937707    8372 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.2
	I0930 10:21:44.940093    8372 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I0930 10:21:44.940118    8372 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I0930 10:21:44.951785    8372 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0930 10:21:44.951864    8372 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:21:45.058118    8372 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I0930 10:21:45.058191    8372 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I0930 10:21:45.117716    8372 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0930 10:21:45.117792    8372 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I0930 10:21:45.168871    8372 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0930 10:21:45.327408    8372 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 10:21:45.327689    8372 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:21:45.441536    8372 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:21:45.827211    8372 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:21:45.828039    8372 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 10:21:45.941179    8372 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:21:46.035244    8372 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (2.053738796s)
	I0930 10:21:46.284559    8372 pod_ready.go:103] pod "nvidia-device-plugin-daemonset-ftwnl" in "kube-system" namespace has status "Ready":"False"
	I0930 10:21:46.330313    8372 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 10:21:46.332565    8372 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:21:46.449123    8372 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:21:46.495807    8372 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml: (1.326848386s)
	I0930 10:21:46.498689    8372 addons.go:475] Verifying addon gcp-auth=true in "addons-703944"
	I0930 10:21:46.501624    8372 out.go:177] * Verifying gcp-auth addon...
	I0930 10:21:46.505155    8372 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I0930 10:21:46.545326    8372 kapi.go:86] Found 0 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0930 10:21:46.823669    8372 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 10:21:46.826154    8372 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:21:46.942058    8372 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:21:47.327518    8372 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 10:21:47.328967    8372 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:21:47.442381    8372 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:21:47.825525    8372 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:21:47.826138    8372 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 10:21:47.941270    8372 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:21:48.324233    8372 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 10:21:48.326193    8372 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:21:48.440663    8372 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:21:48.783921    8372 pod_ready.go:103] pod "nvidia-device-plugin-daemonset-ftwnl" in "kube-system" namespace has status "Ready":"False"
	I0930 10:21:48.826265    8372 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 10:21:48.827093    8372 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:21:48.941352    8372 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:21:49.284537    8372 pod_ready.go:93] pod "nvidia-device-plugin-daemonset-ftwnl" in "kube-system" namespace has status "Ready":"True"
	I0930 10:21:49.284558    8372 pod_ready.go:82] duration metric: took 7.50650461s for pod "nvidia-device-plugin-daemonset-ftwnl" in "kube-system" namespace to be "Ready" ...
	I0930 10:21:49.284568    8372 pod_ready.go:39] duration metric: took 14.624551967s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0930 10:21:49.284586    8372 api_server.go:52] waiting for apiserver process to appear ...
	I0930 10:21:49.284645    8372 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 10:21:49.303939    8372 api_server.go:72] duration metric: took 18.004303838s to wait for apiserver process to appear ...
	I0930 10:21:49.303964    8372 api_server.go:88] waiting for apiserver healthz status ...
	I0930 10:21:49.303986    8372 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0930 10:21:49.311528    8372 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I0930 10:21:49.312569    8372 api_server.go:141] control plane version: v1.31.1
	I0930 10:21:49.312593    8372 api_server.go:131] duration metric: took 8.621856ms to wait for apiserver health ...
	I0930 10:21:49.312606    8372 system_pods.go:43] waiting for kube-system pods to appear ...
	I0930 10:21:49.322110    8372 system_pods.go:59] 17 kube-system pods found
	I0930 10:21:49.322146    8372 system_pods.go:61] "coredns-7c65d6cfc9-whncm" [46a80f84-c5a3-4343-a13b-c43c9e972bea] Running
	I0930 10:21:49.322157    8372 system_pods.go:61] "csi-hostpath-attacher-0" [fae93b3c-422c-4801-b3b3-e2abfa21edfd] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0930 10:21:49.322165    8372 system_pods.go:61] "csi-hostpath-resizer-0" [73ed8b8e-3373-4fd3-9185-afb6c7da7d5d] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0930 10:21:49.322174    8372 system_pods.go:61] "csi-hostpathplugin-k6tp6" [cbada5b7-306c-4194-a282-af2298bf3ca0] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0930 10:21:49.322185    8372 system_pods.go:61] "etcd-addons-703944" [509b84cb-de0e-4191-bfc1-11eca5bf513c] Running
	I0930 10:21:49.322193    8372 system_pods.go:61] "kube-apiserver-addons-703944" [99576047-d72e-4965-b471-24c7cc8754ed] Running
	I0930 10:21:49.322206    8372 system_pods.go:61] "kube-controller-manager-addons-703944" [78f7b3b2-6425-4493-a8a2-8638fa09817d] Running
	I0930 10:21:49.322213    8372 system_pods.go:61] "kube-ingress-dns-minikube" [c9a50869-6b2b-4991-8768-56022a305760] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I0930 10:21:49.322222    8372 system_pods.go:61] "kube-proxy-xl4mj" [a24923c1-7646-42d3-a132-c59589ed9310] Running
	I0930 10:21:49.322227    8372 system_pods.go:61] "kube-scheduler-addons-703944" [c83f2380-e6dc-48f9-8d9b-588f3bc7fa34] Running
	I0930 10:21:49.322233    8372 system_pods.go:61] "metrics-server-84c5f94fbc-72src" [2328d76c-f121-44e6-894a-b82153cbb0b7] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0930 10:21:49.322240    8372 system_pods.go:61] "nvidia-device-plugin-daemonset-ftwnl" [8b10a7e7-ec39-4b16-8d9f-33979a0e6e8d] Running
	I0930 10:21:49.322246    8372 system_pods.go:61] "registry-66c9cd494c-rdvzj" [1071ed50-a346-48af-bd60-fb6e526e1d58] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0930 10:21:49.322252    8372 system_pods.go:61] "registry-proxy-ggxvp" [a0c7860c-3f6b-40f2-9761-cd6466b5e812] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0930 10:21:49.322259    8372 system_pods.go:61] "snapshot-controller-56fcc65765-kth5m" [d7c3897c-dd10-4c1e-a9ce-f2691e7f1c92] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0930 10:21:49.322268    8372 system_pods.go:61] "snapshot-controller-56fcc65765-pssjz" [a7de780e-cc75-4d90-9860-be9d0ba459d5] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0930 10:21:49.322274    8372 system_pods.go:61] "storage-provisioner" [d9f9be36-ec15-42fe-ae1c-03e9bd9fbd83] Running
	I0930 10:21:49.322287    8372 system_pods.go:74] duration metric: took 9.674084ms to wait for pod list to return data ...
	I0930 10:21:49.322293    8372 default_sa.go:34] waiting for default service account to be created ...
	I0930 10:21:49.327243    8372 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 10:21:49.328174    8372 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:21:49.328883    8372 default_sa.go:45] found service account: "default"
	I0930 10:21:49.328939    8372 default_sa.go:55] duration metric: took 6.636242ms for default service account to be created ...
	I0930 10:21:49.328963    8372 system_pods.go:116] waiting for k8s-apps to be running ...
	I0930 10:21:49.338613    8372 system_pods.go:86] 17 kube-system pods found
	I0930 10:21:49.338652    8372 system_pods.go:89] "coredns-7c65d6cfc9-whncm" [46a80f84-c5a3-4343-a13b-c43c9e972bea] Running
	I0930 10:21:49.338723    8372 system_pods.go:89] "csi-hostpath-attacher-0" [fae93b3c-422c-4801-b3b3-e2abfa21edfd] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0930 10:21:49.338737    8372 system_pods.go:89] "csi-hostpath-resizer-0" [73ed8b8e-3373-4fd3-9185-afb6c7da7d5d] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0930 10:21:49.338746    8372 system_pods.go:89] "csi-hostpathplugin-k6tp6" [cbada5b7-306c-4194-a282-af2298bf3ca0] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0930 10:21:49.338751    8372 system_pods.go:89] "etcd-addons-703944" [509b84cb-de0e-4191-bfc1-11eca5bf513c] Running
	I0930 10:21:49.338757    8372 system_pods.go:89] "kube-apiserver-addons-703944" [99576047-d72e-4965-b471-24c7cc8754ed] Running
	I0930 10:21:49.338762    8372 system_pods.go:89] "kube-controller-manager-addons-703944" [78f7b3b2-6425-4493-a8a2-8638fa09817d] Running
	I0930 10:21:49.338770    8372 system_pods.go:89] "kube-ingress-dns-minikube" [c9a50869-6b2b-4991-8768-56022a305760] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I0930 10:21:49.338774    8372 system_pods.go:89] "kube-proxy-xl4mj" [a24923c1-7646-42d3-a132-c59589ed9310] Running
	I0930 10:21:49.338780    8372 system_pods.go:89] "kube-scheduler-addons-703944" [c83f2380-e6dc-48f9-8d9b-588f3bc7fa34] Running
	I0930 10:21:49.338802    8372 system_pods.go:89] "metrics-server-84c5f94fbc-72src" [2328d76c-f121-44e6-894a-b82153cbb0b7] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0930 10:21:49.338813    8372 system_pods.go:89] "nvidia-device-plugin-daemonset-ftwnl" [8b10a7e7-ec39-4b16-8d9f-33979a0e6e8d] Running
	I0930 10:21:49.338819    8372 system_pods.go:89] "registry-66c9cd494c-rdvzj" [1071ed50-a346-48af-bd60-fb6e526e1d58] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0930 10:21:49.338825    8372 system_pods.go:89] "registry-proxy-ggxvp" [a0c7860c-3f6b-40f2-9761-cd6466b5e812] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0930 10:21:49.338832    8372 system_pods.go:89] "snapshot-controller-56fcc65765-kth5m" [d7c3897c-dd10-4c1e-a9ce-f2691e7f1c92] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0930 10:21:49.338842    8372 system_pods.go:89] "snapshot-controller-56fcc65765-pssjz" [a7de780e-cc75-4d90-9860-be9d0ba459d5] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0930 10:21:49.338846    8372 system_pods.go:89] "storage-provisioner" [d9f9be36-ec15-42fe-ae1c-03e9bd9fbd83] Running
	I0930 10:21:49.338854    8372 system_pods.go:126] duration metric: took 9.879833ms to wait for k8s-apps to be running ...
	I0930 10:21:49.338875    8372 system_svc.go:44] waiting for kubelet service to be running ....
	I0930 10:21:49.338950    8372 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0930 10:21:49.350950    8372 system_svc.go:56] duration metric: took 12.07497ms WaitForService to wait for kubelet
	I0930 10:21:49.350985    8372 kubeadm.go:582] duration metric: took 18.051354972s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0930 10:21:49.351004    8372 node_conditions.go:102] verifying NodePressure condition ...
	I0930 10:21:49.355053    8372 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I0930 10:21:49.355085    8372 node_conditions.go:123] node cpu capacity is 2
	I0930 10:21:49.355098    8372 node_conditions.go:105] duration metric: took 4.089593ms to run NodePressure ...
	I0930 10:21:49.355110    8372 start.go:241] waiting for startup goroutines ...
	I0930 10:21:49.441388    8372 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:21:49.823847    8372 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 10:21:49.826042    8372 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:21:49.940655    8372 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:21:50.325159    8372 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 10:21:50.326405    8372 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:21:50.440986    8372 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:21:50.827343    8372 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 10:21:50.829085    8372 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:21:50.941775    8372 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:21:51.324602    8372 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 10:21:51.326132    8372 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:21:51.440815    8372 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:21:51.826145    8372 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 10:21:51.827362    8372 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:21:51.941728    8372 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:21:52.325271    8372 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 10:21:52.326138    8372 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:21:52.440533    8372 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:21:52.826198    8372 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 10:21:52.827122    8372 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:21:52.941249    8372 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:21:53.324058    8372 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 10:21:53.325347    8372 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:21:53.441370    8372 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:21:53.823442    8372 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 10:21:53.826606    8372 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:21:53.940994    8372 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:21:54.326199    8372 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 10:21:54.327427    8372 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:21:54.441969    8372 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:21:54.825387    8372 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:21:54.825878    8372 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 10:21:54.940229    8372 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:21:55.324856    8372 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 10:21:55.329490    8372 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:21:55.440995    8372 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:21:55.823861    8372 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 10:21:55.824971    8372 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:21:55.941194    8372 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:21:56.324753    8372 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 10:21:56.327070    8372 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:21:56.441453    8372 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:21:56.824285    8372 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 10:21:56.826107    8372 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:21:56.941264    8372 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:21:57.325323    8372 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 10:21:57.326200    8372 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:21:57.440961    8372 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:21:57.826247    8372 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 10:21:57.826566    8372 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:21:57.940763    8372 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:21:58.325200    8372 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 10:21:58.326227    8372 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:21:58.440585    8372 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:21:58.824887    8372 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 10:21:58.826237    8372 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:21:58.940599    8372 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:21:59.324219    8372 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 10:21:59.326410    8372 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:21:59.441800    8372 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:21:59.824953    8372 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 10:21:59.826190    8372 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:21:59.940908    8372 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:22:00.332529    8372 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:22:00.333393    8372 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 10:22:00.441474    8372 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:22:00.838704    8372 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 10:22:00.839640    8372 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:22:00.941564    8372 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:22:01.325477    8372 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 10:22:01.326401    8372 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:22:01.441049    8372 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:22:01.825689    8372 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 10:22:01.826905    8372 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:22:01.940753    8372 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:22:02.326385    8372 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 10:22:02.328244    8372 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:22:02.440035    8372 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:22:02.824940    8372 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 10:22:02.827062    8372 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:22:02.940660    8372 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:22:03.325236    8372 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 10:22:03.326119    8372 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:22:03.441200    8372 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:22:03.827566    8372 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0930 10:22:03.828407    8372 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:22:03.941029    8372 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:22:04.326220    8372 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:22:04.326860    8372 kapi.go:107] duration metric: took 20.506605548s to wait for kubernetes.io/minikube-addons=registry ...
	I0930 10:22:04.440317    8372 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:22:04.849977    8372 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:22:04.941416    8372 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:22:05.330212    8372 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:22:05.441204    8372 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:22:05.836535    8372 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:22:05.942229    8372 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:22:06.326491    8372 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:22:06.441729    8372 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:22:06.827023    8372 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:22:06.941483    8372 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:22:07.325877    8372 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:22:07.440663    8372 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:22:07.828721    8372 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:22:07.940680    8372 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:22:08.325396    8372 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:22:08.440677    8372 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:22:08.825686    8372 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:22:08.941182    8372 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:22:09.325899    8372 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:22:09.441415    8372 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:22:09.826041    8372 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:22:09.941331    8372 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:22:10.326220    8372 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:22:10.442338    8372 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:22:10.825571    8372 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:22:10.941491    8372 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:22:11.325783    8372 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:22:11.440343    8372 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:22:11.826929    8372 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:22:11.941467    8372 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:22:12.331713    8372 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:22:12.441737    8372 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:22:12.826124    8372 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:22:12.941172    8372 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:22:13.336469    8372 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:22:13.441082    8372 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:22:13.833734    8372 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:22:13.941597    8372 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:22:14.326172    8372 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:22:14.445706    8372 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:22:14.825427    8372 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:22:14.940912    8372 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:22:15.326618    8372 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:22:15.441450    8372 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:22:15.827754    8372 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:22:15.941728    8372 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:22:16.326148    8372 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:22:16.440890    8372 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:22:16.827726    8372 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:22:16.941317    8372 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:22:17.326016    8372 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:22:17.441447    8372 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:22:17.831594    8372 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:22:17.941182    8372 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:22:18.325022    8372 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:22:18.440673    8372 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:22:18.828704    8372 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:22:18.942197    8372 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:22:19.332454    8372 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:22:19.440801    8372 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:22:19.826573    8372 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:22:19.940746    8372 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:22:20.325197    8372 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:22:20.440818    8372 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:22:20.825775    8372 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:22:20.949086    8372 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:22:21.326660    8372 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:22:21.442140    8372 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:22:21.828582    8372 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:22:21.941007    8372 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:22:22.326434    8372 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:22:22.440607    8372 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:22:22.829341    8372 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:22:22.941307    8372 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:22:23.326986    8372 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:22:23.445145    8372 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:22:23.826927    8372 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:22:23.940468    8372 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:22:24.325541    8372 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:22:24.440962    8372 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:22:24.825602    8372 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:22:24.941001    8372 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:22:25.325805    8372 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:22:25.440403    8372 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:22:25.825868    8372 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:22:25.940379    8372 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:22:26.326757    8372 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:22:26.440140    8372 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:22:26.826132    8372 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:22:26.942492    8372 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:22:27.325914    8372 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:22:27.441059    8372 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:22:27.827179    8372 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:22:27.940723    8372 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:22:28.325997    8372 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:22:28.441985    8372 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:22:28.828752    8372 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:22:28.941630    8372 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:22:29.326437    8372 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:22:29.441368    8372 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:22:29.825212    8372 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:22:29.940566    8372 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:22:30.326029    8372 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:22:30.443704    8372 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:22:30.825630    8372 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:22:30.940680    8372 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:22:31.326280    8372 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:22:31.440716    8372 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:22:31.828490    8372 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:22:31.941739    8372 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0930 10:22:32.325894    8372 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:22:32.440375    8372 kapi.go:107] duration metric: took 47.504506187s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I0930 10:22:32.825680    8372 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:22:33.335363    8372 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:22:33.825391    8372 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:22:34.326781    8372 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:22:34.825960    8372 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:22:35.326134    8372 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:22:35.830381    8372 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:22:36.325610    8372 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:22:36.825857    8372 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:22:37.325316    8372 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:22:37.826609    8372 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:22:38.325891    8372 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:22:38.825773    8372 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:22:39.325961    8372 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:22:39.825937    8372 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:22:40.325731    8372 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:22:40.824963    8372 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:22:41.326372    8372 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:22:41.825528    8372 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:22:42.325423    8372 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:22:42.825271    8372 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:22:43.326215    8372 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:22:43.826600    8372 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:22:44.325376    8372 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:22:44.825846    8372 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:22:45.327210    8372 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:22:45.826193    8372 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:22:46.326138    8372 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:22:46.825637    8372 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:22:47.326570    8372 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:22:47.826190    8372 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:22:48.324963    8372 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:22:48.825086    8372 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:22:49.325397    8372 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:22:49.826049    8372 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:22:50.326323    8372 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:22:50.825378    8372 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:22:51.326704    8372 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0930 10:22:51.846862    8372 kapi.go:107] duration metric: took 1m8.025610103s to wait for app.kubernetes.io/name=ingress-nginx ...
	I0930 10:23:09.530064    8372 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0930 10:23:09.530091    8372 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:23:10.008443    8372 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:23:10.509519    8372 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:23:11.012959    8372 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:23:11.508671    8372 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:23:12.008346    8372 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:23:12.509288    8372 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:23:13.009254    8372 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:23:13.509245    8372 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:23:14.009187    8372 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:23:14.508450    8372 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:23:15.009933    8372 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:23:15.508584    8372 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:23:16.009327    8372 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:23:16.509357    8372 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:23:17.008862    8372 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:23:17.508535    8372 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:23:18.008661    8372 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:23:18.508635    8372 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:23:19.008739    8372 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:23:19.508699    8372 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:23:20.009393    8372 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:23:20.509067    8372 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:23:21.009019    8372 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:23:21.508624    8372 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:23:22.009027    8372 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:23:22.509077    8372 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:23:23.008704    8372 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:23:23.508307    8372 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:23:24.009434    8372 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:23:24.509274    8372 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:23:25.009210    8372 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:23:25.508590    8372 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:23:26.009303    8372 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:23:26.508719    8372 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:23:27.008809    8372 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:23:27.508685    8372 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:23:28.009839    8372 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:23:28.508979    8372 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:23:29.008501    8372 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:23:29.508723    8372 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:23:30.009361    8372 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:23:30.509590    8372 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:23:31.009070    8372 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:23:31.508492    8372 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:23:32.008485    8372 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:23:32.509146    8372 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:23:33.008457    8372 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:23:33.508938    8372 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:23:34.008619    8372 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:23:34.509031    8372 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:23:35.008897    8372 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:23:35.511641    8372 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:23:36.009524    8372 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:23:36.508444    8372 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:23:37.009305    8372 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:23:37.508978    8372 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:23:38.008984    8372 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:23:38.509635    8372 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:23:39.008640    8372 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:23:39.508364    8372 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:23:40.008716    8372 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:23:40.508142    8372 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:23:41.008385    8372 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:23:41.509192    8372 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:23:42.008675    8372 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:23:42.508294    8372 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:23:43.008944    8372 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:23:43.508511    8372 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:23:44.009478    8372 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:23:44.508765    8372 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:23:45.009523    8372 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:23:45.509745    8372 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:23:46.009251    8372 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:23:46.508186    8372 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:23:47.009024    8372 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:23:47.508433    8372 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:23:48.009108    8372 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:23:48.509353    8372 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:23:49.009333    8372 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:23:49.508514    8372 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:23:50.010503    8372 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:23:50.509337    8372 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:23:51.008575    8372 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:23:51.509735    8372 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:23:52.009459    8372 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:23:52.508905    8372 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:23:53.008708    8372 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:23:53.509213    8372 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:23:54.009535    8372 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:23:54.509276    8372 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:23:55.009115    8372 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:23:55.510790    8372 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:23:56.011378    8372 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:23:56.508862    8372 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:23:57.008439    8372 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:23:57.509378    8372 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:23:58.008716    8372 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:23:58.508739    8372 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:23:59.008642    8372 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:23:59.509198    8372 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:24:00.011907    8372 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:24:00.508311    8372 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:24:01.008911    8372 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:24:01.508566    8372 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:24:02.010121    8372 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:24:02.508453    8372 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:24:03.009046    8372 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:24:03.508540    8372 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:24:04.008432    8372 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:24:04.508909    8372 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:24:05.009330    8372 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:24:05.508470    8372 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:24:06.009316    8372 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:24:06.508766    8372 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:24:07.007991    8372 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:24:07.508723    8372 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:24:08.008934    8372 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:24:08.509797    8372 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:24:09.008388    8372 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:24:09.510026    8372 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:24:10.009073    8372 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:24:10.508590    8372 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:24:11.009408    8372 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:24:11.508784    8372 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:24:12.008683    8372 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:24:12.508801    8372 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:24:13.009096    8372 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:24:13.508367    8372 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:24:14.009427    8372 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:24:14.509264    8372 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:24:15.019060    8372 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:24:15.508791    8372 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:24:16.013116    8372 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:24:16.508264    8372 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:24:17.009352    8372 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:24:17.509284    8372 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:24:18.010118    8372 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0930 10:24:18.509041    8372 kapi.go:107] duration metric: took 2m32.00388476s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I0930 10:24:18.511665    8372 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-703944 cluster.
	I0930 10:24:18.514452    8372 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I0930 10:24:18.516892    8372 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I0930 10:24:18.518744    8372 out.go:177] * Enabled addons: volcano, nvidia-device-plugin, cloud-spanner, storage-provisioner, ingress-dns, metrics-server, inspektor-gadget, yakd, storage-provisioner-rancher, volumesnapshots, registry, csi-hostpath-driver, ingress, gcp-auth
	I0930 10:24:18.520508    8372 addons.go:510] duration metric: took 2m47.220466078s for enable addons: enabled=[volcano nvidia-device-plugin cloud-spanner storage-provisioner ingress-dns metrics-server inspektor-gadget yakd storage-provisioner-rancher volumesnapshots registry csi-hostpath-driver ingress gcp-auth]
	I0930 10:24:18.520569    8372 start.go:246] waiting for cluster config update ...
	I0930 10:24:18.520597    8372 start.go:255] writing updated cluster config ...
	I0930 10:24:18.520889    8372 ssh_runner.go:195] Run: rm -f paused
	I0930 10:24:18.845340    8372 start.go:600] kubectl: 1.31.1, cluster: 1.31.1 (minor skew: 0)
	I0930 10:24:18.847736    8372 out.go:177] * Done! kubectl is now configured to use "addons-703944" cluster and "default" namespace by default
	
	
	==> Docker <==
	Sep 30 10:33:49 addons-703944 dockerd[1288]: time="2024-09-30T10:33:49.530275943Z" level=info msg="ignoring event" container=b01247b84ab8a9df4b46e494d1f77dd0dbf2c5926a31ae9e2cc811b02838c544 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 30 10:33:49 addons-703944 dockerd[1288]: time="2024-09-30T10:33:49.672394655Z" level=info msg="ignoring event" container=fd4f358ae1829e2bd243d474b8171777e310986306967fea9a63228dbe11aa93 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 30 10:33:49 addons-703944 dockerd[1288]: time="2024-09-30T10:33:49.724696676Z" level=info msg="ignoring event" container=20c26c82689fcb72554b438b52b5e1a578bef0ab822a0096123a1918df0bb8ce module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 30 10:33:56 addons-703944 dockerd[1288]: time="2024-09-30T10:33:56.267493316Z" level=info msg="ignoring event" container=a5864276b4f3d638ed913defe38c88eb8b6590deb2d3c1b1564168723aa9a8b4 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 30 10:33:56 addons-703944 dockerd[1288]: time="2024-09-30T10:33:56.417644275Z" level=info msg="ignoring event" container=c5e388071a29f6149e9e1bd1495739173a415a09542cf5a28f880736bbdee644 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 30 10:33:57 addons-703944 cri-dockerd[1546]: time="2024-09-30T10:33:57Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/3ebd48ad77ad7af597184897923c29db8fc520cd616b26dced6b371ae0befcb5/resolv.conf as [nameserver 10.96.0.10 search local-path-storage.svc.cluster.local svc.cluster.local cluster.local us-east-2.compute.internal options ndots:5]"
	Sep 30 10:33:57 addons-703944 dockerd[1288]: time="2024-09-30T10:33:57.302079247Z" level=warning msg="reference for unknown type: " digest="sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79" remote="docker.io/library/busybox@sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79" spanID=4c8efa3e33433a7f traceID=09b40a6d890d1086716807a4bbe31f4b
	Sep 30 10:33:57 addons-703944 cri-dockerd[1546]: time="2024-09-30T10:33:57Z" level=info msg="Stop pulling image docker.io/busybox:stable@sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79: Status: Downloaded newer image for busybox@sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79"
	Sep 30 10:33:58 addons-703944 dockerd[1288]: time="2024-09-30T10:33:58.020613476Z" level=info msg="ignoring event" container=625ac88b7fd165338ab8fdccbfc4cd1b244052dd26eeb9d4da58d01e052acc84 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 30 10:34:00 addons-703944 dockerd[1288]: time="2024-09-30T10:34:00.058647495Z" level=info msg="Attempting next endpoint for pull after error: Head \"https://gcr.io/v2/k8s-minikube/busybox/manifests/latest\": unauthorized: authentication failed" spanID=a26cf2d551a02c88 traceID=3eee85ae8207c459dda9bd736a893e4b
	Sep 30 10:34:00 addons-703944 dockerd[1288]: time="2024-09-30T10:34:00.062362789Z" level=error msg="Handler for POST /v1.43/images/create returned error: Head \"https://gcr.io/v2/k8s-minikube/busybox/manifests/latest\": unauthorized: authentication failed" spanID=a26cf2d551a02c88 traceID=3eee85ae8207c459dda9bd736a893e4b
	Sep 30 10:34:00 addons-703944 dockerd[1288]: time="2024-09-30T10:34:00.157282436Z" level=info msg="ignoring event" container=3ebd48ad77ad7af597184897923c29db8fc520cd616b26dced6b371ae0befcb5 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 30 10:34:02 addons-703944 cri-dockerd[1546]: time="2024-09-30T10:34:02Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/2713756d0b4ea1ce2193817f5964ca2034aaba39f2a28fa26666b182a21b13c6/resolv.conf as [nameserver 10.96.0.10 search default.svc.cluster.local svc.cluster.local cluster.local us-east-2.compute.internal options ndots:5]"
	Sep 30 10:34:02 addons-703944 cri-dockerd[1546]: time="2024-09-30T10:34:02Z" level=info msg="Stop pulling image busybox:stable: Status: Downloaded newer image for busybox:stable"
	Sep 30 10:34:02 addons-703944 dockerd[1288]: time="2024-09-30T10:34:02.951263898Z" level=info msg="ignoring event" container=580672e782c5bd5a16a4318b576d4298676fb385fc0e78e57e4f9b9e9bfd9ba9 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 30 10:34:04 addons-703944 dockerd[1288]: time="2024-09-30T10:34:04.300651020Z" level=info msg="ignoring event" container=2713756d0b4ea1ce2193817f5964ca2034aaba39f2a28fa26666b182a21b13c6 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 30 10:34:05 addons-703944 cri-dockerd[1546]: time="2024-09-30T10:34:05Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/3ac43b835fe5c9ab9556c5d49ad01846b174068ec13b28cfb27541adf9de723c/resolv.conf as [nameserver 10.96.0.10 search local-path-storage.svc.cluster.local svc.cluster.local cluster.local us-east-2.compute.internal options ndots:5]"
	Sep 30 10:34:06 addons-703944 dockerd[1288]: time="2024-09-30T10:34:06.189085606Z" level=info msg="ignoring event" container=a602b281f0c43351f13dcabf9760187c497ff3580de5377efed475dd2a3a811f module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 30 10:34:07 addons-703944 dockerd[1288]: time="2024-09-30T10:34:07.364642423Z" level=info msg="ignoring event" container=3ac43b835fe5c9ab9556c5d49ad01846b174068ec13b28cfb27541adf9de723c module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 30 10:34:15 addons-703944 dockerd[1288]: time="2024-09-30T10:34:15.526430638Z" level=info msg="ignoring event" container=9d8a251cdc182765f1b6afefb4ef602279f9a8002dc2a359d7ff4cff4d610403 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 30 10:34:16 addons-703944 dockerd[1288]: time="2024-09-30T10:34:16.222330179Z" level=info msg="ignoring event" container=8ff34a9a05ef2b99e0385cd38068272c1da8ac2ac4042e5f40e6d69ea7e24829 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 30 10:34:16 addons-703944 dockerd[1288]: time="2024-09-30T10:34:16.278810636Z" level=info msg="ignoring event" container=4d968f7e8c938ada722f733a6fcef97b5da7b2c4fdba2828ae041467ae711d62 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 30 10:34:16 addons-703944 dockerd[1288]: time="2024-09-30T10:34:16.498128732Z" level=info msg="ignoring event" container=7ef5b486717bc51d996fab293bd8cfac2a52478290cb639fd105a2f59f7989f2 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 30 10:34:16 addons-703944 cri-dockerd[1546]: time="2024-09-30T10:34:16Z" level=info msg="Failed to read pod IP from plugin/docker: networkPlugin cni failed on the status hook for pod \"registry-66c9cd494c-rdvzj_kube-system\": CNI failed to retrieve network namespace path: cannot find network namespace for the terminated container \"7ef5b486717bc51d996fab293bd8cfac2a52478290cb639fd105a2f59f7989f2\""
	Sep 30 10:34:16 addons-703944 dockerd[1288]: time="2024-09-30T10:34:16.609641796Z" level=info msg="ignoring event" container=dab57879dd7e2b105b50f69dd335cdc41c0f6b44ac15bc924eb44794da721f30 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                        CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	a602b281f0c43       fc9db2894f4e4                                                                                                                11 seconds ago      Exited              helper-pod                0                   3ac43b835fe5c       helper-pod-delete-pvc-c80c0af4-a393-4a05-9c4d-cc7ecf4f0af4
	580672e782c5b       busybox@sha256:c230832bd3b0be59a6c47ed64294f9ce71e91b327957920b6929a0caa8353140                                              15 seconds ago      Exited              busybox                   0                   2713756d0b4ea       test-local-path
	625ac88b7fd16       busybox@sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79                                              20 seconds ago      Exited              helper-pod                0                   3ebd48ad77ad7       helper-pod-create-pvc-c80c0af4-a393-4a05-9c4d-cc7ecf4f0af4
	e56e6fc59f851       gcr.io/k8s-minikube/gcp-auth-webhook@sha256:e6c5b3bc32072ea370d34c27836efd11b3519d25bd444c2a8efc339cff0e20fb                 10 minutes ago      Running             gcp-auth                  0                   8b9832f30788a       gcp-auth-89d5ffd79-qbk9q
	b500e50fd74d0       registry.k8s.io/ingress-nginx/controller@sha256:d5f8217feeac4887cb1ed21f27c2674e58be06bd8f5184cacea2a69abaf78dce             11 minutes ago      Running             controller                0                   e84d22f08f9c9       ingress-nginx-controller-bc57996ff-fhcnz
	7ff05e7f77cd3       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:a320a50cc91bd15fd2d6fa6de58bd98c1bd64b9a6f926ce23a600d87043455a3   12 minutes ago      Exited              patch                     0                   d611ed1c8740f       ingress-nginx-admission-patch-gwx9r
	4d3c2a6042618       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:a320a50cc91bd15fd2d6fa6de58bd98c1bd64b9a6f926ce23a600d87043455a3   12 minutes ago      Exited              create                    0                   ba84e1feed742       ingress-nginx-admission-create-9prwn
	b124d58586438       ghcr.io/inspektor-gadget/inspektor-gadget@sha256:03e677e1cf9d2c9bea454e3dbcbcef20b3022e987534a2874eb1abc5bc3e73ec            12 minutes ago      Running             gadget                    0                   e230ab8541b09       gadget-7txl9
	cc02ee2dc585e       rancher/local-path-provisioner@sha256:e34c88ae0affb1cdefbb874140d6339d4a27ec4ee420ae8199cd839997b05246                       12 minutes ago      Running             local-path-provisioner    0                   23482939aa7e5       local-path-provisioner-86d989889c-2w5jv
	749c096625179       registry.k8s.io/metrics-server/metrics-server@sha256:ffcb2bf004d6aa0a17d90e0247cf94f2865c8901dcab4427034c341951c239f9        12 minutes ago      Running             metrics-server            0                   30e12a384193e       metrics-server-84c5f94fbc-72src
	4d968f7e8c938       gcr.io/k8s-minikube/kube-registry-proxy@sha256:9fd683b2e47c5fded3410c69f414f05cdee737597569f52854347f889b118982              12 minutes ago      Exited              registry-proxy            0                   dab57879dd7e2       registry-proxy-ggxvp
	8ff34a9a05ef2       registry@sha256:ac0192b549007e22998eb74e8d8488dcfe70f1489520c3b144a6047ac5efbe90                                             12 minutes ago      Exited              registry                  0                   7ef5b486717bc       registry-66c9cd494c-rdvzj
	7701316bb1b7d       gcr.io/k8s-minikube/minikube-ingress-dns@sha256:4211a1de532376c881851542238121b26792225faa36a7b02dccad88fd05797c             12 minutes ago      Running             minikube-ingress-dns      0                   b063e0b2bb844       kube-ingress-dns-minikube
	f0dc82196b031       gcr.io/cloud-spanner-emulator/emulator@sha256:f78b14fe7e4632fc0b3c65e15101ebbbcf242857de9851d3c0baea94bd269b5e               12 minutes ago      Running             cloud-spanner-emulator    0                   e3cde4c3830b7       cloud-spanner-emulator-5b584cc74-zl2c5
	9b7883641a7b6       ba04bb24b9575                                                                                                                12 minutes ago      Running             storage-provisioner       0                   51a83c3cfcd39       storage-provisioner
	471d0bb84337c       2f6c962e7b831                                                                                                                12 minutes ago      Running             coredns                   0                   a4f793993c5fd       coredns-7c65d6cfc9-whncm
	cf5b880fad343       24a140c548c07                                                                                                                12 minutes ago      Running             kube-proxy                0                   221070f364cb4       kube-proxy-xl4mj
	8fe61e0b6c18a       279f381cb3736                                                                                                                12 minutes ago      Running             kube-controller-manager   0                   6c49a86599975       kube-controller-manager-addons-703944
	32399c9ffe928       27e3830e14027                                                                                                                12 minutes ago      Running             etcd                      0                   8ed76639343d4       etcd-addons-703944
	bd7f169d8e3e5       d3f53a98c0a9d                                                                                                                12 minutes ago      Running             kube-apiserver            0                   b374e3e99a214       kube-apiserver-addons-703944
	9f4afc2251bd6       7f8aa378bb47d                                                                                                                12 minutes ago      Running             kube-scheduler            0                   a91275cb4ae88       kube-scheduler-addons-703944
	
	
	==> controller_ingress [b500e50fd74d] <==
	W0930 10:22:51.031771       6 client_config.go:659] Neither --kubeconfig nor --master was specified.  Using the inClusterConfig.  This might not work.
	I0930 10:22:51.031919       6 main.go:205] "Creating API client" host="https://10.96.0.1:443"
	I0930 10:22:51.040996       6 main.go:248] "Running in Kubernetes cluster" major="1" minor="31" git="v1.31.1" state="clean" commit="948afe5ca072329a73c8e79ed5938717a5cb3d21" platform="linux/arm64"
	I0930 10:22:51.473317       6 main.go:101] "SSL fake certificate created" file="/etc/ingress-controller/ssl/default-fake-certificate.pem"
	I0930 10:22:51.489287       6 ssl.go:535] "loading tls certificate" path="/usr/local/certificates/cert" key="/usr/local/certificates/key"
	I0930 10:22:51.498831       6 nginx.go:271] "Starting NGINX Ingress controller"
	I0930 10:22:51.508862       6 event.go:377] Event(v1.ObjectReference{Kind:"ConfigMap", Namespace:"ingress-nginx", Name:"ingress-nginx-controller", UID:"12220eba-5361-4cf1-a44f-13cb77cc563b", APIVersion:"v1", ResourceVersion:"709", FieldPath:""}): type: 'Normal' reason: 'CREATE' ConfigMap ingress-nginx/ingress-nginx-controller
	I0930 10:22:51.517189       6 event.go:377] Event(v1.ObjectReference{Kind:"ConfigMap", Namespace:"ingress-nginx", Name:"tcp-services", UID:"50ebfe59-347b-4363-a4c9-597f183a62d8", APIVersion:"v1", ResourceVersion:"710", FieldPath:""}): type: 'Normal' reason: 'CREATE' ConfigMap ingress-nginx/tcp-services
	I0930 10:22:51.517380       6 event.go:377] Event(v1.ObjectReference{Kind:"ConfigMap", Namespace:"ingress-nginx", Name:"udp-services", UID:"e3c27bc0-7211-467e-ab56-b712b31992b9", APIVersion:"v1", ResourceVersion:"711", FieldPath:""}): type: 'Normal' reason: 'CREATE' ConfigMap ingress-nginx/udp-services
	I0930 10:22:52.700863       6 nginx.go:317] "Starting NGINX process"
	I0930 10:22:52.701115       6 leaderelection.go:250] attempting to acquire leader lease ingress-nginx/ingress-nginx-leader...
	I0930 10:22:52.701591       6 nginx.go:337] "Starting validation webhook" address=":8443" certPath="/usr/local/certificates/cert" keyPath="/usr/local/certificates/key"
	I0930 10:22:52.706635       6 controller.go:193] "Configuration changes detected, backend reload required"
	I0930 10:22:52.720230       6 leaderelection.go:260] successfully acquired lease ingress-nginx/ingress-nginx-leader
	I0930 10:22:52.720727       6 status.go:85] "New leader elected" identity="ingress-nginx-controller-bc57996ff-fhcnz"
	I0930 10:22:52.729684       6 status.go:219] "POD is not ready" pod="ingress-nginx/ingress-nginx-controller-bc57996ff-fhcnz" node="addons-703944"
	I0930 10:22:52.752734       6 controller.go:213] "Backend successfully reloaded"
	I0930 10:22:52.752945       6 controller.go:224] "Initial sync, sleeping for 1 second"
	I0930 10:22:52.753477       6 event.go:377] Event(v1.ObjectReference{Kind:"Pod", Namespace:"ingress-nginx", Name:"ingress-nginx-controller-bc57996ff-fhcnz", UID:"475b22d0-6c5a-4aab-9cf1-9d3ebaf78a75", APIVersion:"v1", ResourceVersion:"736", FieldPath:""}): type: 'Normal' reason: 'RELOAD' NGINX reload triggered due to a change in configuration
	  Build:         46e76e5916813cfca2a9b0bfdc34b69a0000f6b9
	  Repository:    https://github.com/kubernetes/ingress-nginx
	  nginx version: nginx/1.25.5
	
	-------------------------------------------------------------------------------
	
	
	
	==> coredns [471d0bb84337] <==
	[INFO] 10.244.0.7:49312 - 29993 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 81 false 1232" NXDOMAIN qr,aa,rd 163 0.000102874s
	[INFO] 10.244.0.7:49312 - 42514 "A IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 94 false 1232" NXDOMAIN qr,rd,ra 83 0.002011485s
	[INFO] 10.244.0.7:49312 - 7545 "AAAA IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 94 false 1232" NXDOMAIN qr,rd,ra 83 0.002191642s
	[INFO] 10.244.0.7:49312 - 31021 "A IN registry.kube-system.svc.cluster.local. udp 67 false 1232" NOERROR qr,aa,rd 110 0.00012749s
	[INFO] 10.244.0.7:49312 - 64634 "AAAA IN registry.kube-system.svc.cluster.local. udp 67 false 1232" NOERROR qr,aa,rd 149 0.000102637s
	[INFO] 10.244.0.7:52056 - 13052 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000131158s
	[INFO] 10.244.0.7:52056 - 13248 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000078727s
	[INFO] 10.244.0.7:44412 - 15106 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000046941s
	[INFO] 10.244.0.7:44412 - 15559 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000064245s
	[INFO] 10.244.0.7:34827 - 23431 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000198308s
	[INFO] 10.244.0.7:34827 - 23587 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000057476s
	[INFO] 10.244.0.7:37206 - 40747 "A IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 83 false 512" NXDOMAIN qr,rd,ra 83 0.001210437s
	[INFO] 10.244.0.7:37206 - 41208 "AAAA IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 83 false 512" NXDOMAIN qr,rd,ra 83 0.001179422s
	[INFO] 10.244.0.7:53392 - 17918 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000067011s
	[INFO] 10.244.0.7:53392 - 18073 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000069086s
	[INFO] 10.244.0.25:43132 - 54509 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000283488s
	[INFO] 10.244.0.25:59003 - 27505 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000151094s
	[INFO] 10.244.0.25:49456 - 63764 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000117175s
	[INFO] 10.244.0.25:58881 - 43085 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000076724s
	[INFO] 10.244.0.25:59699 - 18237 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000110151s
	[INFO] 10.244.0.25:41399 - 9170 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000070661s
	[INFO] 10.244.0.25:60955 - 30379 "AAAA IN storage.googleapis.com.us-east-2.compute.internal. udp 78 false 1232" NXDOMAIN qr,rd,ra 67 0.002580124s
	[INFO] 10.244.0.25:45346 - 11744 "A IN storage.googleapis.com.us-east-2.compute.internal. udp 78 false 1232" NXDOMAIN qr,rd,ra 67 0.007987233s
	[INFO] 10.244.0.25:32796 - 27339 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.001568948s
	[INFO] 10.244.0.25:40716 - 63008 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 648 0.001741899s
	
	
	==> describe nodes <==
	Name:               addons-703944
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=addons-703944
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=b68b4b088317c82ffa16da1c47933e77f0f5d128
	                    minikube.k8s.io/name=addons-703944
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_09_30T10_21_26_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-703944
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 30 Sep 2024 10:21:23 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-703944
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 30 Sep 2024 10:34:10 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 30 Sep 2024 10:30:06 +0000   Mon, 30 Sep 2024 10:21:19 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 30 Sep 2024 10:30:06 +0000   Mon, 30 Sep 2024 10:21:19 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 30 Sep 2024 10:30:06 +0000   Mon, 30 Sep 2024 10:21:19 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 30 Sep 2024 10:30:06 +0000   Mon, 30 Sep 2024 10:21:23 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    addons-703944
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022304Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022304Ki
	  pods:               110
	System Info:
	  Machine ID:                 976b457a99284b958149a831017d514d
	  System UUID:                c8b66987-d94a-48ea-9059-80a29a142280
	  Boot ID:                    12064027-174b-4ce0-8a4a-48eaa21ecbf6
	  Kernel Version:             5.15.0-1070-aws
	  OS Image:                   Ubuntu 22.04.5 LTS
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  docker://27.3.1
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (15 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m18s
	  default                     cloud-spanner-emulator-5b584cc74-zl2c5      0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  gadget                      gadget-7txl9                                0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  gcp-auth                    gcp-auth-89d5ffd79-qbk9q                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  ingress-nginx               ingress-nginx-controller-bc57996ff-fhcnz    100m (5%)     0 (0%)      90Mi (1%)        0 (0%)         12m
	  kube-system                 coredns-7c65d6cfc9-whncm                    100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     12m
	  kube-system                 etcd-addons-703944                          100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         12m
	  kube-system                 kube-apiserver-addons-703944                250m (12%)    0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-controller-manager-addons-703944       200m (10%)    0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-ingress-dns-minikube                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-proxy-xl4mj                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-scheduler-addons-703944                100m (5%)     0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 metrics-server-84c5f94fbc-72src             100m (5%)     0 (0%)      200Mi (2%)       0 (0%)         12m
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  local-path-storage          local-path-provisioner-86d989889c-2w5jv     0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                950m (47%)  0 (0%)
	  memory             460Mi (5%)  170Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 12m                kube-proxy       
	  Normal   NodeAllocatableEnforced  13m                kubelet          Updated Node Allocatable limit across pods
	  Normal   Starting                 13m                kubelet          Starting kubelet.
	  Warning  CgroupV1                 13m                kubelet          Cgroup v1 support is in maintenance mode, please migrate to Cgroup v2.
	  Normal   NodeHasSufficientMemory  12m (x8 over 13m)  kubelet          Node addons-703944 status is now: NodeHasSufficientMemory
	  Normal   NodeHasSufficientPID     12m (x7 over 13m)  kubelet          Node addons-703944 status is now: NodeHasSufficientPID
	  Normal   NodeHasNoDiskPressure    12m (x7 over 13m)  kubelet          Node addons-703944 status is now: NodeHasNoDiskPressure
	  Normal   Starting                 12m                kubelet          Starting kubelet.
	  Warning  CgroupV1                 12m                kubelet          Cgroup v1 support is in maintenance mode, please migrate to Cgroup v2.
	  Normal   NodeAllocatableEnforced  12m                kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory  12m                kubelet          Node addons-703944 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    12m                kubelet          Node addons-703944 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     12m                kubelet          Node addons-703944 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           12m                node-controller  Node addons-703944 event: Registered Node addons-703944 in Controller
	
	
	==> dmesg <==
	[Sep30 10:17] ACPI: SRAT not present
	[  +0.000000] ACPI: SRAT not present
	[  +0.000000] SPI driver altr_a10sr has no spi_device_id for altr,a10sr
	[  +0.014927] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.458782] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.064452] systemd[1]: /lib/systemd/system/cloud-init.service:20: Unknown key name 'ConditionEnvironment' in section 'Unit', ignoring.
	[  +0.020217] systemd[1]: /lib/systemd/system/cloud-init-hotplugd.socket:11: Unknown key name 'ConditionEnvironment' in section 'Unit', ignoring.
	[  +0.681870] ena 0000:00:05.0: LLQ is not supported Fallback to host mode policy.
	[  +6.380136] kauditd_printk_skb: 36 callbacks suppressed
	
	
	==> etcd [32399c9ffe92] <==
	{"level":"info","ts":"2024-09-30T10:21:18.933375Z","caller":"embed/etcd.go:599","msg":"serving peer traffic","address":"192.168.49.2:2380"}
	{"level":"info","ts":"2024-09-30T10:21:18.933385Z","caller":"embed/etcd.go:571","msg":"cmux::serve","address":"192.168.49.2:2380"}
	{"level":"info","ts":"2024-09-30T10:21:19.602529Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc is starting a new election at term 1"}
	{"level":"info","ts":"2024-09-30T10:21:19.602586Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became pre-candidate at term 1"}
	{"level":"info","ts":"2024-09-30T10:21:19.602621Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc received MsgPreVoteResp from aec36adc501070cc at term 1"}
	{"level":"info","ts":"2024-09-30T10:21:19.602860Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became candidate at term 2"}
	{"level":"info","ts":"2024-09-30T10:21:19.602949Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc received MsgVoteResp from aec36adc501070cc at term 2"}
	{"level":"info","ts":"2024-09-30T10:21:19.603063Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became leader at term 2"}
	{"level":"info","ts":"2024-09-30T10:21:19.603171Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: aec36adc501070cc elected leader aec36adc501070cc at term 2"}
	{"level":"info","ts":"2024-09-30T10:21:19.607726Z","caller":"etcdserver/server.go:2118","msg":"published local member to cluster through raft","local-member-id":"aec36adc501070cc","local-member-attributes":"{Name:addons-703944 ClientURLs:[https://192.168.49.2:2379]}","request-path":"/0/members/aec36adc501070cc/attributes","cluster-id":"fa54960ea34d58be","publish-timeout":"7s"}
	{"level":"info","ts":"2024-09-30T10:21:19.609625Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-30T10:21:19.609999Z","caller":"etcdserver/server.go:2629","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-30T10:21:19.613574Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-30T10:21:19.615561Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-09-30T10:21:19.615686Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-09-30T10:21:19.615957Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-30T10:21:19.616489Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-30T10:21:19.616837Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.49.2:2379"}
	{"level":"info","ts":"2024-09-30T10:21:19.616944Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"fa54960ea34d58be","local-member-id":"aec36adc501070cc","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-30T10:21:19.617009Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-30T10:21:19.617038Z","caller":"etcdserver/server.go:2653","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-30T10:21:19.617758Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-09-30T10:31:21.059348Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1855}
	{"level":"info","ts":"2024-09-30T10:31:21.108697Z","caller":"mvcc/kvstore_compaction.go:69","msg":"finished scheduled compaction","compact-revision":1855,"took":"48.522977ms","hash":2943355290,"current-db-size-bytes":8835072,"current-db-size":"8.8 MB","current-db-size-in-use-bytes":4743168,"current-db-size-in-use":"4.7 MB"}
	{"level":"info","ts":"2024-09-30T10:31:21.108748Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":2943355290,"revision":1855,"compact-revision":-1}
	
	
	==> gcp-auth [e56e6fc59f85] <==
	2024/09/30 10:24:17 GCP Auth Webhook started!
	2024/09/30 10:24:35 Ready to marshal response ...
	2024/09/30 10:24:35 Ready to write response ...
	2024/09/30 10:24:35 Ready to marshal response ...
	2024/09/30 10:24:35 Ready to write response ...
	2024/09/30 10:24:59 Ready to marshal response ...
	2024/09/30 10:24:59 Ready to write response ...
	2024/09/30 10:24:59 Ready to marshal response ...
	2024/09/30 10:24:59 Ready to write response ...
	2024/09/30 10:25:00 Ready to marshal response ...
	2024/09/30 10:25:00 Ready to write response ...
	2024/09/30 10:33:15 Ready to marshal response ...
	2024/09/30 10:33:15 Ready to write response ...
	2024/09/30 10:33:23 Ready to marshal response ...
	2024/09/30 10:33:23 Ready to write response ...
	2024/09/30 10:33:32 Ready to marshal response ...
	2024/09/30 10:33:32 Ready to write response ...
	2024/09/30 10:33:56 Ready to marshal response ...
	2024/09/30 10:33:56 Ready to write response ...
	2024/09/30 10:33:56 Ready to marshal response ...
	2024/09/30 10:33:56 Ready to write response ...
	2024/09/30 10:34:05 Ready to marshal response ...
	2024/09/30 10:34:05 Ready to write response ...
	
	
	==> kernel <==
	 10:34:17 up 16 min,  0 users,  load average: 2.29, 1.08, 0.72
	Linux addons-703944 5.15.0-1070-aws #76~20.04.1-Ubuntu SMP Mon Sep 2 12:20:48 UTC 2024 aarch64 aarch64 aarch64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.5 LTS"
	
	
	==> kube-apiserver [bd7f169d8e3e] <==
	I0930 10:24:49.654459       1 handler.go:286] Adding GroupVersion scheduling.volcano.sh v1beta1 to ResourceManager
	I0930 10:24:50.033075       1 handler.go:286] Adding GroupVersion flow.volcano.sh v1alpha1 to ResourceManager
	I0930 10:24:50.118747       1 handler.go:286] Adding GroupVersion flow.volcano.sh v1alpha1 to ResourceManager
	I0930 10:24:50.357736       1 handler.go:286] Adding GroupVersion flow.volcano.sh v1alpha1 to ResourceManager
	W0930 10:24:50.460773       1 cacher.go:171] Terminating all watchers from cacher commands.bus.volcano.sh
	W0930 10:24:50.655313       1 cacher.go:171] Terminating all watchers from cacher podgroups.scheduling.volcano.sh
	W0930 10:24:50.743787       1 cacher.go:171] Terminating all watchers from cacher numatopologies.nodeinfo.volcano.sh
	W0930 10:24:50.872349       1 cacher.go:171] Terminating all watchers from cacher queues.scheduling.volcano.sh
	W0930 10:24:50.976768       1 cacher.go:171] Terminating all watchers from cacher jobs.batch.volcano.sh
	W0930 10:24:51.358383       1 cacher.go:171] Terminating all watchers from cacher jobflows.flow.volcano.sh
	W0930 10:24:51.456955       1 cacher.go:171] Terminating all watchers from cacher jobtemplates.flow.volcano.sh
	I0930 10:33:29.315948       1 controller.go:615] quota admission added evaluator for: volumesnapshots.snapshot.storage.k8s.io
	I0930 10:33:49.317208       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0930 10:33:49.317255       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0930 10:33:49.350071       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0930 10:33:49.350325       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0930 10:33:49.360690       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0930 10:33:49.360738       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0930 10:33:49.385490       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0930 10:33:49.385542       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0930 10:33:49.418462       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0930 10:33:49.418500       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	W0930 10:33:50.361596       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotclasses.snapshot.storage.k8s.io
	W0930 10:33:50.418080       1 cacher.go:171] Terminating all watchers from cacher volumesnapshots.snapshot.storage.k8s.io
	W0930 10:33:50.496229       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotcontents.snapshot.storage.k8s.io
	
	
	==> kube-controller-manager [8fe61e0b6c18] <==
	E0930 10:33:53.713971       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0930 10:33:53.952917       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0930 10:33:53.952968       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0930 10:33:57.488331       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0930 10:33:57.488376       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0930 10:33:58.287352       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0930 10:33:58.287398       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0930 10:33:59.987619       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0930 10:33:59.987665       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	I0930 10:34:00.816577       1 shared_informer.go:313] Waiting for caches to sync for resource quota
	I0930 10:34:00.816711       1 shared_informer.go:320] Caches are synced for resource quota
	I0930 10:34:01.070530       1 shared_informer.go:313] Waiting for caches to sync for garbage collector
	I0930 10:34:01.070584       1 shared_informer.go:320] Caches are synced for garbage collector
	W0930 10:34:05.151519       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0930 10:34:05.151594       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	I0930 10:34:05.967509       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="local-path-storage/local-path-provisioner-86d989889c" duration="8.787µs"
	W0930 10:34:06.432730       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0930 10:34:06.432771       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0930 10:34:07.136754       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0930 10:34:07.136800       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0930 10:34:10.275694       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0930 10:34:10.275734       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0930 10:34:12.794660       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0930 10:34:12.794704       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	I0930 10:34:16.101514       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/registry-66c9cd494c" duration="6.76µs"
	
	
	==> kube-proxy [cf5b880fad34] <==
	I0930 10:21:32.314157       1 server_linux.go:66] "Using iptables proxy"
	I0930 10:21:32.422183       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.49.2"]
	E0930 10:21:32.422240       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0930 10:21:32.448738       1 server.go:243] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0930 10:21:32.448809       1 server_linux.go:169] "Using iptables Proxier"
	I0930 10:21:32.450714       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0930 10:21:32.451032       1 server.go:483] "Version info" version="v1.31.1"
	I0930 10:21:32.451048       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0930 10:21:32.453292       1 config.go:199] "Starting service config controller"
	I0930 10:21:32.453333       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0930 10:21:32.453364       1 config.go:105] "Starting endpoint slice config controller"
	I0930 10:21:32.453375       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0930 10:21:32.465704       1 config.go:328] "Starting node config controller"
	I0930 10:21:32.465723       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0930 10:21:32.554600       1 shared_informer.go:320] Caches are synced for service config
	I0930 10:21:32.554710       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0930 10:21:32.566742       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [9f4afc2251bd] <==
	E0930 10:21:23.704809       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
	E0930 10:21:23.704779       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0930 10:21:23.704940       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0930 10:21:23.704964       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0930 10:21:23.705114       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0930 10:21:23.705273       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0930 10:21:23.705386       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	E0930 10:21:23.705660       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0930 10:21:23.705800       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0930 10:21:23.706917       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0930 10:21:23.705875       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0930 10:21:23.706973       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0930 10:21:23.705912       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0930 10:21:23.706994       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0930 10:21:23.705920       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0930 10:21:23.707013       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0930 10:21:23.705994       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0930 10:21:23.707057       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0930 10:21:23.706032       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0930 10:21:23.707102       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0930 10:21:23.706088       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0930 10:21:23.707129       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0930 10:21:24.573771       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0930 10:21:24.573811       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	I0930 10:21:24.995310       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Sep 30 10:34:07 addons-703944 kubelet[2330]: I0930 10:34:07.623286    2330 reconciler_common.go:288] "Volume detached for volume \"data\" (UniqueName: \"kubernetes.io/host-path/3283b04e-4ea2-4110-966e-2e42c30b934a-data\") on node \"addons-703944\" DevicePath \"\""
	Sep 30 10:34:07 addons-703944 kubelet[2330]: I0930 10:34:07.623336    2330 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-x6vj2\" (UniqueName: \"kubernetes.io/projected/3283b04e-4ea2-4110-966e-2e42c30b934a-kube-api-access-x6vj2\") on node \"addons-703944\" DevicePath \"\""
	Sep 30 10:34:07 addons-703944 kubelet[2330]: I0930 10:34:07.623349    2330 reconciler_common.go:288] "Volume detached for volume \"script\" (UniqueName: \"kubernetes.io/configmap/3283b04e-4ea2-4110-966e-2e42c30b934a-script\") on node \"addons-703944\" DevicePath \"\""
	Sep 30 10:34:07 addons-703944 kubelet[2330]: I0930 10:34:07.623359    2330 reconciler_common.go:288] "Volume detached for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/3283b04e-4ea2-4110-966e-2e42c30b934a-gcp-creds\") on node \"addons-703944\" DevicePath \"\""
	Sep 30 10:34:08 addons-703944 kubelet[2330]: I0930 10:34:08.287288    2330 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3ac43b835fe5c9ab9556c5d49ad01846b174068ec13b28cfb27541adf9de723c"
	Sep 30 10:34:08 addons-703944 kubelet[2330]: E0930 10:34:08.824841    2330 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"busybox\" with ImagePullBackOff: \"Back-off pulling image \\\"gcr.io/k8s-minikube/busybox:1.28.4-glibc\\\"\"" pod="default/busybox" podUID="f5cee3a4-bea9-470a-ace6-39db000ad219"
	Sep 30 10:34:11 addons-703944 kubelet[2330]: I0930 10:34:11.834311    2330 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3283b04e-4ea2-4110-966e-2e42c30b934a" path="/var/lib/kubelet/pods/3283b04e-4ea2-4110-966e-2e42c30b934a/volumes"
	Sep 30 10:34:12 addons-703944 kubelet[2330]: E0930 10:34:12.825267    2330 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-test\" with ImagePullBackOff: \"Back-off pulling image \\\"gcr.io/k8s-minikube/busybox\\\"\"" pod="default/registry-test" podUID="f7c8a150-0489-4468-a63f-4623d31323a7"
	Sep 30 10:34:15 addons-703944 kubelet[2330]: I0930 10:34:15.674765    2330 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/f7c8a150-0489-4468-a63f-4623d31323a7-gcp-creds\") pod \"f7c8a150-0489-4468-a63f-4623d31323a7\" (UID: \"f7c8a150-0489-4468-a63f-4623d31323a7\") "
	Sep 30 10:34:15 addons-703944 kubelet[2330]: I0930 10:34:15.674845    2330 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2lpdn\" (UniqueName: \"kubernetes.io/projected/f7c8a150-0489-4468-a63f-4623d31323a7-kube-api-access-2lpdn\") pod \"f7c8a150-0489-4468-a63f-4623d31323a7\" (UID: \"f7c8a150-0489-4468-a63f-4623d31323a7\") "
	Sep 30 10:34:15 addons-703944 kubelet[2330]: I0930 10:34:15.675270    2330 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f7c8a150-0489-4468-a63f-4623d31323a7-gcp-creds" (OuterVolumeSpecName: "gcp-creds") pod "f7c8a150-0489-4468-a63f-4623d31323a7" (UID: "f7c8a150-0489-4468-a63f-4623d31323a7"). InnerVolumeSpecName "gcp-creds". PluginName "kubernetes.io/host-path", VolumeGidValue ""
	Sep 30 10:34:15 addons-703944 kubelet[2330]: I0930 10:34:15.679451    2330 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f7c8a150-0489-4468-a63f-4623d31323a7-kube-api-access-2lpdn" (OuterVolumeSpecName: "kube-api-access-2lpdn") pod "f7c8a150-0489-4468-a63f-4623d31323a7" (UID: "f7c8a150-0489-4468-a63f-4623d31323a7"). InnerVolumeSpecName "kube-api-access-2lpdn". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Sep 30 10:34:15 addons-703944 kubelet[2330]: I0930 10:34:15.775222    2330 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-2lpdn\" (UniqueName: \"kubernetes.io/projected/f7c8a150-0489-4468-a63f-4623d31323a7-kube-api-access-2lpdn\") on node \"addons-703944\" DevicePath \"\""
	Sep 30 10:34:15 addons-703944 kubelet[2330]: I0930 10:34:15.775268    2330 reconciler_common.go:288] "Volume detached for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/f7c8a150-0489-4468-a63f-4623d31323a7-gcp-creds\") on node \"addons-703944\" DevicePath \"\""
	Sep 30 10:34:16 addons-703944 kubelet[2330]: I0930 10:34:16.682772    2330 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4kg4f\" (UniqueName: \"kubernetes.io/projected/1071ed50-a346-48af-bd60-fb6e526e1d58-kube-api-access-4kg4f\") pod \"1071ed50-a346-48af-bd60-fb6e526e1d58\" (UID: \"1071ed50-a346-48af-bd60-fb6e526e1d58\") "
	Sep 30 10:34:16 addons-703944 kubelet[2330]: I0930 10:34:16.688622    2330 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1071ed50-a346-48af-bd60-fb6e526e1d58-kube-api-access-4kg4f" (OuterVolumeSpecName: "kube-api-access-4kg4f") pod "1071ed50-a346-48af-bd60-fb6e526e1d58" (UID: "1071ed50-a346-48af-bd60-fb6e526e1d58"). InnerVolumeSpecName "kube-api-access-4kg4f". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Sep 30 10:34:16 addons-703944 kubelet[2330]: I0930 10:34:16.783705    2330 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pjg79\" (UniqueName: \"kubernetes.io/projected/a0c7860c-3f6b-40f2-9761-cd6466b5e812-kube-api-access-pjg79\") pod \"a0c7860c-3f6b-40f2-9761-cd6466b5e812\" (UID: \"a0c7860c-3f6b-40f2-9761-cd6466b5e812\") "
	Sep 30 10:34:16 addons-703944 kubelet[2330]: I0930 10:34:16.784156    2330 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-4kg4f\" (UniqueName: \"kubernetes.io/projected/1071ed50-a346-48af-bd60-fb6e526e1d58-kube-api-access-4kg4f\") on node \"addons-703944\" DevicePath \"\""
	Sep 30 10:34:16 addons-703944 kubelet[2330]: I0930 10:34:16.785700    2330 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a0c7860c-3f6b-40f2-9761-cd6466b5e812-kube-api-access-pjg79" (OuterVolumeSpecName: "kube-api-access-pjg79") pod "a0c7860c-3f6b-40f2-9761-cd6466b5e812" (UID: "a0c7860c-3f6b-40f2-9761-cd6466b5e812"). InnerVolumeSpecName "kube-api-access-pjg79". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Sep 30 10:34:16 addons-703944 kubelet[2330]: I0930 10:34:16.884537    2330 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-pjg79\" (UniqueName: \"kubernetes.io/projected/a0c7860c-3f6b-40f2-9761-cd6466b5e812-kube-api-access-pjg79\") on node \"addons-703944\" DevicePath \"\""
	Sep 30 10:34:17 addons-703944 kubelet[2330]: I0930 10:34:17.716715    2330 scope.go:117] "RemoveContainer" containerID="4d968f7e8c938ada722f733a6fcef97b5da7b2c4fdba2828ae041467ae711d62"
	Sep 30 10:34:17 addons-703944 kubelet[2330]: I0930 10:34:17.783182    2330 scope.go:117] "RemoveContainer" containerID="8ff34a9a05ef2b99e0385cd38068272c1da8ac2ac4042e5f40e6d69ea7e24829"
	Sep 30 10:34:17 addons-703944 kubelet[2330]: I0930 10:34:17.837436    2330 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1071ed50-a346-48af-bd60-fb6e526e1d58" path="/var/lib/kubelet/pods/1071ed50-a346-48af-bd60-fb6e526e1d58/volumes"
	Sep 30 10:34:17 addons-703944 kubelet[2330]: I0930 10:34:17.837821    2330 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a0c7860c-3f6b-40f2-9761-cd6466b5e812" path="/var/lib/kubelet/pods/a0c7860c-3f6b-40f2-9761-cd6466b5e812/volumes"
	Sep 30 10:34:17 addons-703944 kubelet[2330]: I0930 10:34:17.838196    2330 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f7c8a150-0489-4468-a63f-4623d31323a7" path="/var/lib/kubelet/pods/f7c8a150-0489-4468-a63f-4623d31323a7/volumes"
	
	
	==> storage-provisioner [9b7883641a7b] <==
	I0930 10:21:38.706421       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0930 10:21:38.722907       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0930 10:21:38.722958       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0930 10:21:38.734690       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0930 10:21:38.736897       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_addons-703944_7e616623-6de1-49c2-b745-f59394bb4ffc!
	I0930 10:21:38.747241       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"f7c375b4-4b70-4b95-ad2d-54a4fbae59e9", APIVersion:"v1", ResourceVersion:"604", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' addons-703944_7e616623-6de1-49c2-b745-f59394bb4ffc became leader
	I0930 10:21:38.838085       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_addons-703944_7e616623-6de1-49c2-b745-f59394bb4ffc!
	

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p addons-703944 -n addons-703944
helpers_test.go:261: (dbg) Run:  kubectl --context addons-703944 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: busybox ingress-nginx-admission-create-9prwn ingress-nginx-admission-patch-gwx9r
helpers_test.go:274: ======> post-mortem[TestAddons/parallel/Registry]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context addons-703944 describe pod busybox ingress-nginx-admission-create-9prwn ingress-nginx-admission-patch-gwx9r
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context addons-703944 describe pod busybox ingress-nginx-admission-create-9prwn ingress-nginx-admission-patch-gwx9r: exit status 1 (105.395608ms)

-- stdout --
	Name:             busybox
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             addons-703944/192.168.49.2
	Start Time:       Mon, 30 Sep 2024 10:24:59 +0000
	Labels:           integration-test=busybox
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.27
	IPs:
	  IP:  10.244.0.27
	Containers:
	  busybox:
	    Container ID:  
	    Image:         gcr.io/k8s-minikube/busybox:1.28.4-glibc
	    Image ID:      
	    Port:          <none>
	    Host Port:     <none>
	    Command:
	      sleep
	      3600
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:
	      GOOGLE_APPLICATION_CREDENTIALS:  /google-app-creds.json
	      PROJECT_ID:                      this_is_fake
	      GCP_PROJECT:                     this_is_fake
	      GCLOUD_PROJECT:                  this_is_fake
	      GOOGLE_CLOUD_PROJECT:            this_is_fake
	      CLOUDSDK_CORE_PROJECT:           this_is_fake
	    Mounts:
	      /google-app-creds.json from gcp-creds (ro)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-hwfpp (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-hwfpp:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	  gcp-creds:
	    Type:          HostPath (bare host directory volume)
	    Path:          /var/lib/minikube/google_application_credentials.json
	    HostPathType:  File
	QoS Class:         BestEffort
	Node-Selectors:    <none>
	Tolerations:       node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                   node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                    From               Message
	  ----     ------     ----                   ----               -------
	  Normal   Scheduled  9m19s                  default-scheduler  Successfully assigned default/busybox to addons-703944
	  Warning  Failed     7m56s (x6 over 9m17s)  kubelet            Error: ImagePullBackOff
	  Normal   Pulling    7m41s (x4 over 9m18s)  kubelet            Pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"
	  Warning  Failed     7m40s (x4 over 9m18s)  kubelet            Failed to pull image "gcr.io/k8s-minikube/busybox:1.28.4-glibc": Error response from daemon: Head "https://gcr.io/v2/k8s-minikube/busybox/manifests/1.28.4-glibc": unauthorized: authentication failed
	  Warning  Failed     7m40s (x4 over 9m18s)  kubelet            Error: ErrImagePull
	  Normal   BackOff    4m9s (x21 over 9m17s)  kubelet            Back-off pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"

-- /stdout --
** stderr ** 
	Error from server (NotFound): pods "ingress-nginx-admission-create-9prwn" not found
	Error from server (NotFound): pods "ingress-nginx-admission-patch-gwx9r" not found

** /stderr **
helpers_test.go:279: kubectl --context addons-703944 describe pod busybox ingress-nginx-admission-create-9prwn ingress-nginx-admission-patch-gwx9r: exit status 1
--- FAIL: TestAddons/parallel/Registry (75.49s)


Test pass (318/342)

Order passed test Duration
3 TestDownloadOnly/v1.20.0/json-events 6.07
4 TestDownloadOnly/v1.20.0/preload-exists 0
8 TestDownloadOnly/v1.20.0/LogsDuration 0.07
9 TestDownloadOnly/v1.20.0/DeleteAll 0.19
10 TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds 0.13
12 TestDownloadOnly/v1.31.1/json-events 4.11
13 TestDownloadOnly/v1.31.1/preload-exists 0
17 TestDownloadOnly/v1.31.1/LogsDuration 0.06
18 TestDownloadOnly/v1.31.1/DeleteAll 0.19
19 TestDownloadOnly/v1.31.1/DeleteAlwaysSucceeds 0.12
21 TestBinaryMirror 0.58
22 TestOffline 89.52
25 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.1
26 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.09
27 TestAddons/Setup 220.75
29 TestAddons/serial/Volcano 40.77
31 TestAddons/serial/GCPAuth/Namespaces 0.17
34 TestAddons/parallel/Ingress 18.31
35 TestAddons/parallel/InspektorGadget 11.89
36 TestAddons/parallel/MetricsServer 6.67
38 TestAddons/parallel/CSI 34.95
39 TestAddons/parallel/Headlamp 16.68
40 TestAddons/parallel/CloudSpanner 6.51
41 TestAddons/parallel/LocalPath 52.52
42 TestAddons/parallel/NvidiaDevicePlugin 6.5
43 TestAddons/parallel/Yakd 11.63
44 TestAddons/StoppedEnableDisable 5.95
45 TestCertOptions 34.4
46 TestCertExpiration 247.3
47 TestDockerFlags 40.36
48 TestForceSystemdFlag 38.01
49 TestForceSystemdEnv 40.98
55 TestErrorSpam/setup 29.92
56 TestErrorSpam/start 0.73
57 TestErrorSpam/status 0.99
58 TestErrorSpam/pause 1.28
59 TestErrorSpam/unpause 1.43
60 TestErrorSpam/stop 1.99
63 TestFunctional/serial/CopySyncFile 0
64 TestFunctional/serial/StartWithProxy 38.22
65 TestFunctional/serial/AuditLog 0
66 TestFunctional/serial/SoftStart 26.55
67 TestFunctional/serial/KubeContext 0.05
68 TestFunctional/serial/KubectlGetPods 0.1
71 TestFunctional/serial/CacheCmd/cache/add_remote 3.14
72 TestFunctional/serial/CacheCmd/cache/add_local 0.93
73 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.08
74 TestFunctional/serial/CacheCmd/cache/list 0.08
75 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.31
76 TestFunctional/serial/CacheCmd/cache/cache_reload 1.5
77 TestFunctional/serial/CacheCmd/cache/delete 0.11
78 TestFunctional/serial/MinikubeKubectlCmd 0.14
79 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.14
80 TestFunctional/serial/ExtraConfig 42.67
81 TestFunctional/serial/ComponentHealth 0.09
82 TestFunctional/serial/LogsCmd 1.16
83 TestFunctional/serial/LogsFileCmd 1.17
84 TestFunctional/serial/InvalidService 4.47
86 TestFunctional/parallel/ConfigCmd 0.44
87 TestFunctional/parallel/DashboardCmd 10.26
88 TestFunctional/parallel/DryRun 0.39
89 TestFunctional/parallel/InternationalLanguage 0.17
90 TestFunctional/parallel/StatusCmd 1.18
94 TestFunctional/parallel/ServiceCmdConnect 7.66
95 TestFunctional/parallel/AddonsCmd 0.13
96 TestFunctional/parallel/PersistentVolumeClaim 26.89
98 TestFunctional/parallel/SSHCmd 0.73
99 TestFunctional/parallel/CpCmd 1.93
101 TestFunctional/parallel/FileSync 0.39
102 TestFunctional/parallel/CertSync 1.92
106 TestFunctional/parallel/NodeLabels 0.13
108 TestFunctional/parallel/NonActiveRuntimeDisabled 0.34
110 TestFunctional/parallel/License 0.25
111 TestFunctional/parallel/Version/short 0.08
112 TestFunctional/parallel/Version/components 1
113 TestFunctional/parallel/ImageCommands/ImageListShort 0.26
114 TestFunctional/parallel/ImageCommands/ImageListTable 0.22
115 TestFunctional/parallel/ImageCommands/ImageListJson 0.26
116 TestFunctional/parallel/ImageCommands/ImageListYaml 0.27
117 TestFunctional/parallel/ImageCommands/ImageBuild 3.48
118 TestFunctional/parallel/ImageCommands/Setup 0.63
119 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 1.19
120 TestFunctional/parallel/DockerEnv/bash 1.25
121 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 0.9
122 TestFunctional/parallel/UpdateContextCmd/no_changes 0.23
123 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.18
124 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.22
125 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 1.18
126 TestFunctional/parallel/ImageCommands/ImageSaveToFile 0.4
127 TestFunctional/parallel/ImageCommands/ImageRemove 0.6
128 TestFunctional/parallel/ServiceCmd/DeployApp 11.27
129 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 0.7
130 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 0.35
132 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.71
133 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0
135 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 9.47
136 TestFunctional/parallel/ServiceCmd/List 0.32
137 TestFunctional/parallel/ServiceCmd/JSONOutput 0.32
138 TestFunctional/parallel/ServiceCmd/HTTPS 0.34
139 TestFunctional/parallel/ServiceCmd/Format 0.38
140 TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP 0.09
141 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0
145 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.12
146 TestFunctional/parallel/ServiceCmd/URL 0.36
147 TestFunctional/parallel/ProfileCmd/profile_not_create 0.55
148 TestFunctional/parallel/ProfileCmd/profile_list 0.51
149 TestFunctional/parallel/ProfileCmd/profile_json_output 0.39
150 TestFunctional/parallel/MountCmd/any-port 6.93
151 TestFunctional/parallel/MountCmd/specific-port 1.87
152 TestFunctional/parallel/MountCmd/VerifyCleanup 1.73
153 TestFunctional/delete_echo-server_images 0.04
154 TestFunctional/delete_my-image_image 0.02
155 TestFunctional/delete_minikube_cached_images 0.02
159 TestMultiControlPlane/serial/StartCluster 123.31
160 TestMultiControlPlane/serial/DeployApp 7.79
161 TestMultiControlPlane/serial/PingHostFromPods 1.68
162 TestMultiControlPlane/serial/AddWorkerNode 22.74
163 TestMultiControlPlane/serial/NodeLabels 0.09
164 TestMultiControlPlane/serial/HAppyAfterClusterStart 0.97
165 TestMultiControlPlane/serial/CopyFile 18.38
166 TestMultiControlPlane/serial/StopSecondaryNode 11.73
167 TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop 0.73
168 TestMultiControlPlane/serial/RestartSecondaryNode 54.45
169 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart 1.03
170 TestMultiControlPlane/serial/RestartClusterKeepsNodes 178.98
171 TestMultiControlPlane/serial/DeleteSecondaryNode 11.22
172 TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete 0.76
173 TestMultiControlPlane/serial/StopCluster 32.59
174 TestMultiControlPlane/serial/RestartCluster 166.88
175 TestMultiControlPlane/serial/DegradedAfterClusterRestart 0.71
176 TestMultiControlPlane/serial/AddSecondaryNode 44.35
177 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd 1
180 TestImageBuild/serial/Setup 29.46
181 TestImageBuild/serial/NormalBuild 1.95
182 TestImageBuild/serial/BuildWithBuildArg 0.99
183 TestImageBuild/serial/BuildWithDockerIgnore 0.81
184 TestImageBuild/serial/BuildWithSpecifiedDockerfile 0.84
188 TestJSONOutput/start/Command 75.24
189 TestJSONOutput/start/Audit 0
191 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
192 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
194 TestJSONOutput/pause/Command 0.54
195 TestJSONOutput/pause/Audit 0
197 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
198 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
200 TestJSONOutput/unpause/Command 0.5
201 TestJSONOutput/unpause/Audit 0
203 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
204 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
206 TestJSONOutput/stop/Command 5.76
207 TestJSONOutput/stop/Audit 0
209 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
210 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
211 TestErrorJSONOutput 0.21
213 TestKicCustomNetwork/create_custom_network 34.23
214 TestKicCustomNetwork/use_default_bridge_network 30.64
215 TestKicExistingNetwork 31.66
216 TestKicCustomSubnet 33.71
217 TestKicStaticIP 31.59
218 TestMainNoArgs 0.05
219 TestMinikubeProfile 69.73
222 TestMountStart/serial/StartWithMountFirst 7.77
223 TestMountStart/serial/VerifyMountFirst 0.24
224 TestMountStart/serial/StartWithMountSecond 7.18
225 TestMountStart/serial/VerifyMountSecond 0.25
226 TestMountStart/serial/DeleteFirst 1.47
227 TestMountStart/serial/VerifyMountPostDelete 0.24
228 TestMountStart/serial/Stop 1.2
229 TestMountStart/serial/RestartStopped 8.03
230 TestMountStart/serial/VerifyMountPostStop 0.25
233 TestMultiNode/serial/FreshStart2Nodes 80.61
234 TestMultiNode/serial/DeployApp2Nodes 43.81
235 TestMultiNode/serial/PingHostFrom2Pods 0.99
236 TestMultiNode/serial/AddNode 17.47
237 TestMultiNode/serial/MultiNodeLabels 0.09
238 TestMultiNode/serial/ProfileList 0.69
239 TestMultiNode/serial/CopyFile 9.51
240 TestMultiNode/serial/StopNode 2.21
241 TestMultiNode/serial/StartAfterStop 10.87
242 TestMultiNode/serial/RestartKeepsNodes 99.77
243 TestMultiNode/serial/DeleteNode 5.54
244 TestMultiNode/serial/StopMultiNode 21.51
245 TestMultiNode/serial/RestartMultiNode 49.93
246 TestMultiNode/serial/ValidateNameConflict 34.77
251 TestPreload 136.38
253 TestScheduledStopUnix 107.05
254 TestSkaffold 112.72
256 TestInsufficientStorage 10.36
257 TestRunningBinaryUpgrade 76.35
259 TestKubernetesUpgrade 383.31
260 TestMissingContainerUpgrade 163.62
262 TestPause/serial/Start 88.46
263 TestPause/serial/SecondStartNoReconfiguration 30.32
264 TestPause/serial/Pause 0.75
265 TestPause/serial/VerifyStatus 0.36
266 TestPause/serial/Unpause 0.7
267 TestPause/serial/PauseAgain 1.09
268 TestPause/serial/DeletePaused 2.96
269 TestPause/serial/VerifyDeletedResources 0.11
270 TestStoppedBinaryUpgrade/Setup 0.63
271 TestStoppedBinaryUpgrade/Upgrade 83.06
272 TestStoppedBinaryUpgrade/MinikubeLogs 2
281 TestNoKubernetes/serial/StartNoK8sWithVersion 0.08
282 TestNoKubernetes/serial/StartWithK8s 41.52
283 TestNoKubernetes/serial/StartWithStopK8s 16.46
295 TestNoKubernetes/serial/Start 10.17
296 TestNoKubernetes/serial/VerifyK8sNotRunning 0.32
297 TestNoKubernetes/serial/ProfileList 1.17
298 TestNoKubernetes/serial/Stop 1.26
299 TestNoKubernetes/serial/StartNoArgs 7.54
300 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.35
302 TestStartStop/group/old-k8s-version/serial/FirstStart 162.52
304 TestStartStop/group/no-preload/serial/FirstStart 56.83
305 TestStartStop/group/old-k8s-version/serial/DeployApp 10.7
306 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 2.05
307 TestStartStop/group/old-k8s-version/serial/Stop 11.09
308 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.26
309 TestStartStop/group/old-k8s-version/serial/SecondStart 369.19
310 TestStartStop/group/no-preload/serial/DeployApp 9.51
311 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 1.03
312 TestStartStop/group/no-preload/serial/Stop 10.93
313 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.18
314 TestStartStop/group/no-preload/serial/SecondStart 266.37
315 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 6
316 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 6.1
317 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.23
318 TestStartStop/group/no-preload/serial/Pause 2.82
320 TestStartStop/group/embed-certs/serial/FirstStart 51.31
321 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 6.01
322 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 5.12
323 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.23
324 TestStartStop/group/old-k8s-version/serial/Pause 2.85
325 TestStartStop/group/embed-certs/serial/DeployApp 11.44
327 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 81.12
328 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 1.29
329 TestStartStop/group/embed-certs/serial/Stop 11.18
330 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.24
331 TestStartStop/group/embed-certs/serial/SecondStart 270.76
332 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 10.38
333 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 1.11
334 TestStartStop/group/default-k8s-diff-port/serial/Stop 10.8
335 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.18
336 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 266.95
337 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 6
338 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 6.1
339 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.22
340 TestStartStop/group/embed-certs/serial/Pause 2.65
342 TestStartStop/group/newest-cni/serial/FirstStart 39.63
343 TestStartStop/group/newest-cni/serial/DeployApp 0
344 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 1.03
345 TestStartStop/group/newest-cni/serial/Stop 11.15
346 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.17
347 TestStartStop/group/newest-cni/serial/SecondStart 18.37
348 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 6
349 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 5.16
350 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
351 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
352 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.26
353 TestStartStop/group/newest-cni/serial/Pause 2.68
354 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.31
355 TestStartStop/group/default-k8s-diff-port/serial/Pause 3.57
356 TestNetworkPlugins/group/auto/Start 54.79
357 TestNetworkPlugins/group/kindnet/Start 71.13
358 TestNetworkPlugins/group/auto/KubeletFlags 0.4
359 TestNetworkPlugins/group/auto/NetCatPod 15.37
360 TestNetworkPlugins/group/auto/DNS 0.27
361 TestNetworkPlugins/group/auto/Localhost 0.26
362 TestNetworkPlugins/group/auto/HairPin 0.23
363 TestNetworkPlugins/group/kindnet/ControllerPod 6.01
364 TestNetworkPlugins/group/kindnet/KubeletFlags 0.34
365 TestNetworkPlugins/group/kindnet/NetCatPod 10.33
366 TestNetworkPlugins/group/calico/Start 73.89
367 TestNetworkPlugins/group/kindnet/DNS 0.2
368 TestNetworkPlugins/group/kindnet/Localhost 0.22
369 TestNetworkPlugins/group/kindnet/HairPin 0.21
370 TestNetworkPlugins/group/custom-flannel/Start 63.63
371 TestNetworkPlugins/group/calico/ControllerPod 6.01
372 TestNetworkPlugins/group/calico/KubeletFlags 0.39
373 TestNetworkPlugins/group/calico/NetCatPod 13.38
374 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.28
375 TestNetworkPlugins/group/custom-flannel/NetCatPod 10.26
376 TestNetworkPlugins/group/calico/DNS 0.26
377 TestNetworkPlugins/group/calico/Localhost 0.19
378 TestNetworkPlugins/group/calico/HairPin 0.2
379 TestNetworkPlugins/group/custom-flannel/DNS 0.28
380 TestNetworkPlugins/group/custom-flannel/Localhost 0.28
381 TestNetworkPlugins/group/custom-flannel/HairPin 0.23
382 TestNetworkPlugins/group/false/Start 88.15
383 TestNetworkPlugins/group/enable-default-cni/Start 76.56
384 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.28
385 TestNetworkPlugins/group/enable-default-cni/NetCatPod 10.29
386 TestNetworkPlugins/group/false/KubeletFlags 0.34
387 TestNetworkPlugins/group/false/NetCatPod 11.32
388 TestNetworkPlugins/group/enable-default-cni/DNS 0.19
389 TestNetworkPlugins/group/enable-default-cni/Localhost 0.16
390 TestNetworkPlugins/group/enable-default-cni/HairPin 0.16
391 TestNetworkPlugins/group/false/DNS 0.19
392 TestNetworkPlugins/group/false/Localhost 0.16
393 TestNetworkPlugins/group/false/HairPin 0.16
394 TestNetworkPlugins/group/flannel/Start 59.94
395 TestNetworkPlugins/group/bridge/Start 74.8
396 TestNetworkPlugins/group/flannel/ControllerPod 6.01
397 TestNetworkPlugins/group/flannel/KubeletFlags 0.28
398 TestNetworkPlugins/group/flannel/NetCatPod 10.24
399 TestNetworkPlugins/group/flannel/DNS 0.21
400 TestNetworkPlugins/group/flannel/Localhost 0.16
401 TestNetworkPlugins/group/flannel/HairPin 0.17
402 TestNetworkPlugins/group/bridge/KubeletFlags 0.28
403 TestNetworkPlugins/group/bridge/NetCatPod 11.25
404 TestNetworkPlugins/group/bridge/DNS 0.33
405 TestNetworkPlugins/group/bridge/Localhost 0.22
406 TestNetworkPlugins/group/bridge/HairPin 0.24
407 TestNetworkPlugins/group/kubenet/Start 46.54
408 TestNetworkPlugins/group/kubenet/KubeletFlags 0.27
409 TestNetworkPlugins/group/kubenet/NetCatPod 12.24
410 TestNetworkPlugins/group/kubenet/DNS 16.22
411 TestNetworkPlugins/group/kubenet/Localhost 0.15
412 TestNetworkPlugins/group/kubenet/HairPin 0.2
TestDownloadOnly/v1.20.0/json-events (6.07s)

=== RUN   TestDownloadOnly/v1.20.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-arm64 start -o=json --download-only -p download-only-464574 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=docker --driver=docker  --container-runtime=docker
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-arm64 start -o=json --download-only -p download-only-464574 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=docker --driver=docker  --container-runtime=docker: (6.068132191s)
--- PASS: TestDownloadOnly/v1.20.0/json-events (6.07s)

TestDownloadOnly/v1.20.0/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.20.0/preload-exists
I0930 10:20:31.809004    7606 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
I0930 10:20:31.809090    7606 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19734-2285/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
--- PASS: TestDownloadOnly/v1.20.0/preload-exists (0.00s)

TestDownloadOnly/v1.20.0/LogsDuration (0.07s)

=== RUN   TestDownloadOnly/v1.20.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-arm64 logs -p download-only-464574
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-arm64 logs -p download-only-464574: exit status 85 (65.75527ms)

-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-464574 | jenkins | v1.34.0 | 30 Sep 24 10:20 UTC |          |
	|         | -p download-only-464574        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |          |
	|         | --container-runtime=docker     |                      |         |         |                     |          |
	|         | --driver=docker                |                      |         |         |                     |          |
	|         | --container-runtime=docker     |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/30 10:20:25
	Running on machine: ip-172-31-30-239
	Binary: Built with gc go1.23.0 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0930 10:20:25.777824    7611 out.go:345] Setting OutFile to fd 1 ...
	I0930 10:20:25.778024    7611 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0930 10:20:25.778050    7611 out.go:358] Setting ErrFile to fd 2...
	I0930 10:20:25.778071    7611 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0930 10:20:25.778341    7611 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19734-2285/.minikube/bin
	W0930 10:20:25.778502    7611 root.go:314] Error reading config file at /home/jenkins/minikube-integration/19734-2285/.minikube/config/config.json: open /home/jenkins/minikube-integration/19734-2285/.minikube/config/config.json: no such file or directory
	I0930 10:20:25.778953    7611 out.go:352] Setting JSON to true
	I0930 10:20:25.779796    7611 start.go:129] hostinfo: {"hostname":"ip-172-31-30-239","uptime":174,"bootTime":1727691452,"procs":151,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1070-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I0930 10:20:25.779889    7611 start.go:139] virtualization:  
	I0930 10:20:25.783117    7611 out.go:97] [download-only-464574] minikube v1.34.0 on Ubuntu 20.04 (arm64)
	W0930 10:20:25.783246    7611 preload.go:293] Failed to list preload files: open /home/jenkins/minikube-integration/19734-2285/.minikube/cache/preloaded-tarball: no such file or directory
	I0930 10:20:25.783358    7611 notify.go:220] Checking for updates...
	I0930 10:20:25.785250    7611 out.go:169] MINIKUBE_LOCATION=19734
	I0930 10:20:25.787214    7611 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0930 10:20:25.788982    7611 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/19734-2285/kubeconfig
	I0930 10:20:25.790612    7611 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/19734-2285/.minikube
	I0930 10:20:25.792321    7611 out.go:169] MINIKUBE_BIN=out/minikube-linux-arm64
	W0930 10:20:25.795376    7611 out.go:321] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0930 10:20:25.795658    7611 driver.go:394] Setting default libvirt URI to qemu:///system
	I0930 10:20:25.817279    7611 docker.go:123] docker version: linux-27.3.1:Docker Engine - Community
	I0930 10:20:25.817395    7611 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0930 10:20:26.117088    7611 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:29 OomKillDisable:true NGoroutines:52 SystemTime:2024-09-30 10:20:26.107802079 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1070-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:27.3.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: bridge-nf-call-iptables is disabled WARNING: bridge-nf-call-ip6tables is disabled] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.17.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.7]] Warnings:<nil>}}
	I0930 10:20:26.117196    7611 docker.go:318] overlay module found
	I0930 10:20:26.119099    7611 out.go:97] Using the docker driver based on user configuration
	I0930 10:20:26.119125    7611 start.go:297] selected driver: docker
	I0930 10:20:26.119132    7611 start.go:901] validating driver "docker" against <nil>
	I0930 10:20:26.119256    7611 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0930 10:20:26.168581    7611 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:29 OomKillDisable:true NGoroutines:52 SystemTime:2024-09-30 10:20:26.159826433 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1070-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:27.3.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: bridge-nf-call-iptables is disabled WARNING: bridge-nf-call-ip6tables is disabled] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.17.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.7]] Warnings:<nil>}}
	I0930 10:20:26.168784    7611 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0930 10:20:26.169075    7611 start_flags.go:393] Using suggested 2200MB memory alloc based on sys=7834MB, container=7834MB
	I0930 10:20:26.169250    7611 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0930 10:20:26.171142    7611 out.go:169] Using Docker driver with root privileges
	I0930 10:20:26.172507    7611 cni.go:84] Creating CNI manager for ""
	I0930 10:20:26.172578    7611 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0930 10:20:26.172654    7611 start.go:340] cluster config:
	{Name:download-only-464574 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:download-only-464574 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0930 10:20:26.174530    7611 out.go:97] Starting "download-only-464574" primary control-plane node in "download-only-464574" cluster
	I0930 10:20:26.174548    7611 cache.go:121] Beginning downloading kic base image for docker with docker
	I0930 10:20:26.176395    7611 out.go:97] Pulling base image v0.0.45-1727108449-19696 ...
	I0930 10:20:26.176418    7611 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0930 10:20:26.176454    7611 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 in local docker daemon
	I0930 10:20:26.191687    7611 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 to local cache
	I0930 10:20:26.191867    7611 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 in local cache directory
	I0930 10:20:26.191975    7611 image.go:148] Writing gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 to local cache
	I0930 10:20:26.244625    7611 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
	I0930 10:20:26.244651    7611 cache.go:56] Caching tarball of preloaded images
	I0930 10:20:26.244826    7611 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0930 10:20:26.247082    7611 out.go:97] Downloading Kubernetes v1.20.0 preload ...
	I0930 10:20:26.247098    7611 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 ...
	I0930 10:20:26.327513    7611 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4?checksum=md5:1a3e8f9b29e6affec63d76d0d3000942 -> /home/jenkins/minikube-integration/19734-2285/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4
	I0930 10:20:30.109700    7611 preload.go:247] saving checksum for preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 ...
	I0930 10:20:30.109849    7611 preload.go:254] verifying checksum of /home/jenkins/minikube-integration/19734-2285/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-arm64.tar.lz4 ...
	
	
	* The control-plane node download-only-464574 host does not exist
	  To start a cluster, run: "minikube start -p download-only-464574"

-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.20.0/LogsDuration (0.07s)

TestDownloadOnly/v1.20.0/DeleteAll (0.19s)

=== RUN   TestDownloadOnly/v1.20.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-arm64 delete --all
--- PASS: TestDownloadOnly/v1.20.0/DeleteAll (0.19s)

TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.13s)

=== RUN   TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-arm64 delete -p download-only-464574
--- PASS: TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.13s)

TestDownloadOnly/v1.31.1/json-events (4.11s)

=== RUN   TestDownloadOnly/v1.31.1/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-arm64 start -o=json --download-only -p download-only-328857 --force --alsologtostderr --kubernetes-version=v1.31.1 --container-runtime=docker --driver=docker  --container-runtime=docker
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-arm64 start -o=json --download-only -p download-only-328857 --force --alsologtostderr --kubernetes-version=v1.31.1 --container-runtime=docker --driver=docker  --container-runtime=docker: (4.105714131s)
--- PASS: TestDownloadOnly/v1.31.1/json-events (4.11s)

TestDownloadOnly/v1.31.1/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.31.1/preload-exists
I0930 10:20:36.298384    7606 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime docker
I0930 10:20:36.298420    7606 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19734-2285/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-docker-overlay2-arm64.tar.lz4
--- PASS: TestDownloadOnly/v1.31.1/preload-exists (0.00s)

x
+
TestDownloadOnly/v1.31.1/LogsDuration (0.06s)

=== RUN   TestDownloadOnly/v1.31.1/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-arm64 logs -p download-only-328857
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-arm64 logs -p download-only-328857: exit status 85 (64.444495ms)

-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only        | download-only-464574 | jenkins | v1.34.0 | 30 Sep 24 10:20 UTC |                     |
	|         | -p download-only-464574        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |                     |
	|         | --container-runtime=docker     |                      |         |         |                     |                     |
	|         | --driver=docker                |                      |         |         |                     |                     |
	|         | --container-runtime=docker     |                      |         |         |                     |                     |
	| delete  | --all                          | minikube             | jenkins | v1.34.0 | 30 Sep 24 10:20 UTC | 30 Sep 24 10:20 UTC |
	| delete  | -p download-only-464574        | download-only-464574 | jenkins | v1.34.0 | 30 Sep 24 10:20 UTC | 30 Sep 24 10:20 UTC |
	| start   | -o=json --download-only        | download-only-328857 | jenkins | v1.34.0 | 30 Sep 24 10:20 UTC |                     |
	|         | -p download-only-328857        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1   |                      |         |         |                     |                     |
	|         | --container-runtime=docker     |                      |         |         |                     |                     |
	|         | --driver=docker                |                      |         |         |                     |                     |
	|         | --container-runtime=docker     |                      |         |         |                     |                     |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/30 10:20:32
	Running on machine: ip-172-31-30-239
	Binary: Built with gc go1.23.0 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0930 10:20:32.234624    7809 out.go:345] Setting OutFile to fd 1 ...
	I0930 10:20:32.235082    7809 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0930 10:20:32.235096    7809 out.go:358] Setting ErrFile to fd 2...
	I0930 10:20:32.235102    7809 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0930 10:20:32.235350    7809 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19734-2285/.minikube/bin
	I0930 10:20:32.235793    7809 out.go:352] Setting JSON to true
	I0930 10:20:32.236511    7809 start.go:129] hostinfo: {"hostname":"ip-172-31-30-239","uptime":181,"bootTime":1727691452,"procs":149,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1070-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I0930 10:20:32.236580    7809 start.go:139] virtualization:  
	I0930 10:20:32.238743    7809 out.go:97] [download-only-328857] minikube v1.34.0 on Ubuntu 20.04 (arm64)
	I0930 10:20:32.238977    7809 notify.go:220] Checking for updates...
	I0930 10:20:32.240750    7809 out.go:169] MINIKUBE_LOCATION=19734
	I0930 10:20:32.242752    7809 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0930 10:20:32.244823    7809 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/19734-2285/kubeconfig
	I0930 10:20:32.246580    7809 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/19734-2285/.minikube
	I0930 10:20:32.248556    7809 out.go:169] MINIKUBE_BIN=out/minikube-linux-arm64
	W0930 10:20:32.252012    7809 out.go:321] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0930 10:20:32.252242    7809 driver.go:394] Setting default libvirt URI to qemu:///system
	I0930 10:20:32.279252    7809 docker.go:123] docker version: linux-27.3.1:Docker Engine - Community
	I0930 10:20:32.279362    7809 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0930 10:20:32.334366    7809 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:29 OomKillDisable:true NGoroutines:45 SystemTime:2024-09-30 10:20:32.324942873 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1070-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:27.3.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: bridge-nf-call-iptables is disabled WARNING: bridge-nf-call-ip6tables is disabled] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.17.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.7]] Warnings:<nil>}}
	I0930 10:20:32.334477    7809 docker.go:318] overlay module found
	I0930 10:20:32.336753    7809 out.go:97] Using the docker driver based on user configuration
	I0930 10:20:32.336785    7809 start.go:297] selected driver: docker
	I0930 10:20:32.336792    7809 start.go:901] validating driver "docker" against <nil>
	I0930 10:20:32.336899    7809 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0930 10:20:32.392288    7809 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:29 OomKillDisable:true NGoroutines:45 SystemTime:2024-09-30 10:20:32.382823625 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1070-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:27.3.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: bridge-nf-call-iptables is disabled WARNING: bridge-nf-call-ip6tables is disabled] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.17.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.7]] Warnings:<nil>}}
	I0930 10:20:32.392497    7809 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0930 10:20:32.392778    7809 start_flags.go:393] Using suggested 2200MB memory alloc based on sys=7834MB, container=7834MB
	I0930 10:20:32.392929    7809 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0930 10:20:32.395635    7809 out.go:169] Using Docker driver with root privileges
	
	
	* The control-plane node download-only-328857 host does not exist
	  To start a cluster, run: "minikube start -p download-only-328857"

-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.31.1/LogsDuration (0.06s)

TestDownloadOnly/v1.31.1/DeleteAll (0.19s)

=== RUN   TestDownloadOnly/v1.31.1/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-arm64 delete --all
--- PASS: TestDownloadOnly/v1.31.1/DeleteAll (0.19s)

TestDownloadOnly/v1.31.1/DeleteAlwaysSucceeds (0.12s)

=== RUN   TestDownloadOnly/v1.31.1/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-arm64 delete -p download-only-328857
--- PASS: TestDownloadOnly/v1.31.1/DeleteAlwaysSucceeds (0.12s)

TestBinaryMirror (0.58s)

=== RUN   TestBinaryMirror
I0930 10:20:37.431520    7606 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.1/bin/linux/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.31.1/bin/linux/arm64/kubectl.sha256
aaa_download_only_test.go:314: (dbg) Run:  out/minikube-linux-arm64 start --download-only -p binary-mirror-159609 --alsologtostderr --binary-mirror http://127.0.0.1:34175 --driver=docker  --container-runtime=docker
helpers_test.go:175: Cleaning up "binary-mirror-159609" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p binary-mirror-159609
--- PASS: TestBinaryMirror (0.58s)

TestOffline (89.52s)

=== RUN   TestOffline
=== PAUSE TestOffline

=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-linux-arm64 start -p offline-docker-318038 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=docker  --container-runtime=docker
aab_offline_test.go:55: (dbg) Done: out/minikube-linux-arm64 start -p offline-docker-318038 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=docker  --container-runtime=docker: (1m27.364333826s)
helpers_test.go:175: Cleaning up "offline-docker-318038" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p offline-docker-318038
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p offline-docker-318038: (2.155973512s)
--- PASS: TestOffline (89.52s)

TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.1s)

=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:975: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p addons-703944
addons_test.go:975: (dbg) Non-zero exit: out/minikube-linux-arm64 addons enable dashboard -p addons-703944: exit status 85 (102.227319ms)

-- stdout --
	* Profile "addons-703944" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-703944"

-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.10s)

TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.09s)

=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:986: (dbg) Run:  out/minikube-linux-arm64 addons disable dashboard -p addons-703944
addons_test.go:986: (dbg) Non-zero exit: out/minikube-linux-arm64 addons disable dashboard -p addons-703944: exit status 85 (85.538709ms)

-- stdout --
	* Profile "addons-703944" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-703944"

-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.09s)

TestAddons/Setup (220.75s)

=== RUN   TestAddons/Setup
addons_test.go:107: (dbg) Run:  out/minikube-linux-arm64 start -p addons-703944 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --addons=volcano --driver=docker  --container-runtime=docker --addons=ingress --addons=ingress-dns
addons_test.go:107: (dbg) Done: out/minikube-linux-arm64 start -p addons-703944 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --addons=volcano --driver=docker  --container-runtime=docker --addons=ingress --addons=ingress-dns: (3m40.750483709s)
--- PASS: TestAddons/Setup (220.75s)

TestAddons/serial/Volcano (40.77s)

=== RUN   TestAddons/serial/Volcano
addons_test.go:835: volcano-scheduler stabilized in 56.380816ms
addons_test.go:843: volcano-admission stabilized in 56.703869ms
addons_test.go:851: volcano-controller stabilized in 56.730035ms
addons_test.go:857: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-scheduler" in namespace "volcano-system" ...
helpers_test.go:344: "volcano-scheduler-6c9778cbdf-psbnr" [1e41e1e4-0772-43f5-93e1-ef74967f3da3] Running
addons_test.go:857: (dbg) TestAddons/serial/Volcano: app=volcano-scheduler healthy within 6.003190691s
addons_test.go:861: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-admission" in namespace "volcano-system" ...
helpers_test.go:344: "volcano-admission-5874dfdd79-66xcs" [e4d09124-b37f-4436-b719-f788545f8d58] Running
addons_test.go:861: (dbg) TestAddons/serial/Volcano: app=volcano-admission healthy within 5.004159368s
addons_test.go:865: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-controller" in namespace "volcano-system" ...
helpers_test.go:344: "volcano-controllers-789ffc5785-9mtjs" [fc276df2-a8ff-495f-b35d-c90c16a01d7a] Running
addons_test.go:865: (dbg) TestAddons/serial/Volcano: app=volcano-controller healthy within 5.003773846s
addons_test.go:870: (dbg) Run:  kubectl --context addons-703944 delete -n volcano-system job volcano-admission-init
addons_test.go:876: (dbg) Run:  kubectl --context addons-703944 create -f testdata/vcjob.yaml
addons_test.go:884: (dbg) Run:  kubectl --context addons-703944 get vcjob -n my-volcano
addons_test.go:902: (dbg) TestAddons/serial/Volcano: waiting 3m0s for pods matching "volcano.sh/job-name=test-job" in namespace "my-volcano" ...
helpers_test.go:344: "test-job-nginx-0" [e460362b-9920-4439-931b-655a700fbb26] Pending
helpers_test.go:344: "test-job-nginx-0" [e460362b-9920-4439-931b-655a700fbb26] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "test-job-nginx-0" [e460362b-9920-4439-931b-655a700fbb26] Running
addons_test.go:902: (dbg) TestAddons/serial/Volcano: volcano.sh/job-name=test-job healthy within 13.002960012s
addons_test.go:906: (dbg) Run:  out/minikube-linux-arm64 -p addons-703944 addons disable volcano --alsologtostderr -v=1
addons_test.go:906: (dbg) Done: out/minikube-linux-arm64 -p addons-703944 addons disable volcano --alsologtostderr -v=1: (11.145260532s)
--- PASS: TestAddons/serial/Volcano (40.77s)

TestAddons/serial/GCPAuth/Namespaces (0.17s)

=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:594: (dbg) Run:  kubectl --context addons-703944 create ns new-namespace
addons_test.go:608: (dbg) Run:  kubectl --context addons-703944 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.17s)

TestAddons/parallel/Ingress (18.31s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

=== CONT  TestAddons/parallel/Ingress
addons_test.go:205: (dbg) Run:  kubectl --context addons-703944 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:230: (dbg) Run:  kubectl --context addons-703944 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:243: (dbg) Run:  kubectl --context addons-703944 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:248: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [50c65744-2f5d-4ad4-be4f-c093cd0f1876] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [50c65744-2f5d-4ad4-be4f-c093cd0f1876] Running
addons_test.go:248: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 8.003567457s
I0930 10:34:57.354003    7606 kapi.go:150] Service nginx in namespace default found.
addons_test.go:260: (dbg) Run:  out/minikube-linux-arm64 -p addons-703944 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:284: (dbg) Run:  kubectl --context addons-703944 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:289: (dbg) Run:  out/minikube-linux-arm64 -p addons-703944 ip
addons_test.go:295: (dbg) Run:  nslookup hello-john.test 192.168.49.2
addons_test.go:304: (dbg) Run:  out/minikube-linux-arm64 -p addons-703944 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:304: (dbg) Done: out/minikube-linux-arm64 -p addons-703944 addons disable ingress-dns --alsologtostderr -v=1: (1.151228825s)
addons_test.go:309: (dbg) Run:  out/minikube-linux-arm64 -p addons-703944 addons disable ingress --alsologtostderr -v=1
addons_test.go:309: (dbg) Done: out/minikube-linux-arm64 -p addons-703944 addons disable ingress --alsologtostderr -v=1: (7.672257594s)
--- PASS: TestAddons/parallel/Ingress (18.31s)

TestAddons/parallel/InspektorGadget (11.89s)

=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:786: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:344: "gadget-7txl9" [68a99b26-4fed-43ff-be54-bce8abbabadf] Running
addons_test.go:786: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 6.004040678s
addons_test.go:789: (dbg) Run:  out/minikube-linux-arm64 addons disable inspektor-gadget -p addons-703944
addons_test.go:789: (dbg) Done: out/minikube-linux-arm64 addons disable inspektor-gadget -p addons-703944: (5.883079947s)
--- PASS: TestAddons/parallel/InspektorGadget (11.89s)

TestAddons/parallel/MetricsServer (6.67s)

=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:405: metrics-server stabilized in 2.324856ms
addons_test.go:407: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:344: "metrics-server-84c5f94fbc-72src" [2328d76c-f121-44e6-894a-b82153cbb0b7] Running
addons_test.go:407: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 6.0030012s
addons_test.go:413: (dbg) Run:  kubectl --context addons-703944 top pods -n kube-system
addons_test.go:430: (dbg) Run:  out/minikube-linux-arm64 -p addons-703944 addons disable metrics-server --alsologtostderr -v=1
--- PASS: TestAddons/parallel/MetricsServer (6.67s)

TestAddons/parallel/CSI (34.95s)

=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

=== CONT  TestAddons/parallel/CSI
I0930 10:33:14.799747    7606 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
I0930 10:33:14.805153    7606 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
I0930 10:33:14.805181    7606 kapi.go:107] duration metric: took 8.574993ms to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
addons_test.go:505: csi-hostpath-driver pods stabilized in 8.584068ms
addons_test.go:508: (dbg) Run:  kubectl --context addons-703944 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:513: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-703944 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-703944 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-703944 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-703944 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-703944 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-703944 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-703944 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-703944 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-703944 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:518: (dbg) Run:  kubectl --context addons-703944 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:523: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:344: "task-pv-pod" [a6c1d169-3b17-4653-b48a-25a48cd91dac] Pending
helpers_test.go:344: "task-pv-pod" [a6c1d169-3b17-4653-b48a-25a48cd91dac] Running
addons_test.go:523: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 6.003763799s
addons_test.go:528: (dbg) Run:  kubectl --context addons-703944 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:533: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:419: (dbg) Run:  kubectl --context addons-703944 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Run:  kubectl --context addons-703944 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:538: (dbg) Run:  kubectl --context addons-703944 delete pod task-pv-pod
addons_test.go:544: (dbg) Run:  kubectl --context addons-703944 delete pvc hpvc
addons_test.go:550: (dbg) Run:  kubectl --context addons-703944 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:555: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-703944 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-703944 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:560: (dbg) Run:  kubectl --context addons-703944 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:565: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:344: "task-pv-pod-restore" [94f4f2b3-04c7-43eb-b75b-8b484b3709b8] Pending
helpers_test.go:344: "task-pv-pod-restore" [94f4f2b3-04c7-43eb-b75b-8b484b3709b8] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod-restore" [94f4f2b3-04c7-43eb-b75b-8b484b3709b8] Running
addons_test.go:565: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 8.004752144s
addons_test.go:570: (dbg) Run:  kubectl --context addons-703944 delete pod task-pv-pod-restore
addons_test.go:574: (dbg) Run:  kubectl --context addons-703944 delete pvc hpvc-restore
addons_test.go:578: (dbg) Run:  kubectl --context addons-703944 delete volumesnapshot new-snapshot-demo
addons_test.go:582: (dbg) Run:  out/minikube-linux-arm64 -p addons-703944 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:582: (dbg) Done: out/minikube-linux-arm64 -p addons-703944 addons disable csi-hostpath-driver --alsologtostderr -v=1: (6.695916181s)
addons_test.go:586: (dbg) Run:  out/minikube-linux-arm64 -p addons-703944 addons disable volumesnapshots --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CSI (34.95s)

TestAddons/parallel/Headlamp (16.68s)

=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

=== CONT  TestAddons/parallel/Headlamp
addons_test.go:768: (dbg) Run:  out/minikube-linux-arm64 addons enable headlamp -p addons-703944 --alsologtostderr -v=1
addons_test.go:768: (dbg) Done: out/minikube-linux-arm64 addons enable headlamp -p addons-703944 --alsologtostderr -v=1: (1.000316609s)
addons_test.go:773: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:344: "headlamp-7b5c95b59d-692kr" [669cf1d2-20fa-4c40-b7c2-7d817c6634d8] Pending
helpers_test.go:344: "headlamp-7b5c95b59d-692kr" [669cf1d2-20fa-4c40-b7c2-7d817c6634d8] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:344: "headlamp-7b5c95b59d-692kr" [669cf1d2-20fa-4c40-b7c2-7d817c6634d8] Running
addons_test.go:773: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 10.003750709s
addons_test.go:777: (dbg) Run:  out/minikube-linux-arm64 -p addons-703944 addons disable headlamp --alsologtostderr -v=1
addons_test.go:777: (dbg) Done: out/minikube-linux-arm64 -p addons-703944 addons disable headlamp --alsologtostderr -v=1: (5.670729831s)
--- PASS: TestAddons/parallel/Headlamp (16.68s)

TestAddons/parallel/CloudSpanner (6.51s)

=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:805: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:344: "cloud-spanner-emulator-5b584cc74-zl2c5" [e56cd41e-be01-4d8d-97d8-2c2eb0726033] Running
addons_test.go:805: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 6.003640904s
addons_test.go:808: (dbg) Run:  out/minikube-linux-arm64 addons disable cloud-spanner -p addons-703944
--- PASS: TestAddons/parallel/CloudSpanner (6.51s)

                                                
                                    
TestAddons/parallel/LocalPath (52.52s)

=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath

=== CONT  TestAddons/parallel/LocalPath
addons_test.go:920: (dbg) Run:  kubectl --context addons-703944 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:926: (dbg) Run:  kubectl --context addons-703944 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:930: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-703944 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-703944 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-703944 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-703944 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-703944 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-703944 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:933: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:344: "test-local-path" [5fdc9d25-b409-4a1a-b577-09c4fe6ede4e] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "test-local-path" [5fdc9d25-b409-4a1a-b577-09c4fe6ede4e] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "test-local-path" [5fdc9d25-b409-4a1a-b577-09c4fe6ede4e] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:933: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 3.003892706s
addons_test.go:938: (dbg) Run:  kubectl --context addons-703944 get pvc test-pvc -o=json
addons_test.go:947: (dbg) Run:  out/minikube-linux-arm64 -p addons-703944 ssh "cat /opt/local-path-provisioner/pvc-c80c0af4-a393-4a05-9c4d-cc7ecf4f0af4_default_test-pvc/file1"
addons_test.go:959: (dbg) Run:  kubectl --context addons-703944 delete pod test-local-path
addons_test.go:963: (dbg) Run:  kubectl --context addons-703944 delete pvc test-pvc
addons_test.go:967: (dbg) Run:  out/minikube-linux-arm64 -p addons-703944 addons disable storage-provisioner-rancher --alsologtostderr -v=1
addons_test.go:967: (dbg) Done: out/minikube-linux-arm64 -p addons-703944 addons disable storage-provisioner-rancher --alsologtostderr -v=1: (43.338738014s)
--- PASS: TestAddons/parallel/LocalPath (52.52s)

                                                
                                    
TestAddons/parallel/NvidiaDevicePlugin (6.5s)

=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin

=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:999: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:344: "nvidia-device-plugin-daemonset-ftwnl" [8b10a7e7-ec39-4b16-8d9f-33979a0e6e8d] Running
addons_test.go:999: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 6.003924838s
addons_test.go:1002: (dbg) Run:  out/minikube-linux-arm64 addons disable nvidia-device-plugin -p addons-703944
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (6.50s)

                                                
                                    
TestAddons/parallel/Yakd (11.63s)

=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd

=== CONT  TestAddons/parallel/Yakd
addons_test.go:1010: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:344: "yakd-dashboard-67d98fc6b-d7q8s" [669dd1d0-232a-47fe-90d9-b8c2b494ef94] Running
addons_test.go:1010: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 6.003913174s
addons_test.go:1014: (dbg) Run:  out/minikube-linux-arm64 -p addons-703944 addons disable yakd --alsologtostderr -v=1
addons_test.go:1014: (dbg) Done: out/minikube-linux-arm64 -p addons-703944 addons disable yakd --alsologtostderr -v=1: (5.624388934s)
--- PASS: TestAddons/parallel/Yakd (11.63s)

                                                
                                    
TestAddons/StoppedEnableDisable (5.95s)

=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:170: (dbg) Run:  out/minikube-linux-arm64 stop -p addons-703944
addons_test.go:170: (dbg) Done: out/minikube-linux-arm64 stop -p addons-703944: (5.694222984s)
addons_test.go:174: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p addons-703944
addons_test.go:178: (dbg) Run:  out/minikube-linux-arm64 addons disable dashboard -p addons-703944
addons_test.go:183: (dbg) Run:  out/minikube-linux-arm64 addons disable gvisor -p addons-703944
--- PASS: TestAddons/StoppedEnableDisable (5.95s)

                                                
                                    
TestCertOptions (34.4s)

=== RUN   TestCertOptions
=== PAUSE TestCertOptions

=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-arm64 start -p cert-options-777081 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=docker
E0930 11:19:07.631998    7606 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19734-2285/.minikube/profiles/skaffold-942200/client.crt: no such file or directory" logger="UnhandledError"
E0930 11:19:18.897994    7606 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19734-2285/.minikube/profiles/addons-703944/client.crt: no such file or directory" logger="UnhandledError"
cert_options_test.go:49: (dbg) Done: out/minikube-linux-arm64 start -p cert-options-777081 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=docker: (31.728650647s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-arm64 -p cert-options-777081 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-777081 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-arm64 ssh -p cert-options-777081 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:175: Cleaning up "cert-options-777081" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p cert-options-777081
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p cert-options-777081: (2.075790312s)
--- PASS: TestCertOptions (34.40s)

                                                
                                    
TestCertExpiration (247.3s)

=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-arm64 start -p cert-expiration-611916 --memory=2048 --cert-expiration=3m --driver=docker  --container-runtime=docker
cert_options_test.go:123: (dbg) Done: out/minikube-linux-arm64 start -p cert-expiration-611916 --memory=2048 --cert-expiration=3m --driver=docker  --container-runtime=docker: (40.100231957s)
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-arm64 start -p cert-expiration-611916 --memory=2048 --cert-expiration=8760h --driver=docker  --container-runtime=docker
cert_options_test.go:131: (dbg) Done: out/minikube-linux-arm64 start -p cert-expiration-611916 --memory=2048 --cert-expiration=8760h --driver=docker  --container-runtime=docker: (25.065691453s)
helpers_test.go:175: Cleaning up "cert-expiration-611916" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p cert-expiration-611916
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p cert-expiration-611916: (2.128949682s)
--- PASS: TestCertExpiration (247.30s)

                                                
                                    
TestDockerFlags (40.36s)

=== RUN   TestDockerFlags
=== PAUSE TestDockerFlags

=== CONT  TestDockerFlags
docker_test.go:51: (dbg) Run:  out/minikube-linux-arm64 start -p docker-flags-342507 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=docker  --container-runtime=docker
E0930 11:18:39.930575    7606 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19734-2285/.minikube/profiles/skaffold-942200/client.crt: no such file or directory" logger="UnhandledError"
docker_test.go:51: (dbg) Done: out/minikube-linux-arm64 start -p docker-flags-342507 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=docker  --container-runtime=docker: (37.758824271s)
docker_test.go:56: (dbg) Run:  out/minikube-linux-arm64 -p docker-flags-342507 ssh "sudo systemctl show docker --property=Environment --no-pager"
docker_test.go:67: (dbg) Run:  out/minikube-linux-arm64 -p docker-flags-342507 ssh "sudo systemctl show docker --property=ExecStart --no-pager"
helpers_test.go:175: Cleaning up "docker-flags-342507" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p docker-flags-342507
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p docker-flags-342507: (2.040706314s)
--- PASS: TestDockerFlags (40.36s)

                                                
                                    
TestForceSystemdFlag (38.01s)

=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-arm64 start -p force-systemd-flag-764101 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=docker
E0930 11:16:03.577247    7606 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19734-2285/.minikube/profiles/functional-656644/client.crt: no such file or directory" logger="UnhandledError"
E0930 11:16:23.788944    7606 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19734-2285/.minikube/profiles/skaffold-942200/client.crt: no such file or directory" logger="UnhandledError"
docker_test.go:91: (dbg) Done: out/minikube-linux-arm64 start -p force-systemd-flag-764101 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=docker: (35.429486956s)
docker_test.go:110: (dbg) Run:  out/minikube-linux-arm64 -p force-systemd-flag-764101 ssh "docker info --format {{.CgroupDriver}}"
helpers_test.go:175: Cleaning up "force-systemd-flag-764101" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p force-systemd-flag-764101
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p force-systemd-flag-764101: (2.265644683s)
--- PASS: TestForceSystemdFlag (38.01s)

                                                
                                    
TestForceSystemdEnv (40.98s)

=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-arm64 start -p force-systemd-env-213536 --memory=2048 --alsologtostderr -v=5 --driver=docker  --container-runtime=docker
docker_test.go:155: (dbg) Done: out/minikube-linux-arm64 start -p force-systemd-env-213536 --memory=2048 --alsologtostderr -v=5 --driver=docker  --container-runtime=docker: (38.212786631s)
docker_test.go:110: (dbg) Run:  out/minikube-linux-arm64 -p force-systemd-env-213536 ssh "docker info --format {{.CgroupDriver}}"
helpers_test.go:175: Cleaning up "force-systemd-env-213536" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p force-systemd-env-213536
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p force-systemd-env-213536: (2.277476823s)
--- PASS: TestForceSystemdEnv (40.98s)

                                                
                                    
TestErrorSpam/setup (29.92s)

=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-arm64 start -p nospam-520820 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-520820 --driver=docker  --container-runtime=docker
error_spam_test.go:81: (dbg) Done: out/minikube-linux-arm64 start -p nospam-520820 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-520820 --driver=docker  --container-runtime=docker: (29.916865213s)
--- PASS: TestErrorSpam/setup (29.92s)

                                                
                                    
TestErrorSpam/start (0.73s)

=== RUN   TestErrorSpam/start
error_spam_test.go:216: Cleaning up 1 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-520820 --log_dir /tmp/nospam-520820 start --dry-run
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-520820 --log_dir /tmp/nospam-520820 start --dry-run
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-520820 --log_dir /tmp/nospam-520820 start --dry-run
--- PASS: TestErrorSpam/start (0.73s)

                                                
                                    
TestErrorSpam/status (0.99s)

=== RUN   TestErrorSpam/status
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-520820 --log_dir /tmp/nospam-520820 status
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-520820 --log_dir /tmp/nospam-520820 status
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-520820 --log_dir /tmp/nospam-520820 status
--- PASS: TestErrorSpam/status (0.99s)

                                                
                                    
TestErrorSpam/pause (1.28s)

=== RUN   TestErrorSpam/pause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-520820 --log_dir /tmp/nospam-520820 pause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-520820 --log_dir /tmp/nospam-520820 pause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-520820 --log_dir /tmp/nospam-520820 pause
--- PASS: TestErrorSpam/pause (1.28s)

                                                
                                    
TestErrorSpam/unpause (1.43s)

=== RUN   TestErrorSpam/unpause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-520820 --log_dir /tmp/nospam-520820 unpause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-520820 --log_dir /tmp/nospam-520820 unpause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-520820 --log_dir /tmp/nospam-520820 unpause
--- PASS: TestErrorSpam/unpause (1.43s)

                                                
                                    
TestErrorSpam/stop (1.99s)

=== RUN   TestErrorSpam/stop
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-520820 --log_dir /tmp/nospam-520820 stop
error_spam_test.go:159: (dbg) Done: out/minikube-linux-arm64 -p nospam-520820 --log_dir /tmp/nospam-520820 stop: (1.807096511s)
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-520820 --log_dir /tmp/nospam-520820 stop
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-520820 --log_dir /tmp/nospam-520820 stop
--- PASS: TestErrorSpam/stop (1.99s)

                                                
                                    
TestFunctional/serial/CopySyncFile (0s)

=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1855: local sync path: /home/jenkins/minikube-integration/19734-2285/.minikube/files/etc/test/nested/copy/7606/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

                                                
                                    
TestFunctional/serial/StartWithProxy (38.22s)

=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2234: (dbg) Run:  out/minikube-linux-arm64 start -p functional-656644 --memory=4000 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=docker
functional_test.go:2234: (dbg) Done: out/minikube-linux-arm64 start -p functional-656644 --memory=4000 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=docker: (38.217783498s)
--- PASS: TestFunctional/serial/StartWithProxy (38.22s)

                                                
                                    
TestFunctional/serial/AuditLog (0s)

=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

                                                
                                    
TestFunctional/serial/SoftStart (26.55s)

=== RUN   TestFunctional/serial/SoftStart
I0930 10:36:31.929477    7606 config.go:182] Loaded profile config "functional-656644": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
functional_test.go:659: (dbg) Run:  out/minikube-linux-arm64 start -p functional-656644 --alsologtostderr -v=8
functional_test.go:659: (dbg) Done: out/minikube-linux-arm64 start -p functional-656644 --alsologtostderr -v=8: (26.552236658s)
functional_test.go:663: soft start took 26.554584293s for "functional-656644" cluster.
I0930 10:36:58.482035    7606 config.go:182] Loaded profile config "functional-656644": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
--- PASS: TestFunctional/serial/SoftStart (26.55s)

                                                
                                    
TestFunctional/serial/KubeContext (0.05s)

=== RUN   TestFunctional/serial/KubeContext
functional_test.go:681: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.05s)

                                                
                                    
TestFunctional/serial/KubectlGetPods (0.1s)

=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:696: (dbg) Run:  kubectl --context functional-656644 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.10s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/add_remote (3.14s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1049: (dbg) Run:  out/minikube-linux-arm64 -p functional-656644 cache add registry.k8s.io/pause:3.1
functional_test.go:1049: (dbg) Done: out/minikube-linux-arm64 -p functional-656644 cache add registry.k8s.io/pause:3.1: (1.108533877s)
functional_test.go:1049: (dbg) Run:  out/minikube-linux-arm64 -p functional-656644 cache add registry.k8s.io/pause:3.3
functional_test.go:1049: (dbg) Done: out/minikube-linux-arm64 -p functional-656644 cache add registry.k8s.io/pause:3.3: (1.058555298s)
functional_test.go:1049: (dbg) Run:  out/minikube-linux-arm64 -p functional-656644 cache add registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (3.14s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/add_local (0.93s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1077: (dbg) Run:  docker build -t minikube-local-cache-test:functional-656644 /tmp/TestFunctionalserialCacheCmdcacheadd_local1644666558/001
functional_test.go:1089: (dbg) Run:  out/minikube-linux-arm64 -p functional-656644 cache add minikube-local-cache-test:functional-656644
functional_test.go:1094: (dbg) Run:  out/minikube-linux-arm64 -p functional-656644 cache delete minikube-local-cache-test:functional-656644
functional_test.go:1083: (dbg) Run:  docker rmi minikube-local-cache-test:functional-656644
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (0.93s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/CacheDelete (0.08s)

=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1102: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.08s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/list (0.08s)

=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1110: (dbg) Run:  out/minikube-linux-arm64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.08s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.31s)

=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1124: (dbg) Run:  out/minikube-linux-arm64 -p functional-656644 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.31s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/cache_reload (1.5s)

=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1147: (dbg) Run:  out/minikube-linux-arm64 -p functional-656644 ssh sudo docker rmi registry.k8s.io/pause:latest
functional_test.go:1153: (dbg) Run:  out/minikube-linux-arm64 -p functional-656644 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1153: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-656644 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (274.545963ms)

-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:1158: (dbg) Run:  out/minikube-linux-arm64 -p functional-656644 cache reload
functional_test.go:1163: (dbg) Run:  out/minikube-linux-arm64 -p functional-656644 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (1.50s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/delete (0.11s)

=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1172: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1172: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.11s)

                                                
                                    
TestFunctional/serial/MinikubeKubectlCmd (0.14s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:716: (dbg) Run:  out/minikube-linux-arm64 -p functional-656644 kubectl -- --context functional-656644 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.14s)

                                                
                                    
TestFunctional/serial/MinikubeKubectlCmdDirectly (0.14s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:741: (dbg) Run:  out/kubectl --context functional-656644 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.14s)

                                                
                                    
TestFunctional/serial/ExtraConfig (42.67s)

=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:757: (dbg) Run:  out/minikube-linux-arm64 start -p functional-656644 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
functional_test.go:757: (dbg) Done: out/minikube-linux-arm64 start -p functional-656644 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (42.673350991s)
functional_test.go:761: restart took 42.673450984s for "functional-656644" cluster.
I0930 10:37:47.741916    7606 config.go:182] Loaded profile config "functional-656644": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
--- PASS: TestFunctional/serial/ExtraConfig (42.67s)

                                                
                                    
TestFunctional/serial/ComponentHealth (0.09s)

=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:810: (dbg) Run:  kubectl --context functional-656644 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:825: etcd phase: Running
functional_test.go:835: etcd status: Ready
functional_test.go:825: kube-apiserver phase: Running
functional_test.go:835: kube-apiserver status: Ready
functional_test.go:825: kube-controller-manager phase: Running
functional_test.go:835: kube-controller-manager status: Ready
functional_test.go:825: kube-scheduler phase: Running
functional_test.go:835: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.09s)

                                                
                                    
TestFunctional/serial/LogsCmd (1.16s)

=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1236: (dbg) Run:  out/minikube-linux-arm64 -p functional-656644 logs
functional_test.go:1236: (dbg) Done: out/minikube-linux-arm64 -p functional-656644 logs: (1.15552295s)
--- PASS: TestFunctional/serial/LogsCmd (1.16s)

TestFunctional/serial/LogsFileCmd (1.17s)

=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1250: (dbg) Run:  out/minikube-linux-arm64 -p functional-656644 logs --file /tmp/TestFunctionalserialLogsFileCmd2682350151/001/logs.txt
functional_test.go:1250: (dbg) Done: out/minikube-linux-arm64 -p functional-656644 logs --file /tmp/TestFunctionalserialLogsFileCmd2682350151/001/logs.txt: (1.165455656s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.17s)

TestFunctional/serial/InvalidService (4.47s)

=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2321: (dbg) Run:  kubectl --context functional-656644 apply -f testdata/invalidsvc.yaml
functional_test.go:2335: (dbg) Run:  out/minikube-linux-arm64 service invalid-svc -p functional-656644
functional_test.go:2335: (dbg) Non-zero exit: out/minikube-linux-arm64 service invalid-svc -p functional-656644: exit status 115 (372.0616ms)
-- stdout --
	|-----------|-------------|-------------|---------------------------|
	| NAMESPACE |    NAME     | TARGET PORT |            URL            |
	|-----------|-------------|-------------|---------------------------|
	| default   | invalid-svc |          80 | http://192.168.49.2:31148 |
	|-----------|-------------|-------------|---------------------------|
	
	
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
** /stderr **
functional_test.go:2327: (dbg) Run:  kubectl --context functional-656644 delete -f testdata/invalidsvc.yaml
--- PASS: TestFunctional/serial/InvalidService (4.47s)

TestFunctional/parallel/ConfigCmd (0.44s)

=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1199: (dbg) Run:  out/minikube-linux-arm64 -p functional-656644 config unset cpus
functional_test.go:1199: (dbg) Run:  out/minikube-linux-arm64 -p functional-656644 config get cpus
functional_test.go:1199: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-656644 config get cpus: exit status 14 (76.666362ms)
** stderr ** 
	Error: specified key could not be found in config
** /stderr **
functional_test.go:1199: (dbg) Run:  out/minikube-linux-arm64 -p functional-656644 config set cpus 2
functional_test.go:1199: (dbg) Run:  out/minikube-linux-arm64 -p functional-656644 config get cpus
functional_test.go:1199: (dbg) Run:  out/minikube-linux-arm64 -p functional-656644 config unset cpus
functional_test.go:1199: (dbg) Run:  out/minikube-linux-arm64 -p functional-656644 config get cpus
functional_test.go:1199: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-656644 config get cpus: exit status 14 (73.261794ms)
** stderr ** 
	Error: specified key could not be found in config
** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.44s)

TestFunctional/parallel/DashboardCmd (10.26s)

=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:905: (dbg) daemon: [out/minikube-linux-arm64 dashboard --url --port 36195 -p functional-656644 --alsologtostderr -v=1]
functional_test.go:910: (dbg) stopping [out/minikube-linux-arm64 dashboard --url --port 36195 -p functional-656644 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to kill pid 56318: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (10.26s)

TestFunctional/parallel/DryRun (0.39s)

=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

=== CONT  TestFunctional/parallel/DryRun
functional_test.go:974: (dbg) Run:  out/minikube-linux-arm64 start -p functional-656644 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=docker
functional_test.go:974: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p functional-656644 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=docker: exit status 23 (172.148577ms)
-- stdout --
	* [functional-656644] minikube v1.34.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=19734
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19734-2285/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19734-2285/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	
	
-- /stdout --
** stderr ** 
	I0930 10:38:34.229833   56021 out.go:345] Setting OutFile to fd 1 ...
	I0930 10:38:34.229968   56021 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0930 10:38:34.229977   56021 out.go:358] Setting ErrFile to fd 2...
	I0930 10:38:34.229983   56021 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0930 10:38:34.230214   56021 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19734-2285/.minikube/bin
	I0930 10:38:34.230549   56021 out.go:352] Setting JSON to false
	I0930 10:38:34.231490   56021 start.go:129] hostinfo: {"hostname":"ip-172-31-30-239","uptime":1263,"bootTime":1727691452,"procs":232,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1070-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I0930 10:38:34.231581   56021 start.go:139] virtualization:  
	I0930 10:38:34.233973   56021 out.go:177] * [functional-656644] minikube v1.34.0 on Ubuntu 20.04 (arm64)
	I0930 10:38:34.235568   56021 out.go:177]   - MINIKUBE_LOCATION=19734
	I0930 10:38:34.235701   56021 notify.go:220] Checking for updates...
	I0930 10:38:34.239376   56021 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0930 10:38:34.241533   56021 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19734-2285/kubeconfig
	I0930 10:38:34.243647   56021 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19734-2285/.minikube
	I0930 10:38:34.245322   56021 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0930 10:38:34.247247   56021 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0930 10:38:34.249665   56021 config.go:182] Loaded profile config "functional-656644": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0930 10:38:34.250211   56021 driver.go:394] Setting default libvirt URI to qemu:///system
	I0930 10:38:34.281047   56021 docker.go:123] docker version: linux-27.3.1:Docker Engine - Community
	I0930 10:38:34.281174   56021 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0930 10:38:34.340997   56021 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:32 OomKillDisable:true NGoroutines:52 SystemTime:2024-09-30 10:38:34.331898342 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1070-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:27.3.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: bridge-nf-call-iptables is disabled WARNING: bridge-nf-call-ip6tables is disabled] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.17.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.7]] Warnings:<nil>}}
	I0930 10:38:34.341099   56021 docker.go:318] overlay module found
	I0930 10:38:34.343372   56021 out.go:177] * Using the docker driver based on existing profile
	I0930 10:38:34.345437   56021 start.go:297] selected driver: docker
	I0930 10:38:34.345453   56021 start.go:901] validating driver "docker" against &{Name:functional-656644 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:functional-656644 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0930 10:38:34.345573   56021 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0930 10:38:34.348340   56021 out.go:201] 
	W0930 10:38:34.350544   56021 out.go:270] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I0930 10:38:34.352518   56021 out.go:201] 
** /stderr **
functional_test.go:991: (dbg) Run:  out/minikube-linux-arm64 start -p functional-656644 --dry-run --alsologtostderr -v=1 --driver=docker  --container-runtime=docker
--- PASS: TestFunctional/parallel/DryRun (0.39s)

TestFunctional/parallel/InternationalLanguage (0.17s)

=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1020: (dbg) Run:  out/minikube-linux-arm64 start -p functional-656644 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=docker
functional_test.go:1020: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p functional-656644 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=docker: exit status 23 (172.710609ms)
-- stdout --
	* [functional-656644] minikube v1.34.0 sur Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=19734
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19734-2285/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19734-2285/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote docker basé sur le profil existant
	
	
-- /stdout --
** stderr ** 
	I0930 10:38:34.633127   56148 out.go:345] Setting OutFile to fd 1 ...
	I0930 10:38:34.633256   56148 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0930 10:38:34.633268   56148 out.go:358] Setting ErrFile to fd 2...
	I0930 10:38:34.633273   56148 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0930 10:38:34.634264   56148 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19734-2285/.minikube/bin
	I0930 10:38:34.634653   56148 out.go:352] Setting JSON to false
	I0930 10:38:34.635643   56148 start.go:129] hostinfo: {"hostname":"ip-172-31-30-239","uptime":1263,"bootTime":1727691452,"procs":232,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1070-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I0930 10:38:34.635715   56148 start.go:139] virtualization:  
	I0930 10:38:34.638158   56148 out.go:177] * [functional-656644] minikube v1.34.0 sur Ubuntu 20.04 (arm64)
	I0930 10:38:34.640735   56148 out.go:177]   - MINIKUBE_LOCATION=19734
	I0930 10:38:34.640873   56148 notify.go:220] Checking for updates...
	I0930 10:38:34.645048   56148 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0930 10:38:34.647073   56148 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19734-2285/kubeconfig
	I0930 10:38:34.649070   56148 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19734-2285/.minikube
	I0930 10:38:34.651178   56148 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0930 10:38:34.653061   56148 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0930 10:38:34.655327   56148 config.go:182] Loaded profile config "functional-656644": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0930 10:38:34.655960   56148 driver.go:394] Setting default libvirt URI to qemu:///system
	I0930 10:38:34.681132   56148 docker.go:123] docker version: linux-27.3.1:Docker Engine - Community
	I0930 10:38:34.681250   56148 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0930 10:38:34.733736   56148 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:32 OomKillDisable:true NGoroutines:52 SystemTime:2024-09-30 10:38:34.724116379 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1070-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:27.3.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: bridge-nf-call-iptables is disabled WARNING: bridge-nf-call-ip6tables is disabled] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.17.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.7]] Warnings:<nil>}}
	I0930 10:38:34.733855   56148 docker.go:318] overlay module found
	I0930 10:38:34.736372   56148 out.go:177] * Utilisation du pilote docker basé sur le profil existant
	I0930 10:38:34.738487   56148 start.go:297] selected driver: docker
	I0930 10:38:34.738502   56148 start.go:901] validating driver "docker" against &{Name:functional-656644 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1727108449-19696@sha256:c662152d8855bc4c62a3b5786a68adf99e04794e7f8f374a3859703004ef1d21 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:functional-656644 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.31.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0930 10:38:34.738600   56148 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0930 10:38:34.741079   56148 out.go:201] 
	W0930 10:38:34.743030   56148 out.go:270] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I0930 10:38:34.745467   56148 out.go:201] 
** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.17s)

TestFunctional/parallel/StatusCmd (1.18s)

=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:854: (dbg) Run:  out/minikube-linux-arm64 -p functional-656644 status
functional_test.go:860: (dbg) Run:  out/minikube-linux-arm64 -p functional-656644 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:872: (dbg) Run:  out/minikube-linux-arm64 -p functional-656644 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (1.18s)

TestFunctional/parallel/ServiceCmdConnect (7.66s)

=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1627: (dbg) Run:  kubectl --context functional-656644 create deployment hello-node-connect --image=registry.k8s.io/echoserver-arm:1.8
functional_test.go:1635: (dbg) Run:  kubectl --context functional-656644 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1640: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:344: "hello-node-connect-65d86f57f4-rp5hg" [64ec950e-33ca-4805-98b4-d0000c3772cf] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver-arm]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver-arm])
helpers_test.go:344: "hello-node-connect-65d86f57f4-rp5hg" [64ec950e-33ca-4805-98b4-d0000c3772cf] Running
functional_test.go:1640: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 7.003873368s
functional_test.go:1649: (dbg) Run:  out/minikube-linux-arm64 -p functional-656644 service hello-node-connect --url
functional_test.go:1655: found endpoint for hello-node-connect: http://192.168.49.2:32510
functional_test.go:1675: http://192.168.49.2:32510: success! body:

Hostname: hello-node-connect-65d86f57f4-rp5hg

Pod Information:
	-no pod information available-

Server values:
	server_version=nginx: 1.13.3 - lua: 10008

Request Information:
	client_address=10.244.0.1
	method=GET
	real path=/
	query=
	request_version=1.1
	request_uri=http://192.168.49.2:8080/

Request Headers:
	accept-encoding=gzip
	host=192.168.49.2:32510
	user-agent=Go-http-client/1.1

Request Body:
	-no body in request-

--- PASS: TestFunctional/parallel/ServiceCmdConnect (7.66s)

TestFunctional/parallel/AddonsCmd (0.13s)

=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1690: (dbg) Run:  out/minikube-linux-arm64 -p functional-656644 addons list
functional_test.go:1702: (dbg) Run:  out/minikube-linux-arm64 -p functional-656644 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.13s)

TestFunctional/parallel/PersistentVolumeClaim (26.89s)

=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:344: "storage-provisioner" [da94e119-2432-4ad8-9ed3-0838d5f41f70] Running
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 5.003650123s
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context functional-656644 get storageclass -o=json
functional_test_pvc_test.go:69: (dbg) Run:  kubectl --context functional-656644 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-656644 get pvc myclaim -o=json
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-656644 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [61d8fbfa-b680-4a8a-a184-02f0a9550ee2] Pending
helpers_test.go:344: "sp-pod" [61d8fbfa-b680-4a8a-a184-02f0a9550ee2] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [61d8fbfa-b680-4a8a-a184-02f0a9550ee2] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 12.003880947s
functional_test_pvc_test.go:100: (dbg) Run:  kubectl --context functional-656644 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-656644 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-656644 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [ff36766f-e8ee-45b9-9127-04b653f95a7e] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [ff36766f-e8ee-45b9-9127-04b653f95a7e] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 8.00445751s
functional_test_pvc_test.go:114: (dbg) Run:  kubectl --context functional-656644 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (26.89s)

TestFunctional/parallel/SSHCmd (0.73s)

=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1725: (dbg) Run:  out/minikube-linux-arm64 -p functional-656644 ssh "echo hello"
functional_test.go:1742: (dbg) Run:  out/minikube-linux-arm64 -p functional-656644 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.73s)

TestFunctional/parallel/CpCmd (1.93s)

=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p functional-656644 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p functional-656644 ssh -n functional-656644 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p functional-656644 cp functional-656644:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd768247670/001/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p functional-656644 ssh -n functional-656644 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p functional-656644 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p functional-656644 ssh -n functional-656644 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (1.93s)

TestFunctional/parallel/FileSync (0.39s)

=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1929: Checking for existence of /etc/test/nested/copy/7606/hosts within VM
functional_test.go:1931: (dbg) Run:  out/minikube-linux-arm64 -p functional-656644 ssh "sudo cat /etc/test/nested/copy/7606/hosts"
functional_test.go:1936: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.39s)

TestFunctional/parallel/CertSync (1.92s)

=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1972: Checking for existence of /etc/ssl/certs/7606.pem within VM
functional_test.go:1973: (dbg) Run:  out/minikube-linux-arm64 -p functional-656644 ssh "sudo cat /etc/ssl/certs/7606.pem"
functional_test.go:1972: Checking for existence of /usr/share/ca-certificates/7606.pem within VM
functional_test.go:1973: (dbg) Run:  out/minikube-linux-arm64 -p functional-656644 ssh "sudo cat /usr/share/ca-certificates/7606.pem"
functional_test.go:1972: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1973: (dbg) Run:  out/minikube-linux-arm64 -p functional-656644 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:1999: Checking for existence of /etc/ssl/certs/76062.pem within VM
functional_test.go:2000: (dbg) Run:  out/minikube-linux-arm64 -p functional-656644 ssh "sudo cat /etc/ssl/certs/76062.pem"
functional_test.go:1999: Checking for existence of /usr/share/ca-certificates/76062.pem within VM
functional_test.go:2000: (dbg) Run:  out/minikube-linux-arm64 -p functional-656644 ssh "sudo cat /usr/share/ca-certificates/76062.pem"
functional_test.go:1999: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:2000: (dbg) Run:  out/minikube-linux-arm64 -p functional-656644 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (1.92s)

TestFunctional/parallel/NodeLabels (0.13s)

=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:219: (dbg) Run:  kubectl --context functional-656644 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.13s)

TestFunctional/parallel/NonActiveRuntimeDisabled (0.34s)

=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2027: (dbg) Run:  out/minikube-linux-arm64 -p functional-656644 ssh "sudo systemctl is-active crio"
functional_test.go:2027: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-656644 ssh "sudo systemctl is-active crio": exit status 1 (338.409162ms)

-- stdout --
	inactive

-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.34s)

TestFunctional/parallel/License (0.25s)

=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

=== CONT  TestFunctional/parallel/License
functional_test.go:2288: (dbg) Run:  out/minikube-linux-arm64 license
--- PASS: TestFunctional/parallel/License (0.25s)

TestFunctional/parallel/Version/short (0.08s)

=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2256: (dbg) Run:  out/minikube-linux-arm64 -p functional-656644 version --short
--- PASS: TestFunctional/parallel/Version/short (0.08s)

TestFunctional/parallel/Version/components (1s)

=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2270: (dbg) Run:  out/minikube-linux-arm64 -p functional-656644 version -o=json --components
--- PASS: TestFunctional/parallel/Version/components (1.00s)

TestFunctional/parallel/ImageCommands/ImageListShort (0.26s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:261: (dbg) Run:  out/minikube-linux-arm64 -p functional-656644 image ls --format short --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-arm64 -p functional-656644 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.10
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.31.1
registry.k8s.io/kube-proxy:v1.31.1
registry.k8s.io/kube-controller-manager:v1.31.1
registry.k8s.io/kube-apiserver:v1.31.1
registry.k8s.io/etcd:3.5.15-0
registry.k8s.io/echoserver-arm:1.8
registry.k8s.io/coredns/coredns:v1.11.3
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
docker.io/library/nginx:latest
docker.io/library/nginx:alpine
docker.io/library/minikube-local-cache-test:functional-656644
docker.io/kubernetesui/dashboard:<none>
docker.io/kicbase/echo-server:functional-656644
functional_test.go:269: (dbg) Stderr: out/minikube-linux-arm64 -p functional-656644 image ls --format short --alsologtostderr:
I0930 10:38:41.380921   56795 out.go:345] Setting OutFile to fd 1 ...
I0930 10:38:41.381086   56795 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0930 10:38:41.381096   56795 out.go:358] Setting ErrFile to fd 2...
I0930 10:38:41.381102   56795 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0930 10:38:41.381437   56795 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19734-2285/.minikube/bin
I0930 10:38:41.382355   56795 config.go:182] Loaded profile config "functional-656644": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I0930 10:38:41.382505   56795 config.go:182] Loaded profile config "functional-656644": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I0930 10:38:41.383235   56795 cli_runner.go:164] Run: docker container inspect functional-656644 --format={{.State.Status}}
I0930 10:38:41.401547   56795 ssh_runner.go:195] Run: systemctl --version
I0930 10:38:41.401598   56795 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-656644
I0930 10:38:41.427726   56795 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/19734-2285/.minikube/machines/functional-656644/id_rsa Username:docker}
I0930 10:38:41.523821   56795 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.26s)

TestFunctional/parallel/ImageCommands/ImageListTable (0.22s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:261: (dbg) Run:  out/minikube-linux-arm64 -p functional-656644 image ls --format table --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-arm64 -p functional-656644 image ls --format table --alsologtostderr:
|---------------------------------------------|-------------------|---------------|--------|
|                    Image                    |        Tag        |   Image ID    |  Size  |
|---------------------------------------------|-------------------|---------------|--------|
| registry.k8s.io/pause                       | 3.3               | 3d18732f8686c | 484kB  |
| registry.k8s.io/echoserver-arm              | 1.8               | 72565bf5bbedf | 85MB   |
| localhost/my-image                          | functional-656644 | b368cb5d95ed5 | 1.41MB |
| registry.k8s.io/kube-apiserver              | v1.31.1           | d3f53a98c0a9d | 91.6MB |
| registry.k8s.io/kube-proxy                  | v1.31.1           | 24a140c548c07 | 94.7MB |
| docker.io/kubernetesui/metrics-scraper      | <none>            | a422e0e982356 | 42.3MB |
| gcr.io/k8s-minikube/busybox                 | 1.28.4-glibc      | 1611cd07b61d5 | 3.55MB |
| docker.io/library/nginx                     | alpine            | b887aca7aed61 | 47MB   |
| docker.io/library/nginx                     | latest            | 6e8672ddd037e | 193MB  |
| registry.k8s.io/pause                       | 3.10              | afb61768ce381 | 514kB  |
| registry.k8s.io/etcd                        | 3.5.15-0          | 27e3830e14027 | 139MB  |
| docker.io/kubernetesui/dashboard            | <none>            | 20b332c9a70d8 | 244MB  |
| docker.io/kicbase/echo-server               | functional-656644 | ce2d2cda2d858 | 4.78MB |
| registry.k8s.io/kube-scheduler              | v1.31.1           | 7f8aa378bb47d | 66MB   |
| registry.k8s.io/kube-controller-manager     | v1.31.1           | 279f381cb3736 | 85.9MB |
| registry.k8s.io/coredns/coredns             | v1.11.3           | 2f6c962e7b831 | 60.2MB |
| registry.k8s.io/pause                       | latest            | 8cb2091f603e7 | 240kB  |
| docker.io/library/minikube-local-cache-test | functional-656644 | 7b6f7e4467825 | 30B    |
| gcr.io/k8s-minikube/storage-provisioner     | v5                | ba04bb24b9575 | 29MB   |
| registry.k8s.io/pause                       | 3.1               | 8057e0500773a | 525kB  |
|---------------------------------------------|-------------------|---------------|--------|
functional_test.go:269: (dbg) Stderr: out/minikube-linux-arm64 -p functional-656644 image ls --format table --alsologtostderr:
I0930 10:38:45.338805   57224 out.go:345] Setting OutFile to fd 1 ...
I0930 10:38:45.339025   57224 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0930 10:38:45.339037   57224 out.go:358] Setting ErrFile to fd 2...
I0930 10:38:45.339043   57224 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0930 10:38:45.339464   57224 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19734-2285/.minikube/bin
I0930 10:38:45.340154   57224 config.go:182] Loaded profile config "functional-656644": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I0930 10:38:45.340323   57224 config.go:182] Loaded profile config "functional-656644": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I0930 10:38:45.340829   57224 cli_runner.go:164] Run: docker container inspect functional-656644 --format={{.State.Status}}
I0930 10:38:45.357635   57224 ssh_runner.go:195] Run: systemctl --version
I0930 10:38:45.357689   57224 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-656644
I0930 10:38:45.374363   57224 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/19734-2285/.minikube/machines/functional-656644/id_rsa Username:docker}
I0930 10:38:45.463600   57224 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.22s)

TestFunctional/parallel/ImageCommands/ImageListJson (0.26s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:261: (dbg) Run:  out/minikube-linux-arm64 -p functional-656644 image ls --format json --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-arm64 -p functional-656644 image ls --format json --alsologtostderr:
[{"id":"27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da","repoDigests":[],"repoTags":["registry.k8s.io/etcd:3.5.15-0"],"size":"139000000"},{"id":"20b332c9a70d8516d849d1ac23eff5800cbb2f263d379f0ec11ee908db6b25a8","repoDigests":[],"repoTags":["docker.io/kubernetesui/dashboard:\u003cnone\u003e"],"size":"244000000"},{"id":"1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c","repoDigests":[],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"3550000"},{"id":"8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.1"],"size":"525000"},{"id":"72565bf5bbedfb62e9d21afa2b1221b2c7a5e05b746dae33430bc550d3f87beb","repoDigests":[],"repoTags":["registry.k8s.io/echoserver-arm:1.8"],"size":"85000000"},{"id":"b887aca7aed6134b029401507d27ac9c8fbfc5a6cf510d254bdf4ac841cf1552","repoDigests":[],"repoTags":["docker.io/library/nginx:alpine"],"size":"47000000"},{"id":"6e8672ddd037e6078cad0c819d331972e2a0c8e2aee506fcb94258c2536e4cf2","repoDigests":[],"repoTags":["docker.io/library/nginx:latest"],"size":"193000000"},{"id":"afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.10"],"size":"514000"},{"id":"ce2d2cda2d858fdaea84129deb86d18e5dbf1c548f230b79fdca74cc91729d17","repoDigests":[],"repoTags":["docker.io/kicbase/echo-server:functional-656644"],"size":"4780000"},{"id":"8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a","repoDigests":[],"repoTags":["registry.k8s.io/pause:latest"],"size":"240000"},{"id":"b368cb5d95ed50c57569930071042054159d0786ca8543d7da714995a95ba92a","repoDigests":[],"repoTags":["localhost/my-image:functional-656644"],"size":"1410000"},{"id":"7b6f7e4467825535c86d62bb74a62b70bc4545165409958ffec157e29d1db817","repoDigests":[],"repoTags":["docker.io/library/minikube-local-cache-test:functional-656644"],"size":"30"},{"id":"7f8aa378bb47dffcf430f3a601abe39137e88aee0238e23ed8530fdd18dab82d","repoDigests":[],"repoTags":["registry.k8s.io/kube-scheduler:v1.31.1"],"size":"66000000"},{"id":"a422e0e982356f6c1cf0e5bb7b733363caae3992a07c99951fbcc73e58ed656a","repoDigests":[],"repoTags":["docker.io/kubernetesui/metrics-scraper:\u003cnone\u003e"],"size":"42300000"},{"id":"ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6","repoDigests":[],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"29000000"},{"id":"3d18732f8686cc3c878055d99a05fa80289502fa496b36b6a0fe0f77206a7300","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.3"],"size":"484000"},{"id":"d3f53a98c0a9d9163c4848bcf34b2d2f5e1e3691b79f3d1dd6d0206809e02853","repoDigests":[],"repoTags":["registry.k8s.io/kube-apiserver:v1.31.1"],"size":"91600000"},{"id":"279f381cb37365bbbcd133c9531fba9c2beb0f38dbbe6ddfcd0b1b1643d3450e","repoDigests":[],"repoTags":["registry.k8s.io/kube-controller-manager:v1.31.1"],"size":"85900000"},{"id":"24a140c548c075e487e45d0ee73b1aa89f8bfb40c08a57e05975559728822b1d","repoDigests":[],"repoTags":["registry.k8s.io/kube-proxy:v1.31.1"],"size":"94700000"},{"id":"2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4","repoDigests":[],"repoTags":["registry.k8s.io/coredns/coredns:v1.11.3"],"size":"60200000"}]
functional_test.go:269: (dbg) Stderr: out/minikube-linux-arm64 -p functional-656644 image ls --format json --alsologtostderr:
I0930 10:38:45.094477   57160 out.go:345] Setting OutFile to fd 1 ...
I0930 10:38:45.094674   57160 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0930 10:38:45.094688   57160 out.go:358] Setting ErrFile to fd 2...
I0930 10:38:45.094694   57160 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0930 10:38:45.095055   57160 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19734-2285/.minikube/bin
I0930 10:38:45.095862   57160 config.go:182] Loaded profile config "functional-656644": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I0930 10:38:45.096000   57160 config.go:182] Loaded profile config "functional-656644": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I0930 10:38:45.096558   57160 cli_runner.go:164] Run: docker container inspect functional-656644 --format={{.State.Status}}
I0930 10:38:45.116538   57160 ssh_runner.go:195] Run: systemctl --version
I0930 10:38:45.116601   57160 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-656644
I0930 10:38:45.144997   57160 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/19734-2285/.minikube/machines/functional-656644/id_rsa Username:docker}
I0930 10:38:45.243904   57160 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.26s)

TestFunctional/parallel/ImageCommands/ImageListYaml (0.27s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:261: (dbg) Run:  out/minikube-linux-arm64 -p functional-656644 image ls --format yaml --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-arm64 -p functional-656644 image ls --format yaml --alsologtostderr:
- id: 8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a
repoDigests: []
repoTags:
- registry.k8s.io/pause:latest
size: "240000"
- id: 24a140c548c075e487e45d0ee73b1aa89f8bfb40c08a57e05975559728822b1d
repoDigests: []
repoTags:
- registry.k8s.io/kube-proxy:v1.31.1
size: "94700000"
- id: 6e8672ddd037e6078cad0c819d331972e2a0c8e2aee506fcb94258c2536e4cf2
repoDigests: []
repoTags:
- docker.io/library/nginx:latest
size: "193000000"
- id: ce2d2cda2d858fdaea84129deb86d18e5dbf1c548f230b79fdca74cc91729d17
repoDigests: []
repoTags:
- docker.io/kicbase/echo-server:functional-656644
size: "4780000"
- id: 1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c
repoDigests: []
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "3550000"
- id: 72565bf5bbedfb62e9d21afa2b1221b2c7a5e05b746dae33430bc550d3f87beb
repoDigests: []
repoTags:
- registry.k8s.io/echoserver-arm:1.8
size: "85000000"
- id: 27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da
repoDigests: []
repoTags:
- registry.k8s.io/etcd:3.5.15-0
size: "139000000"
- id: 8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.1
size: "525000"
- id: 7b6f7e4467825535c86d62bb74a62b70bc4545165409958ffec157e29d1db817
repoDigests: []
repoTags:
- docker.io/library/minikube-local-cache-test:functional-656644
size: "30"
- id: 7f8aa378bb47dffcf430f3a601abe39137e88aee0238e23ed8530fdd18dab82d
repoDigests: []
repoTags:
- registry.k8s.io/kube-scheduler:v1.31.1
size: "66000000"
- id: 279f381cb37365bbbcd133c9531fba9c2beb0f38dbbe6ddfcd0b1b1643d3450e
repoDigests: []
repoTags:
- registry.k8s.io/kube-controller-manager:v1.31.1
size: "85900000"
- id: b887aca7aed6134b029401507d27ac9c8fbfc5a6cf510d254bdf4ac841cf1552
repoDigests: []
repoTags:
- docker.io/library/nginx:alpine
size: "47000000"
- id: 2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4
repoDigests: []
repoTags:
- registry.k8s.io/coredns/coredns:v1.11.3
size: "60200000"
- id: d3f53a98c0a9d9163c4848bcf34b2d2f5e1e3691b79f3d1dd6d0206809e02853
repoDigests: []
repoTags:
- registry.k8s.io/kube-apiserver:v1.31.1
size: "91600000"
- id: afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.10
size: "514000"
- id: 20b332c9a70d8516d849d1ac23eff5800cbb2f263d379f0ec11ee908db6b25a8
repoDigests: []
repoTags:
- docker.io/kubernetesui/dashboard:<none>
size: "244000000"
- id: ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6
repoDigests: []
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "29000000"
- id: 3d18732f8686cc3c878055d99a05fa80289502fa496b36b6a0fe0f77206a7300
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.3
size: "484000"

functional_test.go:269: (dbg) Stderr: out/minikube-linux-arm64 -p functional-656644 image ls --format yaml --alsologtostderr:
I0930 10:38:41.641391   56861 out.go:345] Setting OutFile to fd 1 ...
I0930 10:38:41.641541   56861 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0930 10:38:41.641562   56861 out.go:358] Setting ErrFile to fd 2...
I0930 10:38:41.641580   56861 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0930 10:38:41.641915   56861 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19734-2285/.minikube/bin
I0930 10:38:41.642870   56861 config.go:182] Loaded profile config "functional-656644": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I0930 10:38:41.643047   56861 config.go:182] Loaded profile config "functional-656644": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I0930 10:38:41.644147   56861 cli_runner.go:164] Run: docker container inspect functional-656644 --format={{.State.Status}}
I0930 10:38:41.667466   56861 ssh_runner.go:195] Run: systemctl --version
I0930 10:38:41.667643   56861 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-656644
I0930 10:38:41.685424   56861 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/19734-2285/.minikube/machines/functional-656644/id_rsa Username:docker}
I0930 10:38:41.792412   56861 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.27s)

TestFunctional/parallel/ImageCommands/ImageBuild (3.48s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:308: (dbg) Run:  out/minikube-linux-arm64 -p functional-656644 ssh pgrep buildkitd
functional_test.go:308: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-656644 ssh pgrep buildkitd: exit status 1 (331.535247ms)

** stderr **
	ssh: Process exited with status 1

** /stderr **
functional_test.go:315: (dbg) Run:  out/minikube-linux-arm64 -p functional-656644 image build -t localhost/my-image:functional-656644 testdata/build --alsologtostderr
2024/09/30 10:38:44 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
functional_test.go:315: (dbg) Done: out/minikube-linux-arm64 -p functional-656644 image build -t localhost/my-image:functional-656644 testdata/build --alsologtostderr: (2.875670667s)
functional_test.go:323: (dbg) Stderr: out/minikube-linux-arm64 -p functional-656644 image build -t localhost/my-image:functional-656644 testdata/build --alsologtostderr:
I0930 10:38:42.256830   56969 out.go:345] Setting OutFile to fd 1 ...
I0930 10:38:42.257072   56969 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0930 10:38:42.257094   56969 out.go:358] Setting ErrFile to fd 2...
I0930 10:38:42.257119   56969 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0930 10:38:42.257438   56969 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19734-2285/.minikube/bin
I0930 10:38:42.258135   56969 config.go:182] Loaded profile config "functional-656644": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I0930 10:38:42.259457   56969 config.go:182] Loaded profile config "functional-656644": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
I0930 10:38:42.260345   56969 cli_runner.go:164] Run: docker container inspect functional-656644 --format={{.State.Status}}
I0930 10:38:42.281692   56969 ssh_runner.go:195] Run: systemctl --version
I0930 10:38:42.281743   56969 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-656644
I0930 10:38:42.317035   56969 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/19734-2285/.minikube/machines/functional-656644/id_rsa Username:docker}
I0930 10:38:42.468558   56969 build_images.go:161] Building image from path: /tmp/build.3840314466.tar
I0930 10:38:42.468643   56969 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I0930 10:38:42.516812   56969 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.3840314466.tar
I0930 10:38:42.524636   56969 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.3840314466.tar: stat -c "%s %y" /var/lib/minikube/build/build.3840314466.tar: Process exited with status 1
stdout:

stderr:
stat: cannot statx '/var/lib/minikube/build/build.3840314466.tar': No such file or directory
I0930 10:38:42.524662   56969 ssh_runner.go:362] scp /tmp/build.3840314466.tar --> /var/lib/minikube/build/build.3840314466.tar (3072 bytes)
I0930 10:38:42.559643   56969 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.3840314466
I0930 10:38:42.569850   56969 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.3840314466 -xf /var/lib/minikube/build/build.3840314466.tar
I0930 10:38:42.582500   56969 docker.go:360] Building image: /var/lib/minikube/build/build.3840314466
I0930 10:38:42.582634   56969 ssh_runner.go:195] Run: docker build -t localhost/my-image:functional-656644 /var/lib/minikube/build/build.3840314466
#0 building with "default" instance using docker driver

#1 [internal] load build definition from Dockerfile
#1 transferring dockerfile: 97B done
#1 DONE 0.0s

#2 [internal] load metadata for gcr.io/k8s-minikube/busybox:latest
#2 DONE 1.3s

#3 [internal] load .dockerignore
#3 transferring context: 2B done
#3 DONE 0.0s

#4 [internal] load build context
#4 transferring context: 62B done
#4 DONE 0.0s

#5 [1/3] FROM gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b
#5 resolve gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b 0.0s done
#5 sha256:a77fe109c026308f149d36484d795b42efe0fd29b332be9071f63e1634c36ac9 527B / 527B done
#5 sha256:71a676dd070f4b701c3272e566d84951362f1326ea07d5bbad119d1c4f6b3d02 1.47kB / 1.47kB done
#5 sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34 0B / 828.50kB 0.1s
#5 sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b 770B / 770B done
#5 sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34 828.50kB / 828.50kB 0.4s done
#5 extracting sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34 0.0s done
#5 DONE 0.5s

#6 [2/3] RUN true
#6 DONE 0.2s

#7 [3/3] ADD content.txt /
#7 DONE 0.0s

#8 exporting to image
#8 exporting layers 0.0s done
#8 writing image sha256:b368cb5d95ed50c57569930071042054159d0786ca8543d7da714995a95ba92a done
#8 naming to localhost/my-image:functional-656644 done
#8 DONE 0.1s
I0930 10:38:45.017476   56969 ssh_runner.go:235] Completed: docker build -t localhost/my-image:functional-656644 /var/lib/minikube/build/build.3840314466: (2.434800535s)
I0930 10:38:45.017551   56969 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.3840314466
I0930 10:38:45.029407   56969 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.3840314466.tar
I0930 10:38:45.039857   56969 build_images.go:217] Built localhost/my-image:functional-656644 from /tmp/build.3840314466.tar
I0930 10:38:45.039892   56969 build_images.go:133] succeeded building to: functional-656644
I0930 10:38:45.039898   56969 build_images.go:134] failed building to: 
functional_test.go:451: (dbg) Run:  out/minikube-linux-arm64 -p functional-656644 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (3.48s)

TestFunctional/parallel/ImageCommands/Setup (0.63s)

=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:342: (dbg) Run:  docker pull kicbase/echo-server:1.0
functional_test.go:347: (dbg) Run:  docker tag kicbase/echo-server:1.0 kicbase/echo-server:functional-656644
--- PASS: TestFunctional/parallel/ImageCommands/Setup (0.63s)

TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.19s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:355: (dbg) Run:  out/minikube-linux-arm64 -p functional-656644 image load --daemon kicbase/echo-server:functional-656644 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-linux-arm64 -p functional-656644 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.19s)

TestFunctional/parallel/DockerEnv/bash (1.25s)

=== RUN   TestFunctional/parallel/DockerEnv/bash
functional_test.go:499: (dbg) Run:  /bin/bash -c "eval $(out/minikube-linux-arm64 -p functional-656644 docker-env) && out/minikube-linux-arm64 status -p functional-656644"
functional_test.go:522: (dbg) Run:  /bin/bash -c "eval $(out/minikube-linux-arm64 -p functional-656644 docker-env) && docker images"
--- PASS: TestFunctional/parallel/DockerEnv/bash (1.25s)

TestFunctional/parallel/ImageCommands/ImageReloadDaemon (0.9s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:365: (dbg) Run:  out/minikube-linux-arm64 -p functional-656644 image load --daemon kicbase/echo-server:functional-656644 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-linux-arm64 -p functional-656644 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (0.90s)

TestFunctional/parallel/UpdateContextCmd/no_changes (0.23s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2119: (dbg) Run:  out/minikube-linux-arm64 -p functional-656644 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.23s)

TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.18s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2119: (dbg) Run:  out/minikube-linux-arm64 -p functional-656644 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.18s)

TestFunctional/parallel/UpdateContextCmd/no_clusters (0.22s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2119: (dbg) Run:  out/minikube-linux-arm64 -p functional-656644 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.22s)

TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.18s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:235: (dbg) Run:  docker pull kicbase/echo-server:latest
functional_test.go:240: (dbg) Run:  docker tag kicbase/echo-server:latest kicbase/echo-server:functional-656644
functional_test.go:245: (dbg) Run:  out/minikube-linux-arm64 -p functional-656644 image load --daemon kicbase/echo-server:functional-656644 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-linux-arm64 -p functional-656644 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.18s)

TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.4s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:380: (dbg) Run:  out/minikube-linux-arm64 -p functional-656644 image save kicbase/echo-server:functional-656644 /home/jenkins/workspace/Docker_Linux_docker_arm64/echo-server-save.tar --alsologtostderr
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.40s)

TestFunctional/parallel/ImageCommands/ImageRemove (0.6s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:392: (dbg) Run:  out/minikube-linux-arm64 -p functional-656644 image rm kicbase/echo-server:functional-656644 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-linux-arm64 -p functional-656644 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.60s)

TestFunctional/parallel/ServiceCmd/DeployApp (11.27s)

=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1437: (dbg) Run:  kubectl --context functional-656644 create deployment hello-node --image=registry.k8s.io/echoserver-arm:1.8
functional_test.go:1445: (dbg) Run:  kubectl --context functional-656644 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1450: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:344: "hello-node-64b4f8f9ff-fndhq" [ca0b34e3-25ba-4113-991f-886a80ed8c52] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver-arm]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver-arm])
helpers_test.go:344: "hello-node-64b4f8f9ff-fndhq" [ca0b34e3-25ba-4113-991f-886a80ed8c52] Running
functional_test.go:1450: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 11.004772025s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (11.27s)

TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.7s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:409: (dbg) Run:  out/minikube-linux-arm64 -p functional-656644 image load /home/jenkins/workspace/Docker_Linux_docker_arm64/echo-server-save.tar --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-linux-arm64 -p functional-656644 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.70s)

TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.35s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:419: (dbg) Run:  docker rmi kicbase/echo-server:functional-656644
functional_test.go:424: (dbg) Run:  out/minikube-linux-arm64 -p functional-656644 image save --daemon kicbase/echo-server:functional-656644 --alsologtostderr
functional_test.go:432: (dbg) Run:  docker image inspect kicbase/echo-server:functional-656644
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.35s)

TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.71s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-arm64 -p functional-656644 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-arm64 -p functional-656644 tunnel --alsologtostderr]
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-arm64 -p functional-656644 tunnel --alsologtostderr] ...
helpers_test.go:508: unable to kill pid 52411: os: process already finished
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-arm64 -p functional-656644 tunnel --alsologtostderr] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.71s)

TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-linux-arm64 -p functional-656644 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.00s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (9.47s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:212: (dbg) Run:  kubectl --context functional-656644 apply -f testdata/testsvc.yaml
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: waiting 4m0s for pods matching "run=nginx-svc" in namespace "default" ...
helpers_test.go:344: "nginx-svc" [7bdc881b-05bb-434b-a059-b51903445f5f] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx-svc" [7bdc881b-05bb-434b-a059-b51903445f5f] Running
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: run=nginx-svc healthy within 9.004477605s
I0930 10:38:12.518076    7606 kapi.go:150] Service nginx-svc in namespace default found.
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (9.47s)

TestFunctional/parallel/ServiceCmd/List (0.32s)

=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1459: (dbg) Run:  out/minikube-linux-arm64 -p functional-656644 service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.32s)

TestFunctional/parallel/ServiceCmd/JSONOutput (0.32s)

=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1489: (dbg) Run:  out/minikube-linux-arm64 -p functional-656644 service list -o json
functional_test.go:1494: Took "322.278192ms" to run "out/minikube-linux-arm64 -p functional-656644 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.32s)

TestFunctional/parallel/ServiceCmd/HTTPS (0.34s)

=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1509: (dbg) Run:  out/minikube-linux-arm64 -p functional-656644 service --namespace=default --https --url hello-node
functional_test.go:1522: found endpoint: https://192.168.49.2:31987
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.34s)

TestFunctional/parallel/ServiceCmd/Format (0.38s)

=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1540: (dbg) Run:  out/minikube-linux-arm64 -p functional-656644 service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.38s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.09s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:234: (dbg) Run:  kubectl --context functional-656644 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.09s)

TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:299: tunnel at http://10.103.196.128 is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.12s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-linux-arm64 -p functional-656644 tunnel --alsologtostderr] ...
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.12s)

TestFunctional/parallel/ServiceCmd/URL (0.36s)

=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1559: (dbg) Run:  out/minikube-linux-arm64 -p functional-656644 service hello-node --url
functional_test.go:1565: found endpoint for hello-node: http://192.168.49.2:31987
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.36s)

TestFunctional/parallel/ProfileCmd/profile_not_create (0.55s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1270: (dbg) Run:  out/minikube-linux-arm64 profile lis
functional_test.go:1275: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.55s)

TestFunctional/parallel/ProfileCmd/profile_list (0.51s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1310: (dbg) Run:  out/minikube-linux-arm64 profile list
functional_test.go:1315: Took "437.713647ms" to run "out/minikube-linux-arm64 profile list"
functional_test.go:1324: (dbg) Run:  out/minikube-linux-arm64 profile list -l
functional_test.go:1329: Took "76.340382ms" to run "out/minikube-linux-arm64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.51s)

TestFunctional/parallel/ProfileCmd/profile_json_output (0.39s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1361: (dbg) Run:  out/minikube-linux-arm64 profile list -o json
functional_test.go:1366: Took "336.149705ms" to run "out/minikube-linux-arm64 profile list -o json"
functional_test.go:1374: (dbg) Run:  out/minikube-linux-arm64 profile list -o json --light
functional_test.go:1379: Took "54.90406ms" to run "out/minikube-linux-arm64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.39s)

TestFunctional/parallel/MountCmd/any-port (6.93s)

=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-656644 /tmp/TestFunctionalparallelMountCmdany-port2581389105/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1727692703659261678" to /tmp/TestFunctionalparallelMountCmdany-port2581389105/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1727692703659261678" to /tmp/TestFunctionalparallelMountCmdany-port2581389105/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1727692703659261678" to /tmp/TestFunctionalparallelMountCmdany-port2581389105/001/test-1727692703659261678
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-arm64 -p functional-656644 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-656644 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (309.499324ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
I0930 10:38:23.969754    7606 retry.go:31] will retry after 613.86073ms: exit status 1
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-arm64 -p functional-656644 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-arm64 -p functional-656644 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Sep 30 10:38 created-by-test
-rw-r--r-- 1 docker docker 24 Sep 30 10:38 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Sep 30 10:38 test-1727692703659261678
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-arm64 -p functional-656644 ssh cat /mount-9p/test-1727692703659261678
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-656644 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:344: "busybox-mount" [26e0e26b-9ba9-48a8-82a0-1d9f00b81b55] Pending
helpers_test.go:344: "busybox-mount" [26e0e26b-9ba9-48a8-82a0-1d9f00b81b55] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:344: "busybox-mount" [26e0e26b-9ba9-48a8-82a0-1d9f00b81b55] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "busybox-mount" [26e0e26b-9ba9-48a8-82a0-1d9f00b81b55] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 4.003729185s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-656644 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p functional-656644 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p functional-656644 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-arm64 -p functional-656644 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-656644 /tmp/TestFunctionalparallelMountCmdany-port2581389105/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (6.93s)

TestFunctional/parallel/MountCmd/specific-port (1.87s)

=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-656644 /tmp/TestFunctionalparallelMountCmdspecific-port2331356237/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-arm64 -p functional-656644 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-656644 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (434.969752ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
I0930 10:38:31.022210    7606 retry.go:31] will retry after 272.555567ms: exit status 1
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-arm64 -p functional-656644 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-arm64 -p functional-656644 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-656644 /tmp/TestFunctionalparallelMountCmdspecific-port2331356237/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-arm64 -p functional-656644 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-656644 ssh "sudo umount -f /mount-9p": exit status 1 (308.345802ms)

-- stdout --
	umount: /mount-9p: not mounted.

-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-arm64 -p functional-656644 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-656644 /tmp/TestFunctionalparallelMountCmdspecific-port2331356237/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (1.87s)

TestFunctional/parallel/MountCmd/VerifyCleanup (1.73s)

=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-656644 /tmp/TestFunctionalparallelMountCmdVerifyCleanup971937557/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-656644 /tmp/TestFunctionalparallelMountCmdVerifyCleanup971937557/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-656644 /tmp/TestFunctionalparallelMountCmdVerifyCleanup971937557/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-656644 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-656644 ssh "findmnt -T" /mount1: exit status 1 (584.098204ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
I0930 10:38:33.042367    7606 retry.go:31] will retry after 327.891374ms: exit status 1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-656644 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-656644 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-656644 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-arm64 mount -p functional-656644 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-656644 /tmp/TestFunctionalparallelMountCmdVerifyCleanup971937557/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-656644 /tmp/TestFunctionalparallelMountCmdVerifyCleanup971937557/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-656644 /tmp/TestFunctionalparallelMountCmdVerifyCleanup971937557/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (1.73s)

TestFunctional/delete_echo-server_images (0.04s)

=== RUN   TestFunctional/delete_echo-server_images
functional_test.go:190: (dbg) Run:  docker rmi -f kicbase/echo-server:1.0
functional_test.go:190: (dbg) Run:  docker rmi -f kicbase/echo-server:functional-656644
--- PASS: TestFunctional/delete_echo-server_images (0.04s)

TestFunctional/delete_my-image_image (0.02s)

=== RUN   TestFunctional/delete_my-image_image
functional_test.go:198: (dbg) Run:  docker rmi -f localhost/my-image:functional-656644
--- PASS: TestFunctional/delete_my-image_image (0.02s)

TestFunctional/delete_minikube_cached_images (0.02s)

=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:206: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-656644
--- PASS: TestFunctional/delete_minikube_cached_images (0.02s)

TestMultiControlPlane/serial/StartCluster (123.31s)

=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-linux-arm64 start -p ha-734182 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=docker  --container-runtime=docker
E0930 10:39:18.898615    7606 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19734-2285/.minikube/profiles/addons-703944/client.crt: no such file or directory" logger="UnhandledError"
E0930 10:39:18.905324    7606 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19734-2285/.minikube/profiles/addons-703944/client.crt: no such file or directory" logger="UnhandledError"
E0930 10:39:18.916637    7606 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19734-2285/.minikube/profiles/addons-703944/client.crt: no such file or directory" logger="UnhandledError"
E0930 10:39:18.937971    7606 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19734-2285/.minikube/profiles/addons-703944/client.crt: no such file or directory" logger="UnhandledError"
E0930 10:39:18.979302    7606 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19734-2285/.minikube/profiles/addons-703944/client.crt: no such file or directory" logger="UnhandledError"
E0930 10:39:19.060664    7606 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19734-2285/.minikube/profiles/addons-703944/client.crt: no such file or directory" logger="UnhandledError"
E0930 10:39:19.222080    7606 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19734-2285/.minikube/profiles/addons-703944/client.crt: no such file or directory" logger="UnhandledError"
E0930 10:39:19.543905    7606 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19734-2285/.minikube/profiles/addons-703944/client.crt: no such file or directory" logger="UnhandledError"
E0930 10:39:20.185817    7606 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19734-2285/.minikube/profiles/addons-703944/client.crt: no such file or directory" logger="UnhandledError"
E0930 10:39:21.467874    7606 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19734-2285/.minikube/profiles/addons-703944/client.crt: no such file or directory" logger="UnhandledError"
E0930 10:39:24.029270    7606 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19734-2285/.minikube/profiles/addons-703944/client.crt: no such file or directory" logger="UnhandledError"
E0930 10:39:29.150872    7606 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19734-2285/.minikube/profiles/addons-703944/client.crt: no such file or directory" logger="UnhandledError"
E0930 10:39:39.392748    7606 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19734-2285/.minikube/profiles/addons-703944/client.crt: no such file or directory" logger="UnhandledError"
E0930 10:39:59.874150    7606 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19734-2285/.minikube/profiles/addons-703944/client.crt: no such file or directory" logger="UnhandledError"
E0930 10:40:40.836674    7606 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19734-2285/.minikube/profiles/addons-703944/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:101: (dbg) Done: out/minikube-linux-arm64 start -p ha-734182 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=docker  --container-runtime=docker: (2m2.527967435s)
ha_test.go:107: (dbg) Run:  out/minikube-linux-arm64 -p ha-734182 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/StartCluster (123.31s)

TestMultiControlPlane/serial/DeployApp (7.79s)

=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-734182 -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:133: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-734182 -- rollout status deployment/busybox
ha_test.go:133: (dbg) Done: out/minikube-linux-arm64 kubectl -p ha-734182 -- rollout status deployment/busybox: (4.657952001s)
ha_test.go:140: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-734182 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:163: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-734182 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:171: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-734182 -- exec busybox-7dff88458-5s877 -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-734182 -- exec busybox-7dff88458-hql9p -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-734182 -- exec busybox-7dff88458-rtfqx -- nslookup kubernetes.io
ha_test.go:181: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-734182 -- exec busybox-7dff88458-5s877 -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-734182 -- exec busybox-7dff88458-hql9p -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-734182 -- exec busybox-7dff88458-rtfqx -- nslookup kubernetes.default
ha_test.go:189: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-734182 -- exec busybox-7dff88458-5s877 -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-734182 -- exec busybox-7dff88458-hql9p -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-734182 -- exec busybox-7dff88458-rtfqx -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiControlPlane/serial/DeployApp (7.79s)

TestMultiControlPlane/serial/PingHostFromPods (1.68s)

=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-734182 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:207: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-734182 -- exec busybox-7dff88458-5s877 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-734182 -- exec busybox-7dff88458-5s877 -- sh -c "ping -c 1 192.168.49.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-734182 -- exec busybox-7dff88458-hql9p -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-734182 -- exec busybox-7dff88458-hql9p -- sh -c "ping -c 1 192.168.49.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-734182 -- exec busybox-7dff88458-rtfqx -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-734182 -- exec busybox-7dff88458-rtfqx -- sh -c "ping -c 1 192.168.49.1"
--- PASS: TestMultiControlPlane/serial/PingHostFromPods (1.68s)

TestMultiControlPlane/serial/AddWorkerNode (22.74s)

=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-linux-arm64 node add -p ha-734182 -v=7 --alsologtostderr
ha_test.go:228: (dbg) Done: out/minikube-linux-arm64 node add -p ha-734182 -v=7 --alsologtostderr: (21.742049016s)
ha_test.go:234: (dbg) Run:  out/minikube-linux-arm64 -p ha-734182 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/AddWorkerNode (22.74s)

TestMultiControlPlane/serial/NodeLabels (0.09s)

=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-734182 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiControlPlane/serial/NodeLabels (0.09s)

TestMultiControlPlane/serial/HAppyAfterClusterStart (0.97s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterClusterStart (0.97s)

TestMultiControlPlane/serial/CopyFile (18.38s)

=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:326: (dbg) Run:  out/minikube-linux-arm64 -p ha-734182 status --output json -v=7 --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-734182 cp testdata/cp-test.txt ha-734182:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-734182 ssh -n ha-734182 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-734182 cp ha-734182:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile3642189122/001/cp-test_ha-734182.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-734182 ssh -n ha-734182 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-734182 cp ha-734182:/home/docker/cp-test.txt ha-734182-m02:/home/docker/cp-test_ha-734182_ha-734182-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-734182 ssh -n ha-734182 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-734182 ssh -n ha-734182-m02 "sudo cat /home/docker/cp-test_ha-734182_ha-734182-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-734182 cp ha-734182:/home/docker/cp-test.txt ha-734182-m03:/home/docker/cp-test_ha-734182_ha-734182-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-734182 ssh -n ha-734182 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-734182 ssh -n ha-734182-m03 "sudo cat /home/docker/cp-test_ha-734182_ha-734182-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-734182 cp ha-734182:/home/docker/cp-test.txt ha-734182-m04:/home/docker/cp-test_ha-734182_ha-734182-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-734182 ssh -n ha-734182 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-734182 ssh -n ha-734182-m04 "sudo cat /home/docker/cp-test_ha-734182_ha-734182-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-734182 cp testdata/cp-test.txt ha-734182-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-734182 ssh -n ha-734182-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-734182 cp ha-734182-m02:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile3642189122/001/cp-test_ha-734182-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-734182 ssh -n ha-734182-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-734182 cp ha-734182-m02:/home/docker/cp-test.txt ha-734182:/home/docker/cp-test_ha-734182-m02_ha-734182.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-734182 ssh -n ha-734182-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-734182 ssh -n ha-734182 "sudo cat /home/docker/cp-test_ha-734182-m02_ha-734182.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-734182 cp ha-734182-m02:/home/docker/cp-test.txt ha-734182-m03:/home/docker/cp-test_ha-734182-m02_ha-734182-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-734182 ssh -n ha-734182-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-734182 ssh -n ha-734182-m03 "sudo cat /home/docker/cp-test_ha-734182-m02_ha-734182-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-734182 cp ha-734182-m02:/home/docker/cp-test.txt ha-734182-m04:/home/docker/cp-test_ha-734182-m02_ha-734182-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-734182 ssh -n ha-734182-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-734182 ssh -n ha-734182-m04 "sudo cat /home/docker/cp-test_ha-734182-m02_ha-734182-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-734182 cp testdata/cp-test.txt ha-734182-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-734182 ssh -n ha-734182-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-734182 cp ha-734182-m03:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile3642189122/001/cp-test_ha-734182-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-734182 ssh -n ha-734182-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-734182 cp ha-734182-m03:/home/docker/cp-test.txt ha-734182:/home/docker/cp-test_ha-734182-m03_ha-734182.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-734182 ssh -n ha-734182-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-734182 ssh -n ha-734182 "sudo cat /home/docker/cp-test_ha-734182-m03_ha-734182.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-734182 cp ha-734182-m03:/home/docker/cp-test.txt ha-734182-m02:/home/docker/cp-test_ha-734182-m03_ha-734182-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-734182 ssh -n ha-734182-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-734182 ssh -n ha-734182-m02 "sudo cat /home/docker/cp-test_ha-734182-m03_ha-734182-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-734182 cp ha-734182-m03:/home/docker/cp-test.txt ha-734182-m04:/home/docker/cp-test_ha-734182-m03_ha-734182-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-734182 ssh -n ha-734182-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-734182 ssh -n ha-734182-m04 "sudo cat /home/docker/cp-test_ha-734182-m03_ha-734182-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-734182 cp testdata/cp-test.txt ha-734182-m04:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-734182 ssh -n ha-734182-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-734182 cp ha-734182-m04:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile3642189122/001/cp-test_ha-734182-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-734182 ssh -n ha-734182-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-734182 cp ha-734182-m04:/home/docker/cp-test.txt ha-734182:/home/docker/cp-test_ha-734182-m04_ha-734182.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-734182 ssh -n ha-734182-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-734182 ssh -n ha-734182 "sudo cat /home/docker/cp-test_ha-734182-m04_ha-734182.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-734182 cp ha-734182-m04:/home/docker/cp-test.txt ha-734182-m02:/home/docker/cp-test_ha-734182-m04_ha-734182-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-734182 ssh -n ha-734182-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-734182 ssh -n ha-734182-m02 "sudo cat /home/docker/cp-test_ha-734182-m04_ha-734182-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-734182 cp ha-734182-m04:/home/docker/cp-test.txt ha-734182-m03:/home/docker/cp-test_ha-734182-m04_ha-734182-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-734182 ssh -n ha-734182-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-734182 ssh -n ha-734182-m03 "sudo cat /home/docker/cp-test_ha-734182-m04_ha-734182-m03.txt"
--- PASS: TestMultiControlPlane/serial/CopyFile (18.38s)

TestMultiControlPlane/serial/StopSecondaryNode (11.73s)

=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:363: (dbg) Run:  out/minikube-linux-arm64 -p ha-734182 node stop m02 -v=7 --alsologtostderr
ha_test.go:363: (dbg) Done: out/minikube-linux-arm64 -p ha-734182 node stop m02 -v=7 --alsologtostderr: (10.984867612s)
ha_test.go:369: (dbg) Run:  out/minikube-linux-arm64 -p ha-734182 status -v=7 --alsologtostderr
ha_test.go:369: (dbg) Non-zero exit: out/minikube-linux-arm64 -p ha-734182 status -v=7 --alsologtostderr: exit status 7 (743.926815ms)

-- stdout --
	ha-734182
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-734182-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-734182-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-734182-m04
	type: Worker
	host: Running
	kubelet: Running
	

-- /stdout --
** stderr ** 
	I0930 10:41:54.022801   79379 out.go:345] Setting OutFile to fd 1 ...
	I0930 10:41:54.023047   79379 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0930 10:41:54.023086   79379 out.go:358] Setting ErrFile to fd 2...
	I0930 10:41:54.023109   79379 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0930 10:41:54.023594   79379 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19734-2285/.minikube/bin
	I0930 10:41:54.024293   79379 out.go:352] Setting JSON to false
	I0930 10:41:54.024326   79379 mustload.go:65] Loading cluster: ha-734182
	I0930 10:41:54.024753   79379 config.go:182] Loaded profile config "ha-734182": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0930 10:41:54.024773   79379 status.go:174] checking status of ha-734182 ...
	I0930 10:41:54.025324   79379 cli_runner.go:164] Run: docker container inspect ha-734182 --format={{.State.Status}}
	I0930 10:41:54.025662   79379 notify.go:220] Checking for updates...
	I0930 10:41:54.045303   79379 status.go:364] ha-734182 host status = "Running" (err=<nil>)
	I0930 10:41:54.045327   79379 host.go:66] Checking if "ha-734182" exists ...
	I0930 10:41:54.045631   79379 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-734182
	I0930 10:41:54.089072   79379 host.go:66] Checking if "ha-734182" exists ...
	I0930 10:41:54.089403   79379 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0930 10:41:54.089448   79379 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-734182
	I0930 10:41:54.108827   79379 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/19734-2285/.minikube/machines/ha-734182/id_rsa Username:docker}
	I0930 10:41:54.211168   79379 ssh_runner.go:195] Run: systemctl --version
	I0930 10:41:54.215935   79379 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0930 10:41:54.230193   79379 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0930 10:41:54.292709   79379 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:53 OomKillDisable:true NGoroutines:71 SystemTime:2024-09-30 10:41:54.282393379 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1070-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:27.3.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: bridge-nf-call-iptables is disabled WARNING: bridge-nf-call-ip6tables is disabled] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.17.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.7]] Warnings:<nil>}}
	I0930 10:41:54.293372   79379 kubeconfig.go:125] found "ha-734182" server: "https://192.168.49.254:8443"
	I0930 10:41:54.293403   79379 api_server.go:166] Checking apiserver status ...
	I0930 10:41:54.293444   79379 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 10:41:54.304999   79379 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/2281/cgroup
	I0930 10:41:54.314254   79379 api_server.go:182] apiserver freezer: "2:freezer:/docker/a091607acb1b8822d5fbcb97a7463fcd8c296327e5f5b31f8d4e5b7e1d78b798/kubepods/burstable/pod75b9b400e8e96c948bf07cfc9c620d03/cf7b2c4991304b53c122f268daefcbbbaf0fd02645b95e048324923c2ed86761"
	I0930 10:41:54.314326   79379 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/a091607acb1b8822d5fbcb97a7463fcd8c296327e5f5b31f8d4e5b7e1d78b798/kubepods/burstable/pod75b9b400e8e96c948bf07cfc9c620d03/cf7b2c4991304b53c122f268daefcbbbaf0fd02645b95e048324923c2ed86761/freezer.state
	I0930 10:41:54.324378   79379 api_server.go:204] freezer state: "THAWED"
	I0930 10:41:54.324407   79379 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I0930 10:41:54.333944   79379 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I0930 10:41:54.333973   79379 status.go:456] ha-734182 apiserver status = Running (err=<nil>)
	I0930 10:41:54.333983   79379 status.go:176] ha-734182 status: &{Name:ha-734182 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0930 10:41:54.334028   79379 status.go:174] checking status of ha-734182-m02 ...
	I0930 10:41:54.334367   79379 cli_runner.go:164] Run: docker container inspect ha-734182-m02 --format={{.State.Status}}
	I0930 10:41:54.350536   79379 status.go:364] ha-734182-m02 host status = "Stopped" (err=<nil>)
	I0930 10:41:54.350558   79379 status.go:377] host is not running, skipping remaining checks
	I0930 10:41:54.350565   79379 status.go:176] ha-734182-m02 status: &{Name:ha-734182-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0930 10:41:54.350584   79379 status.go:174] checking status of ha-734182-m03 ...
	I0930 10:41:54.350900   79379 cli_runner.go:164] Run: docker container inspect ha-734182-m03 --format={{.State.Status}}
	I0930 10:41:54.370533   79379 status.go:364] ha-734182-m03 host status = "Running" (err=<nil>)
	I0930 10:41:54.370559   79379 host.go:66] Checking if "ha-734182-m03" exists ...
	I0930 10:41:54.370861   79379 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-734182-m03
	I0930 10:41:54.387988   79379 host.go:66] Checking if "ha-734182-m03" exists ...
	I0930 10:41:54.388293   79379 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0930 10:41:54.388348   79379 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-734182-m03
	I0930 10:41:54.406705   79379 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32793 SSHKeyPath:/home/jenkins/minikube-integration/19734-2285/.minikube/machines/ha-734182-m03/id_rsa Username:docker}
	I0930 10:41:54.496672   79379 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0930 10:41:54.513359   79379 kubeconfig.go:125] found "ha-734182" server: "https://192.168.49.254:8443"
	I0930 10:41:54.513391   79379 api_server.go:166] Checking apiserver status ...
	I0930 10:41:54.513456   79379 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 10:41:54.526484   79379 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/2119/cgroup
	I0930 10:41:54.535674   79379 api_server.go:182] apiserver freezer: "2:freezer:/docker/6ebad3ff31b058ea71905186e5fc093a3592821f4cd4bf40b090c2f376303d60/kubepods/burstable/pod9a93b079df3e3518116345f97a38d7ab/b9a76904691fb60de875fa23063994d95e56be75a5a01309d7643b141adaccc9"
	I0930 10:41:54.535745   79379 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/6ebad3ff31b058ea71905186e5fc093a3592821f4cd4bf40b090c2f376303d60/kubepods/burstable/pod9a93b079df3e3518116345f97a38d7ab/b9a76904691fb60de875fa23063994d95e56be75a5a01309d7643b141adaccc9/freezer.state
	I0930 10:41:54.544625   79379 api_server.go:204] freezer state: "THAWED"
	I0930 10:41:54.544657   79379 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I0930 10:41:54.552363   79379 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I0930 10:41:54.552394   79379 status.go:456] ha-734182-m03 apiserver status = Running (err=<nil>)
	I0930 10:41:54.552404   79379 status.go:176] ha-734182-m03 status: &{Name:ha-734182-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0930 10:41:54.552449   79379 status.go:174] checking status of ha-734182-m04 ...
	I0930 10:41:54.552780   79379 cli_runner.go:164] Run: docker container inspect ha-734182-m04 --format={{.State.Status}}
	I0930 10:41:54.569230   79379 status.go:364] ha-734182-m04 host status = "Running" (err=<nil>)
	I0930 10:41:54.569254   79379 host.go:66] Checking if "ha-734182-m04" exists ...
	I0930 10:41:54.569548   79379 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-734182-m04
	I0930 10:41:54.587376   79379 host.go:66] Checking if "ha-734182-m04" exists ...
	I0930 10:41:54.587852   79379 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0930 10:41:54.587898   79379 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-734182-m04
	I0930 10:41:54.610231   79379 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32798 SSHKeyPath:/home/jenkins/minikube-integration/19734-2285/.minikube/machines/ha-734182-m04/id_rsa Username:docker}
	I0930 10:41:54.700701   79379 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0930 10:41:54.712475   79379 status.go:176] ha-734182-m04 status: &{Name:ha-734182-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiControlPlane/serial/StopSecondaryNode (11.73s)

TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.73s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:390: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.73s)

TestMultiControlPlane/serial/RestartSecondaryNode (54.45s)

=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:420: (dbg) Run:  out/minikube-linux-arm64 -p ha-734182 node start m02 -v=7 --alsologtostderr
E0930 10:42:02.759691    7606 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19734-2285/.minikube/profiles/addons-703944/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:420: (dbg) Done: out/minikube-linux-arm64 -p ha-734182 node start m02 -v=7 --alsologtostderr: (53.439152947s)
ha_test.go:428: (dbg) Run:  out/minikube-linux-arm64 -p ha-734182 status -v=7 --alsologtostderr
ha_test.go:448: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiControlPlane/serial/RestartSecondaryNode (54.45s)

TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (1.03s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
ha_test.go:281: (dbg) Done: out/minikube-linux-arm64 profile list --output json: (1.025212443s)
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (1.03s)

TestMultiControlPlane/serial/RestartClusterKeepsNodes (178.98s)

=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:456: (dbg) Run:  out/minikube-linux-arm64 node list -p ha-734182 -v=7 --alsologtostderr
ha_test.go:462: (dbg) Run:  out/minikube-linux-arm64 stop -p ha-734182 -v=7 --alsologtostderr
E0930 10:43:00.511389    7606 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19734-2285/.minikube/profiles/functional-656644/client.crt: no such file or directory" logger="UnhandledError"
E0930 10:43:00.517740    7606 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19734-2285/.minikube/profiles/functional-656644/client.crt: no such file or directory" logger="UnhandledError"
E0930 10:43:00.529181    7606 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19734-2285/.minikube/profiles/functional-656644/client.crt: no such file or directory" logger="UnhandledError"
E0930 10:43:00.550613    7606 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19734-2285/.minikube/profiles/functional-656644/client.crt: no such file or directory" logger="UnhandledError"
E0930 10:43:00.591967    7606 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19734-2285/.minikube/profiles/functional-656644/client.crt: no such file or directory" logger="UnhandledError"
E0930 10:43:00.673293    7606 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19734-2285/.minikube/profiles/functional-656644/client.crt: no such file or directory" logger="UnhandledError"
E0930 10:43:00.834682    7606 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19734-2285/.minikube/profiles/functional-656644/client.crt: no such file or directory" logger="UnhandledError"
E0930 10:43:01.156398    7606 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19734-2285/.minikube/profiles/functional-656644/client.crt: no such file or directory" logger="UnhandledError"
E0930 10:43:01.798310    7606 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19734-2285/.minikube/profiles/functional-656644/client.crt: no such file or directory" logger="UnhandledError"
E0930 10:43:03.079668    7606 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19734-2285/.minikube/profiles/functional-656644/client.crt: no such file or directory" logger="UnhandledError"
E0930 10:43:05.641026    7606 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19734-2285/.minikube/profiles/functional-656644/client.crt: no such file or directory" logger="UnhandledError"
E0930 10:43:10.762816    7606 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19734-2285/.minikube/profiles/functional-656644/client.crt: no such file or directory" logger="UnhandledError"
E0930 10:43:21.004084    7606 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19734-2285/.minikube/profiles/functional-656644/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:462: (dbg) Done: out/minikube-linux-arm64 stop -p ha-734182 -v=7 --alsologtostderr: (33.948151284s)
ha_test.go:467: (dbg) Run:  out/minikube-linux-arm64 start -p ha-734182 --wait=true -v=7 --alsologtostderr
E0930 10:43:41.485490    7606 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19734-2285/.minikube/profiles/functional-656644/client.crt: no such file or directory" logger="UnhandledError"
E0930 10:44:18.898109    7606 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19734-2285/.minikube/profiles/addons-703944/client.crt: no such file or directory" logger="UnhandledError"
E0930 10:44:22.447286    7606 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19734-2285/.minikube/profiles/functional-656644/client.crt: no such file or directory" logger="UnhandledError"
E0930 10:44:46.601409    7606 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19734-2285/.minikube/profiles/addons-703944/client.crt: no such file or directory" logger="UnhandledError"
E0930 10:45:44.368864    7606 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19734-2285/.minikube/profiles/functional-656644/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:467: (dbg) Done: out/minikube-linux-arm64 start -p ha-734182 --wait=true -v=7 --alsologtostderr: (2m24.88778388s)
ha_test.go:472: (dbg) Run:  out/minikube-linux-arm64 node list -p ha-734182
--- PASS: TestMultiControlPlane/serial/RestartClusterKeepsNodes (178.98s)

TestMultiControlPlane/serial/DeleteSecondaryNode (11.22s)

=== RUN   TestMultiControlPlane/serial/DeleteSecondaryNode
ha_test.go:487: (dbg) Run:  out/minikube-linux-arm64 -p ha-734182 node delete m03 -v=7 --alsologtostderr
ha_test.go:487: (dbg) Done: out/minikube-linux-arm64 -p ha-734182 node delete m03 -v=7 --alsologtostderr: (10.343720677s)
ha_test.go:493: (dbg) Run:  out/minikube-linux-arm64 -p ha-734182 status -v=7 --alsologtostderr
ha_test.go:511: (dbg) Run:  kubectl get nodes
ha_test.go:519: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/DeleteSecondaryNode (11.22s)

TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.76s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:390: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.76s)

TestMultiControlPlane/serial/StopCluster (32.59s)

=== RUN   TestMultiControlPlane/serial/StopCluster
ha_test.go:531: (dbg) Run:  out/minikube-linux-arm64 -p ha-734182 stop -v=7 --alsologtostderr
ha_test.go:531: (dbg) Done: out/minikube-linux-arm64 -p ha-734182 stop -v=7 --alsologtostderr: (32.493966719s)
ha_test.go:537: (dbg) Run:  out/minikube-linux-arm64 -p ha-734182 status -v=7 --alsologtostderr
ha_test.go:537: (dbg) Non-zero exit: out/minikube-linux-arm64 -p ha-734182 status -v=7 --alsologtostderr: exit status 7 (96.790034ms)

-- stdout --
	ha-734182
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-734182-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-734182-m04
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I0930 10:46:34.422206  105763 out.go:345] Setting OutFile to fd 1 ...
	I0930 10:46:34.422766  105763 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0930 10:46:34.422779  105763 out.go:358] Setting ErrFile to fd 2...
	I0930 10:46:34.422785  105763 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0930 10:46:34.423019  105763 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19734-2285/.minikube/bin
	I0930 10:46:34.423204  105763 out.go:352] Setting JSON to false
	I0930 10:46:34.423233  105763 mustload.go:65] Loading cluster: ha-734182
	I0930 10:46:34.423691  105763 config.go:182] Loaded profile config "ha-734182": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0930 10:46:34.423712  105763 status.go:174] checking status of ha-734182 ...
	I0930 10:46:34.423966  105763 notify.go:220] Checking for updates...
	I0930 10:46:34.424248  105763 cli_runner.go:164] Run: docker container inspect ha-734182 --format={{.State.Status}}
	I0930 10:46:34.441330  105763 status.go:364] ha-734182 host status = "Stopped" (err=<nil>)
	I0930 10:46:34.441352  105763 status.go:377] host is not running, skipping remaining checks
	I0930 10:46:34.441358  105763 status.go:176] ha-734182 status: &{Name:ha-734182 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0930 10:46:34.441386  105763 status.go:174] checking status of ha-734182-m02 ...
	I0930 10:46:34.441685  105763 cli_runner.go:164] Run: docker container inspect ha-734182-m02 --format={{.State.Status}}
	I0930 10:46:34.456941  105763 status.go:364] ha-734182-m02 host status = "Stopped" (err=<nil>)
	I0930 10:46:34.456960  105763 status.go:377] host is not running, skipping remaining checks
	I0930 10:46:34.456966  105763 status.go:176] ha-734182-m02 status: &{Name:ha-734182-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0930 10:46:34.456984  105763 status.go:174] checking status of ha-734182-m04 ...
	I0930 10:46:34.457277  105763 cli_runner.go:164] Run: docker container inspect ha-734182-m04 --format={{.State.Status}}
	I0930 10:46:34.476337  105763 status.go:364] ha-734182-m04 host status = "Stopped" (err=<nil>)
	I0930 10:46:34.476368  105763 status.go:377] host is not running, skipping remaining checks
	I0930 10:46:34.476375  105763 status.go:176] ha-734182-m04 status: &{Name:ha-734182-m04 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiControlPlane/serial/StopCluster (32.59s)

TestMultiControlPlane/serial/RestartCluster (166.88s)

=== RUN   TestMultiControlPlane/serial/RestartCluster
ha_test.go:560: (dbg) Run:  out/minikube-linux-arm64 start -p ha-734182 --wait=true -v=7 --alsologtostderr --driver=docker  --container-runtime=docker
E0930 10:48:00.510600    7606 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19734-2285/.minikube/profiles/functional-656644/client.crt: no such file or directory" logger="UnhandledError"
E0930 10:48:28.211980    7606 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19734-2285/.minikube/profiles/functional-656644/client.crt: no such file or directory" logger="UnhandledError"
E0930 10:49:18.898198    7606 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19734-2285/.minikube/profiles/addons-703944/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:560: (dbg) Done: out/minikube-linux-arm64 start -p ha-734182 --wait=true -v=7 --alsologtostderr --driver=docker  --container-runtime=docker: (2m45.993318075s)
ha_test.go:566: (dbg) Run:  out/minikube-linux-arm64 -p ha-734182 status -v=7 --alsologtostderr
ha_test.go:584: (dbg) Run:  kubectl get nodes
ha_test.go:592: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/RestartCluster (166.88s)

TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.71s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterClusterRestart
ha_test.go:390: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.71s)

TestMultiControlPlane/serial/AddSecondaryNode (44.35s)

=== RUN   TestMultiControlPlane/serial/AddSecondaryNode
ha_test.go:605: (dbg) Run:  out/minikube-linux-arm64 node add -p ha-734182 --control-plane -v=7 --alsologtostderr
ha_test.go:605: (dbg) Done: out/minikube-linux-arm64 node add -p ha-734182 --control-plane -v=7 --alsologtostderr: (43.338933822s)
ha_test.go:611: (dbg) Run:  out/minikube-linux-arm64 -p ha-734182 status -v=7 --alsologtostderr
ha_test.go:611: (dbg) Done: out/minikube-linux-arm64 -p ha-734182 status -v=7 --alsologtostderr: (1.014153121s)
--- PASS: TestMultiControlPlane/serial/AddSecondaryNode (44.35s)

TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (1s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd
ha_test.go:281: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
ha_test.go:281: (dbg) Done: out/minikube-linux-arm64 profile list --output json: (1.001479234s)
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (1.00s)

TestImageBuild/serial/Setup (29.46s)

=== RUN   TestImageBuild/serial/Setup
image_test.go:69: (dbg) Run:  out/minikube-linux-arm64 start -p image-985489 --driver=docker  --container-runtime=docker
image_test.go:69: (dbg) Done: out/minikube-linux-arm64 start -p image-985489 --driver=docker  --container-runtime=docker: (29.45645902s)
--- PASS: TestImageBuild/serial/Setup (29.46s)

TestImageBuild/serial/NormalBuild (1.95s)

=== RUN   TestImageBuild/serial/NormalBuild
image_test.go:78: (dbg) Run:  out/minikube-linux-arm64 image build -t aaa:latest ./testdata/image-build/test-normal -p image-985489
image_test.go:78: (dbg) Done: out/minikube-linux-arm64 image build -t aaa:latest ./testdata/image-build/test-normal -p image-985489: (1.949064258s)
--- PASS: TestImageBuild/serial/NormalBuild (1.95s)

TestImageBuild/serial/BuildWithBuildArg (0.99s)

=== RUN   TestImageBuild/serial/BuildWithBuildArg
image_test.go:99: (dbg) Run:  out/minikube-linux-arm64 image build -t aaa:latest --build-opt=build-arg=ENV_A=test_env_str --build-opt=no-cache ./testdata/image-build/test-arg -p image-985489
--- PASS: TestImageBuild/serial/BuildWithBuildArg (0.99s)

TestImageBuild/serial/BuildWithDockerIgnore (0.81s)

=== RUN   TestImageBuild/serial/BuildWithDockerIgnore
image_test.go:133: (dbg) Run:  out/minikube-linux-arm64 image build -t aaa:latest ./testdata/image-build/test-normal --build-opt=no-cache -p image-985489
--- PASS: TestImageBuild/serial/BuildWithDockerIgnore (0.81s)

TestImageBuild/serial/BuildWithSpecifiedDockerfile (0.84s)

=== RUN   TestImageBuild/serial/BuildWithSpecifiedDockerfile
image_test.go:88: (dbg) Run:  out/minikube-linux-arm64 image build -t aaa:latest -f inner/Dockerfile ./testdata/image-build/test-f -p image-985489
--- PASS: TestImageBuild/serial/BuildWithSpecifiedDockerfile (0.84s)

TestJSONOutput/start/Command (75.24s)

=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 start -p json-output-850593 --output=json --user=testUser --memory=2200 --wait=true --driver=docker  --container-runtime=docker
json_output_test.go:63: (dbg) Done: out/minikube-linux-arm64 start -p json-output-850593 --output=json --user=testUser --memory=2200 --wait=true --driver=docker  --container-runtime=docker: (1m15.238556012s)
--- PASS: TestJSONOutput/start/Command (75.24s)

TestJSONOutput/start/Audit (0s)

=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/pause/Command (0.54s)

=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 pause -p json-output-850593 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.54s)

TestJSONOutput/pause/Audit (0s)

=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/unpause/Command (0.5s)

=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 unpause -p json-output-850593 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.50s)

TestJSONOutput/unpause/Audit (0s)

=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/stop/Command (5.76s)

=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 stop -p json-output-850593 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-linux-arm64 stop -p json-output-850593 --output=json --user=testUser: (5.764646147s)
--- PASS: TestJSONOutput/stop/Command (5.76s)

TestJSONOutput/stop/Audit (0s)

=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

TestErrorJSONOutput (0.21s)

=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-arm64 start -p json-output-error-795704 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p json-output-error-795704 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (76.756264ms)

-- stdout --
	{"specversion":"1.0","id":"3a816f09-789c-4710-a3ae-33f8eb8e808f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-795704] minikube v1.34.0 on Ubuntu 20.04 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"9d151132-165d-41a1-a197-87033683f66a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=19734"}}
	{"specversion":"1.0","id":"1b59cff1-ce6d-4046-9fa3-e00b745a810a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"f5ca15fd-aec5-4ca4-89cd-69fce1cd5377","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/19734-2285/kubeconfig"}}
	{"specversion":"1.0","id":"56684099-d3cd-4ccf-8474-6b9e588630a5","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/19734-2285/.minikube"}}
	{"specversion":"1.0","id":"8c875675-7eb4-4900-a817-e36d71fea6d1","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-arm64"}}
	{"specversion":"1.0","id":"bf51d877-7304-4721-a745-a09e5faa96a0","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"77337dbc-c524-4207-9417-1857b2d7e141","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/arm64","name":"DRV_UNSUPPORTED_OS","url":""}}

                                                
                                                
-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-795704" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p json-output-error-795704
--- PASS: TestErrorJSONOutput (0.21s)

                                                
                                    
TestKicCustomNetwork/create_custom_network (34.23s)

                                                
                                                
=== RUN   TestKicCustomNetwork/create_custom_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-arm64 start -p docker-network-447168 --network=
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-arm64 start -p docker-network-447168 --network=: (32.209411332s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-447168" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p docker-network-447168
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p docker-network-447168: (2.000087502s)
--- PASS: TestKicCustomNetwork/create_custom_network (34.23s)

                                                
                                    
TestKicCustomNetwork/use_default_bridge_network (30.64s)

                                                
                                                
=== RUN   TestKicCustomNetwork/use_default_bridge_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-arm64 start -p docker-network-789296 --network=bridge
E0930 10:53:00.510464    7606 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19734-2285/.minikube/profiles/functional-656644/client.crt: no such file or directory" logger="UnhandledError"
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-arm64 start -p docker-network-789296 --network=bridge: (28.614604932s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-789296" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p docker-network-789296
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p docker-network-789296: (1.993212637s)
--- PASS: TestKicCustomNetwork/use_default_bridge_network (30.64s)

                                                
                                    
TestKicExistingNetwork (31.66s)

                                                
                                                
=== RUN   TestKicExistingNetwork
I0930 10:53:22.013369    7606 cli_runner.go:164] Run: docker network inspect existing-network --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
W0930 10:53:22.028952    7606 cli_runner.go:211] docker network inspect existing-network --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
I0930 10:53:22.029048    7606 network_create.go:284] running [docker network inspect existing-network] to gather additional debugging logs...
I0930 10:53:22.029072    7606 cli_runner.go:164] Run: docker network inspect existing-network
W0930 10:53:22.044613    7606 cli_runner.go:211] docker network inspect existing-network returned with exit code 1
I0930 10:53:22.044643    7606 network_create.go:287] error running [docker network inspect existing-network]: docker network inspect existing-network: exit status 1
stdout:
[]

                                                
                                                
stderr:
Error response from daemon: network existing-network not found
I0930 10:53:22.044659    7606 network_create.go:289] output of [docker network inspect existing-network]: -- stdout --
[]

                                                
                                                
-- /stdout --
** stderr ** 
Error response from daemon: network existing-network not found

                                                
                                                
** /stderr **
I0930 10:53:22.044771    7606 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
I0930 10:53:22.060873    7606 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-4711e3caed76 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:02:42:51:b2:8d:71} reservation:<nil>}
I0930 10:53:22.062013    7606 network.go:206] using free private subnet 192.168.58.0/24: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x4001d0b030}
I0930 10:53:22.062047    7606 network_create.go:124] attempt to create docker network existing-network 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500 ...
I0930 10:53:22.062101    7606 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=existing-network existing-network
I0930 10:53:22.136227    7606 network_create.go:108] docker network existing-network 192.168.58.0/24 created
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
kic_custom_network_test.go:93: (dbg) Run:  out/minikube-linux-arm64 start -p existing-network-251535 --network=existing-network
kic_custom_network_test.go:93: (dbg) Done: out/minikube-linux-arm64 start -p existing-network-251535 --network=existing-network: (29.639582477s)
helpers_test.go:175: Cleaning up "existing-network-251535" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p existing-network-251535
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p existing-network-251535: (1.862197671s)
I0930 10:53:53.660574    7606 cli_runner.go:164] Run: docker network ls --filter=label=existing-network --format {{.Name}}
--- PASS: TestKicExistingNetwork (31.66s)

                                                
                                    
TestKicCustomSubnet (33.71s)

                                                
                                                
=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p custom-subnet-306627 --subnet=192.168.60.0/24
E0930 10:54:18.898173    7606 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19734-2285/.minikube/profiles/addons-703944/client.crt: no such file or directory" logger="UnhandledError"
kic_custom_network_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p custom-subnet-306627 --subnet=192.168.60.0/24: (31.629087266s)
kic_custom_network_test.go:161: (dbg) Run:  docker network inspect custom-subnet-306627 --format "{{(index .IPAM.Config 0).Subnet}}"
helpers_test.go:175: Cleaning up "custom-subnet-306627" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p custom-subnet-306627
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p custom-subnet-306627: (2.062529096s)
--- PASS: TestKicCustomSubnet (33.71s)

                                                
                                    
TestKicStaticIP (31.59s)

                                                
                                                
=== RUN   TestKicStaticIP
kic_custom_network_test.go:132: (dbg) Run:  out/minikube-linux-arm64 start -p static-ip-567741 --static-ip=192.168.200.200
kic_custom_network_test.go:132: (dbg) Done: out/minikube-linux-arm64 start -p static-ip-567741 --static-ip=192.168.200.200: (29.415501531s)
kic_custom_network_test.go:138: (dbg) Run:  out/minikube-linux-arm64 -p static-ip-567741 ip
helpers_test.go:175: Cleaning up "static-ip-567741" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p static-ip-567741
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p static-ip-567741: (2.027180543s)
--- PASS: TestKicStaticIP (31.59s)

                                                
                                    
TestMainNoArgs (0.05s)

                                                
                                                
=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-linux-arm64
--- PASS: TestMainNoArgs (0.05s)

                                                
                                    
TestMinikubeProfile (69.73s)

                                                
                                                
=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-arm64 start -p first-103783 --driver=docker  --container-runtime=docker
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-arm64 start -p first-103783 --driver=docker  --container-runtime=docker: (33.239236204s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-arm64 start -p second-106499 --driver=docker  --container-runtime=docker
E0930 10:55:41.963584    7606 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19734-2285/.minikube/profiles/addons-703944/client.crt: no such file or directory" logger="UnhandledError"
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-arm64 start -p second-106499 --driver=docker  --container-runtime=docker: (31.148594217s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-arm64 profile first-103783
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-arm64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-arm64 profile second-106499
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-arm64 profile list -ojson
helpers_test.go:175: Cleaning up "second-106499" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p second-106499
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p second-106499: (2.046115174s)
helpers_test.go:175: Cleaning up "first-103783" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p first-103783
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p first-103783: (2.016917988s)
--- PASS: TestMinikubeProfile (69.73s)

                                                
                                    
TestMountStart/serial/StartWithMountFirst (7.77s)

                                                
                                                
=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-1-958742 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=docker
mount_start_test.go:98: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-1-958742 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=docker: (6.765435493s)
--- PASS: TestMountStart/serial/StartWithMountFirst (7.77s)

                                                
                                    
TestMountStart/serial/VerifyMountFirst (0.24s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-1-958742 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountFirst (0.24s)

                                                
                                    
TestMountStart/serial/StartWithMountSecond (7.18s)

                                                
                                                
=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-2-960736 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=docker
mount_start_test.go:98: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-2-960736 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=docker: (6.181043061s)
--- PASS: TestMountStart/serial/StartWithMountSecond (7.18s)

                                                
                                    
TestMountStart/serial/VerifyMountSecond (0.25s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-960736 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountSecond (0.25s)

                                                
                                    
TestMountStart/serial/DeleteFirst (1.47s)

                                                
                                                
=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-arm64 delete -p mount-start-1-958742 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-arm64 delete -p mount-start-1-958742 --alsologtostderr -v=5: (1.471848903s)
--- PASS: TestMountStart/serial/DeleteFirst (1.47s)

                                                
                                    
TestMountStart/serial/VerifyMountPostDelete (0.24s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-960736 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.24s)

                                                
                                    
TestMountStart/serial/Stop (1.2s)

                                                
                                                
=== RUN   TestMountStart/serial/Stop
mount_start_test.go:155: (dbg) Run:  out/minikube-linux-arm64 stop -p mount-start-2-960736
mount_start_test.go:155: (dbg) Done: out/minikube-linux-arm64 stop -p mount-start-2-960736: (1.200534646s)
--- PASS: TestMountStart/serial/Stop (1.20s)

                                                
                                    
TestMountStart/serial/RestartStopped (8.03s)

                                                
                                                
=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:166: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-2-960736
mount_start_test.go:166: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-2-960736: (7.025716032s)
--- PASS: TestMountStart/serial/RestartStopped (8.03s)

                                                
                                    
TestMountStart/serial/VerifyMountPostStop (0.25s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-960736 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.25s)

                                                
                                    
TestMultiNode/serial/FreshStart2Nodes (80.61s)

                                                
                                                
=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-809629 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker  --container-runtime=docker
multinode_test.go:96: (dbg) Done: out/minikube-linux-arm64 start -p multinode-809629 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker  --container-runtime=docker: (1m20.080336013s)
multinode_test.go:102: (dbg) Run:  out/minikube-linux-arm64 -p multinode-809629 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (80.61s)

                                                
                                    
TestMultiNode/serial/DeployApp2Nodes (43.81s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-809629 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:498: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-809629 -- rollout status deployment/busybox
E0930 10:58:00.510543    7606 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19734-2285/.minikube/profiles/functional-656644/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:498: (dbg) Done: out/minikube-linux-arm64 kubectl -p multinode-809629 -- rollout status deployment/busybox: (5.1900223s)
multinode_test.go:505: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-809629 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:514: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.3'\n\n-- /stdout --"
I0930 10:58:03.172081    7606 retry.go:31] will retry after 863.913061ms: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.3'\n\n-- /stdout --"
multinode_test.go:505: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-809629 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:514: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.3'\n\n-- /stdout --"
I0930 10:58:04.180064    7606 retry.go:31] will retry after 1.279599783s: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.3'\n\n-- /stdout --"
multinode_test.go:505: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-809629 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:514: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.3'\n\n-- /stdout --"
I0930 10:58:05.599536    7606 retry.go:31] will retry after 1.230560426s: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.3'\n\n-- /stdout --"
multinode_test.go:505: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-809629 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:514: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.3'\n\n-- /stdout --"
I0930 10:58:06.968022    7606 retry.go:31] will retry after 3.911566614s: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.3'\n\n-- /stdout --"
multinode_test.go:505: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-809629 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:514: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.3'\n\n-- /stdout --"
I0930 10:58:11.016789    7606 retry.go:31] will retry after 6.642531626s: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.3'\n\n-- /stdout --"
multinode_test.go:505: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-809629 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:514: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.3'\n\n-- /stdout --"
I0930 10:58:17.792295    7606 retry.go:31] will retry after 6.269512843s: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.3'\n\n-- /stdout --"
multinode_test.go:505: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-809629 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:514: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.3'\n\n-- /stdout --"
I0930 10:58:24.209471    7606 retry.go:31] will retry after 15.390620623s: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.3'\n\n-- /stdout --"
multinode_test.go:505: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-809629 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:528: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-809629 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:536: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-809629 -- exec busybox-7dff88458-4q5k6 -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-809629 -- exec busybox-7dff88458-rgkxn -- nslookup kubernetes.io
multinode_test.go:546: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-809629 -- exec busybox-7dff88458-4q5k6 -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-809629 -- exec busybox-7dff88458-rgkxn -- nslookup kubernetes.default
multinode_test.go:554: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-809629 -- exec busybox-7dff88458-4q5k6 -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-809629 -- exec busybox-7dff88458-rgkxn -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (43.81s)

                                                
                                    
TestMultiNode/serial/PingHostFrom2Pods (0.99s)

                                                
                                                
=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-809629 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:572: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-809629 -- exec busybox-7dff88458-4q5k6 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-809629 -- exec busybox-7dff88458-4q5k6 -- sh -c "ping -c 1 192.168.67.1"
multinode_test.go:572: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-809629 -- exec busybox-7dff88458-rgkxn -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-809629 -- exec busybox-7dff88458-rgkxn -- sh -c "ping -c 1 192.168.67.1"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (0.99s)

                                                
                                    
TestMultiNode/serial/AddNode (17.47s)

                                                
                                                
=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-linux-arm64 node add -p multinode-809629 -v 3 --alsologtostderr
multinode_test.go:121: (dbg) Done: out/minikube-linux-arm64 node add -p multinode-809629 -v 3 --alsologtostderr: (16.801936999s)
multinode_test.go:127: (dbg) Run:  out/minikube-linux-arm64 -p multinode-809629 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (17.47s)

                                                
                                    
TestMultiNode/serial/MultiNodeLabels (0.09s)

                                                
                                                
=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-809629 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.09s)

                                                
                                    
TestMultiNode/serial/ProfileList (0.69s)

                                                
                                                
=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.69s)

                                                
                                    
TestMultiNode/serial/CopyFile (9.51s)

                                                
                                                
=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-linux-arm64 -p multinode-809629 status --output json --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-809629 cp testdata/cp-test.txt multinode-809629:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-809629 ssh -n multinode-809629 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-809629 cp multinode-809629:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile2625146297/001/cp-test_multinode-809629.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-809629 ssh -n multinode-809629 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-809629 cp multinode-809629:/home/docker/cp-test.txt multinode-809629-m02:/home/docker/cp-test_multinode-809629_multinode-809629-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-809629 ssh -n multinode-809629 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-809629 ssh -n multinode-809629-m02 "sudo cat /home/docker/cp-test_multinode-809629_multinode-809629-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-809629 cp multinode-809629:/home/docker/cp-test.txt multinode-809629-m03:/home/docker/cp-test_multinode-809629_multinode-809629-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-809629 ssh -n multinode-809629 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-809629 ssh -n multinode-809629-m03 "sudo cat /home/docker/cp-test_multinode-809629_multinode-809629-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-809629 cp testdata/cp-test.txt multinode-809629-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-809629 ssh -n multinode-809629-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-809629 cp multinode-809629-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile2625146297/001/cp-test_multinode-809629-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-809629 ssh -n multinode-809629-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-809629 cp multinode-809629-m02:/home/docker/cp-test.txt multinode-809629:/home/docker/cp-test_multinode-809629-m02_multinode-809629.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-809629 ssh -n multinode-809629-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-809629 ssh -n multinode-809629 "sudo cat /home/docker/cp-test_multinode-809629-m02_multinode-809629.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-809629 cp multinode-809629-m02:/home/docker/cp-test.txt multinode-809629-m03:/home/docker/cp-test_multinode-809629-m02_multinode-809629-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-809629 ssh -n multinode-809629-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-809629 ssh -n multinode-809629-m03 "sudo cat /home/docker/cp-test_multinode-809629-m02_multinode-809629-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-809629 cp testdata/cp-test.txt multinode-809629-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-809629 ssh -n multinode-809629-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-809629 cp multinode-809629-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile2625146297/001/cp-test_multinode-809629-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-809629 ssh -n multinode-809629-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-809629 cp multinode-809629-m03:/home/docker/cp-test.txt multinode-809629:/home/docker/cp-test_multinode-809629-m03_multinode-809629.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-809629 ssh -n multinode-809629-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-809629 ssh -n multinode-809629 "sudo cat /home/docker/cp-test_multinode-809629-m03_multinode-809629.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-809629 cp multinode-809629-m03:/home/docker/cp-test.txt multinode-809629-m02:/home/docker/cp-test_multinode-809629-m03_multinode-809629-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-809629 ssh -n multinode-809629-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-809629 ssh -n multinode-809629-m02 "sudo cat /home/docker/cp-test_multinode-809629-m03_multinode-809629-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (9.51s)

                                                
                                    
TestMultiNode/serial/StopNode (2.21s)

=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-linux-arm64 -p multinode-809629 node stop m03
multinode_test.go:248: (dbg) Done: out/minikube-linux-arm64 -p multinode-809629 node stop m03: (1.210847665s)
multinode_test.go:254: (dbg) Run:  out/minikube-linux-arm64 -p multinode-809629 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-809629 status: exit status 7 (496.189017ms)

                                                
                                                
-- stdout --
	multinode-809629
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-809629-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-809629-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-linux-arm64 -p multinode-809629 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-809629 status --alsologtostderr: exit status 7 (502.76734ms)

                                                
                                                
-- stdout --
	multinode-809629
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-809629-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-809629-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0930 10:59:11.912622  181807 out.go:345] Setting OutFile to fd 1 ...
	I0930 10:59:11.912772  181807 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0930 10:59:11.912781  181807 out.go:358] Setting ErrFile to fd 2...
	I0930 10:59:11.912787  181807 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0930 10:59:11.913027  181807 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19734-2285/.minikube/bin
	I0930 10:59:11.913210  181807 out.go:352] Setting JSON to false
	I0930 10:59:11.913247  181807 mustload.go:65] Loading cluster: multinode-809629
	I0930 10:59:11.913325  181807 notify.go:220] Checking for updates...
	I0930 10:59:11.914438  181807 config.go:182] Loaded profile config "multinode-809629": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0930 10:59:11.914467  181807 status.go:174] checking status of multinode-809629 ...
	I0930 10:59:11.915133  181807 cli_runner.go:164] Run: docker container inspect multinode-809629 --format={{.State.Status}}
	I0930 10:59:11.937670  181807 status.go:364] multinode-809629 host status = "Running" (err=<nil>)
	I0930 10:59:11.937695  181807 host.go:66] Checking if "multinode-809629" exists ...
	I0930 10:59:11.938002  181807 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-809629
	I0930 10:59:11.962586  181807 host.go:66] Checking if "multinode-809629" exists ...
	I0930 10:59:11.962881  181807 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0930 10:59:11.962933  181807 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-809629
	I0930 10:59:11.980197  181807 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32909 SSHKeyPath:/home/jenkins/minikube-integration/19734-2285/.minikube/machines/multinode-809629/id_rsa Username:docker}
	I0930 10:59:12.068684  181807 ssh_runner.go:195] Run: systemctl --version
	I0930 10:59:12.073050  181807 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0930 10:59:12.084847  181807 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0930 10:59:12.149112  181807 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:41 OomKillDisable:true NGoroutines:61 SystemTime:2024-09-30 10:59:12.139716673 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1070-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:27.3.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: bridge-nf-call-iptables is disabled WARNING: bridge-nf-call-ip6tables is disabled] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.17.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.7]] Warnings:<nil>}}
	I0930 10:59:12.149738  181807 kubeconfig.go:125] found "multinode-809629" server: "https://192.168.67.2:8443"
	I0930 10:59:12.149771  181807 api_server.go:166] Checking apiserver status ...
	I0930 10:59:12.149824  181807 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0930 10:59:12.160960  181807 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/2248/cgroup
	I0930 10:59:12.169973  181807 api_server.go:182] apiserver freezer: "2:freezer:/docker/959619ed0f5318828bf3be00a3a4aea4551c419f5798bb83989cfabda129da15/kubepods/burstable/pod0aee294d1a2f86d7051dd644f819f465/7c6e8a8ec70a3d908af7c10c57aef602bc1247195335137f534b26b2767e4969"
	I0930 10:59:12.170044  181807 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/959619ed0f5318828bf3be00a3a4aea4551c419f5798bb83989cfabda129da15/kubepods/burstable/pod0aee294d1a2f86d7051dd644f819f465/7c6e8a8ec70a3d908af7c10c57aef602bc1247195335137f534b26b2767e4969/freezer.state
	I0930 10:59:12.178607  181807 api_server.go:204] freezer state: "THAWED"
	I0930 10:59:12.178639  181807 api_server.go:253] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
	I0930 10:59:12.187534  181807 api_server.go:279] https://192.168.67.2:8443/healthz returned 200:
	ok
	I0930 10:59:12.187602  181807 status.go:456] multinode-809629 apiserver status = Running (err=<nil>)
	I0930 10:59:12.187613  181807 status.go:176] multinode-809629 status: &{Name:multinode-809629 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0930 10:59:12.187631  181807 status.go:174] checking status of multinode-809629-m02 ...
	I0930 10:59:12.187947  181807 cli_runner.go:164] Run: docker container inspect multinode-809629-m02 --format={{.State.Status}}
	I0930 10:59:12.203633  181807 status.go:364] multinode-809629-m02 host status = "Running" (err=<nil>)
	I0930 10:59:12.203660  181807 host.go:66] Checking if "multinode-809629-m02" exists ...
	I0930 10:59:12.203963  181807 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-809629-m02
	I0930 10:59:12.220526  181807 host.go:66] Checking if "multinode-809629-m02" exists ...
	I0930 10:59:12.220843  181807 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0930 10:59:12.220897  181807 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-809629-m02
	I0930 10:59:12.236641  181807 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32914 SSHKeyPath:/home/jenkins/minikube-integration/19734-2285/.minikube/machines/multinode-809629-m02/id_rsa Username:docker}
	I0930 10:59:12.328376  181807 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0930 10:59:12.340145  181807 status.go:176] multinode-809629-m02 status: &{Name:multinode-809629-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I0930 10:59:12.340181  181807 status.go:174] checking status of multinode-809629-m03 ...
	I0930 10:59:12.340489  181807 cli_runner.go:164] Run: docker container inspect multinode-809629-m03 --format={{.State.Status}}
	I0930 10:59:12.362974  181807 status.go:364] multinode-809629-m03 host status = "Stopped" (err=<nil>)
	I0930 10:59:12.362995  181807 status.go:377] host is not running, skipping remaining checks
	I0930 10:59:12.363002  181807 status.go:176] multinode-809629-m03 status: &{Name:multinode-809629-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.21s)
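The apiserver check in the stderr block above (api_server.go:182) greps the `freezer` line out of `/proc/<pid>/cgroup` and then reads `freezer.state` under `/sys/fs/cgroup/freezer` ("THAWED" means the process is not frozen). A minimal Go sketch of that path derivation, assuming the cgroup v1 line format shown in the log; the helper name and the shortened IDs in main are ours, not minikube's:

```go
package main

import (
	"fmt"
	"strings"
)

// freezerPath turns one "<n>:freezer:<path>" line from /proc/<pid>/cgroup
// into the freezer.state file that status.go reads to decide whether the
// apiserver container is frozen or running.
func freezerPath(cgroupLine string) (string, error) {
	parts := strings.SplitN(cgroupLine, ":", 3)
	if len(parts) != 3 || parts[1] != "freezer" {
		return "", fmt.Errorf("not a freezer cgroup line: %q", cgroupLine)
	}
	return "/sys/fs/cgroup/freezer" + parts[2] + "/freezer.state", nil
}

func main() {
	// Shortened version of the line in the log above.
	line := "2:freezer:/docker/959619ed0f53/kubepods/burstable/pod0aee/7c6e8a8e"
	p, err := freezerPath(line)
	if err != nil {
		panic(err)
	}
	fmt.Println(p)
}
```

On a real node the next step is `sudo cat` of that path, exactly as the `ssh_runner.go` line in the log shows.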

                                                
                                    
TestMultiNode/serial/StartAfterStop (10.87s)


=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-linux-arm64 -p multinode-809629 node start m03 -v=7 --alsologtostderr
E0930 10:59:18.898022    7606 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19734-2285/.minikube/profiles/addons-703944/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:282: (dbg) Done: out/minikube-linux-arm64 -p multinode-809629 node start m03 -v=7 --alsologtostderr: (10.113010474s)
multinode_test.go:290: (dbg) Run:  out/minikube-linux-arm64 -p multinode-809629 status -v=7 --alsologtostderr
multinode_test.go:306: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (10.87s)

                                                
                                    
TestMultiNode/serial/RestartKeepsNodes (99.77s)

=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-809629
multinode_test.go:321: (dbg) Run:  out/minikube-linux-arm64 stop -p multinode-809629
E0930 10:59:23.574012    7606 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19734-2285/.minikube/profiles/functional-656644/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:321: (dbg) Done: out/minikube-linux-arm64 stop -p multinode-809629: (22.54593681s)
multinode_test.go:326: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-809629 --wait=true -v=8 --alsologtostderr
multinode_test.go:326: (dbg) Done: out/minikube-linux-arm64 start -p multinode-809629 --wait=true -v=8 --alsologtostderr: (1m17.10908533s)
multinode_test.go:331: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-809629
--- PASS: TestMultiNode/serial/RestartKeepsNodes (99.77s)

                                                
                                    
TestMultiNode/serial/DeleteNode (5.54s)

=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-linux-arm64 -p multinode-809629 node delete m03
multinode_test.go:416: (dbg) Done: out/minikube-linux-arm64 -p multinode-809629 node delete m03: (4.909556574s)
multinode_test.go:422: (dbg) Run:  out/minikube-linux-arm64 -p multinode-809629 status --alsologtostderr
multinode_test.go:436: (dbg) Run:  kubectl get nodes
multinode_test.go:444: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (5.54s)
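The `kubectl get nodes -o "go-template=…"` check above is evaluated with Go's text/template engine against the decoded NodeList. This self-contained sketch runs the same template over a hand-made stand-in for that data (the sample node and its conditions are ours, not from the cluster):

```go
package main

import (
	"bytes"
	"fmt"
	"text/template"
)

// nodeReadyTmpl is the template string passed to kubectl in the test:
// for every node, print the status of its "Ready" condition.
const nodeReadyTmpl = `{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}`

// readyStatuses renders the template over a decoded NodeList-shaped map.
func readyStatuses(nodeList map[string]any) (string, error) {
	t, err := template.New("ready").Parse(nodeReadyTmpl)
	if err != nil {
		return "", err
	}
	var buf bytes.Buffer
	if err := t.Execute(&buf, nodeList); err != nil {
		return "", err
	}
	return buf.String(), nil
}

func main() {
	// Stand-in for `kubectl get nodes -o json`: one Ready node with an
	// extra non-Ready condition that the template skips.
	nodes := map[string]any{
		"items": []any{
			map[string]any{"status": map[string]any{"conditions": []any{
				map[string]any{"type": "MemoryPressure", "status": "False"},
				map[string]any{"type": "Ready", "status": "True"},
			}}},
		},
	}
	out, err := readyStatuses(nodes)
	if err != nil {
		panic(err)
	}
	fmt.Printf("%q\n", out) // one " True\n" per Ready node
}
```

Using maps rather than structs matters here: template field names like `.items` and `.type` are lowercase, which only resolves against map keys, not Go struct fields.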

                                                
                                    
TestMultiNode/serial/StopMultiNode (21.51s)

=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-linux-arm64 -p multinode-809629 stop
multinode_test.go:345: (dbg) Done: out/minikube-linux-arm64 -p multinode-809629 stop: (21.331910968s)
multinode_test.go:351: (dbg) Run:  out/minikube-linux-arm64 -p multinode-809629 status
multinode_test.go:351: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-809629 status: exit status 7 (95.486993ms)

                                                
                                                
-- stdout --
	multinode-809629
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-809629-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:358: (dbg) Run:  out/minikube-linux-arm64 -p multinode-809629 status --alsologtostderr
multinode_test.go:358: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-809629 status --alsologtostderr: exit status 7 (86.62471ms)

                                                
                                                
-- stdout --
	multinode-809629
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-809629-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0930 11:01:30.024664  195343 out.go:345] Setting OutFile to fd 1 ...
	I0930 11:01:30.024884  195343 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0930 11:01:30.024911  195343 out.go:358] Setting ErrFile to fd 2...
	I0930 11:01:30.024929  195343 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0930 11:01:30.025236  195343 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19734-2285/.minikube/bin
	I0930 11:01:30.025531  195343 out.go:352] Setting JSON to false
	I0930 11:01:30.025602  195343 mustload.go:65] Loading cluster: multinode-809629
	I0930 11:01:30.025635  195343 notify.go:220] Checking for updates...
	I0930 11:01:30.026149  195343 config.go:182] Loaded profile config "multinode-809629": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
	I0930 11:01:30.026195  195343 status.go:174] checking status of multinode-809629 ...
	I0930 11:01:30.027261  195343 cli_runner.go:164] Run: docker container inspect multinode-809629 --format={{.State.Status}}
	I0930 11:01:30.047433  195343 status.go:364] multinode-809629 host status = "Stopped" (err=<nil>)
	I0930 11:01:30.047456  195343 status.go:377] host is not running, skipping remaining checks
	I0930 11:01:30.047463  195343 status.go:176] multinode-809629 status: &{Name:multinode-809629 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0930 11:01:30.047502  195343 status.go:174] checking status of multinode-809629-m02 ...
	I0930 11:01:30.047900  195343 cli_runner.go:164] Run: docker container inspect multinode-809629-m02 --format={{.State.Status}}
	I0930 11:01:30.065386  195343 status.go:364] multinode-809629-m02 host status = "Stopped" (err=<nil>)
	I0930 11:01:30.065411  195343 status.go:377] host is not running, skipping remaining checks
	I0930 11:01:30.065420  195343 status.go:176] multinode-809629-m02 status: &{Name:multinode-809629-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (21.51s)

                                                
                                    
TestMultiNode/serial/RestartMultiNode (49.93s)

=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-809629 --wait=true -v=8 --alsologtostderr --driver=docker  --container-runtime=docker
multinode_test.go:376: (dbg) Done: out/minikube-linux-arm64 start -p multinode-809629 --wait=true -v=8 --alsologtostderr --driver=docker  --container-runtime=docker: (49.263874102s)
multinode_test.go:382: (dbg) Run:  out/minikube-linux-arm64 -p multinode-809629 status --alsologtostderr
multinode_test.go:396: (dbg) Run:  kubectl get nodes
multinode_test.go:404: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (49.93s)

                                                
                                    
TestMultiNode/serial/ValidateNameConflict (34.77s)

=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-809629
multinode_test.go:464: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-809629-m02 --driver=docker  --container-runtime=docker
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p multinode-809629-m02 --driver=docker  --container-runtime=docker: exit status 14 (86.842244ms)

                                                
                                                
-- stdout --
	* [multinode-809629-m02] minikube v1.34.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=19734
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19734-2285/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19734-2285/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! Profile name 'multinode-809629-m02' is duplicated with machine name 'multinode-809629-m02' in profile 'multinode-809629'
	X Exiting due to MK_USAGE: Profile name should be unique

                                                
                                                
** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-809629-m03 --driver=docker  --container-runtime=docker
multinode_test.go:472: (dbg) Done: out/minikube-linux-arm64 start -p multinode-809629-m03 --driver=docker  --container-runtime=docker: (32.150597294s)
multinode_test.go:479: (dbg) Run:  out/minikube-linux-arm64 node add -p multinode-809629
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-linux-arm64 node add -p multinode-809629: exit status 80 (399.099318ms)

                                                
                                                
-- stdout --
	* Adding node m03 to cluster multinode-809629 as [worker]
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-809629-m03 already exists in multinode-809629-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:484: (dbg) Run:  out/minikube-linux-arm64 delete -p multinode-809629-m03
multinode_test.go:484: (dbg) Done: out/minikube-linux-arm64 delete -p multinode-809629-m03: (2.076802748s)
--- PASS: TestMultiNode/serial/ValidateNameConflict (34.77s)
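The run above exercises two guards: `start -p multinode-809629-m02` fails with MK_USAGE because that name is already a machine inside the `multinode-809629` profile, and `node add` fails with GUEST_NODE_ADD because the generated node name collides with the standalone `multinode-809629-m03` profile. A toy sketch of the first check, with made-up profile data; this is illustrative, not minikube's actual validation code:

```go
package main

import "fmt"

// conflicts rejects a new profile name that matches any machine name
// already owned by an existing profile, mirroring the MK_USAGE error
// text from the log above.
func conflicts(newProfile string, profiles map[string][]string) error {
	for profile, machines := range profiles {
		for _, m := range machines {
			if m == newProfile {
				return fmt.Errorf("profile name %q is duplicated with machine name %q in profile %q",
					newProfile, m, profile)
			}
		}
	}
	return nil
}

func main() {
	existing := map[string][]string{
		"multinode-809629": {"multinode-809629", "multinode-809629-m02"},
	}
	fmt.Println(conflicts("multinode-809629-m02", existing)) // rejected
	fmt.Println(conflicts("fresh-profile", existing))        // accepted: <nil>
}
```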

                                                
                                    
TestPreload (136.38s)

=== RUN   TestPreload
preload_test.go:44: (dbg) Run:  out/minikube-linux-arm64 start -p test-preload-681573 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.24.4
E0930 11:03:00.511085    7606 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19734-2285/.minikube/profiles/functional-656644/client.crt: no such file or directory" logger="UnhandledError"
E0930 11:04:18.897687    7606 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19734-2285/.minikube/profiles/addons-703944/client.crt: no such file or directory" logger="UnhandledError"
preload_test.go:44: (dbg) Done: out/minikube-linux-arm64 start -p test-preload-681573 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.24.4: (1m41.126115624s)
preload_test.go:52: (dbg) Run:  out/minikube-linux-arm64 -p test-preload-681573 image pull gcr.io/k8s-minikube/busybox
preload_test.go:52: (dbg) Done: out/minikube-linux-arm64 -p test-preload-681573 image pull gcr.io/k8s-minikube/busybox: (2.2262465s)
preload_test.go:58: (dbg) Run:  out/minikube-linux-arm64 stop -p test-preload-681573
preload_test.go:58: (dbg) Done: out/minikube-linux-arm64 stop -p test-preload-681573: (10.74493753s)
preload_test.go:66: (dbg) Run:  out/minikube-linux-arm64 start -p test-preload-681573 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=docker
preload_test.go:66: (dbg) Done: out/minikube-linux-arm64 start -p test-preload-681573 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=docker: (19.838625459s)
preload_test.go:71: (dbg) Run:  out/minikube-linux-arm64 -p test-preload-681573 image list
helpers_test.go:175: Cleaning up "test-preload-681573" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p test-preload-681573
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p test-preload-681573: (2.160675187s)
--- PASS: TestPreload (136.38s)

                                                
                                    
TestScheduledStopUnix (107.05s)

=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-arm64 start -p scheduled-stop-501994 --memory=2048 --driver=docker  --container-runtime=docker
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-arm64 start -p scheduled-stop-501994 --memory=2048 --driver=docker  --container-runtime=docker: (33.957782861s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-501994 --schedule 5m
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-linux-arm64 status --format={{.TimeToStop}} -p scheduled-stop-501994 -n scheduled-stop-501994
scheduled_stop_test.go:169: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-501994 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
I0930 11:05:49.349331    7606 retry.go:31] will retry after 95.689µs: open /home/jenkins/minikube-integration/19734-2285/.minikube/profiles/scheduled-stop-501994/pid: no such file or directory
I0930 11:05:49.350831    7606 retry.go:31] will retry after 208.849µs: open /home/jenkins/minikube-integration/19734-2285/.minikube/profiles/scheduled-stop-501994/pid: no such file or directory
I0930 11:05:49.354913    7606 retry.go:31] will retry after 311.914µs: open /home/jenkins/minikube-integration/19734-2285/.minikube/profiles/scheduled-stop-501994/pid: no such file or directory
I0930 11:05:49.355990    7606 retry.go:31] will retry after 208.633µs: open /home/jenkins/minikube-integration/19734-2285/.minikube/profiles/scheduled-stop-501994/pid: no such file or directory
I0930 11:05:49.357104    7606 retry.go:31] will retry after 571.17µs: open /home/jenkins/minikube-integration/19734-2285/.minikube/profiles/scheduled-stop-501994/pid: no such file or directory
I0930 11:05:49.358187    7606 retry.go:31] will retry after 775.623µs: open /home/jenkins/minikube-integration/19734-2285/.minikube/profiles/scheduled-stop-501994/pid: no such file or directory
I0930 11:05:49.359297    7606 retry.go:31] will retry after 946.404µs: open /home/jenkins/minikube-integration/19734-2285/.minikube/profiles/scheduled-stop-501994/pid: no such file or directory
I0930 11:05:49.360363    7606 retry.go:31] will retry after 1.591296ms: open /home/jenkins/minikube-integration/19734-2285/.minikube/profiles/scheduled-stop-501994/pid: no such file or directory
I0930 11:05:49.362537    7606 retry.go:31] will retry after 1.737524ms: open /home/jenkins/minikube-integration/19734-2285/.minikube/profiles/scheduled-stop-501994/pid: no such file or directory
I0930 11:05:49.364715    7606 retry.go:31] will retry after 2.679661ms: open /home/jenkins/minikube-integration/19734-2285/.minikube/profiles/scheduled-stop-501994/pid: no such file or directory
I0930 11:05:49.367895    7606 retry.go:31] will retry after 6.924359ms: open /home/jenkins/minikube-integration/19734-2285/.minikube/profiles/scheduled-stop-501994/pid: no such file or directory
I0930 11:05:49.375089    7606 retry.go:31] will retry after 12.030803ms: open /home/jenkins/minikube-integration/19734-2285/.minikube/profiles/scheduled-stop-501994/pid: no such file or directory
I0930 11:05:49.387242    7606 retry.go:31] will retry after 11.850149ms: open /home/jenkins/minikube-integration/19734-2285/.minikube/profiles/scheduled-stop-501994/pid: no such file or directory
I0930 11:05:49.399444    7606 retry.go:31] will retry after 15.084177ms: open /home/jenkins/minikube-integration/19734-2285/.minikube/profiles/scheduled-stop-501994/pid: no such file or directory
I0930 11:05:49.414627    7606 retry.go:31] will retry after 14.989801ms: open /home/jenkins/minikube-integration/19734-2285/.minikube/profiles/scheduled-stop-501994/pid: no such file or directory
I0930 11:05:49.429862    7606 retry.go:31] will retry after 52.998107ms: open /home/jenkins/minikube-integration/19734-2285/.minikube/profiles/scheduled-stop-501994/pid: no such file or directory
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-501994 --cancel-scheduled
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-501994 -n scheduled-stop-501994
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-arm64 status -p scheduled-stop-501994
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-501994 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-arm64 status -p scheduled-stop-501994
scheduled_stop_test.go:205: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p scheduled-stop-501994: exit status 7 (72.363624ms)

                                                
                                                
-- stdout --
	scheduled-stop-501994
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-501994 -n scheduled-stop-501994
scheduled_stop_test.go:176: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-501994 -n scheduled-stop-501994: exit status 7 (71.235647ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-501994" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p scheduled-stop-501994
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p scheduled-stop-501994: (1.627923072s)
--- PASS: TestScheduledStopUnix (107.05s)

                                                
                                    
TestSkaffold (112.72s)

=== RUN   TestSkaffold
skaffold_test.go:59: (dbg) Run:  /tmp/skaffold.exe885124250 version
skaffold_test.go:63: skaffold version: v2.13.2
skaffold_test.go:66: (dbg) Run:  out/minikube-linux-arm64 start -p skaffold-942200 --memory=2600 --driver=docker  --container-runtime=docker
skaffold_test.go:66: (dbg) Done: out/minikube-linux-arm64 start -p skaffold-942200 --memory=2600 --driver=docker  --container-runtime=docker: (29.153760054s)
skaffold_test.go:86: copying out/minikube-linux-arm64 to /home/jenkins/workspace/Docker_Linux_docker_arm64/out/minikube
skaffold_test.go:105: (dbg) Run:  /tmp/skaffold.exe885124250 run --minikube-profile skaffold-942200 --kube-context skaffold-942200 --status-check=true --port-forward=false --interactive=false
E0930 11:08:00.510609    7606 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19734-2285/.minikube/profiles/functional-656644/client.crt: no such file or directory" logger="UnhandledError"
skaffold_test.go:105: (dbg) Done: /tmp/skaffold.exe885124250 run --minikube-profile skaffold-942200 --kube-context skaffold-942200 --status-check=true --port-forward=false --interactive=false: (1m7.717951389s)
skaffold_test.go:111: (dbg) TestSkaffold: waiting 1m0s for pods matching "app=leeroy-app" in namespace "default" ...
helpers_test.go:344: "leeroy-app-64b4448f95-qds92" [34e687db-37e3-446f-920f-716861e38a68] Running
skaffold_test.go:111: (dbg) TestSkaffold: app=leeroy-app healthy within 6.003667215s
skaffold_test.go:114: (dbg) TestSkaffold: waiting 1m0s for pods matching "app=leeroy-web" in namespace "default" ...
helpers_test.go:344: "leeroy-web-5c49d86987-wnv72" [f79cdf4d-8326-4dbe-8271-6c8aaea1ce5f] Running
skaffold_test.go:114: (dbg) TestSkaffold: app=leeroy-web healthy within 6.003435937s
helpers_test.go:175: Cleaning up "skaffold-942200" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p skaffold-942200
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p skaffold-942200: (2.759023196s)
--- PASS: TestSkaffold (112.72s)

TestInsufficientStorage (10.36s)

=== RUN   TestInsufficientStorage
status_test.go:50: (dbg) Run:  out/minikube-linux-arm64 start -p insufficient-storage-683292 --memory=2048 --output=json --wait=true --driver=docker  --container-runtime=docker
status_test.go:50: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p insufficient-storage-683292 --memory=2048 --output=json --wait=true --driver=docker  --container-runtime=docker: exit status 26 (8.172146949s)

-- stdout --
	{"specversion":"1.0","id":"97c27c86-643b-47e9-a207-5242063e1893","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[insufficient-storage-683292] minikube v1.34.0 on Ubuntu 20.04 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"8188e11a-991c-4935-81b2-c524677888a2","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=19734"}}
	{"specversion":"1.0","id":"937ccd8f-579d-4866-aabc-51ffd1edb558","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"0323042d-d2e0-4ac0-925d-78b5f7fab00a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/19734-2285/kubeconfig"}}
	{"specversion":"1.0","id":"bed328eb-e2ac-4900-bc4b-ae797742d651","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/19734-2285/.minikube"}}
	{"specversion":"1.0","id":"0d72501f-e820-4857-8e6f-d41439d4b1e4","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-arm64"}}
	{"specversion":"1.0","id":"bd03989b-0c3d-49c0-bbcb-3c8008c04bee","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"70acd64c-26db-4a32-b1c8-784d9c34f719","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_STORAGE_CAPACITY=100"}}
	{"specversion":"1.0","id":"855a11a0-e334-4cd5-a608-e89ec0e80d4c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_AVAILABLE_STORAGE=19"}}
	{"specversion":"1.0","id":"974a648d-aee2-404d-9c74-377dea1269a9","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the docker driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"97741bc0-0307-4be9-b108-9b391be2570f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Using Docker driver with root privileges"}}
	{"specversion":"1.0","id":"09fdbbe0-1330-4f73-8418-fc0e17faf31e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting \"insufficient-storage-683292\" primary control-plane node in \"insufficient-storage-683292\" cluster","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"1608cd79-e978-4b4f-9fe6-e8124b09f5d6","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"5","message":"Pulling base image v0.0.45-1727108449-19696 ...","name":"Pulling Base Image","totalsteps":"19"}}
	{"specversion":"1.0","id":"d9882757-a2a3-4f77-ba61-b67957e77b78","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"8","message":"Creating docker container (CPUs=2, Memory=2048MB) ...","name":"Creating Container","totalsteps":"19"}}
	{"specversion":"1.0","id":"e8c09d47-2692-4a99-948b-f7a265e2ddd7","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"Try one or more of the following to free up space on the device:\n\t\n\t\t\t1. Run \"docker system prune\" to remove unused Docker data (optionally with \"-a\")\n\t\t\t2. Increase the storage allocated to Docker for Desktop by clicking on:\n\t\t\t\tDocker icon \u003e Preferences \u003e Resources \u003e Disk Image Size\n\t\t\t3. Run \"minikube ssh -- docker system prune\" if using the Docker container runtime","exitcode":"26","issues":"https://github.com/kubernetes/minikube/issues/9024","message":"Docker is out of disk space! (/var is at 100% of capacity). You can pass '--force' to skip this check.","name":"RSRC_DOCKER_STORAGE","url":""}}

-- /stdout --
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p insufficient-storage-683292 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p insufficient-storage-683292 --output=json --layout=cluster: exit status 7 (262.741844ms)

-- stdout --
	{"Name":"insufficient-storage-683292","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","Step":"Creating Container","StepDetail":"Creating docker container (CPUs=2, Memory=2048MB) ...","BinaryVersion":"v1.34.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-683292","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

-- /stdout --
** stderr ** 
	E0930 11:09:03.133193  229544 status.go:451] kubeconfig endpoint: get endpoint: "insufficient-storage-683292" does not appear in /home/jenkins/minikube-integration/19734-2285/kubeconfig

** /stderr **
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p insufficient-storage-683292 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p insufficient-storage-683292 --output=json --layout=cluster: exit status 7 (264.468381ms)

-- stdout --
	{"Name":"insufficient-storage-683292","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","BinaryVersion":"v1.34.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-683292","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

-- /stdout --
** stderr ** 
	E0930 11:09:03.398731  229606 status.go:451] kubeconfig endpoint: get endpoint: "insufficient-storage-683292" does not appear in /home/jenkins/minikube-integration/19734-2285/kubeconfig
	E0930 11:09:03.408614  229606 status.go:258] unable to read event log: stat: stat /home/jenkins/minikube-integration/19734-2285/.minikube/profiles/insufficient-storage-683292/events.json: no such file or directory

** /stderr **
helpers_test.go:175: Cleaning up "insufficient-storage-683292" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p insufficient-storage-683292
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p insufficient-storage-683292: (1.661399591s)
--- PASS: TestInsufficientStorage (10.36s)
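The `--output=json` lines above are CloudEvents-style envelopes with minikube-specific payloads under `data`; step events use the type `io.k8s.sigs.minikube.step` and the terminal failure uses `io.k8s.sigs.minikube.error`. A minimal sketch of scanning such a stream for the error event — the field names and the abridged sample line are taken from the output above, not from any documented client library:

```python
import json

def find_error_event(lines):
    """Scan CloudEvents-style JSON lines from `minikube start --output=json`
    and return the `data` payload of the first error event, or None."""
    for line in lines:
        line = line.strip()
        if not line:
            continue
        event = json.loads(line)
        # Error events carry this type; progress steps use ...minikube.step.
        if event.get("type") == "io.k8s.sigs.minikube.error":
            return event.get("data", {})
    return None

# Abridged from the RSRC_DOCKER_STORAGE event in the test output above.
sample = ['{"specversion":"1.0","type":"io.k8s.sigs.minikube.error",'
          '"data":{"exitcode":"26","name":"RSRC_DOCKER_STORAGE",'
          '"message":"Docker is out of disk space!"}}']
err = find_error_event(sample)
print(err["name"], err["exitcode"])  # -> RSRC_DOCKER_STORAGE 26
```

The error payload's `exitcode` field matches the process exit status (26) that `status_test.go:50` asserts on.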

TestRunningBinaryUpgrade (76.35s)

=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade
=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /tmp/minikube-v1.26.0.1213336292 start -p running-upgrade-879191 --memory=2200 --vm-driver=docker  --container-runtime=docker
E0930 11:15:01.867661    7606 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19734-2285/.minikube/profiles/skaffold-942200/client.crt: no such file or directory" logger="UnhandledError"
version_upgrade_test.go:120: (dbg) Done: /tmp/minikube-v1.26.0.1213336292 start -p running-upgrade-879191 --memory=2200 --vm-driver=docker  --container-runtime=docker: (38.885472189s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-linux-arm64 start -p running-upgrade-879191 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker
version_upgrade_test.go:130: (dbg) Done: out/minikube-linux-arm64 start -p running-upgrade-879191 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker: (34.598829695s)
helpers_test.go:175: Cleaning up "running-upgrade-879191" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p running-upgrade-879191
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p running-upgrade-879191: (2.261823602s)
--- PASS: TestRunningBinaryUpgrade (76.35s)

TestKubernetesUpgrade (383.31s)

=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade
=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-795131 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker
version_upgrade_test.go:222: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-795131 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker: (1m1.811605801s)
version_upgrade_test.go:227: (dbg) Run:  out/minikube-linux-arm64 stop -p kubernetes-upgrade-795131
version_upgrade_test.go:227: (dbg) Done: out/minikube-linux-arm64 stop -p kubernetes-upgrade-795131: (10.74375144s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-linux-arm64 -p kubernetes-upgrade-795131 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-linux-arm64 -p kubernetes-upgrade-795131 status --format={{.Host}}: exit status 7 (64.659137ms)

-- stdout --
	Stopped

-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
version_upgrade_test.go:243: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-795131 --memory=2200 --kubernetes-version=v1.31.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker
E0930 11:13:00.510456    7606 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19734-2285/.minikube/profiles/functional-656644/client.crt: no such file or directory" logger="UnhandledError"
version_upgrade_test.go:243: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-795131 --memory=2200 --kubernetes-version=v1.31.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker: (4m38.094930179s)
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-795131 version --output=json
version_upgrade_test.go:267: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:269: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-795131 --memory=2200 --kubernetes-version=v1.20.0 --driver=docker  --container-runtime=docker
version_upgrade_test.go:269: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p kubernetes-upgrade-795131 --memory=2200 --kubernetes-version=v1.20.0 --driver=docker  --container-runtime=docker: exit status 106 (148.471154ms)

-- stdout --
	* [kubernetes-upgrade-795131] minikube v1.34.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=19734
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19734-2285/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19734-2285/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.31.1 cluster to v1.20.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.20.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-795131
	    minikube start -p kubernetes-upgrade-795131 --kubernetes-version=v1.20.0
	    
	    2) Create a second cluster with Kubernetes 1.20.0, by running:
	    
	    minikube start -p kubernetes-upgrade-7951312 --kubernetes-version=v1.20.0
	    
	    3) Use the existing cluster at version Kubernetes 1.31.1, by running:
	    
	    minikube start -p kubernetes-upgrade-795131 --kubernetes-version=v1.31.1
	    

** /stderr **
version_upgrade_test.go:273: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:275: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-795131 --memory=2200 --kubernetes-version=v1.31.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker
version_upgrade_test.go:275: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-795131 --memory=2200 --kubernetes-version=v1.31.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker: (29.874552844s)
helpers_test.go:175: Cleaning up "kubernetes-upgrade-795131" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p kubernetes-upgrade-795131
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p kubernetes-upgrade-795131: (2.45981428s)
--- PASS: TestKubernetesUpgrade (383.31s)
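The downgrade refusal exercised above (exit status 106, `K8S_DOWNGRADE_UNSUPPORTED`) boils down to a version comparison: the requested Kubernetes version must not be older than the one the existing cluster runs. minikube's actual check lives in its Go codebase and uses a semver library; the Python sketch below only illustrates the comparison, with the message text copied from the stderr output above:

```python
def parse_version(v):
    """Parse a version string like 'v1.31.1' into a comparable tuple (1, 31, 1)."""
    return tuple(int(part) for part in v.lstrip("v").split("."))

def check_downgrade(current, requested):
    """Return an error string in the spirit of K8S_DOWNGRADE_UNSUPPORTED,
    or None when the requested version is the same or newer."""
    if parse_version(requested) < parse_version(current):
        return (f"Unable to safely downgrade existing Kubernetes "
                f"{current} cluster to {requested}")
    return None

print(check_downgrade("v1.31.1", "v1.20.0"))  # the case the test hits
print(check_downgrade("v1.20.0", "v1.31.1"))  # -> None (upgrade is allowed)
```

Tuple comparison handles the ordering correctly here ((1, 20, 0) < (1, 31, 1)), which plain string comparison of "v1.20.0" and "v1.31.1" would not guarantee for all versions.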

TestMissingContainerUpgrade (163.62s)

=== RUN   TestMissingContainerUpgrade
=== PAUSE TestMissingContainerUpgrade
=== CONT  TestMissingContainerUpgrade
version_upgrade_test.go:309: (dbg) Run:  /tmp/minikube-v1.26.0.578384405 start -p missing-upgrade-838171 --memory=2200 --driver=docker  --container-runtime=docker
version_upgrade_test.go:309: (dbg) Done: /tmp/minikube-v1.26.0.578384405 start -p missing-upgrade-838171 --memory=2200 --driver=docker  --container-runtime=docker: (1m27.846852718s)
version_upgrade_test.go:318: (dbg) Run:  docker stop missing-upgrade-838171
version_upgrade_test.go:318: (dbg) Done: docker stop missing-upgrade-838171: (10.440400094s)
version_upgrade_test.go:323: (dbg) Run:  docker rm missing-upgrade-838171
version_upgrade_test.go:329: (dbg) Run:  out/minikube-linux-arm64 start -p missing-upgrade-838171 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker
E0930 11:12:21.965491    7606 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19734-2285/.minikube/profiles/addons-703944/client.crt: no such file or directory" logger="UnhandledError"
version_upgrade_test.go:329: (dbg) Done: out/minikube-linux-arm64 start -p missing-upgrade-838171 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker: (1m2.231651846s)
helpers_test.go:175: Cleaning up "missing-upgrade-838171" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p missing-upgrade-838171
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p missing-upgrade-838171: (2.323384039s)
--- PASS: TestMissingContainerUpgrade (163.62s)

TestPause/serial/Start (88.46s)

=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-arm64 start -p pause-385704 --memory=2048 --install-addons=false --wait=all --driver=docker  --container-runtime=docker
E0930 11:09:18.897614    7606 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19734-2285/.minikube/profiles/addons-703944/client.crt: no such file or directory" logger="UnhandledError"
pause_test.go:80: (dbg) Done: out/minikube-linux-arm64 start -p pause-385704 --memory=2048 --install-addons=false --wait=all --driver=docker  --container-runtime=docker: (1m28.463286687s)
--- PASS: TestPause/serial/Start (88.46s)

TestPause/serial/SecondStartNoReconfiguration (30.32s)

=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-arm64 start -p pause-385704 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker
pause_test.go:92: (dbg) Done: out/minikube-linux-arm64 start -p pause-385704 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker: (30.291825342s)
--- PASS: TestPause/serial/SecondStartNoReconfiguration (30.32s)

TestPause/serial/Pause (0.75s)

=== RUN   TestPause/serial/Pause
pause_test.go:110: (dbg) Run:  out/minikube-linux-arm64 pause -p pause-385704 --alsologtostderr -v=5
--- PASS: TestPause/serial/Pause (0.75s)

TestPause/serial/VerifyStatus (0.36s)

=== RUN   TestPause/serial/VerifyStatus
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p pause-385704 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p pause-385704 --output=json --layout=cluster: exit status 2 (362.330054ms)

-- stdout --
	{"Name":"pause-385704","StatusCode":418,"StatusName":"Paused","Step":"Done","StepDetail":"* Paused 12 containers in: kube-system, kubernetes-dashboard, storage-gluster, istio-operator","BinaryVersion":"v1.34.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":200,"StatusName":"OK"}},"Nodes":[{"Name":"pause-385704","StatusCode":200,"StatusName":"OK","Components":{"apiserver":{"Name":"apiserver","StatusCode":418,"StatusName":"Paused"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

-- /stdout --
--- PASS: TestPause/serial/VerifyStatus (0.36s)
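The `--layout=cluster` status document above reuses HTTP-style codes for component states — 200 OK, 405 Stopped, 418 Paused, and 507 InsufficientStorage all appear in this report. A small sketch that flattens such a document into per-component status names; the structure and the abridged sample are copied from the `TestPause/serial/VerifyStatus` output above:

```python
import json

def summarize(status_json):
    """Flatten a `minikube status --output=json --layout=cluster` document
    into (component, StatusName) pairs: cluster first, then node components."""
    doc = json.loads(status_json)
    pairs = [("cluster", doc["StatusName"])]
    for node in doc.get("Nodes", []):
        for name, comp in node.get("Components", {}).items():
            pairs.append((name, comp["StatusName"]))
    return pairs

# Abridged from the paused-cluster status output above.
sample = '''{"Name":"pause-385704","StatusCode":418,"StatusName":"Paused",
 "Nodes":[{"Name":"pause-385704","StatusCode":200,"StatusName":"OK",
  "Components":{"apiserver":{"Name":"apiserver","StatusCode":418,"StatusName":"Paused"},
                "kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}'''
print(summarize(sample))  # -> [('cluster', 'Paused'), ('apiserver', 'Paused'), ('kubelet', 'Stopped')]
```

Note that the process exit status tracks the worst component state (exit 2 for the paused cluster here, exit 7 for the stopped and insufficient-storage cases earlier), which is why the tests run these commands expecting non-zero exits.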

TestPause/serial/Unpause (0.7s)

=== RUN   TestPause/serial/Unpause
pause_test.go:121: (dbg) Run:  out/minikube-linux-arm64 unpause -p pause-385704 --alsologtostderr -v=5
--- PASS: TestPause/serial/Unpause (0.70s)

TestPause/serial/PauseAgain (1.09s)

=== RUN   TestPause/serial/PauseAgain
pause_test.go:110: (dbg) Run:  out/minikube-linux-arm64 pause -p pause-385704 --alsologtostderr -v=5
pause_test.go:110: (dbg) Done: out/minikube-linux-arm64 pause -p pause-385704 --alsologtostderr -v=5: (1.085408679s)
--- PASS: TestPause/serial/PauseAgain (1.09s)

TestPause/serial/DeletePaused (2.96s)

=== RUN   TestPause/serial/DeletePaused
pause_test.go:132: (dbg) Run:  out/minikube-linux-arm64 delete -p pause-385704 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-arm64 delete -p pause-385704 --alsologtostderr -v=5: (2.956542483s)
--- PASS: TestPause/serial/DeletePaused (2.96s)

TestPause/serial/VerifyDeletedResources (0.11s)

=== RUN   TestPause/serial/VerifyDeletedResources
pause_test.go:142: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
pause_test.go:168: (dbg) Run:  docker ps -a
pause_test.go:173: (dbg) Run:  docker volume inspect pause-385704
pause_test.go:173: (dbg) Non-zero exit: docker volume inspect pause-385704: exit status 1 (13.63384ms)
                                                
-- stdout --
	[]

-- /stdout --
** stderr ** 
	Error response from daemon: get pause-385704: no such volume

** /stderr **
pause_test.go:178: (dbg) Run:  docker network ls
--- PASS: TestPause/serial/VerifyDeletedResources (0.11s)

TestStoppedBinaryUpgrade/Setup (0.63s)

=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (0.63s)

TestStoppedBinaryUpgrade/Upgrade (83.06s)

=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /tmp/minikube-v1.26.0.3334404225 start -p stopped-upgrade-157886 --memory=2200 --vm-driver=docker  --container-runtime=docker
E0930 11:13:39.930367    7606 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19734-2285/.minikube/profiles/skaffold-942200/client.crt: no such file or directory" logger="UnhandledError"
E0930 11:13:39.936809    7606 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19734-2285/.minikube/profiles/skaffold-942200/client.crt: no such file or directory" logger="UnhandledError"
E0930 11:13:39.948335    7606 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19734-2285/.minikube/profiles/skaffold-942200/client.crt: no such file or directory" logger="UnhandledError"
E0930 11:13:39.969757    7606 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19734-2285/.minikube/profiles/skaffold-942200/client.crt: no such file or directory" logger="UnhandledError"
E0930 11:13:40.011188    7606 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19734-2285/.minikube/profiles/skaffold-942200/client.crt: no such file or directory" logger="UnhandledError"
E0930 11:13:40.092621    7606 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19734-2285/.minikube/profiles/skaffold-942200/client.crt: no such file or directory" logger="UnhandledError"
E0930 11:13:40.254208    7606 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19734-2285/.minikube/profiles/skaffold-942200/client.crt: no such file or directory" logger="UnhandledError"
E0930 11:13:40.576123    7606 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19734-2285/.minikube/profiles/skaffold-942200/client.crt: no such file or directory" logger="UnhandledError"
E0930 11:13:41.218118    7606 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19734-2285/.minikube/profiles/skaffold-942200/client.crt: no such file or directory" logger="UnhandledError"
E0930 11:13:42.499592    7606 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19734-2285/.minikube/profiles/skaffold-942200/client.crt: no such file or directory" logger="UnhandledError"
E0930 11:13:45.060932    7606 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19734-2285/.minikube/profiles/skaffold-942200/client.crt: no such file or directory" logger="UnhandledError"
E0930 11:13:50.182738    7606 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19734-2285/.minikube/profiles/skaffold-942200/client.crt: no such file or directory" logger="UnhandledError"
version_upgrade_test.go:183: (dbg) Done: /tmp/minikube-v1.26.0.3334404225 start -p stopped-upgrade-157886 --memory=2200 --vm-driver=docker  --container-runtime=docker: (37.476291982s)
version_upgrade_test.go:192: (dbg) Run:  /tmp/minikube-v1.26.0.3334404225 -p stopped-upgrade-157886 stop
E0930 11:14:00.424032    7606 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19734-2285/.minikube/profiles/skaffold-942200/client.crt: no such file or directory" logger="UnhandledError"
version_upgrade_test.go:192: (dbg) Done: /tmp/minikube-v1.26.0.3334404225 -p stopped-upgrade-157886 stop: (10.874402032s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-linux-arm64 start -p stopped-upgrade-157886 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker
E0930 11:14:18.897706    7606 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19734-2285/.minikube/profiles/addons-703944/client.crt: no such file or directory" logger="UnhandledError"
E0930 11:14:20.906002    7606 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19734-2285/.minikube/profiles/skaffold-942200/client.crt: no such file or directory" logger="UnhandledError"
version_upgrade_test.go:198: (dbg) Done: out/minikube-linux-arm64 start -p stopped-upgrade-157886 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker: (34.712731801s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (83.06s)

TestStoppedBinaryUpgrade/MinikubeLogs (2s)

=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-linux-arm64 logs -p stopped-upgrade-157886
version_upgrade_test.go:206: (dbg) Done: out/minikube-linux-arm64 logs -p stopped-upgrade-157886: (2.000660668s)
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (2.00s)

TestNoKubernetes/serial/StartNoK8sWithVersion (0.08s)

=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-894640 --no-kubernetes --kubernetes-version=1.20 --driver=docker  --container-runtime=docker
no_kubernetes_test.go:83: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p NoKubernetes-894640 --no-kubernetes --kubernetes-version=1.20 --driver=docker  --container-runtime=docker: exit status 14 (75.366336ms)

-- stdout --
	* [NoKubernetes-894640] minikube v1.34.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=19734
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19734-2285/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19734-2285/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	
-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version
** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.08s)
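The exit-14 (MK_USAGE) failure above is the expected behavior under test: `--no-kubernetes` and `--kubernetes-version` are mutually exclusive. A minimal shell sketch of the same validation (a hypothetical re-implementation for illustration, not minikube's actual Go code; `check_flags` is an invented name):

```shell
# Sketch of the flag-conflict check the test exercises: when both
# --no-kubernetes and --kubernetes-version are passed, fail with exit
# code 14, matching minikube's MK_USAGE exit status.
check_flags() {
  no_kubernetes=false
  kubernetes_version=""
  for arg in "$@"; do
    case "$arg" in
      --no-kubernetes)          no_kubernetes=true ;;
      --kubernetes-version=*)   kubernetes_version="${arg#--kubernetes-version=}" ;;
    esac
  done
  if [ "$no_kubernetes" = true ] && [ -n "$kubernetes_version" ]; then
    echo "X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes" >&2
    return 14
  fi
  return 0
}

check_flags --no-kubernetes --kubernetes-version=1.20
echo "exit code: $?"   # prints: exit code: 14
```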

TestNoKubernetes/serial/StartWithK8s (41.52s)
=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-894640 --driver=docker  --container-runtime=docker
no_kubernetes_test.go:95: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-894640 --driver=docker  --container-runtime=docker: (40.9309518s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-arm64 -p NoKubernetes-894640 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (41.52s)

TestNoKubernetes/serial/StartWithStopK8s (16.46s)
=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-894640 --no-kubernetes --driver=docker  --container-runtime=docker
no_kubernetes_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-894640 --no-kubernetes --driver=docker  --container-runtime=docker: (14.2709138s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-arm64 -p NoKubernetes-894640 status -o json
no_kubernetes_test.go:200: (dbg) Non-zero exit: out/minikube-linux-arm64 -p NoKubernetes-894640 status -o json: exit status 2 (348.544764ms)
-- stdout --
	{"Name":"NoKubernetes-894640","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}
-- /stdout --
no_kubernetes_test.go:124: (dbg) Run:  out/minikube-linux-arm64 delete -p NoKubernetes-894640
no_kubernetes_test.go:124: (dbg) Done: out/minikube-linux-arm64 delete -p NoKubernetes-894640: (1.844451685s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (16.46s)
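The stdout captured above is the JSON that `minikube status -o json` emits; with Kubernetes components stopped, a non-zero exit from `status` is expected and the test keys off the field values. A quick way to pull one field without jq (a sketch; the JSON literal is copied verbatim from the log above):

```shell
# Extract the Kubelet field from the status JSON shown in the log.
status='{"Name":"NoKubernetes-894640","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}'
kubelet=$(printf '%s' "$status" | sed -n 's/.*"Kubelet":"\([^"]*\)".*/\1/p')
echo "Kubelet=$kubelet"   # prints: Kubelet=Stopped
```

For anything beyond a one-off check, `minikube status -o json | jq -r .Kubelet` is the more robust form, since sed-based JSON scraping breaks on reordered or escaped fields.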

TestNoKubernetes/serial/Start (10.17s)
=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:136: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-894640 --no-kubernetes --driver=docker  --container-runtime=docker
no_kubernetes_test.go:136: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-894640 --no-kubernetes --driver=docker  --container-runtime=docker: (10.171324742s)
--- PASS: TestNoKubernetes/serial/Start (10.17s)

TestNoKubernetes/serial/VerifyK8sNotRunning (0.32s)
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-arm64 ssh -p NoKubernetes-894640 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-arm64 ssh -p NoKubernetes-894640 "sudo systemctl is-active --quiet service kubelet": exit status 1 (315.681799ms)
** stderr ** 
	ssh: Process exited with status 3
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.32s)
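The "Process exited with status 3" in stderr is the point of this test: `systemctl is-active` exits 0 only for an active unit and conventionally 3 for an inactive one, so the non-zero exit proves kubelet is not running. A systemd-free simulation of that contract (a sketch; `is_active` is an illustrative stand-in, and the exact non-zero code can vary across systemd versions):

```shell
# Stand-in for `sudo systemctl is-active --quiet service kubelet`:
# exit 0 when the unit is active, 3 when inactive — the status the
# ssh session surfaced above.
is_active() {
  case "$1" in
    active)        return 0 ;;
    inactive|dead) return 3 ;;
    *)             return 4 ;;  # unknown unit or state
  esac
}

is_active inactive
echo "exit code: $?"   # prints: exit code: 3
```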

TestNoKubernetes/serial/ProfileList (1.17s)
=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:169: (dbg) Run:  out/minikube-linux-arm64 profile list
no_kubernetes_test.go:179: (dbg) Run:  out/minikube-linux-arm64 profile list --output=json
--- PASS: TestNoKubernetes/serial/ProfileList (1.17s)

TestNoKubernetes/serial/Stop (1.26s)
=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:158: (dbg) Run:  out/minikube-linux-arm64 stop -p NoKubernetes-894640
no_kubernetes_test.go:158: (dbg) Done: out/minikube-linux-arm64 stop -p NoKubernetes-894640: (1.257491689s)
--- PASS: TestNoKubernetes/serial/Stop (1.26s)

TestNoKubernetes/serial/StartNoArgs (7.54s)
=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:191: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-894640 --driver=docker  --container-runtime=docker
no_kubernetes_test.go:191: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-894640 --driver=docker  --container-runtime=docker: (7.53955254s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (7.54s)

TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.35s)
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-arm64 ssh -p NoKubernetes-894640 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-arm64 ssh -p NoKubernetes-894640 "sudo systemctl is-active --quiet service kubelet": exit status 1 (345.501398ms)
** stderr ** 
	ssh: Process exited with status 3
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.35s)

TestStartStop/group/old-k8s-version/serial/FirstStart (162.52s)
=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p old-k8s-version-736991 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.20.0
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p old-k8s-version-736991 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.20.0: (2m42.51629814s)
--- PASS: TestStartStop/group/old-k8s-version/serial/FirstStart (162.52s)

TestStartStop/group/no-preload/serial/FirstStart (56.83s)
=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p no-preload-938623 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.31.1
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p no-preload-938623 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.31.1: (56.825361116s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (56.83s)

TestStartStop/group/old-k8s-version/serial/DeployApp (10.7s)
=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-736991 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [c0848add-2fe3-48e8-a05c-0da8c7d18688] Pending
helpers_test.go:344: "busybox" [c0848add-2fe3-48e8-a05c-0da8c7d18688] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [c0848add-2fe3-48e8-a05c-0da8c7d18688] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: integration-test=busybox healthy within 10.004245445s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-736991 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/old-k8s-version/serial/DeployApp (10.70s)
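The final `kubectl exec busybox -- /bin/sh -c "ulimit -n"` step is a sanity probe that the pod can execute a shell and reports a sane open-file-descriptor limit. The same probe works in any POSIX shell, inside or outside a container:

```shell
# Print the soft limit on open file descriptors; the test only needs the
# command to run inside the pod and return a value.
ulimit -n
```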

TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (2.05s)
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p old-k8s-version-736991 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p old-k8s-version-736991 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.824700013s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context old-k8s-version-736991 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (2.05s)

TestStartStop/group/old-k8s-version/serial/Stop (11.09s)
=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p old-k8s-version-736991 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p old-k8s-version-736991 --alsologtostderr -v=3: (11.085917945s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (11.09s)

TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.26s)
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-736991 -n old-k8s-version-736991
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-736991 -n old-k8s-version-736991: exit status 7 (98.937106ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p old-k8s-version-736991 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.26s)

TestStartStop/group/old-k8s-version/serial/SecondStart (369.19s)
=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p old-k8s-version-736991 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.20.0
E0930 11:23:00.510869    7606 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19734-2285/.minikube/profiles/functional-656644/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-arm64 start -p old-k8s-version-736991 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.20.0: (6m8.715217457s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-736991 -n old-k8s-version-736991
--- PASS: TestStartStop/group/old-k8s-version/serial/SecondStart (369.19s)

TestStartStop/group/no-preload/serial/DeployApp (9.51s)
=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-938623 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [d177da5e-89ab-42c1-b214-a43c8de05a1f] Pending
helpers_test.go:344: "busybox" [d177da5e-89ab-42c1-b214-a43c8de05a1f] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [d177da5e-89ab-42c1-b214-a43c8de05a1f] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 9.008814771s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-938623 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (9.51s)

TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.03s)
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p no-preload-938623 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context no-preload-938623 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.03s)

TestStartStop/group/no-preload/serial/Stop (10.93s)
=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p no-preload-938623 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p no-preload-938623 --alsologtostderr -v=3: (10.934258826s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (10.93s)

TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.18s)
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-938623 -n no-preload-938623
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-938623 -n no-preload-938623: exit status 7 (66.115489ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p no-preload-938623 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.18s)

TestStartStop/group/no-preload/serial/SecondStart (266.37s)
=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p no-preload-938623 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.31.1
E0930 11:23:39.930757    7606 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19734-2285/.minikube/profiles/skaffold-942200/client.crt: no such file or directory" logger="UnhandledError"
E0930 11:24:18.897955    7606 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19734-2285/.minikube/profiles/addons-703944/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-arm64 start -p no-preload-938623 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.31.1: (4m25.998325824s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-938623 -n no-preload-938623
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (266.37s)

TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6s)
=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-tvxfx" [b30aadb6-ff23-49ae-a6cb-514f2f563672] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003174333s
--- PASS: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6.00s)

TestStartStop/group/no-preload/serial/AddonExistsAfterStop (6.1s)
=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-tvxfx" [b30aadb6-ff23-49ae-a6cb-514f2f563672] Running
E0930 11:28:00.510499    7606 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19734-2285/.minikube/profiles/functional-656644/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003190838s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context no-preload-938623 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (6.10s)

TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.23s)
=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 -p no-preload-938623 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.23s)

TestStartStop/group/no-preload/serial/Pause (2.82s)
=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p no-preload-938623 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-938623 -n no-preload-938623
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-938623 -n no-preload-938623: exit status 2 (314.420866ms)
-- stdout --
	Paused
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p no-preload-938623 -n no-preload-938623
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p no-preload-938623 -n no-preload-938623: exit status 2 (315.117928ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p no-preload-938623 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-938623 -n no-preload-938623
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p no-preload-938623 -n no-preload-938623
--- PASS: TestStartStop/group/no-preload/serial/Pause (2.82s)

TestStartStop/group/embed-certs/serial/FirstStart (51.31s)
=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p embed-certs-176501 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=docker --kubernetes-version=v1.31.1
E0930 11:28:39.930707    7606 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19734-2285/.minikube/profiles/skaffold-942200/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p embed-certs-176501 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=docker --kubernetes-version=v1.31.1: (51.301653007s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (51.31s)

TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6.01s)
=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-cd95d586-cwqgc" [c05882fb-012a-4b8a-964d-5b57bdf55009] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003456485s
--- PASS: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6.01s)

TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.12s)
=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-cd95d586-cwqgc" [c05882fb-012a-4b8a-964d-5b57bdf55009] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.004433351s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context old-k8s-version-736991 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.12s)

TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.23s)
=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 -p old-k8s-version-736991 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.23s)

TestStartStop/group/old-k8s-version/serial/Pause (2.85s)
=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p old-k8s-version-736991 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-736991 -n old-k8s-version-736991
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-736991 -n old-k8s-version-736991: exit status 2 (308.265976ms)
-- stdout --
	Paused
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p old-k8s-version-736991 -n old-k8s-version-736991
E0930 11:29:01.967073    7606 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19734-2285/.minikube/profiles/addons-703944/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p old-k8s-version-736991 -n old-k8s-version-736991: exit status 2 (337.579274ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p old-k8s-version-736991 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-736991 -n old-k8s-version-736991
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p old-k8s-version-736991 -n old-k8s-version-736991
--- PASS: TestStartStop/group/old-k8s-version/serial/Pause (2.85s)

TestStartStop/group/embed-certs/serial/DeployApp (11.44s)
=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-176501 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [826642eb-6365-4c6f-ad1d-690e09086ad0] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [826642eb-6365-4c6f-ad1d-690e09086ad0] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 11.004101438s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-176501 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (11.44s)

TestStartStop/group/default-k8s-diff-port/serial/FirstStart (81.12s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p default-k8s-diff-port-929188 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=docker --kubernetes-version=v1.31.1
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p default-k8s-diff-port-929188 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=docker --kubernetes-version=v1.31.1: (1m21.117770467s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (81.12s)

TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.29s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p embed-certs-176501 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p embed-certs-176501 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.164456546s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context embed-certs-176501 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.29s)

TestStartStop/group/embed-certs/serial/Stop (11.18s)

=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p embed-certs-176501 --alsologtostderr -v=3
E0930 11:29:18.897534    7606 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19734-2285/.minikube/profiles/addons-703944/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p embed-certs-176501 --alsologtostderr -v=3: (11.181227122s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (11.18s)

TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.24s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-176501 -n embed-certs-176501
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-176501 -n embed-certs-176501: exit status 7 (113.125475ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p embed-certs-176501 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.24s)

TestStartStop/group/embed-certs/serial/SecondStart (270.76s)

=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p embed-certs-176501 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=docker --kubernetes-version=v1.31.1
E0930 11:30:02.993712    7606 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19734-2285/.minikube/profiles/skaffold-942200/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-arm64 start -p embed-certs-176501 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=docker --kubernetes-version=v1.31.1: (4m30.431324039s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-176501 -n embed-certs-176501
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (270.76s)

TestStartStop/group/default-k8s-diff-port/serial/DeployApp (10.38s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-929188 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [ea0ac6a9-ebdc-4ae7-bc69-455d77c51bf2] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [ea0ac6a9-ebdc-4ae7-bc69-455d77c51bf2] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 10.003153632s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-929188 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (10.38s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.11s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p default-k8s-diff-port-929188 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p default-k8s-diff-port-929188 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.001127468s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context default-k8s-diff-port-929188 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.11s)

TestStartStop/group/default-k8s-diff-port/serial/Stop (10.8s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p default-k8s-diff-port-929188 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p default-k8s-diff-port-929188 --alsologtostderr -v=3: (10.801943817s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (10.80s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.18s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-929188 -n default-k8s-diff-port-929188
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-929188 -n default-k8s-diff-port-929188: exit status 7 (74.705949ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p default-k8s-diff-port-929188 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.18s)

TestStartStop/group/default-k8s-diff-port/serial/SecondStart (266.95s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p default-k8s-diff-port-929188 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=docker --kubernetes-version=v1.31.1
E0930 11:32:16.393385    7606 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19734-2285/.minikube/profiles/old-k8s-version-736991/client.crt: no such file or directory" logger="UnhandledError"
E0930 11:32:16.399748    7606 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19734-2285/.minikube/profiles/old-k8s-version-736991/client.crt: no such file or directory" logger="UnhandledError"
E0930 11:32:16.411095    7606 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19734-2285/.minikube/profiles/old-k8s-version-736991/client.crt: no such file or directory" logger="UnhandledError"
E0930 11:32:16.432436    7606 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19734-2285/.minikube/profiles/old-k8s-version-736991/client.crt: no such file or directory" logger="UnhandledError"
E0930 11:32:16.473831    7606 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19734-2285/.minikube/profiles/old-k8s-version-736991/client.crt: no such file or directory" logger="UnhandledError"
E0930 11:32:16.555209    7606 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19734-2285/.minikube/profiles/old-k8s-version-736991/client.crt: no such file or directory" logger="UnhandledError"
E0930 11:32:16.716718    7606 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19734-2285/.minikube/profiles/old-k8s-version-736991/client.crt: no such file or directory" logger="UnhandledError"
E0930 11:32:17.038388    7606 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19734-2285/.minikube/profiles/old-k8s-version-736991/client.crt: no such file or directory" logger="UnhandledError"
E0930 11:32:17.680556    7606 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19734-2285/.minikube/profiles/old-k8s-version-736991/client.crt: no such file or directory" logger="UnhandledError"
E0930 11:32:18.962315    7606 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19734-2285/.minikube/profiles/old-k8s-version-736991/client.crt: no such file or directory" logger="UnhandledError"
E0930 11:32:21.524503    7606 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19734-2285/.minikube/profiles/old-k8s-version-736991/client.crt: no such file or directory" logger="UnhandledError"
E0930 11:32:26.646721    7606 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19734-2285/.minikube/profiles/old-k8s-version-736991/client.crt: no such file or directory" logger="UnhandledError"
E0930 11:32:36.888746    7606 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19734-2285/.minikube/profiles/old-k8s-version-736991/client.crt: no such file or directory" logger="UnhandledError"
E0930 11:32:43.579925    7606 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19734-2285/.minikube/profiles/functional-656644/client.crt: no such file or directory" logger="UnhandledError"
E0930 11:32:57.370088    7606 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19734-2285/.minikube/profiles/old-k8s-version-736991/client.crt: no such file or directory" logger="UnhandledError"
E0930 11:33:00.511520    7606 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19734-2285/.minikube/profiles/functional-656644/client.crt: no such file or directory" logger="UnhandledError"
E0930 11:33:05.367404    7606 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19734-2285/.minikube/profiles/no-preload-938623/client.crt: no such file or directory" logger="UnhandledError"
E0930 11:33:05.373750    7606 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19734-2285/.minikube/profiles/no-preload-938623/client.crt: no such file or directory" logger="UnhandledError"
E0930 11:33:05.385176    7606 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19734-2285/.minikube/profiles/no-preload-938623/client.crt: no such file or directory" logger="UnhandledError"
E0930 11:33:05.406490    7606 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19734-2285/.minikube/profiles/no-preload-938623/client.crt: no such file or directory" logger="UnhandledError"
E0930 11:33:05.447797    7606 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19734-2285/.minikube/profiles/no-preload-938623/client.crt: no such file or directory" logger="UnhandledError"
E0930 11:33:05.529231    7606 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19734-2285/.minikube/profiles/no-preload-938623/client.crt: no such file or directory" logger="UnhandledError"
E0930 11:33:05.690750    7606 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19734-2285/.minikube/profiles/no-preload-938623/client.crt: no such file or directory" logger="UnhandledError"
E0930 11:33:06.012757    7606 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19734-2285/.minikube/profiles/no-preload-938623/client.crt: no such file or directory" logger="UnhandledError"
E0930 11:33:06.654950    7606 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19734-2285/.minikube/profiles/no-preload-938623/client.crt: no such file or directory" logger="UnhandledError"
E0930 11:33:07.936771    7606 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19734-2285/.minikube/profiles/no-preload-938623/client.crt: no such file or directory" logger="UnhandledError"
E0930 11:33:10.498329    7606 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19734-2285/.minikube/profiles/no-preload-938623/client.crt: no such file or directory" logger="UnhandledError"
E0930 11:33:15.621294    7606 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19734-2285/.minikube/profiles/no-preload-938623/client.crt: no such file or directory" logger="UnhandledError"
E0930 11:33:25.863251    7606 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19734-2285/.minikube/profiles/no-preload-938623/client.crt: no such file or directory" logger="UnhandledError"
E0930 11:33:38.332165    7606 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19734-2285/.minikube/profiles/old-k8s-version-736991/client.crt: no such file or directory" logger="UnhandledError"
E0930 11:33:39.930559    7606 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19734-2285/.minikube/profiles/skaffold-942200/client.crt: no such file or directory" logger="UnhandledError"
E0930 11:33:46.345532    7606 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19734-2285/.minikube/profiles/no-preload-938623/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-arm64 start -p default-k8s-diff-port-929188 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=docker --kubernetes-version=v1.31.1: (4m26.398637054s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-929188 -n default-k8s-diff-port-929188
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (266.95s)

TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6s)

=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-x2r77" [81bae926-5848-4fc0-9459-7c61c1a86bf7] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003398629s
--- PASS: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6.00s)

TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (6.1s)

=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-x2r77" [81bae926-5848-4fc0-9459-7c61c1a86bf7] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.00364442s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context embed-certs-176501 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (6.10s)

TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.22s)

=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 -p embed-certs-176501 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.22s)

TestStartStop/group/embed-certs/serial/Pause (2.65s)

=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p embed-certs-176501 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-176501 -n embed-certs-176501
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-176501 -n embed-certs-176501: exit status 2 (312.89522ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p embed-certs-176501 -n embed-certs-176501
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p embed-certs-176501 -n embed-certs-176501: exit status 2 (295.02587ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p embed-certs-176501 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-176501 -n embed-certs-176501
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p embed-certs-176501 -n embed-certs-176501
--- PASS: TestStartStop/group/embed-certs/serial/Pause (2.65s)

TestStartStop/group/newest-cni/serial/FirstStart (39.63s)

=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p newest-cni-827524 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=docker --kubernetes-version=v1.31.1
E0930 11:34:18.897626    7606 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19734-2285/.minikube/profiles/addons-703944/client.crt: no such file or directory" logger="UnhandledError"
E0930 11:34:27.307610    7606 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19734-2285/.minikube/profiles/no-preload-938623/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p newest-cni-827524 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=docker --kubernetes-version=v1.31.1: (39.631683754s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (39.63s)

TestStartStop/group/newest-cni/serial/DeployApp (0s)

=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.03s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p newest-cni-827524 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p newest-cni-827524 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.028164473s)
start_stop_delete_test.go:211: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.03s)

TestStartStop/group/newest-cni/serial/Stop (11.15s)

=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p newest-cni-827524 --alsologtostderr -v=3
E0930 11:35:00.254186    7606 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19734-2285/.minikube/profiles/old-k8s-version-736991/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p newest-cni-827524 --alsologtostderr -v=3: (11.147235228s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (11.15s)

TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.17s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-827524 -n newest-cni-827524
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-827524 -n newest-cni-827524: exit status 7 (61.394494ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p newest-cni-827524 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.17s)

TestStartStop/group/newest-cni/serial/SecondStart (18.37s)

=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p newest-cni-827524 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=docker --kubernetes-version=v1.31.1
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-arm64 start -p newest-cni-827524 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=docker --kubernetes-version=v1.31.1: (18.034543813s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-827524 -n newest-cni-827524
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (18.37s)

TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-kncfs" [45f42c96-ebb3-4673-bee6-911badedff84] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003572274s
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6.00s)

TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.16s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-kncfs" [45f42c96-ebb3-4673-bee6-911badedff84] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.005074759s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context default-k8s-diff-port-929188 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.16s)

TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:273: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:284: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.26s)

=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 -p newest-cni-827524 image list --format=json
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.26s)

TestStartStop/group/newest-cni/serial/Pause (2.68s)

=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p newest-cni-827524 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-827524 -n newest-cni-827524
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-827524 -n newest-cni-827524: exit status 2 (300.684297ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p newest-cni-827524 -n newest-cni-827524
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p newest-cni-827524 -n newest-cni-827524: exit status 2 (310.380792ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p newest-cni-827524 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-827524 -n newest-cni-827524
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p newest-cni-827524 -n newest-cni-827524
--- PASS: TestStartStop/group/newest-cni/serial/Pause (2.68s)

TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.31s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 -p default-k8s-diff-port-929188 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.31s)

TestStartStop/group/default-k8s-diff-port/serial/Pause (3.57s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p default-k8s-diff-port-929188 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-929188 -n default-k8s-diff-port-929188
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-929188 -n default-k8s-diff-port-929188: exit status 2 (386.730617ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p default-k8s-diff-port-929188 -n default-k8s-diff-port-929188
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p default-k8s-diff-port-929188 -n default-k8s-diff-port-929188: exit status 2 (372.889439ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p default-k8s-diff-port-929188 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-929188 -n default-k8s-diff-port-929188
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p default-k8s-diff-port-929188 -n default-k8s-diff-port-929188
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Pause (3.57s)
E0930 11:43:00.511320    7606 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19734-2285/.minikube/profiles/functional-656644/client.crt: no such file or directory" logger="UnhandledError"
E0930 11:43:05.367459    7606 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19734-2285/.minikube/profiles/no-preload-938623/client.crt: no such file or directory" logger="UnhandledError"
E0930 11:43:10.034660    7606 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19734-2285/.minikube/profiles/kindnet-824253/client.crt: no such file or directory" logger="UnhandledError"
E0930 11:43:11.517173    7606 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19734-2285/.minikube/profiles/default-k8s-diff-port-929188/client.crt: no such file or directory" logger="UnhandledError"
E0930 11:43:16.313476    7606 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19734-2285/.minikube/profiles/calico-824253/client.crt: no such file or directory" logger="UnhandledError"
E0930 11:43:16.319891    7606 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19734-2285/.minikube/profiles/calico-824253/client.crt: no such file or directory" logger="UnhandledError"
E0930 11:43:16.331236    7606 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19734-2285/.minikube/profiles/calico-824253/client.crt: no such file or directory" logger="UnhandledError"
E0930 11:43:16.352604    7606 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19734-2285/.minikube/profiles/calico-824253/client.crt: no such file or directory" logger="UnhandledError"
E0930 11:43:16.394070    7606 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19734-2285/.minikube/profiles/calico-824253/client.crt: no such file or directory" logger="UnhandledError"
E0930 11:43:16.475464    7606 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19734-2285/.minikube/profiles/calico-824253/client.crt: no such file or directory" logger="UnhandledError"
E0930 11:43:16.637017    7606 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19734-2285/.minikube/profiles/calico-824253/client.crt: no such file or directory" logger="UnhandledError"
E0930 11:43:16.959019    7606 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19734-2285/.minikube/profiles/calico-824253/client.crt: no such file or directory" logger="UnhandledError"
E0930 11:43:17.600693    7606 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19734-2285/.minikube/profiles/calico-824253/client.crt: no such file or directory" logger="UnhandledError"
E0930 11:43:18.882935    7606 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19734-2285/.minikube/profiles/calico-824253/client.crt: no such file or directory" logger="UnhandledError"
E0930 11:43:21.444333    7606 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19734-2285/.minikube/profiles/calico-824253/client.crt: no such file or directory" logger="UnhandledError"

TestNetworkPlugins/group/auto/Start (54.79s)

=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p auto-824253 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=docker
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p auto-824253 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=docker: (54.787864891s)
--- PASS: TestNetworkPlugins/group/auto/Start (54.79s)

TestNetworkPlugins/group/kindnet/Start (71.13s)

=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p kindnet-824253 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=docker
E0930 11:35:49.230193    7606 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19734-2285/.minikube/profiles/no-preload-938623/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p kindnet-824253 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=docker: (1m11.131343214s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (71.13s)

TestNetworkPlugins/group/auto/KubeletFlags (0.4s)

=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p auto-824253 "pgrep -a kubelet"
I0930 11:36:25.555842    7606 config.go:182] Loaded profile config "auto-824253": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.40s)

TestNetworkPlugins/group/auto/NetCatPod (15.37s)

=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-824253 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-c6cj5" [19d70b84-8dc6-410b-b976-22bbb27de0ee] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-c6cj5" [19d70b84-8dc6-410b-b976-22bbb27de0ee] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 15.004088179s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (15.37s)

TestNetworkPlugins/group/auto/DNS (0.27s)

=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-824253 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.27s)

TestNetworkPlugins/group/auto/Localhost (0.26s)

=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-824253 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.26s)

TestNetworkPlugins/group/auto/HairPin (0.23s)

=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-824253 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.23s)

TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:344: "kindnet-88nss" [94d6e1fd-751c-4f1b-a103-44aa2694d3cf] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 6.003888447s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

TestNetworkPlugins/group/kindnet/KubeletFlags (0.34s)

=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p kindnet-824253 "pgrep -a kubelet"
I0930 11:36:54.440247    7606 config.go:182] Loaded profile config "kindnet-824253": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.34s)

TestNetworkPlugins/group/kindnet/NetCatPod (10.33s)

=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-824253 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-9bwxg" [cc8b6f9d-579d-47d8-873e-f9c2432be537] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-9bwxg" [cc8b6f9d-579d-47d8-873e-f9c2432be537] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 10.00381247s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (10.33s)

TestNetworkPlugins/group/calico/Start (73.89s)

=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p calico-824253 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=docker
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p calico-824253 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=docker: (1m13.889869545s)
--- PASS: TestNetworkPlugins/group/calico/Start (73.89s)

TestNetworkPlugins/group/kindnet/DNS (0.2s)

=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kindnet-824253 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.20s)

TestNetworkPlugins/group/kindnet/Localhost (0.22s)

=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kindnet-824253 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.22s)

TestNetworkPlugins/group/kindnet/HairPin (0.21s)

=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kindnet-824253 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.21s)

TestNetworkPlugins/group/custom-flannel/Start (63.63s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p custom-flannel-824253 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=docker
E0930 11:37:44.098733    7606 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19734-2285/.minikube/profiles/old-k8s-version-736991/client.crt: no such file or directory" logger="UnhandledError"
E0930 11:38:00.511221    7606 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19734-2285/.minikube/profiles/functional-656644/client.crt: no such file or directory" logger="UnhandledError"
E0930 11:38:05.367793    7606 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19734-2285/.minikube/profiles/no-preload-938623/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p custom-flannel-824253 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=docker: (1m3.62850151s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (63.63s)

TestNetworkPlugins/group/calico/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:344: "calico-node-9sw6v" [33d54a99-64af-49fb-a013-2e537f2956bb] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 6.005353384s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (6.01s)

TestNetworkPlugins/group/calico/KubeletFlags (0.39s)

=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p calico-824253 "pgrep -a kubelet"
I0930 11:38:22.703892    7606 config.go:182] Loaded profile config "calico-824253": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.39s)

TestNetworkPlugins/group/calico/NetCatPod (13.38s)

=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context calico-824253 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-2tdmf" [8d36e173-a51e-4880-b8d2-a1dc593aa00e] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-2tdmf" [8d36e173-a51e-4880-b8d2-a1dc593aa00e] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 13.004203345s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (13.38s)

TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.28s)

=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p custom-flannel-824253 "pgrep -a kubelet"
I0930 11:38:32.558960    7606 config.go:182] Loaded profile config "custom-flannel-824253": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.28s)

TestNetworkPlugins/group/custom-flannel/NetCatPod (10.26s)

=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-824253 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-t76vn" [ee620152-afdc-44dc-8f98-1b17779f474f] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E0930 11:38:33.071897    7606 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19734-2285/.minikube/profiles/no-preload-938623/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:344: "netcat-6fc964789b-t76vn" [ee620152-afdc-44dc-8f98-1b17779f474f] Running
E0930 11:38:39.930492    7606 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19734-2285/.minikube/profiles/skaffold-942200/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 10.004324681s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (10.26s)

TestNetworkPlugins/group/calico/DNS (0.26s)

=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:175: (dbg) Run:  kubectl --context calico-824253 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.26s)

TestNetworkPlugins/group/calico/Localhost (0.19s)

=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:194: (dbg) Run:  kubectl --context calico-824253 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.19s)

TestNetworkPlugins/group/calico/HairPin (0.2s)

=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:264: (dbg) Run:  kubectl --context calico-824253 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.20s)

TestNetworkPlugins/group/custom-flannel/DNS (0.28s)

=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-824253 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.28s)

TestNetworkPlugins/group/custom-flannel/Localhost (0.28s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-824253 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.28s)

TestNetworkPlugins/group/custom-flannel/HairPin (0.23s)

=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-824253 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.23s)

TestNetworkPlugins/group/false/Start (88.15s)

=== RUN   TestNetworkPlugins/group/false/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p false-824253 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=false --driver=docker  --container-runtime=docker
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p false-824253 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=false --driver=docker  --container-runtime=docker: (1m28.149235792s)
--- PASS: TestNetworkPlugins/group/false/Start (88.15s)

TestNetworkPlugins/group/enable-default-cni/Start (76.56s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p enable-default-cni-824253 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=docker
E0930 11:39:18.898160    7606 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19734-2285/.minikube/profiles/addons-703944/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p enable-default-cni-824253 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=docker: (1m16.562859407s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (76.56s)

TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.28s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p enable-default-cni-824253 "pgrep -a kubelet"
I0930 11:40:26.364936    7606 config.go:182] Loaded profile config "enable-default-cni-824253": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.28s)

TestNetworkPlugins/group/enable-default-cni/NetCatPod (10.29s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-824253 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-w4wtw" [fd7d0f7d-d1cf-4fa9-a35f-e5f89ad806f4] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E0930 11:40:27.656269    7606 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19734-2285/.minikube/profiles/default-k8s-diff-port-929188/client.crt: no such file or directory" logger="UnhandledError"
E0930 11:40:27.662648    7606 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19734-2285/.minikube/profiles/default-k8s-diff-port-929188/client.crt: no such file or directory" logger="UnhandledError"
E0930 11:40:27.674009    7606 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19734-2285/.minikube/profiles/default-k8s-diff-port-929188/client.crt: no such file or directory" logger="UnhandledError"
E0930 11:40:27.695419    7606 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19734-2285/.minikube/profiles/default-k8s-diff-port-929188/client.crt: no such file or directory" logger="UnhandledError"
E0930 11:40:27.736820    7606 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19734-2285/.minikube/profiles/default-k8s-diff-port-929188/client.crt: no such file or directory" logger="UnhandledError"
E0930 11:40:27.818733    7606 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19734-2285/.minikube/profiles/default-k8s-diff-port-929188/client.crt: no such file or directory" logger="UnhandledError"
E0930 11:40:27.980269    7606 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19734-2285/.minikube/profiles/default-k8s-diff-port-929188/client.crt: no such file or directory" logger="UnhandledError"
E0930 11:40:28.302261    7606 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19734-2285/.minikube/profiles/default-k8s-diff-port-929188/client.crt: no such file or directory" logger="UnhandledError"
E0930 11:40:28.944150    7606 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19734-2285/.minikube/profiles/default-k8s-diff-port-929188/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:344: "netcat-6fc964789b-w4wtw" [fd7d0f7d-d1cf-4fa9-a35f-e5f89ad806f4] Running
E0930 11:40:32.786972    7606 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19734-2285/.minikube/profiles/default-k8s-diff-port-929188/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 10.005996629s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (10.29s)

                                                
                                    
TestNetworkPlugins/group/false/KubeletFlags (0.34s)

=== RUN   TestNetworkPlugins/group/false/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p false-824253 "pgrep -a kubelet"
I0930 11:40:30.020887    7606 config.go:182] Loaded profile config "false-824253": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
--- PASS: TestNetworkPlugins/group/false/KubeletFlags (0.34s)

                                                
                                    
TestNetworkPlugins/group/false/NetCatPod (11.32s)

=== RUN   TestNetworkPlugins/group/false/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context false-824253 replace --force -f testdata/netcat-deployment.yaml
E0930 11:40:30.225657    7606 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19734-2285/.minikube/profiles/default-k8s-diff-port-929188/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:163: (dbg) TestNetworkPlugins/group/false/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-r5s7q" [badee272-a58c-4259-bc50-0c7e1a964562] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-r5s7q" [badee272-a58c-4259-bc50-0c7e1a964562] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/false/NetCatPod: app=netcat healthy within 11.003364353s
--- PASS: TestNetworkPlugins/group/false/NetCatPod (11.32s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/DNS (0.19s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-824253 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.19s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/Localhost (0.16s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:194: (dbg) Run:  kubectl --context enable-default-cni-824253 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.16s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/HairPin (0.16s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:264: (dbg) Run:  kubectl --context enable-default-cni-824253 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.16s)

                                                
                                    
TestNetworkPlugins/group/false/DNS (0.19s)

=== RUN   TestNetworkPlugins/group/false/DNS
net_test.go:175: (dbg) Run:  kubectl --context false-824253 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/false/DNS (0.19s)

                                                
                                    
TestNetworkPlugins/group/false/Localhost (0.16s)

=== RUN   TestNetworkPlugins/group/false/Localhost
net_test.go:194: (dbg) Run:  kubectl --context false-824253 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/false/Localhost (0.16s)

                                                
                                    
TestNetworkPlugins/group/false/HairPin (0.16s)

=== RUN   TestNetworkPlugins/group/false/HairPin
net_test.go:264: (dbg) Run:  kubectl --context false-824253 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/false/HairPin (0.16s)

                                                
                                    
TestNetworkPlugins/group/flannel/Start (59.94s)

=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p flannel-824253 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=docker
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p flannel-824253 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=docker: (59.936460373s)
--- PASS: TestNetworkPlugins/group/flannel/Start (59.94s)

                                                
                                    
TestNetworkPlugins/group/bridge/Start (74.8s)

=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p bridge-824253 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=docker
E0930 11:41:08.633049    7606 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19734-2285/.minikube/profiles/default-k8s-diff-port-929188/client.crt: no such file or directory" logger="UnhandledError"
E0930 11:41:25.902239    7606 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19734-2285/.minikube/profiles/auto-824253/client.crt: no such file or directory" logger="UnhandledError"
E0930 11:41:25.908535    7606 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19734-2285/.minikube/profiles/auto-824253/client.crt: no such file or directory" logger="UnhandledError"
E0930 11:41:25.919826    7606 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19734-2285/.minikube/profiles/auto-824253/client.crt: no such file or directory" logger="UnhandledError"
E0930 11:41:25.941129    7606 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19734-2285/.minikube/profiles/auto-824253/client.crt: no such file or directory" logger="UnhandledError"
E0930 11:41:25.982458    7606 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19734-2285/.minikube/profiles/auto-824253/client.crt: no such file or directory" logger="UnhandledError"
E0930 11:41:26.063793    7606 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19734-2285/.minikube/profiles/auto-824253/client.crt: no such file or directory" logger="UnhandledError"
E0930 11:41:26.225224    7606 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19734-2285/.minikube/profiles/auto-824253/client.crt: no such file or directory" logger="UnhandledError"
E0930 11:41:26.547032    7606 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19734-2285/.minikube/profiles/auto-824253/client.crt: no such file or directory" logger="UnhandledError"
E0930 11:41:27.188734    7606 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19734-2285/.minikube/profiles/auto-824253/client.crt: no such file or directory" logger="UnhandledError"
E0930 11:41:28.470400    7606 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19734-2285/.minikube/profiles/auto-824253/client.crt: no such file or directory" logger="UnhandledError"
E0930 11:41:31.032584    7606 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19734-2285/.minikube/profiles/auto-824253/client.crt: no such file or directory" logger="UnhandledError"
E0930 11:41:36.154280    7606 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19734-2285/.minikube/profiles/auto-824253/client.crt: no such file or directory" logger="UnhandledError"
E0930 11:41:46.395502    7606 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19734-2285/.minikube/profiles/auto-824253/client.crt: no such file or directory" logger="UnhandledError"
E0930 11:41:48.096253    7606 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19734-2285/.minikube/profiles/kindnet-824253/client.crt: no such file or directory" logger="UnhandledError"
E0930 11:41:48.102545    7606 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19734-2285/.minikube/profiles/kindnet-824253/client.crt: no such file or directory" logger="UnhandledError"
E0930 11:41:48.113876    7606 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19734-2285/.minikube/profiles/kindnet-824253/client.crt: no such file or directory" logger="UnhandledError"
E0930 11:41:48.135209    7606 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19734-2285/.minikube/profiles/kindnet-824253/client.crt: no such file or directory" logger="UnhandledError"
E0930 11:41:48.176416    7606 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19734-2285/.minikube/profiles/kindnet-824253/client.crt: no such file or directory" logger="UnhandledError"
E0930 11:41:48.257755    7606 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19734-2285/.minikube/profiles/kindnet-824253/client.crt: no such file or directory" logger="UnhandledError"
E0930 11:41:48.419161    7606 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19734-2285/.minikube/profiles/kindnet-824253/client.crt: no such file or directory" logger="UnhandledError"
E0930 11:41:48.740984    7606 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19734-2285/.minikube/profiles/kindnet-824253/client.crt: no such file or directory" logger="UnhandledError"
E0930 11:41:49.383043    7606 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19734-2285/.minikube/profiles/kindnet-824253/client.crt: no such file or directory" logger="UnhandledError"
E0930 11:41:49.594569    7606 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19734-2285/.minikube/profiles/default-k8s-diff-port-929188/client.crt: no such file or directory" logger="UnhandledError"
E0930 11:41:50.664652    7606 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19734-2285/.minikube/profiles/kindnet-824253/client.crt: no such file or directory" logger="UnhandledError"
E0930 11:41:53.226858    7606 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19734-2285/.minikube/profiles/kindnet-824253/client.crt: no such file or directory" logger="UnhandledError"
E0930 11:41:58.349088    7606 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19734-2285/.minikube/profiles/kindnet-824253/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p bridge-824253 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=docker: (1m14.795891599s)
--- PASS: TestNetworkPlugins/group/bridge/Start (74.80s)

                                                
                                    
TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:344: "kube-flannel-ds-gqppw" [61efca29-c4e4-4a68-be22-dd1dd083bb8a] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 6.00429158s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

                                                
                                    
TestNetworkPlugins/group/flannel/KubeletFlags (0.28s)

=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p flannel-824253 "pgrep -a kubelet"
I0930 11:42:05.517261    7606 config.go:182] Loaded profile config "flannel-824253": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.28s)

                                                
                                    
TestNetworkPlugins/group/flannel/NetCatPod (10.24s)

=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-824253 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-jhblm" [6737384d-67d4-4f3c-b3bc-4121f0118fa9] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E0930 11:42:06.876824    7606 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19734-2285/.minikube/profiles/auto-824253/client.crt: no such file or directory" logger="UnhandledError"
E0930 11:42:08.590834    7606 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19734-2285/.minikube/profiles/kindnet-824253/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:344: "netcat-6fc964789b-jhblm" [6737384d-67d4-4f3c-b3bc-4121f0118fa9] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 10.0041055s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (10.24s)

                                                
                                    
TestNetworkPlugins/group/flannel/DNS (0.21s)

=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context flannel-824253 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.21s)

                                                
                                    
TestNetworkPlugins/group/flannel/Localhost (0.16s)

=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context flannel-824253 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.16s)

                                                
                                    
TestNetworkPlugins/group/flannel/HairPin (0.17s)

=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context flannel-824253 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.17s)

                                                
                                    
TestNetworkPlugins/group/bridge/KubeletFlags (0.28s)

=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p bridge-824253 "pgrep -a kubelet"
I0930 11:42:21.236982    7606 config.go:182] Loaded profile config "bridge-824253": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.28s)

                                                
                                    
TestNetworkPlugins/group/bridge/NetCatPod (11.25s)

=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context bridge-824253 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-wcpkx" [e95bfc08-18ea-4597-92eb-1d0b9bf93b67] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-wcpkx" [e95bfc08-18ea-4597-92eb-1d0b9bf93b67] Running
E0930 11:42:29.072995    7606 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19734-2285/.minikube/profiles/kindnet-824253/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 11.00436416s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (11.25s)

                                                
                                    
TestNetworkPlugins/group/bridge/DNS (0.33s)

=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:175: (dbg) Run:  kubectl --context bridge-824253 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.33s)

                                                
                                    
TestNetworkPlugins/group/bridge/Localhost (0.22s)

=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:194: (dbg) Run:  kubectl --context bridge-824253 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.22s)

                                                
                                    
TestNetworkPlugins/group/bridge/HairPin (0.24s)

=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:264: (dbg) Run:  kubectl --context bridge-824253 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.24s)

                                                
                                    
TestNetworkPlugins/group/kubenet/Start (46.54s)

=== RUN   TestNetworkPlugins/group/kubenet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p kubenet-824253 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --network-plugin=kubenet --driver=docker  --container-runtime=docker
E0930 11:42:47.839344    7606 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19734-2285/.minikube/profiles/auto-824253/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p kubenet-824253 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --network-plugin=kubenet --driver=docker  --container-runtime=docker: (46.535737816s)
--- PASS: TestNetworkPlugins/group/kubenet/Start (46.54s)

                                                
                                    
TestNetworkPlugins/group/kubenet/KubeletFlags (0.27s)

=== RUN   TestNetworkPlugins/group/kubenet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p kubenet-824253 "pgrep -a kubelet"
I0930 11:43:24.881983    7606 config.go:182] Loaded profile config "kubenet-824253": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.31.1
--- PASS: TestNetworkPlugins/group/kubenet/KubeletFlags (0.27s)

                                                
                                    
TestNetworkPlugins/group/kubenet/NetCatPod (12.24s)

=== RUN   TestNetworkPlugins/group/kubenet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kubenet-824253 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kubenet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-g5hz2" [cdbd005f-c6ca-4723-afbe-fbb1614ce6fb] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E0930 11:43:26.565743    7606 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19734-2285/.minikube/profiles/calico-824253/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:344: "netcat-6fc964789b-g5hz2" [cdbd005f-c6ca-4723-afbe-fbb1614ce6fb] Running
E0930 11:43:32.798704    7606 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19734-2285/.minikube/profiles/custom-flannel-824253/client.crt: no such file or directory" logger="UnhandledError"
E0930 11:43:32.805053    7606 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19734-2285/.minikube/profiles/custom-flannel-824253/client.crt: no such file or directory" logger="UnhandledError"
E0930 11:43:32.816427    7606 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19734-2285/.minikube/profiles/custom-flannel-824253/client.crt: no such file or directory" logger="UnhandledError"
E0930 11:43:32.837939    7606 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19734-2285/.minikube/profiles/custom-flannel-824253/client.crt: no such file or directory" logger="UnhandledError"
E0930 11:43:32.879284    7606 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19734-2285/.minikube/profiles/custom-flannel-824253/client.crt: no such file or directory" logger="UnhandledError"
E0930 11:43:32.960745    7606 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19734-2285/.minikube/profiles/custom-flannel-824253/client.crt: no such file or directory" logger="UnhandledError"
E0930 11:43:33.122112    7606 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19734-2285/.minikube/profiles/custom-flannel-824253/client.crt: no such file or directory" logger="UnhandledError"
E0930 11:43:33.443626    7606 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19734-2285/.minikube/profiles/custom-flannel-824253/client.crt: no such file or directory" logger="UnhandledError"
E0930 11:43:34.085052    7606 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19734-2285/.minikube/profiles/custom-flannel-824253/client.crt: no such file or directory" logger="UnhandledError"
E0930 11:43:35.367360    7606 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19734-2285/.minikube/profiles/custom-flannel-824253/client.crt: no such file or directory" logger="UnhandledError"
E0930 11:43:36.807331    7606 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19734-2285/.minikube/profiles/calico-824253/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:163: (dbg) TestNetworkPlugins/group/kubenet/NetCatPod: app=netcat healthy within 12.003449907s
--- PASS: TestNetworkPlugins/group/kubenet/NetCatPod (12.24s)

                                                
                                    
TestNetworkPlugins/group/kubenet/DNS (16.22s)

=== RUN   TestNetworkPlugins/group/kubenet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kubenet-824253 exec deployment/netcat -- nslookup kubernetes.default
E0930 11:43:37.929102    7606 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19734-2285/.minikube/profiles/custom-flannel-824253/client.crt: no such file or directory" logger="UnhandledError"
E0930 11:43:39.929845    7606 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19734-2285/.minikube/profiles/skaffold-942200/client.crt: no such file or directory" logger="UnhandledError"
E0930 11:43:43.050715    7606 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19734-2285/.minikube/profiles/custom-flannel-824253/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:175: (dbg) Non-zero exit: kubectl --context kubenet-824253 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (15.196155878s)

-- stdout --
	;; connection timed out; no servers could be reached
	
-- /stdout --
** stderr ** 
	command terminated with exit code 1
** /stderr **
I0930 11:43:52.317899    7606 retry.go:31] will retry after 817.177132ms: exit status 1
net_test.go:175: (dbg) Run:  kubectl --context kubenet-824253 exec deployment/netcat -- nslookup kubernetes.default
E0930 11:43:53.292290    7606 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19734-2285/.minikube/profiles/custom-flannel-824253/client.crt: no such file or directory" logger="UnhandledError"
--- PASS: TestNetworkPlugins/group/kubenet/DNS (16.22s)
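The DNS check above passed only on its second attempt: the first `nslookup` timed out after ~15s, `retry.go:31` logged an 817ms backoff, and the rerun succeeded. A minimal sketch of that retry pattern in plain shell — `probe` is a hypothetical stand-in for the real `kubectl exec deployment/netcat -- nslookup kubernetes.default` call, hard-coded here to fail once and then pass, mirroring the log:

```shell
# Retry-until-success, as the test harness does. `probe` simulates the
# flaky DNS lookup: first attempt fails, second succeeds.
attempts=0
probe() {
  attempts=$((attempts + 1))
  [ "$attempts" -ge 2 ]   # nonzero (failure) on attempt 1, success after
}
until probe; do
  sleep 0.1   # the real harness backed off 817ms before retrying
done
echo "succeeded after $attempts attempts"
```

The real harness additionally caps total wait time; this sketch loops unconditionally, which is fine only because `probe` is guaranteed to eventually succeed.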

                                                
                                    
TestNetworkPlugins/group/kubenet/Localhost (0.15s)

=== RUN   TestNetworkPlugins/group/kubenet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kubenet-824253 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kubenet/Localhost (0.15s)

                                                
                                    
TestNetworkPlugins/group/kubenet/HairPin (0.2s)

=== RUN   TestNetworkPlugins/group/kubenet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kubenet-824253 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kubenet/HairPin (0.20s)

                                                
                                    

Test skip (23/342)

TestDownloadOnly/v1.20.0/cached-images (0s)

=== RUN   TestDownloadOnly/v1.20.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.20.0/cached-images (0.00s)

                                                
                                    
TestDownloadOnly/v1.20.0/binaries (0s)

=== RUN   TestDownloadOnly/v1.20.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.20.0/binaries (0.00s)

TestDownloadOnly/v1.20.0/kubectl (0s)

=== RUN   TestDownloadOnly/v1.20.0/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.20.0/kubectl (0.00s)

TestDownloadOnly/v1.31.1/cached-images (0s)

=== RUN   TestDownloadOnly/v1.31.1/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.31.1/cached-images (0.00s)

TestDownloadOnly/v1.31.1/binaries (0s)

=== RUN   TestDownloadOnly/v1.31.1/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.31.1/binaries (0.00s)

TestDownloadOnly/v1.31.1/kubectl (0s)

=== RUN   TestDownloadOnly/v1.31.1/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.31.1/kubectl (0.00s)

TestDownloadOnlyKic (0.51s)

=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:232: (dbg) Run:  out/minikube-linux-arm64 start --download-only -p download-docker-398252 --alsologtostderr --driver=docker  --container-runtime=docker
aaa_download_only_test.go:244: Skip for arm64 platform. See https://github.com/kubernetes/minikube/issues/10144
helpers_test.go:175: Cleaning up "download-docker-398252" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p download-docker-398252
--- SKIP: TestDownloadOnlyKic (0.51s)

TestAddons/parallel/Olm (0s)

=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

=== CONT  TestAddons/parallel/Olm
addons_test.go:438: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

TestDockerEnvContainerd (0s)

=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with docker true linux arm64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

TestKVMDriverInstallOrUpdate (0s)

=== RUN   TestKVMDriverInstallOrUpdate
driver_install_or_update_test.go:45: Skip if arm64. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestKVMDriverInstallOrUpdate (0.00s)

TestHyperKitDriverInstallOrUpdate (0s)

=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:105: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

TestHyperkitDriverSkipUpgrade (0s)

=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:169: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

TestFunctional/parallel/MySQL (0s)

=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1787: arm64 is not supported by mysql. Skip the test. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestFunctional/parallel/MySQL (0.00s)

TestFunctional/parallel/PodmanEnv (0s)

=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:550: only validate podman env with docker container runtime, currently testing docker
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.00s)

TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.00s)

TestGvisorAddon (0s)

=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

TestImageBuild/serial/validateImageBuildWithBuildEnv (0s)

=== RUN   TestImageBuild/serial/validateImageBuildWithBuildEnv
image_test.go:114: skipping due to https://github.com/kubernetes/minikube/issues/12431
--- SKIP: TestImageBuild/serial/validateImageBuildWithBuildEnv (0.00s)

TestChangeNoneUser (0s)

=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

TestScheduledStopWindows (0s)

=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

TestStartStop/group/disable-driver-mounts (0.16s)

=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:103: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-586855" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p disable-driver-mounts-586855
--- SKIP: TestStartStop/group/disable-driver-mounts (0.16s)

TestNetworkPlugins/group/cilium (4.50s)

=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:629:
----------------------- debugLogs start: cilium-824253 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-824253

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-824253

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-824253

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-824253

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-824253

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-824253

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-824253

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-824253

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-824253

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-824253

>>> host: /etc/nsswitch.conf:
* Profile "cilium-824253" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-824253"

>>> host: /etc/hosts:
* Profile "cilium-824253" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-824253"

>>> host: /etc/resolv.conf:
* Profile "cilium-824253" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-824253"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-824253

>>> host: crictl pods:
* Profile "cilium-824253" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-824253"

>>> host: crictl containers:
* Profile "cilium-824253" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-824253"

>>> k8s: describe netcat deployment:
error: context "cilium-824253" does not exist

>>> k8s: describe netcat pod(s):
error: context "cilium-824253" does not exist

>>> k8s: netcat logs:
error: context "cilium-824253" does not exist

>>> k8s: describe coredns deployment:
error: context "cilium-824253" does not exist

>>> k8s: describe coredns pods:
error: context "cilium-824253" does not exist

>>> k8s: coredns logs:
error: context "cilium-824253" does not exist

>>> k8s: describe api server pod(s):
error: context "cilium-824253" does not exist

>>> k8s: api server logs:
error: context "cilium-824253" does not exist

>>> host: /etc/cni:
* Profile "cilium-824253" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-824253"

>>> host: ip a s:
* Profile "cilium-824253" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-824253"

>>> host: ip r s:
* Profile "cilium-824253" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-824253"

>>> host: iptables-save:
* Profile "cilium-824253" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-824253"

>>> host: iptables table nat:
* Profile "cilium-824253" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-824253"

>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-824253

>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-824253

>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-824253" does not exist

>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-824253" does not exist

>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-824253

>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-824253

>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-824253" does not exist

>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-824253" does not exist

>>> k8s: describe kube-proxy daemon set:
error: context "cilium-824253" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "cilium-824253" does not exist

>>> k8s: kube-proxy logs:
error: context "cilium-824253" does not exist

>>> host: kubelet daemon status:
* Profile "cilium-824253" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-824253"

>>> host: kubelet daemon config:
* Profile "cilium-824253" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-824253"

>>> k8s: kubelet logs:
* Profile "cilium-824253" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-824253"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-824253" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-824253"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-824253" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-824253"

>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/19734-2285/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Mon, 30 Sep 2024 11:17:18 UTC
        provider: minikube.sigs.k8s.io
        version: v1.34.0
      name: cluster_info
    server: https://192.168.85.2:8443
  name: NoKubernetes-894640
contexts:
- context:
    cluster: NoKubernetes-894640
    extensions:
    - extension:
        last-update: Mon, 30 Sep 2024 11:17:18 UTC
        provider: minikube.sigs.k8s.io
        version: v1.34.0
      name: context_info
    namespace: default
    user: NoKubernetes-894640
  name: NoKubernetes-894640
current-context: ""
kind: Config
preferences: {}
users:
- name: NoKubernetes-894640
  user:
    client-certificate: /home/jenkins/minikube-integration/19734-2285/.minikube/profiles/NoKubernetes-894640/client.crt
    client-key: /home/jenkins/minikube-integration/19734-2285/.minikube/profiles/NoKubernetes-894640/client.key

>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-824253

>>> host: docker daemon status:
* Profile "cilium-824253" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-824253"

>>> host: docker daemon config:
* Profile "cilium-824253" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-824253"

>>> host: /etc/docker/daemon.json:
* Profile "cilium-824253" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-824253"

>>> host: docker system info:
* Profile "cilium-824253" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-824253"

>>> host: cri-docker daemon status:
* Profile "cilium-824253" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-824253"

>>> host: cri-docker daemon config:
* Profile "cilium-824253" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-824253"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-824253" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-824253"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-824253" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-824253"

>>> host: cri-dockerd version:
* Profile "cilium-824253" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-824253"

>>> host: containerd daemon status:
* Profile "cilium-824253" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-824253"

>>> host: containerd daemon config:
* Profile "cilium-824253" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-824253"

>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-824253" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-824253"

>>> host: /etc/containerd/config.toml:
* Profile "cilium-824253" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-824253"

>>> host: containerd config dump:
* Profile "cilium-824253" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-824253"

>>> host: crio daemon status:
* Profile "cilium-824253" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-824253"

>>> host: crio daemon config:
* Profile "cilium-824253" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-824253"

>>> host: /etc/crio:
* Profile "cilium-824253" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-824253"

>>> host: crio config:
* Profile "cilium-824253" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-824253"

----------------------- debugLogs end: cilium-824253 [took: 4.357730379s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-824253" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p cilium-824253
--- SKIP: TestNetworkPlugins/group/cilium (4.50s)