Test Report: Docker_Linux_containerd_arm64 19576

2e9b50ac88536491e648f1503809a6b59d99d481:2024-09-06:36104

Failed tests (2/328)

| Order | Failed test                     | Duration (s) |
|-------|---------------------------------|--------------|
| 29    | TestAddons/serial/Volcano       | 200.09       |
| 111   | TestFunctional/parallel/License | 0.18         |
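
To triage locally, the failing tests can be re-run against a binary built from the commit above. A minimal sketch, assuming a minikube source checkout and a prebuilt out/minikube-linux-arm64; the -minikube-start-args flag name follows the integration harness and may need adjusting for your environment:

	# Re-run just the Volcano failure (sketch; not the exact CI invocation).
	go test -v -timeout 30m ./test/integration \
		-run "TestAddons/serial/Volcano" \
		-args -minikube-start-args="--driver=docker --container-runtime=containerd"
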
TestAddons/serial/Volcano (200.09s)

=== RUN   TestAddons/serial/Volcano
addons_test.go:897: volcano-scheduler stabilized in 59.493267ms
addons_test.go:905: volcano-admission stabilized in 60.130334ms
addons_test.go:913: volcano-controller stabilized in 60.281246ms
addons_test.go:919: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-scheduler" in namespace "volcano-system" ...
helpers_test.go:344: "volcano-scheduler-576bc46687-2mczk" [da8b8868-ea9c-4b63-b515-4cb9bcf96635] Running
addons_test.go:919: (dbg) TestAddons/serial/Volcano: app=volcano-scheduler healthy within 6.004004163s
addons_test.go:923: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-admission" in namespace "volcano-system" ...
helpers_test.go:344: "volcano-admission-77d7d48b68-mx4tr" [13b51014-a423-4242-8282-a2ee9c29f945] Running
addons_test.go:923: (dbg) TestAddons/serial/Volcano: app=volcano-admission healthy within 5.003715308s
addons_test.go:927: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-controller" in namespace "volcano-system" ...
helpers_test.go:344: "volcano-controllers-56675bb4d5-47pmz" [1ed7d265-940b-4553-8eef-9f3ce761ac67] Running
addons_test.go:927: (dbg) TestAddons/serial/Volcano: app=volcano-controller healthy within 5.004045737s
addons_test.go:932: (dbg) Run:  kubectl --context addons-663433 delete -n volcano-system job volcano-admission-init
addons_test.go:938: (dbg) Run:  kubectl --context addons-663433 create -f testdata/vcjob.yaml
addons_test.go:946: (dbg) Run:  kubectl --context addons-663433 get vcjob -n my-volcano
addons_test.go:964: (dbg) TestAddons/serial/Volcano: waiting 3m0s for pods matching "volcano.sh/job-name=test-job" in namespace "my-volcano" ...
helpers_test.go:344: "test-job-nginx-0" [f0b30973-453e-4754-93d6-449a5c812433] Pending
helpers_test.go:344: "test-job-nginx-0" [f0b30973-453e-4754-93d6-449a5c812433] Pending: PodScheduled:Unschedulable (0/1 nodes are unavailable: 1 Insufficient cpu.)
helpers_test.go:329: TestAddons/serial/Volcano: WARNING: pod list for "my-volcano" "volcano.sh/job-name=test-job" returned: client rate limiter Wait returned an error: context deadline exceeded
addons_test.go:964: ***** TestAddons/serial/Volcano: pod "volcano.sh/job-name=test-job" failed to start within 3m0s: context deadline exceeded ****
addons_test.go:964: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p addons-663433 -n addons-663433
addons_test.go:964: TestAddons/serial/Volcano: showing logs for failed pods as of 2024-09-06 18:36:39.987457451 +0000 UTC m=+435.688958850
addons_test.go:964: (dbg) Run:  kubectl --context addons-663433 describe po test-job-nginx-0 -n my-volcano
addons_test.go:964: (dbg) kubectl --context addons-663433 describe po test-job-nginx-0 -n my-volcano:
Name:             test-job-nginx-0
Namespace:        my-volcano
Priority:         0
Service Account:  default
Node:             <none>
Labels:           volcano.sh/job-name=test-job
                  volcano.sh/job-namespace=my-volcano
                  volcano.sh/queue-name=test
                  volcano.sh/task-index=0
                  volcano.sh/task-spec=nginx
Annotations:      scheduling.k8s.io/group-name: test-job-2a055d2a-b868-4f31-b66a-ced277958126
                  volcano.sh/job-name: test-job
                  volcano.sh/job-version: 0
                  volcano.sh/queue-name: test
                  volcano.sh/task-index: 0
                  volcano.sh/task-spec: nginx
                  volcano.sh/template-uid: test-job-nginx
Status:           Pending
IP:
IPs:              <none>
Controlled By:    Job/test-job
Containers:
  nginx:
    Image:      nginx:latest
    Port:       <none>
    Host Port:  <none>
    Command:
      sleep
      10m
    Limits:
      cpu:  1
    Requests:
      cpu:  1
    Environment:
      GOOGLE_APPLICATION_CREDENTIALS:  /google-app-creds.json
      PROJECT_ID:                      this_is_fake
      GCP_PROJECT:                     this_is_fake
      GCLOUD_PROJECT:                  this_is_fake
      GOOGLE_CLOUD_PROJECT:            this_is_fake
      CLOUDSDK_CORE_PROJECT:           this_is_fake
    Mounts:
      /google-app-creds.json from gcp-creds (ro)
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-h48w4 (ro)
Conditions:
  Type           Status
  PodScheduled   False
Volumes:
  kube-api-access-h48w4:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    ConfigMapOptional:       <nil>
    DownwardAPI:             true
  gcp-creds:
    Type:          HostPath (bare host directory volume)
    Path:          /var/lib/minikube/google_application_credentials.json
    HostPathType:  File
QoS Class:         Burstable
Node-Selectors:    <none>
Tolerations:       node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                   node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type     Reason            Age    From     Message
  ----     ------            ----   ----     -------
  Warning  FailedScheduling  2m59s  volcano  0/1 nodes are unavailable: 1 Insufficient cpu.
addons_test.go:964: (dbg) Run:  kubectl --context addons-663433 logs test-job-nginx-0 -n my-volcano
addons_test.go:964: (dbg) kubectl --context addons-663433 logs test-job-nginx-0 -n my-volcano:
addons_test.go:965: failed waiting for test-local-path pod: volcano.sh/job-name=test-job within 3m0s: context deadline exceeded
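
The describe output above shows the pod stuck Pending on "0/1 nodes are unavailable: 1 Insufficient cpu.": the job's nginx task requests a full CPU (requests/limits cpu: 1) on a node capped at 2 CPUs that already hosts the control plane and the addon pods. A quick way to see the remaining headroom, as a sketch assuming the addons-663433 cluster is still running:

	# Compare the node's allocatable CPU with what running pods already request;
	# test-job-nginx-0 needs one further whole CPU on top of that.
	kubectl --context addons-663433 describe node addons-663433 \
		| sed -n '/Allocatable:/,/System Info:/p'
	kubectl --context addons-663433 get pods -A \
		-o custom-columns='NS:.metadata.namespace,POD:.metadata.name,CPU:.spec.containers[*].resources.requests.cpu'
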
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestAddons/serial/Volcano]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect addons-663433
helpers_test.go:235: (dbg) docker inspect addons-663433:

-- stdout --
	[
	    {
	        "Id": "1cd649a6ff5c712445baaceea43501134091b4386c7fe1fc3d4ae11d2b050efb",
	        "Created": "2024-09-06T18:30:07.875530732Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 8905,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2024-09-06T18:30:08.082055754Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:8411aacd61cb8f2a7ae48c92e2c9e76ad4dd701b3dba8b30393c5cc31fbd2b15",
	        "ResolvConfPath": "/var/lib/docker/containers/1cd649a6ff5c712445baaceea43501134091b4386c7fe1fc3d4ae11d2b050efb/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/1cd649a6ff5c712445baaceea43501134091b4386c7fe1fc3d4ae11d2b050efb/hostname",
	        "HostsPath": "/var/lib/docker/containers/1cd649a6ff5c712445baaceea43501134091b4386c7fe1fc3d4ae11d2b050efb/hosts",
	        "LogPath": "/var/lib/docker/containers/1cd649a6ff5c712445baaceea43501134091b4386c7fe1fc3d4ae11d2b050efb/1cd649a6ff5c712445baaceea43501134091b4386c7fe1fc3d4ae11d2b050efb-json.log",
	        "Name": "/addons-663433",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "addons-663433:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "addons-663433",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4194304000,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8388608000,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/d756fe605aafcbfe051a7838871e0220c00da88d3b6df733663b3637a5bee48a-init/diff:/var/lib/docker/overlay2/e1d41880879a75cd3e4a60c31d9b14f7f93644b90678ee4622bd30dc9907c43e/diff",
	                "MergedDir": "/var/lib/docker/overlay2/d756fe605aafcbfe051a7838871e0220c00da88d3b6df733663b3637a5bee48a/merged",
	                "UpperDir": "/var/lib/docker/overlay2/d756fe605aafcbfe051a7838871e0220c00da88d3b6df733663b3637a5bee48a/diff",
	                "WorkDir": "/var/lib/docker/overlay2/d756fe605aafcbfe051a7838871e0220c00da88d3b6df733663b3637a5bee48a/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "addons-663433",
	                "Source": "/var/lib/docker/volumes/addons-663433/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "addons-663433",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase:v0.0.45@sha256:81df288595202a317b1a4dc2506ca2e4ed5f22373c19a441b88cfbf4b9867c85",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "addons-663433",
	                "name.minikube.sigs.k8s.io": "addons-663433",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "ae3abf49aa458085cb77be314b7f19fcec51c7ca7e9bf628961642b495ff965a",
	            "SandboxKey": "/var/run/docker/netns/ae3abf49aa45",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32768"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32769"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32772"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32770"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32771"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "addons-663433": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "DriverOpts": null,
	                    "NetworkID": "f5b1465c61bcfb7e7d8b51d351ab3022cb8a016cf72d3aa7ae06bfd23d2e791f",
	                    "EndpointID": "863feb2ed178f774fb442281c4426e8d6814d8862193d482604f91abfbf27a5b",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "addons-663433",
	                        "1cd649a6ff5c"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
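
Consistent with the scheduling failure, the inspect payload above caps the node container at NanoCpus 2000000000 (2 CPUs) and Memory 4194304000 (~4 GB). Those two ceilings can be pulled on their own with a format template, for example:

	# Show just the CPU/memory ceiling docker enforces on the minikube node.
	docker inspect addons-663433 --format 'cpus={{.HostConfig.NanoCpus}} mem={{.HostConfig.Memory}}'
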
helpers_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p addons-663433 -n addons-663433
helpers_test.go:244: <<< TestAddons/serial/Volcano FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestAddons/serial/Volcano]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 -p addons-663433 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-arm64 -p addons-663433 logs -n 25: (1.698741387s)
helpers_test.go:252: TestAddons/serial/Volcano logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	| Command |                 Args                 |        Profile         |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only              | download-only-998007   | jenkins | v1.34.0 | 06 Sep 24 18:29 UTC |                     |
	|         | -p download-only-998007              |                        |         |         |                     |                     |
	|         | --force --alsologtostderr            |                        |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0         |                        |         |         |                     |                     |
	|         | --container-runtime=containerd       |                        |         |         |                     |                     |
	|         | --driver=docker                      |                        |         |         |                     |                     |
	|         | --container-runtime=containerd       |                        |         |         |                     |                     |
	| delete  | --all                                | minikube               | jenkins | v1.34.0 | 06 Sep 24 18:29 UTC | 06 Sep 24 18:29 UTC |
	| delete  | -p download-only-998007              | download-only-998007   | jenkins | v1.34.0 | 06 Sep 24 18:29 UTC | 06 Sep 24 18:29 UTC |
	| start   | -o=json --download-only              | download-only-582754   | jenkins | v1.34.0 | 06 Sep 24 18:29 UTC |                     |
	|         | -p download-only-582754              |                        |         |         |                     |                     |
	|         | --force --alsologtostderr            |                        |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0         |                        |         |         |                     |                     |
	|         | --container-runtime=containerd       |                        |         |         |                     |                     |
	|         | --driver=docker                      |                        |         |         |                     |                     |
	|         | --container-runtime=containerd       |                        |         |         |                     |                     |
	| delete  | --all                                | minikube               | jenkins | v1.34.0 | 06 Sep 24 18:29 UTC | 06 Sep 24 18:29 UTC |
	| delete  | -p download-only-582754              | download-only-582754   | jenkins | v1.34.0 | 06 Sep 24 18:29 UTC | 06 Sep 24 18:29 UTC |
	| delete  | -p download-only-998007              | download-only-998007   | jenkins | v1.34.0 | 06 Sep 24 18:29 UTC | 06 Sep 24 18:29 UTC |
	| delete  | -p download-only-582754              | download-only-582754   | jenkins | v1.34.0 | 06 Sep 24 18:29 UTC | 06 Sep 24 18:29 UTC |
	| start   | --download-only -p                   | download-docker-126447 | jenkins | v1.34.0 | 06 Sep 24 18:29 UTC |                     |
	|         | download-docker-126447               |                        |         |         |                     |                     |
	|         | --alsologtostderr                    |                        |         |         |                     |                     |
	|         | --driver=docker                      |                        |         |         |                     |                     |
	|         | --container-runtime=containerd       |                        |         |         |                     |                     |
	| delete  | -p download-docker-126447            | download-docker-126447 | jenkins | v1.34.0 | 06 Sep 24 18:29 UTC | 06 Sep 24 18:29 UTC |
	| start   | --download-only -p                   | binary-mirror-873447   | jenkins | v1.34.0 | 06 Sep 24 18:29 UTC |                     |
	|         | binary-mirror-873447                 |                        |         |         |                     |                     |
	|         | --alsologtostderr                    |                        |         |         |                     |                     |
	|         | --binary-mirror                      |                        |         |         |                     |                     |
	|         | http://127.0.0.1:36941               |                        |         |         |                     |                     |
	|         | --driver=docker                      |                        |         |         |                     |                     |
	|         | --container-runtime=containerd       |                        |         |         |                     |                     |
	| delete  | -p binary-mirror-873447              | binary-mirror-873447   | jenkins | v1.34.0 | 06 Sep 24 18:29 UTC | 06 Sep 24 18:29 UTC |
	| addons  | enable dashboard -p                  | addons-663433          | jenkins | v1.34.0 | 06 Sep 24 18:29 UTC |                     |
	|         | addons-663433                        |                        |         |         |                     |                     |
	| addons  | disable dashboard -p                 | addons-663433          | jenkins | v1.34.0 | 06 Sep 24 18:29 UTC |                     |
	|         | addons-663433                        |                        |         |         |                     |                     |
	| start   | -p addons-663433 --wait=true         | addons-663433          | jenkins | v1.34.0 | 06 Sep 24 18:29 UTC | 06 Sep 24 18:33 UTC |
	|         | --memory=4000 --alsologtostderr      |                        |         |         |                     |                     |
	|         | --addons=registry                    |                        |         |         |                     |                     |
	|         | --addons=metrics-server              |                        |         |         |                     |                     |
	|         | --addons=volumesnapshots             |                        |         |         |                     |                     |
	|         | --addons=csi-hostpath-driver         |                        |         |         |                     |                     |
	|         | --addons=gcp-auth                    |                        |         |         |                     |                     |
	|         | --addons=cloud-spanner               |                        |         |         |                     |                     |
	|         | --addons=inspektor-gadget            |                        |         |         |                     |                     |
	|         | --addons=storage-provisioner-rancher |                        |         |         |                     |                     |
	|         | --addons=nvidia-device-plugin        |                        |         |         |                     |                     |
	|         | --addons=yakd --addons=volcano       |                        |         |         |                     |                     |
	|         | --driver=docker                      |                        |         |         |                     |                     |
	|         | --container-runtime=containerd       |                        |         |         |                     |                     |
	|         | --addons=ingress                     |                        |         |         |                     |                     |
	|         | --addons=ingress-dns                 |                        |         |         |                     |                     |
	|---------|--------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/06 18:29:42
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.22.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0906 18:29:42.093459    8404 out.go:345] Setting OutFile to fd 1 ...
	I0906 18:29:42.093612    8404 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0906 18:29:42.093623    8404 out.go:358] Setting ErrFile to fd 2...
	I0906 18:29:42.093628    8404 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0906 18:29:42.093866    8404 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19576-2243/.minikube/bin
	I0906 18:29:42.094340    8404 out.go:352] Setting JSON to false
	I0906 18:29:42.095159    8404 start.go:129] hostinfo: {"hostname":"ip-172-31-31-251","uptime":730,"bootTime":1725646652,"procs":146,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1068-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I0906 18:29:42.095239    8404 start.go:139] virtualization:  
	I0906 18:29:42.100267    8404 out.go:177] * [addons-663433] minikube v1.34.0 on Ubuntu 20.04 (arm64)
	I0906 18:29:42.102886    8404 out.go:177]   - MINIKUBE_LOCATION=19576
	I0906 18:29:42.102995    8404 notify.go:220] Checking for updates...
	I0906 18:29:42.109061    8404 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0906 18:29:42.111306    8404 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19576-2243/kubeconfig
	I0906 18:29:42.113842    8404 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19576-2243/.minikube
	I0906 18:29:42.116861    8404 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0906 18:29:42.119189    8404 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0906 18:29:42.121505    8404 driver.go:394] Setting default libvirt URI to qemu:///system
	I0906 18:29:42.150038    8404 docker.go:123] docker version: linux-27.2.0:Docker Engine - Community
	I0906 18:29:42.150179    8404 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0906 18:29:42.232221    8404 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:29 OomKillDisable:true NGoroutines:44 SystemTime:2024-09-06 18:29:42.219963405 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1068-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:27.2.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:472731909fa34bd7bc9c087e4c27943f9835f111 Expected:472731909fa34bd7bc9c087e4c27943f9835f111} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.2]] Warnings:<nil>}}
	I0906 18:29:42.232567    8404 docker.go:318] overlay module found
	I0906 18:29:42.235185    8404 out.go:177] * Using the docker driver based on user configuration
	I0906 18:29:42.237478    8404 start.go:297] selected driver: docker
	I0906 18:29:42.237507    8404 start.go:901] validating driver "docker" against <nil>
	I0906 18:29:42.237523    8404 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0906 18:29:42.238259    8404 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0906 18:29:42.300354    8404 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:29 OomKillDisable:true NGoroutines:44 SystemTime:2024-09-06 18:29:42.290727786 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1068-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:27.2.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:472731909fa34bd7bc9c087e4c27943f9835f111 Expected:472731909fa34bd7bc9c087e4c27943f9835f111} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.2]] Warnings:<nil>}}
	I0906 18:29:42.300561    8404 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0906 18:29:42.300835    8404 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0906 18:29:42.302889    8404 out.go:177] * Using Docker driver with root privileges
	I0906 18:29:42.304930    8404 cni.go:84] Creating CNI manager for ""
	I0906 18:29:42.304957    8404 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I0906 18:29:42.304968    8404 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0906 18:29:42.305061    8404 start.go:340] cluster config:
	{Name:addons-663433 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.45@sha256:81df288595202a317b1a4dc2506ca2e4ed5f22373c19a441b88cfbf4b9867c85 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:addons-663433 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0906 18:29:42.307443    8404 out.go:177] * Starting "addons-663433" primary control-plane node in "addons-663433" cluster
	I0906 18:29:42.309447    8404 cache.go:121] Beginning downloading kic base image for docker with containerd
	I0906 18:29:42.312203    8404 out.go:177] * Pulling base image v0.0.45 ...
	I0906 18:29:42.314542    8404 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime containerd
	I0906 18:29:42.314624    8404 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19576-2243/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-containerd-overlay2-arm64.tar.lz4
	I0906 18:29:42.314638    8404 cache.go:56] Caching tarball of preloaded images
	I0906 18:29:42.314646    8404 image.go:79] Checking for gcr.io/k8s-minikube/kicbase:v0.0.45@sha256:81df288595202a317b1a4dc2506ca2e4ed5f22373c19a441b88cfbf4b9867c85 in local docker daemon
	I0906 18:29:42.314724    8404 preload.go:172] Found /home/jenkins/minikube-integration/19576-2243/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-containerd-overlay2-arm64.tar.lz4 in cache, skipping download
	I0906 18:29:42.314735    8404 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on containerd
	I0906 18:29:42.315117    8404 profile.go:143] Saving config to /home/jenkins/minikube-integration/19576-2243/.minikube/profiles/addons-663433/config.json ...
	I0906 18:29:42.315216    8404 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19576-2243/.minikube/profiles/addons-663433/config.json: {Name:mke55920259ad4aebc8752aa90343945cda0d586 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0906 18:29:42.331888    8404 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase:v0.0.45@sha256:81df288595202a317b1a4dc2506ca2e4ed5f22373c19a441b88cfbf4b9867c85 to local cache
	I0906 18:29:42.332019    8404 image.go:63] Checking for gcr.io/k8s-minikube/kicbase:v0.0.45@sha256:81df288595202a317b1a4dc2506ca2e4ed5f22373c19a441b88cfbf4b9867c85 in local cache directory
	I0906 18:29:42.332044    8404 image.go:66] Found gcr.io/k8s-minikube/kicbase:v0.0.45@sha256:81df288595202a317b1a4dc2506ca2e4ed5f22373c19a441b88cfbf4b9867c85 in local cache directory, skipping pull
	I0906 18:29:42.332051    8404 image.go:135] gcr.io/k8s-minikube/kicbase:v0.0.45@sha256:81df288595202a317b1a4dc2506ca2e4ed5f22373c19a441b88cfbf4b9867c85 exists in cache, skipping pull
	I0906 18:29:42.332064    8404 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase:v0.0.45@sha256:81df288595202a317b1a4dc2506ca2e4ed5f22373c19a441b88cfbf4b9867c85 as a tarball
	I0906 18:29:42.332076    8404 cache.go:162] Loading gcr.io/k8s-minikube/kicbase:v0.0.45@sha256:81df288595202a317b1a4dc2506ca2e4ed5f22373c19a441b88cfbf4b9867c85 from local cache
	I0906 18:29:59.517129    8404 cache.go:164] successfully loaded and using gcr.io/k8s-minikube/kicbase:v0.0.45@sha256:81df288595202a317b1a4dc2506ca2e4ed5f22373c19a441b88cfbf4b9867c85 from cached tarball
	I0906 18:29:59.517180    8404 cache.go:194] Successfully downloaded all kic artifacts
	I0906 18:29:59.517217    8404 start.go:360] acquireMachinesLock for addons-663433: {Name:mkc0ab1738c29a3ee40a550d2b671e1276cd2098 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0906 18:29:59.517401    8404 start.go:364] duration metric: took 166.172µs to acquireMachinesLock for "addons-663433"
	I0906 18:29:59.517428    8404 start.go:93] Provisioning new machine with config: &{Name:addons-663433 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.45@sha256:81df288595202a317b1a4dc2506ca2e4ed5f22373c19a441b88cfbf4b9867c85 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:addons-663433 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0906 18:29:59.517506    8404 start.go:125] createHost starting for "" (driver="docker")
	I0906 18:29:59.519622    8404 out.go:235] * Creating docker container (CPUs=2, Memory=4000MB) ...
	I0906 18:29:59.519859    8404 start.go:159] libmachine.API.Create for "addons-663433" (driver="docker")
	I0906 18:29:59.519893    8404 client.go:168] LocalClient.Create starting
	I0906 18:29:59.520005    8404 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/19576-2243/.minikube/certs/ca.pem
	I0906 18:29:59.940671    8404 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/19576-2243/.minikube/certs/cert.pem
	I0906 18:30:00.936325    8404 cli_runner.go:164] Run: docker network inspect addons-663433 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0906 18:30:00.953014    8404 cli_runner.go:211] docker network inspect addons-663433 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0906 18:30:00.953123    8404 network_create.go:284] running [docker network inspect addons-663433] to gather additional debugging logs...
	I0906 18:30:00.953149    8404 cli_runner.go:164] Run: docker network inspect addons-663433
	W0906 18:30:00.968349    8404 cli_runner.go:211] docker network inspect addons-663433 returned with exit code 1
	I0906 18:30:00.968388    8404 network_create.go:287] error running [docker network inspect addons-663433]: docker network inspect addons-663433: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network addons-663433 not found
	I0906 18:30:00.968404    8404 network_create.go:289] output of [docker network inspect addons-663433]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network addons-663433 not found
	
	** /stderr **
	I0906 18:30:00.968534    8404 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0906 18:30:00.985722    8404 network.go:206] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x4000563df0}
	I0906 18:30:00.985767    8404 network_create.go:124] attempt to create docker network addons-663433 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I0906 18:30:00.985828    8404 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=addons-663433 addons-663433
	I0906 18:30:01.101697    8404 network_create.go:108] docker network addons-663433 192.168.49.0/24 created
	I0906 18:30:01.101740    8404 kic.go:121] calculated static IP "192.168.49.2" for the "addons-663433" container
	I0906 18:30:01.101819    8404 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0906 18:30:01.116925    8404 cli_runner.go:164] Run: docker volume create addons-663433 --label name.minikube.sigs.k8s.io=addons-663433 --label created_by.minikube.sigs.k8s.io=true
	I0906 18:30:01.136104    8404 oci.go:103] Successfully created a docker volume addons-663433
	I0906 18:30:01.136202    8404 cli_runner.go:164] Run: docker run --rm --name addons-663433-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-663433 --entrypoint /usr/bin/test -v addons-663433:/var gcr.io/k8s-minikube/kicbase:v0.0.45@sha256:81df288595202a317b1a4dc2506ca2e4ed5f22373c19a441b88cfbf4b9867c85 -d /var/lib
	I0906 18:30:03.332051    8404 cli_runner.go:217] Completed: docker run --rm --name addons-663433-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-663433 --entrypoint /usr/bin/test -v addons-663433:/var gcr.io/k8s-minikube/kicbase:v0.0.45@sha256:81df288595202a317b1a4dc2506ca2e4ed5f22373c19a441b88cfbf4b9867c85 -d /var/lib: (2.195809836s)
	I0906 18:30:03.332079    8404 oci.go:107] Successfully prepared a docker volume addons-663433
	I0906 18:30:03.332106    8404 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime containerd
	I0906 18:30:03.332125    8404 kic.go:194] Starting extracting preloaded images to volume ...
	I0906 18:30:03.332208    8404 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/19576-2243/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-containerd-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v addons-663433:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.45@sha256:81df288595202a317b1a4dc2506ca2e4ed5f22373c19a441b88cfbf4b9867c85 -I lz4 -xf /preloaded.tar -C /extractDir
	I0906 18:30:07.807659    8404 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/19576-2243/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-containerd-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v addons-663433:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.45@sha256:81df288595202a317b1a4dc2506ca2e4ed5f22373c19a441b88cfbf4b9867c85 -I lz4 -xf /preloaded.tar -C /extractDir: (4.475404799s)
	I0906 18:30:07.807692    8404 kic.go:203] duration metric: took 4.475563455s to extract preloaded images to volume ...
	W0906 18:30:07.807834    8404 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I0906 18:30:07.807956    8404 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0906 18:30:07.861024    8404 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname addons-663433 --name addons-663433 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-663433 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=addons-663433 --network addons-663433 --ip 192.168.49.2 --volume addons-663433:/var --security-opt apparmor=unconfined --memory=4000mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase:v0.0.45@sha256:81df288595202a317b1a4dc2506ca2e4ed5f22373c19a441b88cfbf4b9867c85
	I0906 18:30:08.244847    8404 cli_runner.go:164] Run: docker container inspect addons-663433 --format={{.State.Running}}
	I0906 18:30:08.263705    8404 cli_runner.go:164] Run: docker container inspect addons-663433 --format={{.State.Status}}
	I0906 18:30:08.287262    8404 cli_runner.go:164] Run: docker exec addons-663433 stat /var/lib/dpkg/alternatives/iptables
	I0906 18:30:08.355361    8404 oci.go:144] the created container "addons-663433" has a running status.
	I0906 18:30:08.355398    8404 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/19576-2243/.minikube/machines/addons-663433/id_rsa...
	I0906 18:30:09.031130    8404 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/19576-2243/.minikube/machines/addons-663433/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0906 18:30:09.070457    8404 cli_runner.go:164] Run: docker container inspect addons-663433 --format={{.State.Status}}
	I0906 18:30:09.093588    8404 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0906 18:30:09.093609    8404 kic_runner.go:114] Args: [docker exec --privileged addons-663433 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0906 18:30:09.159956    8404 cli_runner.go:164] Run: docker container inspect addons-663433 --format={{.State.Status}}
	I0906 18:30:09.185501    8404 machine.go:93] provisionDockerMachine start ...
	I0906 18:30:09.185586    8404 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-663433
	I0906 18:30:09.204457    8404 main.go:141] libmachine: Using SSH client type: native
	I0906 18:30:09.204746    8404 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e49d0] 0x3e7230 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I0906 18:30:09.204761    8404 main.go:141] libmachine: About to run SSH command:
	hostname
	I0906 18:30:09.336105    8404 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-663433
	
	I0906 18:30:09.336176    8404 ubuntu.go:169] provisioning hostname "addons-663433"
	I0906 18:30:09.336269    8404 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-663433
	I0906 18:30:09.359692    8404 main.go:141] libmachine: Using SSH client type: native
	I0906 18:30:09.359946    8404 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e49d0] 0x3e7230 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I0906 18:30:09.359957    8404 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-663433 && echo "addons-663433" | sudo tee /etc/hostname
	I0906 18:30:09.501178    8404 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-663433
	
	I0906 18:30:09.501266    8404 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-663433
	I0906 18:30:09.519225    8404 main.go:141] libmachine: Using SSH client type: native
	I0906 18:30:09.519467    8404 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e49d0] 0x3e7230 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I0906 18:30:09.519484    8404 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-663433' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-663433/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-663433' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0906 18:30:09.636368    8404 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0906 18:30:09.636393    8404 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/19576-2243/.minikube CaCertPath:/home/jenkins/minikube-integration/19576-2243/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19576-2243/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19576-2243/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19576-2243/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19576-2243/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19576-2243/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19576-2243/.minikube}
	I0906 18:30:09.636422    8404 ubuntu.go:177] setting up certificates
	I0906 18:30:09.636453    8404 provision.go:84] configureAuth start
	I0906 18:30:09.636516    8404 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-663433
	I0906 18:30:09.653795    8404 provision.go:143] copyHostCerts
	I0906 18:30:09.653897    8404 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19576-2243/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19576-2243/.minikube/ca.pem (1078 bytes)
	I0906 18:30:09.654022    8404 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19576-2243/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19576-2243/.minikube/cert.pem (1123 bytes)
	I0906 18:30:09.654086    8404 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19576-2243/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19576-2243/.minikube/key.pem (1675 bytes)
	I0906 18:30:09.654139    8404 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19576-2243/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19576-2243/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19576-2243/.minikube/certs/ca-key.pem org=jenkins.addons-663433 san=[127.0.0.1 192.168.49.2 addons-663433 localhost minikube]
	I0906 18:30:10.008760    8404 provision.go:177] copyRemoteCerts
	I0906 18:30:10.008850    8404 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0906 18:30:10.008937    8404 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-663433
	I0906 18:30:10.044563    8404 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19576-2243/.minikube/machines/addons-663433/id_rsa Username:docker}
	I0906 18:30:10.146818    8404 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19576-2243/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0906 18:30:10.176654    8404 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19576-2243/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0906 18:30:10.202916    8404 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19576-2243/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0906 18:30:10.228169    8404 provision.go:87] duration metric: took 591.69934ms to configureAuth
	I0906 18:30:10.228238    8404 ubuntu.go:193] setting minikube options for container-runtime
	I0906 18:30:10.228483    8404 config.go:182] Loaded profile config "addons-663433": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.0
	I0906 18:30:10.228498    8404 machine.go:96] duration metric: took 1.042979554s to provisionDockerMachine
	I0906 18:30:10.228506    8404 client.go:171] duration metric: took 10.70860249s to LocalClient.Create
	I0906 18:30:10.228522    8404 start.go:167] duration metric: took 10.708664407s to libmachine.API.Create "addons-663433"
	I0906 18:30:10.228530    8404 start.go:293] postStartSetup for "addons-663433" (driver="docker")
	I0906 18:30:10.228539    8404 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0906 18:30:10.228594    8404 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0906 18:30:10.228633    8404 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-663433
	I0906 18:30:10.246030    8404 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19576-2243/.minikube/machines/addons-663433/id_rsa Username:docker}
	I0906 18:30:10.337403    8404 ssh_runner.go:195] Run: cat /etc/os-release
	I0906 18:30:10.340525    8404 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0906 18:30:10.340560    8404 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0906 18:30:10.340572    8404 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0906 18:30:10.340584    8404 info.go:137] Remote host: Ubuntu 22.04.4 LTS
	I0906 18:30:10.340594    8404 filesync.go:126] Scanning /home/jenkins/minikube-integration/19576-2243/.minikube/addons for local assets ...
	I0906 18:30:10.340671    8404 filesync.go:126] Scanning /home/jenkins/minikube-integration/19576-2243/.minikube/files for local assets ...
	I0906 18:30:10.340695    8404 start.go:296] duration metric: took 112.159607ms for postStartSetup
	I0906 18:30:10.341018    8404 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-663433
	I0906 18:30:10.357443    8404 profile.go:143] Saving config to /home/jenkins/minikube-integration/19576-2243/.minikube/profiles/addons-663433/config.json ...
	I0906 18:30:10.357745    8404 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0906 18:30:10.357795    8404 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-663433
	I0906 18:30:10.373935    8404 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19576-2243/.minikube/machines/addons-663433/id_rsa Username:docker}
	I0906 18:30:10.461097    8404 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0906 18:30:10.465620    8404 start.go:128] duration metric: took 10.948098934s to createHost
	I0906 18:30:10.465642    8404 start.go:83] releasing machines lock for "addons-663433", held for 10.948230816s
	I0906 18:30:10.465718    8404 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-663433
	I0906 18:30:10.482684    8404 ssh_runner.go:195] Run: cat /version.json
	I0906 18:30:10.482735    8404 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-663433
	I0906 18:30:10.482979    8404 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0906 18:30:10.483042    8404 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-663433
	I0906 18:30:10.500966    8404 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19576-2243/.minikube/machines/addons-663433/id_rsa Username:docker}
	I0906 18:30:10.504127    8404 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19576-2243/.minikube/machines/addons-663433/id_rsa Username:docker}
	I0906 18:30:10.710402    8404 ssh_runner.go:195] Run: systemctl --version
	I0906 18:30:10.714696    8404 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0906 18:30:10.718718    8404 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
	I0906 18:30:10.744153    8404 cni.go:230] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
	I0906 18:30:10.744228    8404 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0906 18:30:10.773689    8404 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
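	[editor's note] The two find/sed passes above patch the loopback CNI config in place (inserting a "name" key if one is missing and pinning cniVersion to 1.0.0) and rename any bridge/podman configs to *.mk_disabled so that only minikube's own CNI remains active. Assuming a stock loopback file, the patched result would look like:

	{
	  "cniVersion": "1.0.0",
	  "name": "loopback",
	  "type": "loopback"
	}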
	I0906 18:30:10.773723    8404 start.go:495] detecting cgroup driver to use...
	I0906 18:30:10.773757    8404 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I0906 18:30:10.773810    8404 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0906 18:30:10.786221    8404 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0906 18:30:10.797774    8404 docker.go:217] disabling cri-docker service (if available) ...
	I0906 18:30:10.797840    8404 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0906 18:30:10.811675    8404 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0906 18:30:10.826323    8404 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0906 18:30:10.910041    8404 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0906 18:30:11.005951    8404 docker.go:233] disabling docker service ...
	I0906 18:30:11.006061    8404 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0906 18:30:11.031089    8404 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0906 18:30:11.045419    8404 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0906 18:30:11.133441    8404 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0906 18:30:11.224611    8404 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0906 18:30:11.236294    8404 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0906 18:30:11.252197    8404 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
	I0906 18:30:11.261451    8404 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0906 18:30:11.271358    8404 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0906 18:30:11.271439    8404 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0906 18:30:11.281530    8404 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0906 18:30:11.291488    8404 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0906 18:30:11.300969    8404 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0906 18:30:11.310450    8404 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0906 18:30:11.319783    8404 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0906 18:30:11.329315    8404 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0906 18:30:11.338831    8404 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
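	[editor's note] The sed edits at 18:30:11.252-11.338 above all target /etc/containerd/config.toml: pinning the sandbox (pause) image, disabling restrict_oom_score_adj, forcing the cgroupfs driver via SystemdCgroup = false, migrating any v1/runc.v1 runtimes to io.containerd.runc.v2, resetting the CNI conf_dir, and enabling unprivileged ports. An illustrative fragment of the file after patching, assuming the stock layout of containerd 1.7's CRI plugin section, would be:

	[plugins."io.containerd.grpc.v1.cri"]
	  enable_unprivileged_ports = true
	  restrict_oom_score_adj = false
	  sandbox_image = "registry.k8s.io/pause:3.10"
	  [plugins."io.containerd.grpc.v1.cri".cni]
	    conf_dir = "/etc/cni/net.d"
	  [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc]
	    runtime_type = "io.containerd.runc.v2"
	    [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
	      SystemdCgroup = false

	The daemon-reload and containerd restart a few lines below are what actually apply these settings.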
	I0906 18:30:11.348475    8404 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0906 18:30:11.357017    8404 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0906 18:30:11.365475    8404 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0906 18:30:11.451877    8404 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0906 18:30:11.585107    8404 start.go:542] Will wait 60s for socket path /run/containerd/containerd.sock
	I0906 18:30:11.585243    8404 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I0906 18:30:11.588851    8404 start.go:563] Will wait 60s for crictl version
	I0906 18:30:11.588956    8404 ssh_runner.go:195] Run: which crictl
	I0906 18:30:11.592164    8404 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0906 18:30:11.633305    8404 start.go:579] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  1.7.21
	RuntimeApiVersion:  v1
	I0906 18:30:11.633449    8404 ssh_runner.go:195] Run: containerd --version
	I0906 18:30:11.655037    8404 ssh_runner.go:195] Run: containerd --version
	I0906 18:30:11.679260    8404 out.go:177] * Preparing Kubernetes v1.31.0 on containerd 1.7.21 ...
	I0906 18:30:11.680930    8404 cli_runner.go:164] Run: docker network inspect addons-663433 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0906 18:30:11.704892    8404 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I0906 18:30:11.708320    8404 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
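	[editor's note] The /etc/hosts update above uses a copy-then-replace pattern: the existing file is filtered of any stale host.minikube.internal entry, the new "192.168.49.1	host.minikube.internal" line is appended, the result is written to an unprivileged temp file, and only then does sudo cp move it into place. This way the redirection itself needs no root and the hosts file is never left half-written. The same idiom is reused for control-plane.minikube.internal further below.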
	I0906 18:30:11.718846    8404 kubeadm.go:883] updating cluster {Name:addons-663433 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.45@sha256:81df288595202a317b1a4dc2506ca2e4ed5f22373c19a441b88cfbf4b9867c85 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:addons-663433 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0906 18:30:11.718959    8404 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime containerd
	I0906 18:30:11.719026    8404 ssh_runner.go:195] Run: sudo crictl images --output json
	I0906 18:30:11.756137    8404 containerd.go:627] all images are preloaded for containerd runtime.
	I0906 18:30:11.756157    8404 containerd.go:534] Images already preloaded, skipping extraction
	I0906 18:30:11.756237    8404 ssh_runner.go:195] Run: sudo crictl images --output json
	I0906 18:30:11.792226    8404 containerd.go:627] all images are preloaded for containerd runtime.
	I0906 18:30:11.792245    8404 cache_images.go:84] Images are preloaded, skipping loading
	I0906 18:30:11.792254    8404 kubeadm.go:934] updating node { 192.168.49.2 8443 v1.31.0 containerd true true} ...
	I0906 18:30:11.792356    8404 kubeadm.go:946] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=addons-663433 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0 ClusterName:addons-663433 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
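	[editor's note] The kubelet unit snippet above uses the standard systemd drop-in override idiom: the first, empty ExecStart= clears the ExecStart inherited from the base kubelet.service, and the second line installs minikube's fully parameterized command in its place. Without the empty line, systemd would reject a second ExecStart for this service type.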
	I0906 18:30:11.792421    8404 ssh_runner.go:195] Run: sudo crictl info
	I0906 18:30:11.832395    8404 cni.go:84] Creating CNI manager for ""
	I0906 18:30:11.832414    8404 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I0906 18:30:11.832446    8404 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0906 18:30:11.832469    8404 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.31.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-663433 NodeName:addons-663433 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0906 18:30:11.832601    8404 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "addons-663433"
	  kubeletExtraArgs:
	    node-ip: 192.168.49.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
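	[editor's note] The three-document manifest above (InitConfiguration/ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration) is what gets written to /var/tmp/minikube/kubeadm.yaml.new a few lines below and later fed to kubeadm init. If a rendered config ever needs checking by hand, recent kubeadm releases ship their own validator, e.g. kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml.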
	
	I0906 18:30:11.832666    8404 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0
	I0906 18:30:11.841640    8404 binaries.go:44] Found k8s binaries, skipping transfer
	I0906 18:30:11.841719    8404 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0906 18:30:11.850504    8404 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (317 bytes)
	I0906 18:30:11.868794    8404 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0906 18:30:11.887036    8404 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2167 bytes)
	I0906 18:30:11.905102    8404 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I0906 18:30:11.908352    8404 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0906 18:30:11.918969    8404 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0906 18:30:12.000581    8404 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0906 18:30:12.019851    8404 certs.go:68] Setting up /home/jenkins/minikube-integration/19576-2243/.minikube/profiles/addons-663433 for IP: 192.168.49.2
	I0906 18:30:12.019927    8404 certs.go:194] generating shared ca certs ...
	I0906 18:30:12.019959    8404 certs.go:226] acquiring lock for ca certs: {Name:mkecee9b61fe634f9c37a64b8e0f0f4431b3dfc1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0906 18:30:12.020163    8404 certs.go:240] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/19576-2243/.minikube/ca.key
	I0906 18:30:12.141864    8404 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19576-2243/.minikube/ca.crt ...
	I0906 18:30:12.141898    8404 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19576-2243/.minikube/ca.crt: {Name:mk9de33734b65cc7acaaacb6bd8a8ec6afb72442 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0906 18:30:12.142130    8404 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19576-2243/.minikube/ca.key ...
	I0906 18:30:12.142146    8404 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19576-2243/.minikube/ca.key: {Name:mk0bc0615acb44a1f18193d015def0d75476785e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0906 18:30:12.142235    8404 certs.go:240] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19576-2243/.minikube/proxy-client-ca.key
	I0906 18:30:12.455645    8404 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19576-2243/.minikube/proxy-client-ca.crt ...
	I0906 18:30:12.455674    8404 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19576-2243/.minikube/proxy-client-ca.crt: {Name:mk4dd3decc50b1db80fff0220c5d5c4fec7904e4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0906 18:30:12.455854    8404 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19576-2243/.minikube/proxy-client-ca.key ...
	I0906 18:30:12.455868    8404 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19576-2243/.minikube/proxy-client-ca.key: {Name:mk5e62d01ac3507ef6d58d867b6df7b9d049011f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0906 18:30:12.455957    8404 certs.go:256] generating profile certs ...
	I0906 18:30:12.456014    8404 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19576-2243/.minikube/profiles/addons-663433/client.key
	I0906 18:30:12.456036    8404 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19576-2243/.minikube/profiles/addons-663433/client.crt with IP's: []
	I0906 18:30:13.463080    8404 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19576-2243/.minikube/profiles/addons-663433/client.crt ...
	I0906 18:30:13.463115    8404 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19576-2243/.minikube/profiles/addons-663433/client.crt: {Name:mkb0d20972088ea226fac2f9a798672aa7e81e90 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0906 18:30:13.463306    8404 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19576-2243/.minikube/profiles/addons-663433/client.key ...
	I0906 18:30:13.463319    8404 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19576-2243/.minikube/profiles/addons-663433/client.key: {Name:mkccab469b7a9a7f636ea40ccaa6e7783aaf6e0e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0906 18:30:13.463404    8404 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19576-2243/.minikube/profiles/addons-663433/apiserver.key.5a5ebd75
	I0906 18:30:13.463425    8404 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19576-2243/.minikube/profiles/addons-663433/apiserver.crt.5a5ebd75 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2]
	I0906 18:30:13.934902    8404 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19576-2243/.minikube/profiles/addons-663433/apiserver.crt.5a5ebd75 ...
	I0906 18:30:13.934938    8404 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19576-2243/.minikube/profiles/addons-663433/apiserver.crt.5a5ebd75: {Name:mk8cec109b80565cf177b527e72b35225e348e42 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0906 18:30:13.935124    8404 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19576-2243/.minikube/profiles/addons-663433/apiserver.key.5a5ebd75 ...
	I0906 18:30:13.935140    8404 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19576-2243/.minikube/profiles/addons-663433/apiserver.key.5a5ebd75: {Name:mk7673ec75db71d7d80c529c3ca5323795b46339 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0906 18:30:13.935222    8404 certs.go:381] copying /home/jenkins/minikube-integration/19576-2243/.minikube/profiles/addons-663433/apiserver.crt.5a5ebd75 -> /home/jenkins/minikube-integration/19576-2243/.minikube/profiles/addons-663433/apiserver.crt
	I0906 18:30:13.935298    8404 certs.go:385] copying /home/jenkins/minikube-integration/19576-2243/.minikube/profiles/addons-663433/apiserver.key.5a5ebd75 -> /home/jenkins/minikube-integration/19576-2243/.minikube/profiles/addons-663433/apiserver.key
	I0906 18:30:13.935354    8404 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19576-2243/.minikube/profiles/addons-663433/proxy-client.key
	I0906 18:30:13.935376    8404 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19576-2243/.minikube/profiles/addons-663433/proxy-client.crt with IP's: []
	I0906 18:30:14.360548    8404 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19576-2243/.minikube/profiles/addons-663433/proxy-client.crt ...
	I0906 18:30:14.360580    8404 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19576-2243/.minikube/profiles/addons-663433/proxy-client.crt: {Name:mkf06c8334f0ca2674ded8e276c885ba60ee374c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0906 18:30:14.360768    8404 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19576-2243/.minikube/profiles/addons-663433/proxy-client.key ...
	I0906 18:30:14.360782    8404 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19576-2243/.minikube/profiles/addons-663433/proxy-client.key: {Name:mkd1415818505325f29f816ba22b10d5154c6c53 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0906 18:30:14.360969    8404 certs.go:484] found cert: /home/jenkins/minikube-integration/19576-2243/.minikube/certs/ca-key.pem (1675 bytes)
	I0906 18:30:14.361011    8404 certs.go:484] found cert: /home/jenkins/minikube-integration/19576-2243/.minikube/certs/ca.pem (1078 bytes)
	I0906 18:30:14.361042    8404 certs.go:484] found cert: /home/jenkins/minikube-integration/19576-2243/.minikube/certs/cert.pem (1123 bytes)
	I0906 18:30:14.361069    8404 certs.go:484] found cert: /home/jenkins/minikube-integration/19576-2243/.minikube/certs/key.pem (1675 bytes)
	I0906 18:30:14.361673    8404 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19576-2243/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0906 18:30:14.390685    8404 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19576-2243/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0906 18:30:14.414870    8404 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19576-2243/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0906 18:30:14.441123    8404 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19576-2243/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0906 18:30:14.465130    8404 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19576-2243/.minikube/profiles/addons-663433/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0906 18:30:14.489527    8404 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19576-2243/.minikube/profiles/addons-663433/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0906 18:30:14.516513    8404 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19576-2243/.minikube/profiles/addons-663433/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0906 18:30:14.541919    8404 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19576-2243/.minikube/profiles/addons-663433/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0906 18:30:14.569959    8404 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19576-2243/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0906 18:30:14.594863    8404 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0906 18:30:14.612857    8404 ssh_runner.go:195] Run: openssl version
	I0906 18:30:14.618284    8404 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0906 18:30:14.627834    8404 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0906 18:30:14.631419    8404 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep  6 18:30 /usr/share/ca-certificates/minikubeCA.pem
	I0906 18:30:14.631484    8404 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0906 18:30:14.638344    8404 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
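	[editor's note] b5213941.0 is OpenSSL's subject-name hash for the minikube CA: it is the value printed by the openssl x509 -hash -noout run just above, and linking the PEM under that hash name in /etc/ssl/certs is what lets OpenSSL's hashed-directory lookup find the CA at verification time.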
	I0906 18:30:14.647884    8404 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0906 18:30:14.651028    8404 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0906 18:30:14.651074    8404 kubeadm.go:392] StartCluster: {Name:addons-663433 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.45@sha256:81df288595202a317b1a4dc2506ca2e4ed5f22373c19a441b88cfbf4b9867c85 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:addons-663433 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0906 18:30:14.651159    8404 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I0906 18:30:14.651216    8404 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0906 18:30:14.687709    8404 cri.go:89] found id: ""
	I0906 18:30:14.687779    8404 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0906 18:30:14.696983    8404 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0906 18:30:14.706395    8404 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I0906 18:30:14.706503    8404 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0906 18:30:14.715324    8404 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0906 18:30:14.715344    8404 kubeadm.go:157] found existing configuration files:
	
	I0906 18:30:14.715415    8404 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0906 18:30:14.724943    8404 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0906 18:30:14.725007    8404 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0906 18:30:14.733285    8404 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0906 18:30:14.742434    8404 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0906 18:30:14.742501    8404 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0906 18:30:14.751470    8404 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0906 18:30:14.760223    8404 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0906 18:30:14.760309    8404 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0906 18:30:14.769315    8404 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0906 18:30:14.778454    8404 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0906 18:30:14.778516    8404 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0906 18:30:14.786968    8404 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
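	[editor's note] The long --ignore-preflight-errors list above is deliberate for the docker driver: the kic container shares the host kernel, so checks such as SystemVerification, Swap, Mem, NumCPU and the bridge-nf-call-iptables file test would report on the host rather than on a real VM (compare the "ignoring SystemVerification for kubeadm because of docker driver" line at 18:30:14.706 above).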
	I0906 18:30:14.828413    8404 kubeadm.go:310] W0906 18:30:14.827768    1025 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0906 18:30:14.829214    8404 kubeadm.go:310] W0906 18:30:14.828736    1025 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0906 18:30:14.851386    8404 kubeadm.go:310] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1068-aws\n", err: exit status 1
	I0906 18:30:14.916928    8404 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0906 18:30:32.934333    8404 kubeadm.go:310] [init] Using Kubernetes version: v1.31.0
	I0906 18:30:32.934394    8404 kubeadm.go:310] [preflight] Running pre-flight checks
	I0906 18:30:32.934482    8404 kubeadm.go:310] [preflight] The system verification failed. Printing the output from the verification:
	I0906 18:30:32.934540    8404 kubeadm.go:310] KERNEL_VERSION: 5.15.0-1068-aws
	I0906 18:30:32.934576    8404 kubeadm.go:310] OS: Linux
	I0906 18:30:32.934625    8404 kubeadm.go:310] CGROUPS_CPU: enabled
	I0906 18:30:32.934676    8404 kubeadm.go:310] CGROUPS_CPUACCT: enabled
	I0906 18:30:32.934734    8404 kubeadm.go:310] CGROUPS_CPUSET: enabled
	I0906 18:30:32.934786    8404 kubeadm.go:310] CGROUPS_DEVICES: enabled
	I0906 18:30:32.934837    8404 kubeadm.go:310] CGROUPS_FREEZER: enabled
	I0906 18:30:32.934889    8404 kubeadm.go:310] CGROUPS_MEMORY: enabled
	I0906 18:30:32.934937    8404 kubeadm.go:310] CGROUPS_PIDS: enabled
	I0906 18:30:32.934986    8404 kubeadm.go:310] CGROUPS_HUGETLB: enabled
	I0906 18:30:32.935036    8404 kubeadm.go:310] CGROUPS_BLKIO: enabled
	I0906 18:30:32.935108    8404 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0906 18:30:32.935206    8404 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0906 18:30:32.935296    8404 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0906 18:30:32.935359    8404 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0906 18:30:32.937653    8404 out.go:235]   - Generating certificates and keys ...
	I0906 18:30:32.937746    8404 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0906 18:30:32.937819    8404 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0906 18:30:32.937909    8404 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0906 18:30:32.937970    8404 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0906 18:30:32.938040    8404 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0906 18:30:32.938106    8404 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0906 18:30:32.938163    8404 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0906 18:30:32.938291    8404 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [addons-663433 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0906 18:30:32.938366    8404 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0906 18:30:32.938490    8404 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [addons-663433 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0906 18:30:32.938586    8404 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0906 18:30:32.938677    8404 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0906 18:30:32.938732    8404 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0906 18:30:32.938806    8404 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0906 18:30:32.938856    8404 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0906 18:30:32.938911    8404 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0906 18:30:32.938977    8404 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0906 18:30:32.939059    8404 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0906 18:30:32.939118    8404 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0906 18:30:32.939201    8404 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0906 18:30:32.939312    8404 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0906 18:30:32.942970    8404 out.go:235]   - Booting up control plane ...
	I0906 18:30:32.943103    8404 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0906 18:30:32.943202    8404 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0906 18:30:32.943281    8404 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0906 18:30:32.943393    8404 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0906 18:30:32.943487    8404 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0906 18:30:32.943532    8404 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0906 18:30:32.943670    8404 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0906 18:30:32.943784    8404 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0906 18:30:32.943849    8404 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.501617443s
	I0906 18:30:32.943926    8404 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0906 18:30:32.943989    8404 kubeadm.go:310] [api-check] The API server is healthy after 6.501429354s
	I0906 18:30:32.944102    8404 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0906 18:30:32.944235    8404 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0906 18:30:32.944298    8404 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0906 18:30:32.944544    8404 kubeadm.go:310] [mark-control-plane] Marking the node addons-663433 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0906 18:30:32.944608    8404 kubeadm.go:310] [bootstrap-token] Using token: vgvso8.qrt82sa195l4l3t2
	I0906 18:30:32.946794    8404 out.go:235]   - Configuring RBAC rules ...
	I0906 18:30:32.946933    8404 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0906 18:30:32.947020    8404 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0906 18:30:32.947171    8404 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0906 18:30:32.947398    8404 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0906 18:30:32.947523    8404 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0906 18:30:32.947619    8404 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0906 18:30:32.947746    8404 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0906 18:30:32.947789    8404 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0906 18:30:32.947847    8404 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0906 18:30:32.947852    8404 kubeadm.go:310] 
	I0906 18:30:32.947931    8404 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0906 18:30:32.947950    8404 kubeadm.go:310] 
	I0906 18:30:32.948026    8404 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0906 18:30:32.948036    8404 kubeadm.go:310] 
	I0906 18:30:32.948062    8404 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0906 18:30:32.948135    8404 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0906 18:30:32.948189    8404 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0906 18:30:32.948193    8404 kubeadm.go:310] 
	I0906 18:30:32.948257    8404 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0906 18:30:32.948272    8404 kubeadm.go:310] 
	I0906 18:30:32.948341    8404 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0906 18:30:32.948349    8404 kubeadm.go:310] 
	I0906 18:30:32.948407    8404 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0906 18:30:32.948505    8404 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0906 18:30:32.948587    8404 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0906 18:30:32.948615    8404 kubeadm.go:310] 
	I0906 18:30:32.948705    8404 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0906 18:30:32.948786    8404 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0906 18:30:32.948793    8404 kubeadm.go:310] 
	I0906 18:30:32.948883    8404 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token vgvso8.qrt82sa195l4l3t2 \
	I0906 18:30:32.948986    8404 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:3e70934fcf4233cfe3d6812e6403cbdb3820fe68f517883d6e328191b36dbf1b \
	I0906 18:30:32.949013    8404 kubeadm.go:310] 	--control-plane 
	I0906 18:30:32.949020    8404 kubeadm.go:310] 
	I0906 18:30:32.949103    8404 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0906 18:30:32.949114    8404 kubeadm.go:310] 
	I0906 18:30:32.949199    8404 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token vgvso8.qrt82sa195l4l3t2 \
	I0906 18:30:32.949315    8404 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:3e70934fcf4233cfe3d6812e6403cbdb3820fe68f517883d6e328191b36dbf1b 
	I0906 18:30:32.949327    8404 cni.go:84] Creating CNI manager for ""
	I0906 18:30:32.949334    8404 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I0906 18:30:32.951296    8404 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0906 18:30:32.953665    8404 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0906 18:30:32.957600    8404 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.31.0/kubectl ...
	I0906 18:30:32.957624    8404 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I0906 18:30:32.976321    8404 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0906 18:30:33.274191    8404 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0906 18:30:33.274321    8404 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0906 18:30:33.274416    8404 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-663433 minikube.k8s.io/updated_at=2024_09_06T18_30_33_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=e6b6435971a63e36b5096cd544634422129cef13 minikube.k8s.io/name=addons-663433 minikube.k8s.io/primary=true
	I0906 18:30:33.435452    8404 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0906 18:30:33.435512    8404 ops.go:34] apiserver oom_adj: -16
	I0906 18:30:33.936550    8404 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0906 18:30:34.435599    8404 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0906 18:30:34.936389    8404 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0906 18:30:35.435687    8404 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0906 18:30:35.935855    8404 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0906 18:30:36.436131    8404 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0906 18:30:36.935729    8404 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0906 18:30:37.436331    8404 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0906 18:30:37.936111    8404 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0906 18:30:38.080340    8404 kubeadm.go:1113] duration metric: took 4.806060876s to wait for elevateKubeSystemPrivileges
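	[editor's note] The repeated "kubectl get sa default" calls at 18:30:33-37 above are a poll: minikube waits for the control plane to create the "default" ServiceAccount (with the minikube-rbac cluster-admin binding applied at 18:30:33.274 in between) before proceeding, which is the ~4.8s elevateKubeSystemPrivileges wait recorded here.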
	I0906 18:30:38.080367    8404 kubeadm.go:394] duration metric: took 23.429296531s to StartCluster
	I0906 18:30:38.080383    8404 settings.go:142] acquiring lock: {Name:mk987d84b6291c2a933905b01eee0e827729a585 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0906 18:30:38.080582    8404 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19576-2243/kubeconfig
	I0906 18:30:38.081018    8404 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19576-2243/kubeconfig: {Name:mkdfa45d714bcc81f89171296a3ba179305fec36 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0906 18:30:38.081219    8404 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0906 18:30:38.081246    8404 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0906 18:30:38.081624    8404 config.go:182] Loaded profile config "addons-663433": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.0
	I0906 18:30:38.081686    8404 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
	I0906 18:30:38.081773    8404 addons.go:69] Setting yakd=true in profile "addons-663433"
	I0906 18:30:38.081803    8404 addons.go:234] Setting addon yakd=true in "addons-663433"
	I0906 18:30:38.081834    8404 host.go:66] Checking if "addons-663433" exists ...
	I0906 18:30:38.082457    8404 cli_runner.go:164] Run: docker container inspect addons-663433 --format={{.State.Status}}
	I0906 18:30:38.082945    8404 addons.go:69] Setting metrics-server=true in profile "addons-663433"
	I0906 18:30:38.082981    8404 addons.go:234] Setting addon metrics-server=true in "addons-663433"
	I0906 18:30:38.083019    8404 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-663433"
	I0906 18:30:38.083044    8404 addons.go:234] Setting addon nvidia-device-plugin=true in "addons-663433"
	I0906 18:30:38.083074    8404 host.go:66] Checking if "addons-663433" exists ...
	I0906 18:30:38.083496    8404 cli_runner.go:164] Run: docker container inspect addons-663433 --format={{.State.Status}}
	I0906 18:30:38.083626    8404 host.go:66] Checking if "addons-663433" exists ...
	I0906 18:30:38.084128    8404 cli_runner.go:164] Run: docker container inspect addons-663433 --format={{.State.Status}}
	I0906 18:30:38.084972    8404 addons.go:69] Setting registry=true in profile "addons-663433"
	I0906 18:30:38.085024    8404 addons.go:234] Setting addon registry=true in "addons-663433"
	I0906 18:30:38.085060    8404 host.go:66] Checking if "addons-663433" exists ...
	I0906 18:30:38.085539    8404 cli_runner.go:164] Run: docker container inspect addons-663433 --format={{.State.Status}}
	I0906 18:30:38.086032    8404 addons.go:69] Setting storage-provisioner=true in profile "addons-663433"
	I0906 18:30:38.086064    8404 addons.go:234] Setting addon storage-provisioner=true in "addons-663433"
	I0906 18:30:38.086124    8404 host.go:66] Checking if "addons-663433" exists ...
	I0906 18:30:38.086650    8404 cli_runner.go:164] Run: docker container inspect addons-663433 --format={{.State.Status}}
	I0906 18:30:38.092873    8404 addons.go:69] Setting cloud-spanner=true in profile "addons-663433"
	I0906 18:30:38.092921    8404 addons.go:234] Setting addon cloud-spanner=true in "addons-663433"
	I0906 18:30:38.092961    8404 host.go:66] Checking if "addons-663433" exists ...
	I0906 18:30:38.093418    8404 cli_runner.go:164] Run: docker container inspect addons-663433 --format={{.State.Status}}
	I0906 18:30:38.093593    8404 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-663433"
	I0906 18:30:38.093620    8404 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-663433"
	I0906 18:30:38.093898    8404 cli_runner.go:164] Run: docker container inspect addons-663433 --format={{.State.Status}}
	I0906 18:30:38.104940    8404 addons.go:69] Setting volcano=true in profile "addons-663433"
	I0906 18:30:38.104994    8404 addons.go:234] Setting addon volcano=true in "addons-663433"
	I0906 18:30:38.105032    8404 host.go:66] Checking if "addons-663433" exists ...
	I0906 18:30:38.105526    8404 cli_runner.go:164] Run: docker container inspect addons-663433 --format={{.State.Status}}
	I0906 18:30:38.108666    8404 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-663433"
	I0906 18:30:38.117189    8404 addons.go:234] Setting addon csi-hostpath-driver=true in "addons-663433"
	I0906 18:30:38.117294    8404 host.go:66] Checking if "addons-663433" exists ...
	I0906 18:30:38.117884    8404 cli_runner.go:164] Run: docker container inspect addons-663433 --format={{.State.Status}}
	I0906 18:30:38.112310    8404 addons.go:69] Setting default-storageclass=true in profile "addons-663433"
	I0906 18:30:38.121670    8404 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-663433"
	I0906 18:30:38.112358    8404 addons.go:69] Setting gcp-auth=true in profile "addons-663433"
	I0906 18:30:38.112383    8404 addons.go:69] Setting ingress=true in profile "addons-663433"
	I0906 18:30:38.112392    8404 addons.go:69] Setting ingress-dns=true in profile "addons-663433"
	I0906 18:30:38.112397    8404 addons.go:69] Setting inspektor-gadget=true in profile "addons-663433"
	I0906 18:30:38.112408    8404 out.go:177] * Verifying Kubernetes components...
	I0906 18:30:38.136949    8404 addons.go:69] Setting volumesnapshots=true in profile "addons-663433"
	I0906 18:30:38.136990    8404 addons.go:234] Setting addon volumesnapshots=true in "addons-663433"
	I0906 18:30:38.137027    8404 host.go:66] Checking if "addons-663433" exists ...
	I0906 18:30:38.137722    8404 cli_runner.go:164] Run: docker container inspect addons-663433 --format={{.State.Status}}
	I0906 18:30:38.148570    8404 cli_runner.go:164] Run: docker container inspect addons-663433 --format={{.State.Status}}
	I0906 18:30:38.160722    8404 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0906 18:30:38.181262    8404 addons.go:234] Setting addon ingress-dns=true in "addons-663433"
	I0906 18:30:38.181376    8404 host.go:66] Checking if "addons-663433" exists ...
	I0906 18:30:38.181983    8404 cli_runner.go:164] Run: docker container inspect addons-663433 --format={{.State.Status}}
	I0906 18:30:38.187457    8404 addons.go:234] Setting addon inspektor-gadget=true in "addons-663433"
	I0906 18:30:38.187569    8404 host.go:66] Checking if "addons-663433" exists ...
	I0906 18:30:38.195741    8404 cli_runner.go:164] Run: docker container inspect addons-663433 --format={{.State.Status}}
	I0906 18:30:38.211820    8404 mustload.go:65] Loading cluster: addons-663433
	I0906 18:30:38.212081    8404 config.go:182] Loaded profile config "addons-663433": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.0
	I0906 18:30:38.212391    8404 cli_runner.go:164] Run: docker container inspect addons-663433 --format={{.State.Status}}
	I0906 18:30:38.250028    8404 addons.go:234] Setting addon ingress=true in "addons-663433"
	I0906 18:30:38.250137    8404 host.go:66] Checking if "addons-663433" exists ...
	I0906 18:30:38.250593    8404 cli_runner.go:164] Run: docker container inspect addons-663433 --format={{.State.Status}}
	I0906 18:30:38.274405    8404 out.go:177]   - Using image docker.io/marcnuri/yakd:0.0.5
	I0906 18:30:38.276659    8404 out.go:177]   - Using image docker.io/registry:2.8.3
	I0906 18:30:38.278342    8404 out.go:177]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.16.2
	I0906 18:30:38.278497    8404 out.go:177]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.7.2
	I0906 18:30:38.290531    8404 out.go:177]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.6
	I0906 18:30:38.295028    8404 addons.go:431] installing /etc/kubernetes/addons/registry-rc.yaml
	I0906 18:30:38.295088    8404 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I0906 18:30:38.295169    8404 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-663433
	I0906 18:30:38.300674    8404 addons.go:431] installing /etc/kubernetes/addons/yakd-ns.yaml
	I0906 18:30:38.300738    8404 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I0906 18:30:38.300891    8404 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-663433
	I0906 18:30:38.304309    8404 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0906 18:30:38.304331    8404 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0906 18:30:38.304397    8404 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-663433
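
The nested template in the inspect calls above answers a different question: which host port Docker mapped to the container's 22/tcp, i.e. where the SSH client should connect. A sketch of that lookup (hostSSHPort is an illustrative name; the logged command wraps the template in single quotes, which is why the output needs trimming):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// hostSSHPort resolves the host-side port mapped to the container's 22/tcp,
// mirroring the inspect template recorded in the log lines above.
func hostSSHPort(container string) (string, error) {
	tmpl := `'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'`
	out, err := exec.Command("docker", "container", "inspect", "-f", tmpl, container).Output()
	if err != nil {
		return "", err
	}
	// strip the surrounding single quotes along with trailing whitespace
	return strings.Trim(strings.TrimSpace(string(out)), "'"), nil
}

func main() {
	port, err := hostSSHPort("addons-663433")
	if err != nil {
		fmt.Println("error:", err)
		return
	}
	fmt.Println("ssh port:", port) // the sshutil lines below show 32768
}
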
	I0906 18:30:38.305436    8404 out.go:177]   - Using image docker.io/volcanosh/vc-webhook-manager:v1.9.0
	I0906 18:30:38.309003    8404 out.go:177]   - Using image docker.io/volcanosh/vc-controller-manager:v1.9.0
	I0906 18:30:38.311051    8404 out.go:177]   - Using image docker.io/volcanosh/vc-scheduler:v1.9.0
	I0906 18:30:38.314156    8404 addons.go:431] installing /etc/kubernetes/addons/volcano-deployment.yaml
	I0906 18:30:38.314181    8404 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volcano-deployment.yaml (434001 bytes)
	I0906 18:30:38.314249    8404 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-663433
	I0906 18:30:38.340646    8404 addons.go:431] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0906 18:30:38.340668    8404 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I0906 18:30:38.340726    8404 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-663433
	I0906 18:30:38.342688    8404 out.go:177]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.23
	I0906 18:30:38.344655    8404 addons.go:431] installing /etc/kubernetes/addons/deployment.yaml
	I0906 18:30:38.344677    8404 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I0906 18:30:38.344745    8404 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-663433
	I0906 18:30:38.376952    8404 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I0906 18:30:38.379573    8404 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I0906 18:30:38.382075    8404 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I0906 18:30:38.385942    8404 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I0906 18:30:38.388506    8404 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0906 18:30:38.388977    8404 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I0906 18:30:38.388999    8404 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I0906 18:30:38.389179    8404 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-663433
	I0906 18:30:38.391464    8404 addons.go:234] Setting addon storage-provisioner-rancher=true in "addons-663433"
	I0906 18:30:38.391554    8404 host.go:66] Checking if "addons-663433" exists ...
	I0906 18:30:38.392203    8404 cli_runner.go:164] Run: docker container inspect addons-663433 --format={{.State.Status}}
	I0906 18:30:38.399662    8404 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0906 18:30:38.399683    8404 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0906 18:30:38.399800    8404 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-663433
	I0906 18:30:38.405777    8404 addons.go:234] Setting addon default-storageclass=true in "addons-663433"
	I0906 18:30:38.405820    8404 host.go:66] Checking if "addons-663433" exists ...
	I0906 18:30:38.406443    8404 cli_runner.go:164] Run: docker container inspect addons-663433 --format={{.State.Status}}
	I0906 18:30:38.438094    8404 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I0906 18:30:38.443347    8404 out.go:177]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I0906 18:30:38.446600    8404 out.go:177]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.32.0
	I0906 18:30:38.454349    8404 addons.go:431] installing /etc/kubernetes/addons/ig-namespace.yaml
	I0906 18:30:38.454378    8404 ssh_runner.go:362] scp inspektor-gadget/ig-namespace.yaml --> /etc/kubernetes/addons/ig-namespace.yaml (55 bytes)
	I0906 18:30:38.454445    8404 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-663433
	I0906 18:30:38.463804    8404 host.go:66] Checking if "addons-663433" exists ...
	I0906 18:30:38.463818    8404 out.go:177]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I0906 18:30:38.465900    8404 out.go:177]   - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.3
	I0906 18:30:38.475945    8404 addons.go:431] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0906 18:30:38.475969    8404 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2442 bytes)
	I0906 18:30:38.476032    8404 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-663433
	I0906 18:30:38.476711    8404 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I0906 18:30:38.479023    8404 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I0906 18:30:38.492968    8404 addons.go:431] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I0906 18:30:38.492993    8404 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I0906 18:30:38.493098    8404 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-663433
	I0906 18:30:38.519735    8404 out.go:177]   - Using image registry.k8s.io/ingress-nginx/controller:v1.11.2
	I0906 18:30:38.524600    8404 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I0906 18:30:38.526726    8404 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I0906 18:30:38.531864    8404 addons.go:431] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I0906 18:30:38.531885    8404 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I0906 18:30:38.531949    8404 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-663433
	I0906 18:30:38.549912    8404 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19576-2243/.minikube/machines/addons-663433/id_rsa Username:docker}
	I0906 18:30:38.568127    8404 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19576-2243/.minikube/machines/addons-663433/id_rsa Username:docker}
	I0906 18:30:38.572625    8404 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19576-2243/.minikube/machines/addons-663433/id_rsa Username:docker}
	I0906 18:30:38.614194    8404 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19576-2243/.minikube/machines/addons-663433/id_rsa Username:docker}
	I0906 18:30:38.639441    8404 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19576-2243/.minikube/machines/addons-663433/id_rsa Username:docker}
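
The sshutil lines show the other half: once the port is known, an SSH client is built from the per-machine id_rsa key and the "docker" user, and each installer gets its own connection. A minimal sketch with golang.org/x/crypto/ssh, reusing the address and key path from the log (the insecure host-key callback is tolerable only because the target is a throwaway test container):

package main

import (
	"fmt"
	"log"
	"os"

	"golang.org/x/crypto/ssh"
)

func main() {
	key, err := os.ReadFile("/home/jenkins/minikube-integration/19576-2243/.minikube/machines/addons-663433/id_rsa")
	if err != nil {
		log.Fatal(err)
	}
	signer, err := ssh.ParsePrivateKey(key)
	if err != nil {
		log.Fatal(err)
	}
	cfg := &ssh.ClientConfig{
		User:            "docker",
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // test container only
	}
	client, err := ssh.Dial("tcp", "127.0.0.1:32768", cfg)
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()

	session, err := client.NewSession()
	if err != nil {
		log.Fatal(err)
	}
	defer session.Close()

	out, err := session.CombinedOutput("sudo systemctl is-active kubelet")
	fmt.Printf("%s (err=%v)\n", out, err)
}
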
	I0906 18:30:38.649241    8404 out.go:177]   - Using image docker.io/busybox:stable
	I0906 18:30:38.652375    8404 out.go:177]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I0906 18:30:38.654242    8404 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0906 18:30:38.654264    8404 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I0906 18:30:38.655919    8404 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-663433
	I0906 18:30:38.664900    8404 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0906 18:30:38.664921    8404 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0906 18:30:38.664979    8404 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-663433
	I0906 18:30:38.680037    8404 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19576-2243/.minikube/machines/addons-663433/id_rsa Username:docker}
	I0906 18:30:38.680410    8404 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19576-2243/.minikube/machines/addons-663433/id_rsa Username:docker}
	I0906 18:30:38.680899    8404 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19576-2243/.minikube/machines/addons-663433/id_rsa Username:docker}
	I0906 18:30:38.684396    8404 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19576-2243/.minikube/machines/addons-663433/id_rsa Username:docker}
	I0906 18:30:38.695704    8404 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19576-2243/.minikube/machines/addons-663433/id_rsa Username:docker}
	I0906 18:30:38.753419    8404 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19576-2243/.minikube/machines/addons-663433/id_rsa Username:docker}
	I0906 18:30:38.753523    8404 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19576-2243/.minikube/machines/addons-663433/id_rsa Username:docker}
	I0906 18:30:38.774932    8404 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19576-2243/.minikube/machines/addons-663433/id_rsa Username:docker}
	I0906 18:30:38.775317    8404 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19576-2243/.minikube/machines/addons-663433/id_rsa Username:docker}
	I0906 18:30:38.816222    8404 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0906 18:30:38.816316    8404 ssh_runner.go:195] Run: sudo systemctl start kubelet
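
The bash pipeline above edits the coredns ConfigMap in place: it splices a hosts stanza resolving host.minikube.internal to the gateway 192.168.49.1 ahead of the "forward . /etc/resolv.conf" plugin (and enables the log plugin before errors), then feeds the result back through kubectl replace. A client-go sketch of the hosts part of that edit (illustrative, not minikube's code; the stanza's indentation here is approximate):

package main

import (
	"context"
	"log"
	"strings"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		log.Fatal(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		log.Fatal(err)
	}
	ctx := context.Background()
	cm, err := cs.CoreV1().ConfigMaps("kube-system").Get(ctx, "coredns", metav1.GetOptions{})
	if err != nil {
		log.Fatal(err)
	}
	// splice the hosts block in front of the forward plugin, as the sed script does
	hosts := "hosts {\n   192.168.49.1 host.minikube.internal\n   fallthrough\n}\n"
	cm.Data["Corefile"] = strings.Replace(cm.Data["Corefile"],
		"forward . /etc/resolv.conf", hosts+"forward . /etc/resolv.conf", 1)
	if _, err := cs.CoreV1().ConfigMaps("kube-system").Update(ctx, cm, metav1.UpdateOptions{}); err != nil {
		log.Fatal(err)
	}
}
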
	I0906 18:30:39.229872    8404 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0906 18:30:39.233612    8404 addons.go:431] installing /etc/kubernetes/addons/registry-svc.yaml
	I0906 18:30:39.233640    8404 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I0906 18:30:39.249134    8404 addons.go:431] installing /etc/kubernetes/addons/yakd-sa.yaml
	I0906 18:30:39.249166    8404 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I0906 18:30:39.313857    8404 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0906 18:30:39.368597    8404 addons.go:431] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I0906 18:30:39.368636    8404 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I0906 18:30:39.372757    8404 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I0906 18:30:39.372777    8404 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I0906 18:30:39.464994    8404 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0906 18:30:39.465017    8404 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I0906 18:30:39.474175    8404 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/volcano-deployment.yaml
	I0906 18:30:39.476259    8404 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0906 18:30:39.491763    8404 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I0906 18:30:39.502154    8404 addons.go:431] installing /etc/kubernetes/addons/yakd-crb.yaml
	I0906 18:30:39.502228    8404 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I0906 18:30:39.517650    8404 addons.go:431] installing /etc/kubernetes/addons/ig-serviceaccount.yaml
	I0906 18:30:39.517698    8404 ssh_runner.go:362] scp inspektor-gadget/ig-serviceaccount.yaml --> /etc/kubernetes/addons/ig-serviceaccount.yaml (80 bytes)
	I0906 18:30:39.551196    8404 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0906 18:30:39.591699    8404 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I0906 18:30:39.591725    8404 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I0906 18:30:39.659518    8404 addons.go:431] installing /etc/kubernetes/addons/registry-proxy.yaml
	I0906 18:30:39.659549    8404 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I0906 18:30:39.664956    8404 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I0906 18:30:39.672046    8404 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0906 18:30:39.672071    8404 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0906 18:30:39.686095    8404 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0906 18:30:39.691743    8404 addons.go:431] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I0906 18:30:39.691769    8404 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I0906 18:30:39.754638    8404 addons.go:431] installing /etc/kubernetes/addons/yakd-svc.yaml
	I0906 18:30:39.754665    8404 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I0906 18:30:39.816608    8404 addons.go:431] installing /etc/kubernetes/addons/ig-role.yaml
	I0906 18:30:39.816635    8404 ssh_runner.go:362] scp inspektor-gadget/ig-role.yaml --> /etc/kubernetes/addons/ig-role.yaml (210 bytes)
	I0906 18:30:39.819233    8404 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I0906 18:30:39.819265    8404 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I0906 18:30:39.939589    8404 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0906 18:30:39.939631    8404 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0906 18:30:39.942624    8404 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I0906 18:30:39.975025    8404 addons.go:431] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I0906 18:30:39.975060    8404 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I0906 18:30:40.038717    8404 addons.go:431] installing /etc/kubernetes/addons/yakd-dp.yaml
	I0906 18:30:40.038746    8404 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I0906 18:30:40.208895    8404 addons.go:431] installing /etc/kubernetes/addons/ig-rolebinding.yaml
	I0906 18:30:40.208922    8404 ssh_runner.go:362] scp inspektor-gadget/ig-rolebinding.yaml --> /etc/kubernetes/addons/ig-rolebinding.yaml (244 bytes)
	I0906 18:30:40.252678    8404 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0906 18:30:40.349902    8404 addons.go:431] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I0906 18:30:40.349945    8404 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I0906 18:30:40.409976    8404 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I0906 18:30:40.532477    8404 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrole.yaml
	I0906 18:30:40.532515    8404 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrole.yaml --> /etc/kubernetes/addons/ig-clusterrole.yaml (1485 bytes)
	I0906 18:30:40.543418    8404 addons.go:431] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I0906 18:30:40.543445    8404 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I0906 18:30:40.802917    8404 addons.go:431] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I0906 18:30:40.802943    8404 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I0906 18:30:40.826811    8404 addons.go:431] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0906 18:30:40.826834    8404 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I0906 18:30:40.888264    8404 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrolebinding.yaml
	I0906 18:30:40.888305    8404 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrolebinding.yaml --> /etc/kubernetes/addons/ig-clusterrolebinding.yaml (274 bytes)
	I0906 18:30:41.072728    8404 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I0906 18:30:41.072752    8404 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I0906 18:30:41.167645    8404 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0906 18:30:41.231265    8404 addons.go:431] installing /etc/kubernetes/addons/ig-crd.yaml
	I0906 18:30:41.231291    8404 ssh_runner.go:362] scp inspektor-gadget/ig-crd.yaml --> /etc/kubernetes/addons/ig-crd.yaml (5216 bytes)
	I0906 18:30:41.244820    8404 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (2.428480073s)
	I0906 18:30:41.244871    8404 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (2.428626412s)
	I0906 18:30:41.244883    8404 start.go:971] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS's ConfigMap
	I0906 18:30:41.245037    8404 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (2.015133256s)
	I0906 18:30:41.246439    8404 node_ready.go:35] waiting up to 6m0s for node "addons-663433" to be "Ready" ...
	I0906 18:30:41.255325    8404 node_ready.go:49] node "addons-663433" has status "Ready":"True"
	I0906 18:30:41.255354    8404 node_ready.go:38] duration metric: took 8.884593ms for node "addons-663433" to be "Ready" ...
	I0906 18:30:41.255365    8404 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0906 18:30:41.291918    8404 pod_ready.go:79] waiting up to 6m0s for pod "coredns-6f6b679f8f-hmcbm" in "kube-system" namespace to be "Ready" ...
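
pod_ready then polls each system-critical pod by name until its PodReady condition turns True or the 6m budget runs out. A rough client-go equivalent of that loop (waitPodReady and the 2s cadence are ad hoc, inferred from the log's timestamps, not minikube's actual values):

package waiters

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// isPodReady reports whether the pod's PodReady condition is True.
func isPodReady(p *corev1.Pod) bool {
	for _, c := range p.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

// waitPodReady polls the named pod until Ready or until timeout elapses.
func waitPodReady(cs kubernetes.Interface, ns, name string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		if p, err := cs.CoreV1().Pods(ns).Get(context.TODO(), name, metav1.GetOptions{}); err == nil && isPodReady(p) {
			return nil
		}
		time.Sleep(2 * time.Second)
	}
	return fmt.Errorf("pod %s/%s not Ready within %v", ns, name, timeout)
}
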
	I0906 18:30:41.527358    8404 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I0906 18:30:41.527383    8404 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I0906 18:30:41.602614    8404 addons.go:431] installing /etc/kubernetes/addons/ig-daemonset.yaml
	I0906 18:30:41.602639    8404 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-daemonset.yaml (7735 bytes)
	I0906 18:30:41.750895    8404 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-663433" context rescaled to 1 replicas
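
The rescale to one replica goes through the Deployment's scale subresource rather than editing the Deployment spec directly; in client-go terms, roughly (an illustrative sketch):

package kapi

import (
	"context"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// scaleCoreDNS sets the coredns deployment to n replicas via the scale
// subresource, as the kapi rescale line above records.
func scaleCoreDNS(ctx context.Context, cs kubernetes.Interface, n int32) error {
	scale, err := cs.AppsV1().Deployments("kube-system").GetScale(ctx, "coredns", metav1.GetOptions{})
	if err != nil {
		return err
	}
	scale.Spec.Replicas = n
	_, err = cs.AppsV1().Deployments("kube-system").UpdateScale(ctx, "coredns", scale, metav1.UpdateOptions{})
	return err
}
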
	I0906 18:30:41.847378    8404 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml
	I0906 18:30:42.028633    8404 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (2.714730759s)
	I0906 18:30:42.120577    8404 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I0906 18:30:42.120613    8404 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I0906 18:30:42.481649    8404 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I0906 18:30:42.481674    8404 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I0906 18:30:42.822318    8404 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0906 18:30:42.822344    8404 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I0906 18:30:43.119424    8404 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0906 18:30:43.313497    8404 pod_ready.go:103] pod "coredns-6f6b679f8f-hmcbm" in "kube-system" namespace has status "Ready":"False"
	I0906 18:30:45.684191    8404 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I0906 18:30:45.684268    8404 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-663433
	I0906 18:30:45.715288    8404 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19576-2243/.minikube/machines/addons-663433/id_rsa Username:docker}
	I0906 18:30:45.904022    8404 pod_ready.go:103] pod "coredns-6f6b679f8f-hmcbm" in "kube-system" namespace has status "Ready":"False"
	I0906 18:30:46.230835    8404 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I0906 18:30:46.249202    8404 addons.go:234] Setting addon gcp-auth=true in "addons-663433"
	I0906 18:30:46.249253    8404 host.go:66] Checking if "addons-663433" exists ...
	I0906 18:30:46.249701    8404 cli_runner.go:164] Run: docker container inspect addons-663433 --format={{.State.Status}}
	I0906 18:30:46.273897    8404 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I0906 18:30:46.273963    8404 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-663433
	I0906 18:30:46.302619    8404 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19576-2243/.minikube/machines/addons-663433/id_rsa Username:docker}
	I0906 18:30:48.384026    8404 pod_ready.go:103] pod "coredns-6f6b679f8f-hmcbm" in "kube-system" namespace has status "Ready":"False"
	I0906 18:30:49.039199    8404 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/volcano-deployment.yaml: (9.564979366s)
	I0906 18:30:49.039327    8404 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (9.563005384s)
	I0906 18:30:49.039462    8404 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (9.547628693s)
	I0906 18:30:49.039495    8404 addons.go:475] Verifying addon ingress=true in "addons-663433"
	I0906 18:30:49.039854    8404 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (9.488624462s)
	I0906 18:30:49.039949    8404 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (9.374971541s)
	I0906 18:30:49.040027    8404 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (9.353896694s)
	I0906 18:30:49.040222    8404 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (9.097572637s)
	I0906 18:30:49.040246    8404 addons.go:475] Verifying addon registry=true in "addons-663433"
	I0906 18:30:49.040411    8404 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (8.787701652s)
	I0906 18:30:49.040439    8404 addons.go:475] Verifying addon metrics-server=true in "addons-663433"
	I0906 18:30:49.040483    8404 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (8.630483096s)
	I0906 18:30:49.040848    8404 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (7.873166298s)
	W0906 18:30:49.041306    8404 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0906 18:30:49.041332    8404 retry.go:31] will retry after 302.498734ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
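
The failure is an ordering problem, as the stderr says: the VolumeSnapshotClass object is applied in the same batch as the CRD that defines its kind, and the API server has not yet begun serving snapshot.storage.k8s.io/v1 when the custom resource arrives. The retry.go line handles it the simple way, re-running the apply after a short randomized delay (the successful retry a few lines below also adds --force). In the spirit of that line, a sketch (delays here are illustrative, not minikube's backoff policy):

package retryutil

import (
	"math/rand"
	"time"
)

// Retry re-runs fn until it succeeds or attempts are exhausted, sleeping a
// short jittered interval between tries, like the
// "will retry after 302.498734ms" log line above.
func Retry(fn func() error, attempts int) error {
	var err error
	for i := 0; i < attempts; i++ {
		if err = fn(); err == nil {
			return nil
		}
		time.Sleep(time.Duration(200+rand.Intn(200)) * time.Millisecond)
	}
	return err
}
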
	I0906 18:30:49.040911    8404 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml: (7.193506433s)
	I0906 18:30:49.041713    8404 out.go:177] * Verifying ingress addon...
	I0906 18:30:49.043059    8404 out.go:177] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-663433 service yakd-dashboard -n yakd-dashboard
	
	I0906 18:30:49.043098    8404 out.go:177] * Verifying registry addon...
	I0906 18:30:49.045186    8404 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I0906 18:30:49.048005    8404 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I0906 18:30:49.103273    8404 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I0906 18:30:49.103300    8404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0906 18:30:49.104547    8404 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I0906 18:30:49.104569    8404 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
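
kapi's polling differs from pod_ready's: it selects pods by label rather than by name, so it keeps working across pod restarts and renames. One poll iteration looks roughly like this in client-go (podsRunning is an illustrative name):

package kapi

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// podsRunning reports whether every pod matching selector in ns is Running.
func podsRunning(ctx context.Context, cs kubernetes.Interface, ns, selector string) (bool, error) {
	pods, err := cs.CoreV1().Pods(ns).List(ctx, metav1.ListOptions{LabelSelector: selector})
	if err != nil {
		return false, err
	}
	if len(pods.Items) == 0 {
		return false, nil // nothing scheduled yet
	}
	for _, p := range pods.Items {
		if p.Status.Phase != corev1.PodRunning {
			return false, nil // matches the "current state: Pending" lines above
		}
	}
	return true, nil
}
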
	I0906 18:30:49.344677    8404 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0906 18:30:49.556023    8404 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 18:30:49.556931    8404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0906 18:30:49.809346    8404 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (6.689812074s)
	I0906 18:30:49.809385    8404 addons.go:475] Verifying addon csi-hostpath-driver=true in "addons-663433"
	I0906 18:30:49.809647    8404 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (3.535726057s)
	I0906 18:30:49.812123    8404 out.go:177] * Verifying csi-hostpath-driver addon...
	I0906 18:30:49.812161    8404 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I0906 18:30:49.818771    8404 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.2
	I0906 18:30:49.819486    8404 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I0906 18:30:49.821338    8404 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I0906 18:30:49.821394    8404 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I0906 18:30:49.828382    8404 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0906 18:30:49.828524    8404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 18:30:49.879702    8404 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I0906 18:30:49.879744    8404 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I0906 18:30:49.949442    8404 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0906 18:30:49.949466    8404 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I0906 18:30:50.021150    8404 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0906 18:30:50.061667    8404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0906 18:30:50.063359    8404 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 18:30:50.326578    8404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 18:30:50.549786    8404 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 18:30:50.552979    8404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0906 18:30:50.798505    8404 pod_ready.go:103] pod "coredns-6f6b679f8f-hmcbm" in "kube-system" namespace has status "Ready":"False"
	I0906 18:30:50.824927    8404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 18:30:50.853017    8404 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (1.508241892s)
	I0906 18:30:51.053081    8404 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 18:30:51.071552    8404 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml: (1.050352752s)
	I0906 18:30:51.074810    8404 addons.go:475] Verifying addon gcp-auth=true in "addons-663433"
	I0906 18:30:51.077404    8404 out.go:177] * Verifying gcp-auth addon...
	I0906 18:30:51.080984    8404 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I0906 18:30:51.152548    8404 kapi.go:86] Found 0 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0906 18:30:51.153982    8404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0906 18:30:51.326056    8404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 18:30:51.550423    8404 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 18:30:51.552534    8404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0906 18:30:51.825147    8404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 18:30:52.057053    8404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0906 18:30:52.061730    8404 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 18:30:52.325427    8404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 18:30:52.550130    8404 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 18:30:52.553486    8404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0906 18:30:52.800333    8404 pod_ready.go:103] pod "coredns-6f6b679f8f-hmcbm" in "kube-system" namespace has status "Ready":"False"
	I0906 18:30:52.826185    8404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 18:30:53.049862    8404 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 18:30:53.052416    8404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0906 18:30:53.326152    8404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 18:30:53.552489    8404 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 18:30:53.560053    8404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0906 18:30:53.824385    8404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 18:30:54.053401    8404 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 18:30:54.057204    8404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0906 18:30:54.328054    8404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 18:30:54.551981    8404 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 18:30:54.553806    8404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0906 18:30:54.825003    8404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 18:30:55.092062    8404 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 18:30:55.092616    8404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0906 18:30:55.299250    8404 pod_ready.go:103] pod "coredns-6f6b679f8f-hmcbm" in "kube-system" namespace has status "Ready":"False"
	I0906 18:30:55.328673    8404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 18:30:55.556227    8404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0906 18:30:55.558128    8404 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 18:30:55.824162    8404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 18:30:56.050018    8404 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 18:30:56.051792    8404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0906 18:30:56.326448    8404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 18:30:56.552506    8404 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 18:30:56.553965    8404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0906 18:30:56.824058    8404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 18:30:57.052210    8404 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 18:30:57.055618    8404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0906 18:30:57.326409    8404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 18:30:57.550781    8404 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 18:30:57.552732    8404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0906 18:30:57.798294    8404 pod_ready.go:103] pod "coredns-6f6b679f8f-hmcbm" in "kube-system" namespace has status "Ready":"False"
	I0906 18:30:57.825612    8404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 18:30:58.051799    8404 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 18:30:58.054585    8404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0906 18:30:58.324790    8404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 18:30:58.550288    8404 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 18:30:58.552455    8404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0906 18:30:58.824163    8404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 18:30:59.050302    8404 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 18:30:59.052059    8404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0906 18:30:59.324001    8404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 18:30:59.555727    8404 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 18:30:59.557634    8404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0906 18:30:59.798403    8404 pod_ready.go:103] pod "coredns-6f6b679f8f-hmcbm" in "kube-system" namespace has status "Ready":"False"
	I0906 18:30:59.825775    8404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 18:31:00.061835    8404 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 18:31:00.065006    8404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0906 18:31:00.381057    8404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 18:31:00.549980    8404 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 18:31:00.554155    8404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0906 18:31:00.825472    8404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 18:31:01.051262    8404 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 18:31:01.053813    8404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0906 18:31:01.325525    8404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 18:31:01.553627    8404 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 18:31:01.558169    8404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0906 18:31:01.800184    8404 pod_ready.go:103] pod "coredns-6f6b679f8f-hmcbm" in "kube-system" namespace has status "Ready":"False"
	I0906 18:31:01.826051    8404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 18:31:02.056278    8404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0906 18:31:02.057549    8404 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 18:31:02.325433    8404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 18:31:02.550601    8404 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 18:31:02.553803    8404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0906 18:31:02.824910    8404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 18:31:03.051261    8404 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 18:31:03.054671    8404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0906 18:31:03.324502    8404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 18:31:03.549098    8404 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 18:31:03.552521    8404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0906 18:31:03.824497    8404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 18:31:04.050392    8404 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 18:31:04.052621    8404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0906 18:31:04.299097    8404 pod_ready.go:103] pod "coredns-6f6b679f8f-hmcbm" in "kube-system" namespace has status "Ready":"False"
	[... kapi.go:96 polling repeats for ingress-nginx, registry, and csi-hostpath-driver, all still Pending, 18:31:04 through 18:31:05 ...]
	I0906 18:31:05.799319    8404 pod_ready.go:93] pod "coredns-6f6b679f8f-hmcbm" in "kube-system" namespace has status "Ready":"True"
	I0906 18:31:05.799389    8404 pod_ready.go:82] duration metric: took 24.507435854s for pod "coredns-6f6b679f8f-hmcbm" in "kube-system" namespace to be "Ready" ...
	I0906 18:31:05.799417    8404 pod_ready.go:79] waiting up to 6m0s for pod "coredns-6f6b679f8f-pbkgn" in "kube-system" namespace to be "Ready" ...
	I0906 18:31:05.801826    8404 pod_ready.go:98] error getting pod "coredns-6f6b679f8f-pbkgn" in "kube-system" namespace (skipping!): pods "coredns-6f6b679f8f-pbkgn" not found
	I0906 18:31:05.801854    8404 pod_ready.go:82] duration metric: took 2.409873ms for pod "coredns-6f6b679f8f-pbkgn" in "kube-system" namespace to be "Ready" ...
	E0906 18:31:05.801894    8404 pod_ready.go:67] WaitExtra: waitPodCondition: error getting pod "coredns-6f6b679f8f-pbkgn" in "kube-system" namespace (skipping!): pods "coredns-6f6b679f8f-pbkgn" not found
	I0906 18:31:05.801908    8404 pod_ready.go:79] waiting up to 6m0s for pod "etcd-addons-663433" in "kube-system" namespace to be "Ready" ...
	I0906 18:31:05.807831    8404 pod_ready.go:93] pod "etcd-addons-663433" in "kube-system" namespace has status "Ready":"True"
	I0906 18:31:05.807859    8404 pod_ready.go:82] duration metric: took 5.941124ms for pod "etcd-addons-663433" in "kube-system" namespace to be "Ready" ...
	I0906 18:31:05.807874    8404 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-addons-663433" in "kube-system" namespace to be "Ready" ...
	I0906 18:31:05.814621    8404 pod_ready.go:93] pod "kube-apiserver-addons-663433" in "kube-system" namespace has status "Ready":"True"
	I0906 18:31:05.814657    8404 pod_ready.go:82] duration metric: took 6.748388ms for pod "kube-apiserver-addons-663433" in "kube-system" namespace to be "Ready" ...
	I0906 18:31:05.814688    8404 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-addons-663433" in "kube-system" namespace to be "Ready" ...
	I0906 18:31:05.821751    8404 pod_ready.go:93] pod "kube-controller-manager-addons-663433" in "kube-system" namespace has status "Ready":"True"
	I0906 18:31:05.821782    8404 pod_ready.go:82] duration metric: took 7.077361ms for pod "kube-controller-manager-addons-663433" in "kube-system" namespace to be "Ready" ...
	I0906 18:31:05.821795    8404 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-sxwfw" in "kube-system" namespace to be "Ready" ...
	I0906 18:31:05.830224    8404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 18:31:05.996867    8404 pod_ready.go:93] pod "kube-proxy-sxwfw" in "kube-system" namespace has status "Ready":"True"
	I0906 18:31:05.996893    8404 pod_ready.go:82] duration metric: took 175.073895ms for pod "kube-proxy-sxwfw" in "kube-system" namespace to be "Ready" ...
	I0906 18:31:05.996905    8404 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-addons-663433" in "kube-system" namespace to be "Ready" ...
	I0906 18:31:06.058611    8404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0906 18:31:06.060796    8404 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 18:31:06.335375    8404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 18:31:06.396566    8404 pod_ready.go:93] pod "kube-scheduler-addons-663433" in "kube-system" namespace has status "Ready":"True"
	I0906 18:31:06.396592    8404 pod_ready.go:82] duration metric: took 399.679146ms for pod "kube-scheduler-addons-663433" in "kube-system" namespace to be "Ready" ...
	I0906 18:31:06.396602    8404 pod_ready.go:39] duration metric: took 25.14122442s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0906 18:31:06.396617    8404 api_server.go:52] waiting for apiserver process to appear ...
	I0906 18:31:06.396680    8404 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 18:31:06.410853    8404 api_server.go:72] duration metric: took 28.329580077s to wait for apiserver process to appear ...
	I0906 18:31:06.410881    8404 api_server.go:88] waiting for apiserver healthz status ...
	I0906 18:31:06.410901    8404 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0906 18:31:06.419049    8404 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I0906 18:31:06.420007    8404 api_server.go:141] control plane version: v1.31.0
	I0906 18:31:06.420032    8404 api_server.go:131] duration metric: took 9.143821ms to wait for apiserver health ...
	I0906 18:31:06.420041    8404 system_pods.go:43] waiting for kube-system pods to appear ...
	I0906 18:31:06.549385    8404 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 18:31:06.551544    8404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0906 18:31:06.603769    8404 system_pods.go:59] 18 kube-system pods found
	I0906 18:31:06.603857    8404 system_pods.go:61] "coredns-6f6b679f8f-hmcbm" [26228436-d79a-4076-b250-e182190de691] Running
	I0906 18:31:06.603875    8404 system_pods.go:61] "csi-hostpath-attacher-0" [e5715893-2855-4a31-99a5-f1cca7249f48] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0906 18:31:06.603884    8404 system_pods.go:61] "csi-hostpath-resizer-0" [46dc9d17-db0a-44aa-b242-574a62883d93] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0906 18:31:06.603896    8404 system_pods.go:61] "csi-hostpathplugin-zbfgn" [bcdaf2c1-3b51-4a7d-bfcf-eacbe2574810] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0906 18:31:06.603902    8404 system_pods.go:61] "etcd-addons-663433" [6c759c3f-ce71-4631-803a-baefbe9b1f01] Running
	I0906 18:31:06.603915    8404 system_pods.go:61] "kindnet-fhj4d" [51b5e270-e337-466e-a560-38ad5a66d052] Running
	I0906 18:31:06.603919    8404 system_pods.go:61] "kube-apiserver-addons-663433" [2fe8bc4c-4e9c-43d0-ab0f-bdd29a815e65] Running
	I0906 18:31:06.603923    8404 system_pods.go:61] "kube-controller-manager-addons-663433" [d0a7b1e4-017f-41a6-8327-16c1ad00c1af] Running
	I0906 18:31:06.603927    8404 system_pods.go:61] "kube-ingress-dns-minikube" [c692d8a4-0702-4e97-8ec4-9d89209895d9] Running
	I0906 18:31:06.603931    8404 system_pods.go:61] "kube-proxy-sxwfw" [3f9bf514-e2a1-4e2d-950b-f7c589abfeb9] Running
	I0906 18:31:06.603935    8404 system_pods.go:61] "kube-scheduler-addons-663433" [bd451a12-9231-4fe5-b3a6-e0d9d0867e9d] Running
	I0906 18:31:06.603942    8404 system_pods.go:61] "metrics-server-84c5f94fbc-8t4sb" [7f4455d9-98c4-4e73-a397-db01404c289b] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0906 18:31:06.603953    8404 system_pods.go:61] "nvidia-device-plugin-daemonset-2qpzm" [7d6ff840-2efe-49b7-a74d-10e6df066685] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I0906 18:31:06.603959    8404 system_pods.go:61] "registry-6fb4cdfc84-f8b5k" [26e1baf1-1b46-42bf-a4e7-b02a3ee2ca41] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0906 18:31:06.603970    8404 system_pods.go:61] "registry-proxy-mhbhh" [9fe29033-57ef-4d7d-917e-044ec00706a3] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0906 18:31:06.603976    8404 system_pods.go:61] "snapshot-controller-56fcc65765-n54gm" [1810833e-9a32-4693-b262-b9ea28499187] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0906 18:31:06.603985    8404 system_pods.go:61] "snapshot-controller-56fcc65765-vtq6w" [b82e242a-6437-4df5-97bb-e56e5fbca8d2] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0906 18:31:06.603990    8404 system_pods.go:61] "storage-provisioner" [3a6515a1-2d21-444d-a495-9e965ae45660] Running
	I0906 18:31:06.603998    8404 system_pods.go:74] duration metric: took 183.951885ms to wait for pod list to return data ...
	I0906 18:31:06.604008    8404 default_sa.go:34] waiting for default service account to be created ...
	I0906 18:31:06.796128    8404 default_sa.go:45] found service account: "default"
	I0906 18:31:06.796153    8404 default_sa.go:55] duration metric: took 192.138678ms for default service account to be created ...
	I0906 18:31:06.796164    8404 system_pods.go:116] waiting for k8s-apps to be running ...
	I0906 18:31:06.824838    8404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0906 18:31:07.002852    8404 system_pods.go:86] 18 kube-system pods found
	I0906 18:31:07.002892    8404 system_pods.go:89] "coredns-6f6b679f8f-hmcbm" [26228436-d79a-4076-b250-e182190de691] Running
	I0906 18:31:07.002903    8404 system_pods.go:89] "csi-hostpath-attacher-0" [e5715893-2855-4a31-99a5-f1cca7249f48] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0906 18:31:07.002910    8404 system_pods.go:89] "csi-hostpath-resizer-0" [46dc9d17-db0a-44aa-b242-574a62883d93] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0906 18:31:07.002919    8404 system_pods.go:89] "csi-hostpathplugin-zbfgn" [bcdaf2c1-3b51-4a7d-bfcf-eacbe2574810] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0906 18:31:07.002950    8404 system_pods.go:89] "etcd-addons-663433" [6c759c3f-ce71-4631-803a-baefbe9b1f01] Running
	I0906 18:31:07.002962    8404 system_pods.go:89] "kindnet-fhj4d" [51b5e270-e337-466e-a560-38ad5a66d052] Running
	I0906 18:31:07.002967    8404 system_pods.go:89] "kube-apiserver-addons-663433" [2fe8bc4c-4e9c-43d0-ab0f-bdd29a815e65] Running
	I0906 18:31:07.002972    8404 system_pods.go:89] "kube-controller-manager-addons-663433" [d0a7b1e4-017f-41a6-8327-16c1ad00c1af] Running
	I0906 18:31:07.002980    8404 system_pods.go:89] "kube-ingress-dns-minikube" [c692d8a4-0702-4e97-8ec4-9d89209895d9] Running
	I0906 18:31:07.002984    8404 system_pods.go:89] "kube-proxy-sxwfw" [3f9bf514-e2a1-4e2d-950b-f7c589abfeb9] Running
	I0906 18:31:07.002991    8404 system_pods.go:89] "kube-scheduler-addons-663433" [bd451a12-9231-4fe5-b3a6-e0d9d0867e9d] Running
	I0906 18:31:07.002999    8404 system_pods.go:89] "metrics-server-84c5f94fbc-8t4sb" [7f4455d9-98c4-4e73-a397-db01404c289b] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0906 18:31:07.003003    8404 system_pods.go:89] "nvidia-device-plugin-daemonset-2qpzm" [7d6ff840-2efe-49b7-a74d-10e6df066685] Running
	I0906 18:31:07.003026    8404 system_pods.go:89] "registry-6fb4cdfc84-f8b5k" [26e1baf1-1b46-42bf-a4e7-b02a3ee2ca41] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0906 18:31:07.003039    8404 system_pods.go:89] "registry-proxy-mhbhh" [9fe29033-57ef-4d7d-917e-044ec00706a3] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0906 18:31:07.003047    8404 system_pods.go:89] "snapshot-controller-56fcc65765-n54gm" [1810833e-9a32-4693-b262-b9ea28499187] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0906 18:31:07.003057    8404 system_pods.go:89] "snapshot-controller-56fcc65765-vtq6w" [b82e242a-6437-4df5-97bb-e56e5fbca8d2] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0906 18:31:07.003065    8404 system_pods.go:89] "storage-provisioner" [3a6515a1-2d21-444d-a495-9e965ae45660] Running
	I0906 18:31:07.003076    8404 system_pods.go:126] duration metric: took 206.905916ms to wait for k8s-apps to be running ...
	I0906 18:31:07.003084    8404 system_svc.go:44] waiting for kubelet service to be running ....
	I0906 18:31:07.003158    8404 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0906 18:31:07.036961    8404 system_svc.go:56] duration metric: took 33.866977ms WaitForService to wait for kubelet
	I0906 18:31:07.036996    8404 kubeadm.go:582] duration metric: took 28.95572667s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0906 18:31:07.037019    8404 node_conditions.go:102] verifying NodePressure condition ...
	I0906 18:31:07.052778    8404 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0906 18:31:07.054932    8404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0906 18:31:07.203257    8404 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I0906 18:31:07.203353    8404 node_conditions.go:123] node cpu capacity is 2
	I0906 18:31:07.203380    8404 node_conditions.go:105] duration metric: took 166.354883ms to run NodePressure ...
	I0906 18:31:07.203409    8404 start.go:241] waiting for startup goroutines ...
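	The two capacity figures just logged (ephemeral storage 203034800Ki, cpu 2) can be read back from the cluster directly. A minimal sketch, assuming the control-plane node carries the profile name addons-663433 (as the etcd-addons-663433 and kube-apiserver-addons-663433 pod names above suggest):

	    # Print the node's Capacity/Allocatable block; the grep window size is arbitrary.
	    kubectl --context addons-663433 describe node addons-663433 | grep -A 6 'Capacity:'

	A 2-CPU node is worth keeping in mind when reading the rest of this run: any workload requesting more CPU than remains allocatable will sit Pending as Unschedulable.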
	[... kapi.go:96 polling repeats for ingress-nginx, registry, and csi-hostpath-driver, all still Pending, 18:31:07 through 18:31:19 ...]
	I0906 18:31:19.051928    8404 kapi.go:107] duration metric: took 30.003923855s to wait for kubernetes.io/minikube-addons=registry ...
	[... kapi.go:96 polling repeats for ingress-nginx and csi-hostpath-driver, both still Pending, 18:31:19 through 18:31:42 ...]
	I0906 18:31:42.325081    8404 kapi.go:107] duration metric: took 52.505594245s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	[... kapi.go:96 polling repeats for ingress-nginx, still Pending, 18:31:42 through 18:31:55 ...]
	I0906 18:31:56.059399    8404 kapi.go:107] duration metric: took 1m7.014192786s to wait for app.kubernetes.io/name=ingress-nginx ...
	I0906 18:32:14.105646    8404 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0906 18:32:14.105667    8404 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	[... kapi.go:96 polling repeats for gcp-auth, still Pending, 18:32:14 through 18:33:22 ...]
	I0906 18:33:22.584526    8404 kapi.go:107] duration metric: took 2m31.503541357s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I0906 18:33:22.586544    8404 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-663433 cluster.
	I0906 18:33:22.588513    8404 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I0906 18:33:22.590327    8404 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I0906 18:33:22.592051    8404 out.go:177] * Enabled addons: default-storageclass, ingress-dns, volcano, nvidia-device-plugin, storage-provisioner, cloud-spanner, metrics-server, inspektor-gadget, yakd, storage-provisioner-rancher, volumesnapshots, registry, csi-hostpath-driver, ingress, gcp-auth
	I0906 18:33:22.593861    8404 addons.go:510] duration metric: took 2m44.512180223s for enable addons: enabled=[default-storageclass ingress-dns volcano nvidia-device-plugin storage-provisioner cloud-spanner metrics-server inspektor-gadget yakd storage-provisioner-rancher volumesnapshots registry csi-hostpath-driver ingress gcp-auth]
	I0906 18:33:22.593906    8404 start.go:246] waiting for cluster config update ...
	I0906 18:33:22.593926    8404 start.go:255] writing updated cluster config ...
	I0906 18:33:22.594195    8404 ssh_runner.go:195] Run: rm -f paused
	I0906 18:33:22.925411    8404 start.go:600] kubectl: 1.31.0, cluster: 1.31.0 (minor skew: 0)
	I0906 18:33:22.927341    8404 out.go:177] * Done! kubectl is now configured to use "addons-663433" cluster and "default" namespace by default
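
	Editor's note: the `gcp-auth-skip-secret` opt-out mentioned in the addon output above is just a pod label. As a minimal sketch using the upstream Kubernetes API types — the `"true"` value is an assumption, since the addon message only names the label key — a pod that should not receive the mounted credentials could be built like this:

```go
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"sigs.k8s.io/yaml"
)

func main() {
	// Pod that opts out of the gcp-auth credential mount via the
	// gcp-auth-skip-secret label. The "true" value is assumed; the
	// addon output above only names the key.
	pod := corev1.Pod{
		TypeMeta: metav1.TypeMeta{APIVersion: "v1", Kind: "Pod"},
		ObjectMeta: metav1.ObjectMeta{
			Name:   "no-gcp-creds",
			Labels: map[string]string{"gcp-auth-skip-secret": "true"},
		},
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{{Name: "app", Image: "nginx"}},
		},
	}
	out, _ := yaml.Marshal(pod) // render as a manifest for kubectl apply
	fmt.Println(string(out))
}
```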
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                                     ATTEMPT             POD ID              POD
	ab9291d384a4b       4f725bf50aaa5       2 minutes ago       Exited              gadget                                   5                   d258cd187df4e       gadget-c7f72
	3097820fe6b83       6ef582f3ec844       3 minutes ago       Running             gcp-auth                                 0                   298335b8ec820       gcp-auth-89d5ffd79-cjd9f
	0133a153386b7       8b46b1cd48760       4 minutes ago       Running             admission                                0                   91d5dd23a2535       volcano-admission-77d7d48b68-mx4tr
	f4c93440b9201       289a818c8d9c5       4 minutes ago       Running             controller                               0                   41ac83e513747       ingress-nginx-controller-bc57996ff-bqhdx
	db03e541c3f5f       ee6d597e62dc8       4 minutes ago       Running             csi-snapshotter                          0                   4f842b8c1f0eb       csi-hostpathplugin-zbfgn
	1b157df9469e9       420193b27261a       5 minutes ago       Exited              patch                                    2                   96c5c8063fd6f       ingress-nginx-admission-patch-r8p5w
	5daa66bbf18dc       642ded511e141       5 minutes ago       Running             csi-provisioner                          0                   4f842b8c1f0eb       csi-hostpathplugin-zbfgn
	f75f44130714e       922312104da8a       5 minutes ago       Running             liveness-probe                           0                   4f842b8c1f0eb       csi-hostpathplugin-zbfgn
	4786de8b4e3b6       08f6b2990811a       5 minutes ago       Running             hostpath                                 0                   4f842b8c1f0eb       csi-hostpathplugin-zbfgn
	a47579427037d       0107d56dbc0be       5 minutes ago       Running             node-driver-registrar                    0                   4f842b8c1f0eb       csi-hostpathplugin-zbfgn
	2a7d703250114       d9c7ad4c226bf       5 minutes ago       Running             volcano-scheduler                        0                   8fdb012f962db       volcano-scheduler-576bc46687-2mczk
	48eabf3e5235e       487fa743e1e22       5 minutes ago       Running             csi-resizer                              0                   de9ad3b00e743       csi-hostpath-resizer-0
	07e860558d363       420193b27261a       5 minutes ago       Exited              create                                   0                   b204cc628264a       ingress-nginx-admission-create-bdsm6
	6123e3953c93b       1461903ec4fe9       5 minutes ago       Running             csi-external-health-monitor-controller   0                   4f842b8c1f0eb       csi-hostpathplugin-zbfgn
	701a5dc65b9c5       1505f556b3a7b       5 minutes ago       Running             volcano-controllers                      0                   d67abb8d8562e       volcano-controllers-56675bb4d5-47pmz
	0baaf922d53eb       9a80d518f102c       5 minutes ago       Running             csi-attacher                             0                   94307f73e062b       csi-hostpath-attacher-0
	a59c0c7172a50       4d1e5c3e97420       5 minutes ago       Running             volume-snapshot-controller               0                   6141cd4201887       snapshot-controller-56fcc65765-n54gm
	854d2e1a64994       5548a49bb60ba       5 minutes ago       Running             metrics-server                           0                   e2b6db26c48a4       metrics-server-84c5f94fbc-8t4sb
	6f671b5555f3e       6fed88f43b276       5 minutes ago       Running             registry                                 0                   af476ed3c85a5       registry-6fb4cdfc84-f8b5k
	b4b0f39ddfc1c       4d1e5c3e97420       5 minutes ago       Running             volume-snapshot-controller               0                   1a260994f6660       snapshot-controller-56fcc65765-vtq6w
	4afa5373ec1b1       7ce2150c8929b       5 minutes ago       Running             local-path-provisioner                   0                   d196b7daa02f7       local-path-provisioner-86d989889c-9pj4d
	0c8a94c4eb1ff       77bdba588b953       5 minutes ago       Running             yakd                                     0                   e1328928ab20b       yakd-dashboard-67d98fc6b-ggxs9
	7f22e8fbc2c1d       3410e1561990a       5 minutes ago       Running             registry-proxy                           0                   5ac1412013378       registry-proxy-mhbhh
	46acb4bb2bd31       a9bac31a5be8d       5 minutes ago       Running             nvidia-device-plugin-ctr                 0                   5554cd545a859       nvidia-device-plugin-daemonset-2qpzm
	a116951bd9df9       2437cf7621777       5 minutes ago       Running             coredns                                  0                   d1fd71ac9c8cd       coredns-6f6b679f8f-hmcbm
	9a0e8a08c24ae       8be4bcf8ec607       5 minutes ago       Running             cloud-spanner-emulator                   0                   9e48d922173f6       cloud-spanner-emulator-769b77f747-snbcd
	5f223276a9bd8       35508c2f890c4       5 minutes ago       Running             minikube-ingress-dns                     0                   efc080cdaad07       kube-ingress-dns-minikube
	3aa94d7e74a8a       ba04bb24b9575       5 minutes ago       Running             storage-provisioner                      0                   7eb90679c04f1       storage-provisioner
	f05250959fc29       6a23fa8fd2b78       6 minutes ago       Running             kindnet-cni                              0                   9a9930b92b8a1       kindnet-fhj4d
	76a7321f00453       71d55d66fd4ee       6 minutes ago       Running             kube-proxy                               0                   a9b4c1e18c1a5       kube-proxy-sxwfw
	1e5db3a1ca12e       cd0f0ae0ec9e0       6 minutes ago       Running             kube-apiserver                           0                   0869f23964f46       kube-apiserver-addons-663433
	ba192ab2d9de5       fcb0683e6bdbd       6 minutes ago       Running             kube-controller-manager                  0                   30ffa9a3825f7       kube-controller-manager-addons-663433
	fe0ad024cb706       27e3830e14027       6 minutes ago       Running             etcd                                     0                   6b1d6b5937fa0       etcd-addons-663433
	aedd1bd565597       fbbbd428abb4d       6 minutes ago       Running             kube-scheduler                           0                   a7451f32a1c44       kube-scheduler-addons-663433
	
	
	==> containerd <==
	Sep 06 18:33:32 addons-663433 containerd[816]: time="2024-09-06T18:33:32.400349351Z" level=info msg="RemovePodSandbox \"52e9d29f0e84d814a5c6f10eaa45037b3be0df1f2efcd514c88003e9f7eaa6ef\" returns successfully"
	Sep 06 18:34:28 addons-663433 containerd[816]: time="2024-09-06T18:34:28.320156584Z" level=info msg="PullImage \"ghcr.io/inspektor-gadget/inspektor-gadget:v0.32.0@sha256:03e677e1cf9d2c9bea454e3dbcbcef20b3022e987534a2874eb1abc5bc3e73ec\""
	Sep 06 18:34:28 addons-663433 containerd[816]: time="2024-09-06T18:34:28.438006944Z" level=info msg="ImageUpdate event name:\"ghcr.io/inspektor-gadget/inspektor-gadget@sha256:03e677e1cf9d2c9bea454e3dbcbcef20b3022e987534a2874eb1abc5bc3e73ec\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
	Sep 06 18:34:28 addons-663433 containerd[816]: time="2024-09-06T18:34:28.439582862Z" level=info msg="stop pulling image ghcr.io/inspektor-gadget/inspektor-gadget@sha256:03e677e1cf9d2c9bea454e3dbcbcef20b3022e987534a2874eb1abc5bc3e73ec: active requests=0, bytes read=89"
	Sep 06 18:34:28 addons-663433 containerd[816]: time="2024-09-06T18:34:28.443224588Z" level=info msg="Pulled image \"ghcr.io/inspektor-gadget/inspektor-gadget:v0.32.0@sha256:03e677e1cf9d2c9bea454e3dbcbcef20b3022e987534a2874eb1abc5bc3e73ec\" with image id \"sha256:4f725bf50aaa5c697fbb84c107e9c7a3766f0f85f514ffce712d03ee5f62e8dd\", repo tag \"\", repo digest \"ghcr.io/inspektor-gadget/inspektor-gadget@sha256:03e677e1cf9d2c9bea454e3dbcbcef20b3022e987534a2874eb1abc5bc3e73ec\", size \"72524105\" in 123.014325ms"
	Sep 06 18:34:28 addons-663433 containerd[816]: time="2024-09-06T18:34:28.443396957Z" level=info msg="PullImage \"ghcr.io/inspektor-gadget/inspektor-gadget:v0.32.0@sha256:03e677e1cf9d2c9bea454e3dbcbcef20b3022e987534a2874eb1abc5bc3e73ec\" returns image reference \"sha256:4f725bf50aaa5c697fbb84c107e9c7a3766f0f85f514ffce712d03ee5f62e8dd\""
	Sep 06 18:34:28 addons-663433 containerd[816]: time="2024-09-06T18:34:28.445948117Z" level=info msg="CreateContainer within sandbox \"d258cd187df4e64126e7089f6a286d63db730987430a6229ec3c329425adc4c1\" for container &ContainerMetadata{Name:gadget,Attempt:5,}"
	Sep 06 18:34:28 addons-663433 containerd[816]: time="2024-09-06T18:34:28.468006815Z" level=info msg="CreateContainer within sandbox \"d258cd187df4e64126e7089f6a286d63db730987430a6229ec3c329425adc4c1\" for &ContainerMetadata{Name:gadget,Attempt:5,} returns container id \"ab9291d384a4bf12ce5b77c0e45c05002fd947f6d0a8c7f9ccb90b431d18756e\""
	Sep 06 18:34:28 addons-663433 containerd[816]: time="2024-09-06T18:34:28.472105625Z" level=info msg="StartContainer for \"ab9291d384a4bf12ce5b77c0e45c05002fd947f6d0a8c7f9ccb90b431d18756e\""
	Sep 06 18:34:28 addons-663433 containerd[816]: time="2024-09-06T18:34:28.533130513Z" level=info msg="StartContainer for \"ab9291d384a4bf12ce5b77c0e45c05002fd947f6d0a8c7f9ccb90b431d18756e\" returns successfully"
	Sep 06 18:34:30 addons-663433 containerd[816]: time="2024-09-06T18:34:30.145772566Z" level=info msg="shim disconnected" id=ab9291d384a4bf12ce5b77c0e45c05002fd947f6d0a8c7f9ccb90b431d18756e namespace=k8s.io
	Sep 06 18:34:30 addons-663433 containerd[816]: time="2024-09-06T18:34:30.145876864Z" level=warning msg="cleaning up after shim disconnected" id=ab9291d384a4bf12ce5b77c0e45c05002fd947f6d0a8c7f9ccb90b431d18756e namespace=k8s.io
	Sep 06 18:34:30 addons-663433 containerd[816]: time="2024-09-06T18:34:30.145914633Z" level=info msg="cleaning up dead shim" namespace=k8s.io
	Sep 06 18:34:30 addons-663433 containerd[816]: time="2024-09-06T18:34:30.511526407Z" level=info msg="RemoveContainer for \"593af3a4cbcb7fb5793287b04e0a6b083980b6d2ba7acdf71b651b9efb83f701\""
	Sep 06 18:34:30 addons-663433 containerd[816]: time="2024-09-06T18:34:30.520333551Z" level=info msg="RemoveContainer for \"593af3a4cbcb7fb5793287b04e0a6b083980b6d2ba7acdf71b651b9efb83f701\" returns successfully"
	Sep 06 18:34:32 addons-663433 containerd[816]: time="2024-09-06T18:34:32.404021956Z" level=info msg="RemoveContainer for \"43b6664371501538a078964eca44fa852662ddaba667042ed84373d3a99dbdd1\""
	Sep 06 18:34:32 addons-663433 containerd[816]: time="2024-09-06T18:34:32.411092497Z" level=info msg="RemoveContainer for \"43b6664371501538a078964eca44fa852662ddaba667042ed84373d3a99dbdd1\" returns successfully"
	Sep 06 18:34:32 addons-663433 containerd[816]: time="2024-09-06T18:34:32.413222717Z" level=info msg="StopPodSandbox for \"7107e87afa54724938d7e5724491693f8d0a306e4cdaee398de2a745ab384315\""
	Sep 06 18:34:32 addons-663433 containerd[816]: time="2024-09-06T18:34:32.420763701Z" level=info msg="TearDown network for sandbox \"7107e87afa54724938d7e5724491693f8d0a306e4cdaee398de2a745ab384315\" successfully"
	Sep 06 18:34:32 addons-663433 containerd[816]: time="2024-09-06T18:34:32.420802586Z" level=info msg="StopPodSandbox for \"7107e87afa54724938d7e5724491693f8d0a306e4cdaee398de2a745ab384315\" returns successfully"
	Sep 06 18:34:32 addons-663433 containerd[816]: time="2024-09-06T18:34:32.421310412Z" level=info msg="RemovePodSandbox for \"7107e87afa54724938d7e5724491693f8d0a306e4cdaee398de2a745ab384315\""
	Sep 06 18:34:32 addons-663433 containerd[816]: time="2024-09-06T18:34:32.421351447Z" level=info msg="Forcibly stopping sandbox \"7107e87afa54724938d7e5724491693f8d0a306e4cdaee398de2a745ab384315\""
	Sep 06 18:34:32 addons-663433 containerd[816]: time="2024-09-06T18:34:32.429064644Z" level=info msg="TearDown network for sandbox \"7107e87afa54724938d7e5724491693f8d0a306e4cdaee398de2a745ab384315\" successfully"
	Sep 06 18:34:32 addons-663433 containerd[816]: time="2024-09-06T18:34:32.435161274Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"7107e87afa54724938d7e5724491693f8d0a306e4cdaee398de2a745ab384315\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
	Sep 06 18:34:32 addons-663433 containerd[816]: time="2024-09-06T18:34:32.435287250Z" level=info msg="RemovePodSandbox \"7107e87afa54724938d7e5724491693f8d0a306e4cdaee398de2a745ab384315\" returns successfully"
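
	Editor's note: every containerd entry above carries `namespace=k8s.io`, the namespace the CRI plugin uses for Kubernetes-managed containers. As a hedged sketch (socket path taken from the kubeadm CRI annotation in the node description below; client API as of the containerd 1.7.x Go module, which may differ in other releases), the same containers can be listed directly:

```go
package main

import (
	"context"
	"fmt"
	"log"

	containerd "github.com/containerd/containerd"
	"github.com/containerd/containerd/namespaces"
)

func main() {
	// Connect over the CRI socket recorded in the node annotations
	// (unix:///run/containerd/containerd.sock).
	client, err := containerd.New("/run/containerd/containerd.sock")
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()

	// Kubernetes-managed containers live in the "k8s.io" namespace,
	// matching the namespace=k8s.io fields in the log lines above.
	ctx := namespaces.WithNamespace(context.Background(), "k8s.io")
	containers, err := client.Containers(ctx)
	if err != nil {
		log.Fatal(err)
	}
	for _, c := range containers {
		img, err := c.Image(ctx)
		if err != nil {
			fmt.Println(c.ID(), "(no image ref)")
			continue
		}
		fmt.Println(c.ID(), img.Name())
	}
}
```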
	
	
	==> coredns [a116951bd9df9c5e19aff93150eb70c0c6a213e6882bd1beece75d800860f6b4] <==
	[INFO] 10.244.0.3:45114 - 18266 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000032624s
	[INFO] 10.244.0.3:44880 - 5604 "A IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 83 false 512" NXDOMAIN qr,rd,ra 83 0.001487834s
	[INFO] 10.244.0.3:44880 - 31206 "AAAA IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 83 false 512" NXDOMAIN qr,rd,ra 83 0.001028217s
	[INFO] 10.244.0.3:53805 - 39976 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000052743s
	[INFO] 10.244.0.3:53805 - 42542 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000038614s
	[INFO] 10.244.0.3:56962 - 18517 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000106274s
	[INFO] 10.244.0.3:56962 - 5462 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000171589s
	[INFO] 10.244.0.3:53168 - 6740 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.00006382s
	[INFO] 10.244.0.3:53168 - 21078 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000079985s
	[INFO] 10.244.0.3:34050 - 20053 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000050216s
	[INFO] 10.244.0.3:34050 - 23891 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000032657s
	[INFO] 10.244.0.3:34630 - 60705 "AAAA IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 83 false 512" NXDOMAIN qr,rd,ra 83 0.001450951s
	[INFO] 10.244.0.3:34630 - 57891 "A IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 83 false 512" NXDOMAIN qr,rd,ra 83 0.001473368s
	[INFO] 10.244.0.3:33501 - 62153 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000041633s
	[INFO] 10.244.0.3:33501 - 60619 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000196015s
	[INFO] 10.244.0.24:60121 - 25747 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000348552s
	[INFO] 10.244.0.24:56521 - 47365 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.001058991s
	[INFO] 10.244.0.24:53384 - 4655 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000142985s
	[INFO] 10.244.0.24:48549 - 10937 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000118255s
	[INFO] 10.244.0.24:46153 - 40634 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000109615s
	[INFO] 10.244.0.24:54153 - 54733 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000124007s
	[INFO] 10.244.0.24:45997 - 14352 "AAAA IN storage.googleapis.com.us-east-2.compute.internal. udp 78 false 1232" NXDOMAIN qr,rd,ra 67 0.002173311s
	[INFO] 10.244.0.24:32954 - 12156 "A IN storage.googleapis.com.us-east-2.compute.internal. udp 78 false 1232" NXDOMAIN qr,rd,ra 67 0.002062704s
	[INFO] 10.244.0.24:50700 - 45915 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.001670507s
	[INFO] 10.244.0.24:60224 - 18726 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 572 0.001768602s
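
	Editor's note: the NXDOMAIN/NOERROR pattern above is ordinary cluster-DNS search-path expansion, not an error. With the usual `ndots:5` resolver option, a name with fewer than five dots is tried with each search suffix before being queried as-is, which is exactly the sequence of suffixed NXDOMAINs ending in one NOERROR seen for `registry.kube-system.svc.cluster.local` and `storage.googleapis.com`. A small sketch of that candidate ordering (search list inferred from the queries logged above, not read from any pod's resolv.conf):

```go
package main

import (
	"fmt"
	"strings"
)

// candidates reproduces the resolver's search-path expansion: names with
// fewer than ndots dots get every search suffix appended before the
// literal name is tried, producing the query order in the coredns log.
func candidates(name string, search []string, ndots int) []string {
	var out []string
	if strings.Count(name, ".") < ndots {
		for _, s := range search {
			out = append(out, name+"."+s)
		}
	}
	return append(out, name)
}

func main() {
	// Search list inferred from the logged queries: a kube-system pod
	// on a node with the us-east-2.compute.internal DHCP domain.
	search := []string{
		"kube-system.svc.cluster.local",
		"svc.cluster.local",
		"cluster.local",
		"us-east-2.compute.internal",
	}
	for _, q := range candidates("registry.kube-system.svc.cluster.local", search, 5) {
		fmt.Println(q) // all but the last return NXDOMAIN above
	}
}
```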
	
	
	==> describe nodes <==
	Name:               addons-663433
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=addons-663433
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=e6b6435971a63e36b5096cd544634422129cef13
	                    minikube.k8s.io/name=addons-663433
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_09_06T18_30_33_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-663433
	Annotations:        csi.volume.kubernetes.io/nodeid: {"hostpath.csi.k8s.io":"addons-663433"}
	                    kubeadm.alpha.kubernetes.io/cri-socket: unix:///run/containerd/containerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 06 Sep 2024 18:30:30 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-663433
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 06 Sep 2024 18:36:39 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 06 Sep 2024 18:33:36 +0000   Fri, 06 Sep 2024 18:30:26 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 06 Sep 2024 18:33:36 +0000   Fri, 06 Sep 2024 18:30:26 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 06 Sep 2024 18:33:36 +0000   Fri, 06 Sep 2024 18:30:26 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 06 Sep 2024 18:33:36 +0000   Fri, 06 Sep 2024 18:30:30 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    addons-663433
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	System Info:
	  Machine ID:                 d9eb6d5bed9c49b8a61742cb4f2cc58a
	  System UUID:                120b3255-efd9-4064-a9ac-5e7f560d0e42
	  Boot ID:                    6d654be1-742f-4cf7-8c76-51d1d9beb0a5
	  Kernel Version:             5.15.0-1068-aws
	  OS Image:                   Ubuntu 22.04.4 LTS
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  containerd://1.7.21
	  Kubelet Version:            v1.31.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (27 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  default                     cloud-spanner-emulator-769b77f747-snbcd     0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m
	  gadget                      gadget-c7f72                                0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m56s
	  gcp-auth                    gcp-auth-89d5ffd79-cjd9f                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m27s
	  ingress-nginx               ingress-nginx-controller-bc57996ff-bqhdx    100m (5%)     0 (0%)      90Mi (1%)        0 (0%)         5m55s
	  kube-system                 coredns-6f6b679f8f-hmcbm                    100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     6m4s
	  kube-system                 csi-hostpath-attacher-0                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m52s
	  kube-system                 csi-hostpath-resizer-0                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m52s
	  kube-system                 csi-hostpathplugin-zbfgn                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m52s
	  kube-system                 etcd-addons-663433                          100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         6m9s
	  kube-system                 kindnet-fhj4d                               100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      6m4s
	  kube-system                 kube-apiserver-addons-663433                250m (12%)    0 (0%)      0 (0%)           0 (0%)         6m9s
	  kube-system                 kube-controller-manager-addons-663433       200m (10%)    0 (0%)      0 (0%)           0 (0%)         6m9s
	  kube-system                 kube-ingress-dns-minikube                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m
	  kube-system                 kube-proxy-sxwfw                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m4s
	  kube-system                 kube-scheduler-addons-663433                100m (5%)     0 (0%)      0 (0%)           0 (0%)         6m9s
	  kube-system                 metrics-server-84c5f94fbc-8t4sb             100m (5%)     0 (0%)      200Mi (2%)       0 (0%)         5m57s
	  kube-system                 nvidia-device-plugin-daemonset-2qpzm        0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m
	  kube-system                 registry-6fb4cdfc84-f8b5k                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m58s
	  kube-system                 registry-proxy-mhbhh                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m58s
	  kube-system                 snapshot-controller-56fcc65765-n54gm        0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m55s
	  kube-system                 snapshot-controller-56fcc65765-vtq6w        0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m55s
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m58s
	  local-path-storage          local-path-provisioner-86d989889c-9pj4d     0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m57s
	  volcano-system              volcano-admission-77d7d48b68-mx4tr          0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m54s
	  volcano-system              volcano-controllers-56675bb4d5-47pmz        0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m53s
	  volcano-system              volcano-scheduler-576bc46687-2mczk          0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m53s
	  yakd-dashboard              yakd-dashboard-67d98fc6b-ggxs9              0 (0%)        0 (0%)      128Mi (1%)       256Mi (3%)     5m58s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                1050m (52%)  100m (5%)
	  memory             638Mi (8%)   476Mi (6%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-1Gi      0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	  hugepages-32Mi     0 (0%)       0 (0%)
	  hugepages-64Ki     0 (0%)       0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 6m2s                   kube-proxy       
	  Normal   NodeHasSufficientMemory  6m17s (x8 over 6m17s)  kubelet          Node addons-663433 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    6m17s (x7 over 6m17s)  kubelet          Node addons-663433 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     6m17s (x7 over 6m17s)  kubelet          Node addons-663433 status is now: NodeHasSufficientPID
	  Normal   NodeAllocatableEnforced  6m17s                  kubelet          Updated Node Allocatable limit across pods
	  Normal   Starting                 6m9s                   kubelet          Starting kubelet.
	  Warning  CgroupV1                 6m9s                   kubelet          Cgroup v1 support is in maintenance mode, please migrate to Cgroup v2.
	  Normal   NodeAllocatableEnforced  6m9s                   kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory  6m9s                   kubelet          Node addons-663433 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    6m9s                   kubelet          Node addons-663433 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     6m9s                   kubelet          Node addons-663433 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           6m5s                   node-controller  Node addons-663433 event: Registered Node addons-663433 in Controller
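
	Editor's note: one detail worth pulling out of the table above is the CPU headroom. The node advertises 2 CPUs allocatable while the running pods already request 1050m (52%), so only about 950m remains for any new request; any pod asking for more than that is unschedulable on this single node. A quick check of that arithmetic with the Kubernetes quantity type, values copied from the output above:

```go
package main

import (
	"fmt"

	"k8s.io/apimachinery/pkg/api/resource"
)

func main() {
	// Values copied from the "describe nodes" output above.
	allocatable := resource.MustParse("2")   // Allocatable cpu: 2
	requested := resource.MustParse("1050m") // cpu Requests: 1050m (52%)

	remaining := allocatable.DeepCopy()
	remaining.Sub(requested)
	fmt.Printf("schedulable CPU left: %s\n", remaining.String()) // 950m
}
```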
	
	
	==> dmesg <==
	[Sep 6 18:17] ACPI: SRAT not present
	[  +0.000000] ACPI: SRAT not present
	[  +0.000000] SPI driver altr_a10sr has no spi_device_id for altr,a10sr
	[  +0.014989] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.469544] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.754529] ena 0000:00:05.0: LLQ is not supported Fallback to host mode policy.
	[  +6.376876] kauditd_printk_skb: 36 callbacks suppressed
	
	
	==> etcd [fe0ad024cb7061b7ad57ba8197d48ad65bb848db9d154d37f3830b431b602939] <==
	{"level":"info","ts":"2024-09-06T18:30:25.691205Z","caller":"embed/etcd.go:728","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-09-06T18:30:25.700960Z","caller":"embed/etcd.go:279","msg":"now serving peer/client/metrics","local-member-id":"aec36adc501070cc","initial-advertise-peer-urls":["https://192.168.49.2:2380"],"listen-peer-urls":["https://192.168.49.2:2380"],"advertise-client-urls":["https://192.168.49.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.49.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-09-06T18:30:25.701185Z","caller":"embed/etcd.go:870","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-09-06T18:30:25.700718Z","caller":"embed/etcd.go:599","msg":"serving peer traffic","address":"192.168.49.2:2380"}
	{"level":"info","ts":"2024-09-06T18:30:25.702409Z","caller":"embed/etcd.go:571","msg":"cmux::serve","address":"192.168.49.2:2380"}
	{"level":"info","ts":"2024-09-06T18:30:26.448475Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc is starting a new election at term 1"}
	{"level":"info","ts":"2024-09-06T18:30:26.448699Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became pre-candidate at term 1"}
	{"level":"info","ts":"2024-09-06T18:30:26.448799Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc received MsgPreVoteResp from aec36adc501070cc at term 1"}
	{"level":"info","ts":"2024-09-06T18:30:26.448905Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became candidate at term 2"}
	{"level":"info","ts":"2024-09-06T18:30:26.448979Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc received MsgVoteResp from aec36adc501070cc at term 2"}
	{"level":"info","ts":"2024-09-06T18:30:26.449071Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became leader at term 2"}
	{"level":"info","ts":"2024-09-06T18:30:26.449156Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: aec36adc501070cc elected leader aec36adc501070cc at term 2"}
	{"level":"info","ts":"2024-09-06T18:30:26.452570Z","caller":"etcdserver/server.go:2629","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-06T18:30:26.456673Z","caller":"etcdserver/server.go:2118","msg":"published local member to cluster through raft","local-member-id":"aec36adc501070cc","local-member-attributes":"{Name:addons-663433 ClientURLs:[https://192.168.49.2:2379]}","request-path":"/0/members/aec36adc501070cc/attributes","cluster-id":"fa54960ea34d58be","publish-timeout":"7s"}
	{"level":"info","ts":"2024-09-06T18:30:26.456951Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-06T18:30:26.458067Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-06T18:30:26.459262Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.49.2:2379"}
	{"level":"info","ts":"2024-09-06T18:30:26.460475Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-06T18:30:26.460936Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-09-06T18:30:26.461084Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-09-06T18:30:26.460546Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"fa54960ea34d58be","local-member-id":"aec36adc501070cc","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-06T18:30:26.464712Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-06T18:30:26.464888Z","caller":"etcdserver/server.go:2653","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-06T18:30:26.465766Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-06T18:30:26.469549Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	
	
	==> gcp-auth [3097820fe6b839752bd2c90b441e2bf982d77528da78783b9e17e154db05612d] <==
	2024/09/06 18:33:21 GCP Auth Webhook started!
	2024/09/06 18:33:39 Ready to marshal response ...
	2024/09/06 18:33:39 Ready to write response ...
	2024/09/06 18:33:40 Ready to marshal response ...
	2024/09/06 18:33:40 Ready to write response ...
	
	
	==> kernel <==
	 18:36:41 up 19 min,  0 users,  load average: 1.01, 0.95, 0.57
	Linux addons-663433 5.15.0-1068-aws #74~20.04.1-Ubuntu SMP Tue Aug 6 19:45:17 UTC 2024 aarch64 aarch64 aarch64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.4 LTS"
	
	
	==> kindnet [f05250959fc293eeb332b4699e53ccbb173edb7c17f70eecbcd54a9699aefa19] <==
	I0906 18:34:41.221804       1 main.go:299] handling current node
	I0906 18:34:51.224612       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0906 18:34:51.224675       1 main.go:299] handling current node
	I0906 18:35:01.230217       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0906 18:35:01.230253       1 main.go:299] handling current node
	I0906 18:35:11.227972       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0906 18:35:11.228019       1 main.go:299] handling current node
	I0906 18:35:21.228153       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0906 18:35:21.228196       1 main.go:299] handling current node
	I0906 18:35:31.228129       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0906 18:35:31.228392       1 main.go:299] handling current node
	I0906 18:35:41.220949       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0906 18:35:41.220996       1 main.go:299] handling current node
	I0906 18:35:51.223288       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0906 18:35:51.223420       1 main.go:299] handling current node
	I0906 18:36:01.221013       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0906 18:36:01.221051       1 main.go:299] handling current node
	I0906 18:36:11.222900       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0906 18:36:11.222940       1 main.go:299] handling current node
	I0906 18:36:21.228509       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0906 18:36:21.228542       1 main.go:299] handling current node
	I0906 18:36:31.229524       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0906 18:36:31.229560       1 main.go:299] handling current node
	I0906 18:36:41.220926       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0906 18:36:41.220967       1 main.go:299] handling current node
	
	
	==> kube-apiserver [1e5db3a1ca12eb0732d9b566d032f0a3d9477d70bfcb7e779ad4b563e3b3943d] <==
	W0906 18:31:51.049679       1 dispatcher.go:225] Failed calling webhook, failing closed mutatequeue.volcano.sh: failed calling webhook "mutatequeue.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/queues/mutate?timeout=10s": dial tcp 10.100.63.23:443: connect: connection refused
	W0906 18:31:52.101067       1 dispatcher.go:225] Failed calling webhook, failing closed mutatequeue.volcano.sh: failed calling webhook "mutatequeue.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/queues/mutate?timeout=10s": dial tcp 10.100.63.23:443: connect: connection refused
	W0906 18:31:53.184915       1 dispatcher.go:225] Failed calling webhook, failing closed mutatequeue.volcano.sh: failed calling webhook "mutatequeue.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/queues/mutate?timeout=10s": dial tcp 10.100.63.23:443: connect: connection refused
	W0906 18:31:54.079015       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.105.186.38:443: connect: connection refused
	E0906 18:31:54.079056       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.105.186.38:443: connect: connection refused" logger="UnhandledError"
	W0906 18:31:54.080914       1 dispatcher.go:225] Failed calling webhook, failing closed mutatepod.volcano.sh: failed calling webhook "mutatepod.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/pods/mutate?timeout=10s": dial tcp 10.100.63.23:443: connect: connection refused
	W0906 18:31:54.100746       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.105.186.38:443: connect: connection refused
	E0906 18:31:54.100785       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.105.186.38:443: connect: connection refused" logger="UnhandledError"
	W0906 18:31:54.102506       1 dispatcher.go:225] Failed calling webhook, failing closed mutatepod.volcano.sh: failed calling webhook "mutatepod.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/pods/mutate?timeout=10s": dial tcp 10.100.63.23:443: connect: connection refused
	W0906 18:31:54.219320       1 dispatcher.go:225] Failed calling webhook, failing closed mutatequeue.volcano.sh: failed calling webhook "mutatequeue.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/queues/mutate?timeout=10s": dial tcp 10.100.63.23:443: connect: connection refused
	W0906 18:31:55.274519       1 dispatcher.go:225] Failed calling webhook, failing closed mutatequeue.volcano.sh: failed calling webhook "mutatequeue.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/queues/mutate?timeout=10s": dial tcp 10.100.63.23:443: connect: connection refused
	W0906 18:31:56.354037       1 dispatcher.go:225] Failed calling webhook, failing closed mutatequeue.volcano.sh: failed calling webhook "mutatequeue.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/queues/mutate?timeout=10s": dial tcp 10.100.63.23:443: connect: connection refused
	W0906 18:31:57.454921       1 dispatcher.go:225] Failed calling webhook, failing closed mutatequeue.volcano.sh: failed calling webhook "mutatequeue.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/queues/mutate?timeout=10s": dial tcp 10.100.63.23:443: connect: connection refused
	W0906 18:31:58.516723       1 dispatcher.go:225] Failed calling webhook, failing closed mutatequeue.volcano.sh: failed calling webhook "mutatequeue.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/queues/mutate?timeout=10s": dial tcp 10.100.63.23:443: connect: connection refused
	W0906 18:31:59.534124       1 dispatcher.go:225] Failed calling webhook, failing closed mutatequeue.volcano.sh: failed calling webhook "mutatequeue.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/queues/mutate?timeout=10s": dial tcp 10.100.63.23:443: connect: connection refused
	W0906 18:32:00.636348       1 dispatcher.go:225] Failed calling webhook, failing closed mutatequeue.volcano.sh: failed calling webhook "mutatequeue.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/queues/mutate?timeout=10s": dial tcp 10.100.63.23:443: connect: connection refused
	W0906 18:32:01.723696       1 dispatcher.go:225] Failed calling webhook, failing closed mutatequeue.volcano.sh: failed calling webhook "mutatequeue.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/queues/mutate?timeout=10s": dial tcp 10.100.63.23:443: connect: connection refused
	W0906 18:32:14.038026       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.105.186.38:443: connect: connection refused
	E0906 18:32:14.038291       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.105.186.38:443: connect: connection refused" logger="UnhandledError"
	W0906 18:32:54.092186       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.105.186.38:443: connect: connection refused
	E0906 18:32:54.092231       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.105.186.38:443: connect: connection refused" logger="UnhandledError"
	W0906 18:32:54.111958       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.105.186.38:443: connect: connection refused
	E0906 18:32:54.112001       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.105.186.38:443: connect: connection refused" logger="UnhandledError"
	I0906 18:33:39.474508       1 controller.go:615] quota admission added evaluator for: jobs.batch.volcano.sh
	I0906 18:33:39.523434       1 controller.go:615] quota admission added evaluator for: podgroups.scheduling.volcano.sh
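
	Editor's note: the contrast above between "failing closed mutatequeue.volcano.sh" and "failing open gcp-auth-mutate.k8s.io" comes from each webhook's `failurePolicy`: `Fail` rejects requests while the webhook endpoint is unreachable (as it was during startup), `Ignore` admits them. A hedged sketch of the two settings using the upstream admission types — the webhook name here is illustrative, not copied from either addon's actual configuration:

```go
package main

import (
	"fmt"

	admissionv1 "k8s.io/api/admissionregistration/v1"
)

func main() {
	failClosed := admissionv1.Fail  // unreachable webhook => request rejected (volcano behavior above)
	failOpen := admissionv1.Ignore  // unreachable webhook => request admitted (gcp-auth behavior above)

	webhook := admissionv1.MutatingWebhook{
		Name:          "example.mutate.k8s.io", // illustrative name only
		FailurePolicy: &failClosed,
	}
	fmt.Println(webhook.Name, *webhook.FailurePolicy, failOpen)
}
```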
	
	
	==> kube-controller-manager [ba192ab2d9de552586ee6afee81104e1637d6b4262d54250a6eff72e7a21ad36] <==
	I0906 18:32:54.121105       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-create" delay="1s"
	I0906 18:32:54.125364       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-create" delay="1s"
	I0906 18:32:54.136304       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-create" delay="1s"
	I0906 18:32:54.136740       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-patch" delay="1s"
	I0906 18:32:54.151853       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-patch" delay="1s"
	I0906 18:32:54.156732       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-patch" delay="1s"
	I0906 18:32:54.165359       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-patch" delay="1s"
	I0906 18:32:55.177701       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-patch" delay="1s"
	I0906 18:32:55.193655       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-create" delay="1s"
	I0906 18:32:56.292742       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-patch" delay="1s"
	I0906 18:32:56.308347       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-create" delay="1s"
	I0906 18:32:57.299482       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-patch" delay="1s"
	I0906 18:32:57.308844       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-patch" delay="1s"
	I0906 18:32:57.316282       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-create" delay="1s"
	I0906 18:32:57.322657       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-patch" delay="1s"
	I0906 18:32:57.332924       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-create" delay="1s"
	I0906 18:32:57.336052       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-create" delay="1s"
	I0906 18:33:22.325375       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="gcp-auth/gcp-auth-89d5ffd79" duration="13.79161ms"
	I0906 18:33:22.325974       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="gcp-auth/gcp-auth-89d5ffd79" duration="43.866µs"
	I0906 18:33:27.042634       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-create" delay="0s"
	I0906 18:33:27.046907       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-patch" delay="0s"
	I0906 18:33:27.097568       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-create" delay="0s"
	I0906 18:33:27.098703       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-patch" delay="0s"
	I0906 18:33:36.078089       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="addons-663433"
	I0906 18:33:39.195260       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="volcano-system/volcano-admission-init" delay="0s"
	
	
	==> kube-proxy [76a7321f004530d3c73faa560da1ddd982a157def797898a5b06c3a582c83780] <==
	I0906 18:30:39.077340       1 server_linux.go:66] "Using iptables proxy"
	I0906 18:30:39.218810       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.49.2"]
	E0906 18:30:39.218877       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0906 18:30:39.266357       1 server.go:243] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0906 18:30:39.266410       1 server_linux.go:169] "Using iptables Proxier"
	I0906 18:30:39.269292       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0906 18:30:39.269943       1 server.go:483] "Version info" version="v1.31.0"
	I0906 18:30:39.269971       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0906 18:30:39.286053       1 config.go:197] "Starting service config controller"
	I0906 18:30:39.286089       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0906 18:30:39.286165       1 config.go:104] "Starting endpoint slice config controller"
	I0906 18:30:39.286176       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0906 18:30:39.288249       1 config.go:326] "Starting node config controller"
	I0906 18:30:39.288268       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0906 18:30:39.388559       1 shared_informer.go:320] Caches are synced for service config
	I0906 18:30:39.388628       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0906 18:30:39.388678       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [aedd1bd565597b0c8c8bfb6760ba0b5fa5f7460ae23ab80814169ab51116b39d] <==
	E0906 18:30:30.610462       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0906 18:30:30.609510       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0906 18:30:30.610691       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0906 18:30:30.609548       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0906 18:30:30.610900       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	E0906 18:30:30.611065       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0906 18:30:30.611557       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0906 18:30:30.611703       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0906 18:30:30.611861       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0906 18:30:30.611954       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0906 18:30:30.612128       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0906 18:30:30.612220       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
	W0906 18:30:30.612400       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0906 18:30:30.612554       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0906 18:30:30.612862       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0906 18:30:30.613175       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0906 18:30:30.613040       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0906 18:30:30.613136       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0906 18:30:30.613528       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	E0906 18:30:30.613627       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0906 18:30:30.613863       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0906 18:30:30.614004       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0906 18:30:30.615336       1 reflector.go:561] runtime/asm_arm64.s:1222: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0906 18:30:30.615610       1 reflector.go:158] "Unhandled Error" err="runtime/asm_arm64.s:1222: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	I0906 18:30:32.206785       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Sep 06 18:34:32 addons-663433 kubelet[1507]: E0906 18:34:32.517591    1507 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"gadget\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=gadget pod=gadget-c7f72_gadget(b9684e75-d070-484a-88cc-d6beddb698b3)\"" pod="gadget/gadget-c7f72" podUID="b9684e75-d070-484a-88cc-d6beddb698b3"
	Sep 06 18:34:39 addons-663433 kubelet[1507]: I0906 18:34:39.318743    1507 kubelet_pods.go:1007] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-proxy-mhbhh" secret="" err="secret \"gcp-auth\" not found"
	Sep 06 18:34:46 addons-663433 kubelet[1507]: I0906 18:34:46.318486    1507 kubelet_pods.go:1007] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/nvidia-device-plugin-daemonset-2qpzm" secret="" err="secret \"gcp-auth\" not found"
	Sep 06 18:34:47 addons-663433 kubelet[1507]: I0906 18:34:47.318944    1507 scope.go:117] "RemoveContainer" containerID="ab9291d384a4bf12ce5b77c0e45c05002fd947f6d0a8c7f9ccb90b431d18756e"
	Sep 06 18:34:47 addons-663433 kubelet[1507]: E0906 18:34:47.319144    1507 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"gadget\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=gadget pod=gadget-c7f72_gadget(b9684e75-d070-484a-88cc-d6beddb698b3)\"" pod="gadget/gadget-c7f72" podUID="b9684e75-d070-484a-88cc-d6beddb698b3"
	Sep 06 18:34:59 addons-663433 kubelet[1507]: I0906 18:34:59.318637    1507 kubelet_pods.go:1007] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-6fb4cdfc84-f8b5k" secret="" err="secret \"gcp-auth\" not found"
	Sep 06 18:35:01 addons-663433 kubelet[1507]: I0906 18:35:01.319219    1507 scope.go:117] "RemoveContainer" containerID="ab9291d384a4bf12ce5b77c0e45c05002fd947f6d0a8c7f9ccb90b431d18756e"
	Sep 06 18:35:01 addons-663433 kubelet[1507]: E0906 18:35:01.319849    1507 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"gadget\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=gadget pod=gadget-c7f72_gadget(b9684e75-d070-484a-88cc-d6beddb698b3)\"" pod="gadget/gadget-c7f72" podUID="b9684e75-d070-484a-88cc-d6beddb698b3"
	Sep 06 18:35:13 addons-663433 kubelet[1507]: I0906 18:35:13.318965    1507 scope.go:117] "RemoveContainer" containerID="ab9291d384a4bf12ce5b77c0e45c05002fd947f6d0a8c7f9ccb90b431d18756e"
	Sep 06 18:35:13 addons-663433 kubelet[1507]: E0906 18:35:13.319193    1507 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"gadget\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=gadget pod=gadget-c7f72_gadget(b9684e75-d070-484a-88cc-d6beddb698b3)\"" pod="gadget/gadget-c7f72" podUID="b9684e75-d070-484a-88cc-d6beddb698b3"
	Sep 06 18:35:26 addons-663433 kubelet[1507]: I0906 18:35:26.318883    1507 scope.go:117] "RemoveContainer" containerID="ab9291d384a4bf12ce5b77c0e45c05002fd947f6d0a8c7f9ccb90b431d18756e"
	Sep 06 18:35:26 addons-663433 kubelet[1507]: E0906 18:35:26.319050    1507 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"gadget\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=gadget pod=gadget-c7f72_gadget(b9684e75-d070-484a-88cc-d6beddb698b3)\"" pod="gadget/gadget-c7f72" podUID="b9684e75-d070-484a-88cc-d6beddb698b3"
	Sep 06 18:35:41 addons-663433 kubelet[1507]: I0906 18:35:41.318644    1507 scope.go:117] "RemoveContainer" containerID="ab9291d384a4bf12ce5b77c0e45c05002fd947f6d0a8c7f9ccb90b431d18756e"
	Sep 06 18:35:41 addons-663433 kubelet[1507]: E0906 18:35:41.319339    1507 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"gadget\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=gadget pod=gadget-c7f72_gadget(b9684e75-d070-484a-88cc-d6beddb698b3)\"" pod="gadget/gadget-c7f72" podUID="b9684e75-d070-484a-88cc-d6beddb698b3"
	Sep 06 18:35:47 addons-663433 kubelet[1507]: I0906 18:35:47.318906    1507 kubelet_pods.go:1007] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/nvidia-device-plugin-daemonset-2qpzm" secret="" err="secret \"gcp-auth\" not found"
	Sep 06 18:35:56 addons-663433 kubelet[1507]: I0906 18:35:56.319148    1507 scope.go:117] "RemoveContainer" containerID="ab9291d384a4bf12ce5b77c0e45c05002fd947f6d0a8c7f9ccb90b431d18756e"
	Sep 06 18:35:56 addons-663433 kubelet[1507]: E0906 18:35:56.319331    1507 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"gadget\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=gadget pod=gadget-c7f72_gadget(b9684e75-d070-484a-88cc-d6beddb698b3)\"" pod="gadget/gadget-c7f72" podUID="b9684e75-d070-484a-88cc-d6beddb698b3"
	Sep 06 18:36:02 addons-663433 kubelet[1507]: I0906 18:36:02.319755    1507 kubelet_pods.go:1007] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-proxy-mhbhh" secret="" err="secret \"gcp-auth\" not found"
	Sep 06 18:36:08 addons-663433 kubelet[1507]: I0906 18:36:08.320649    1507 scope.go:117] "RemoveContainer" containerID="ab9291d384a4bf12ce5b77c0e45c05002fd947f6d0a8c7f9ccb90b431d18756e"
	Sep 06 18:36:08 addons-663433 kubelet[1507]: E0906 18:36:08.320862    1507 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"gadget\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=gadget pod=gadget-c7f72_gadget(b9684e75-d070-484a-88cc-d6beddb698b3)\"" pod="gadget/gadget-c7f72" podUID="b9684e75-d070-484a-88cc-d6beddb698b3"
	Sep 06 18:36:20 addons-663433 kubelet[1507]: I0906 18:36:20.319500    1507 scope.go:117] "RemoveContainer" containerID="ab9291d384a4bf12ce5b77c0e45c05002fd947f6d0a8c7f9ccb90b431d18756e"
	Sep 06 18:36:20 addons-663433 kubelet[1507]: E0906 18:36:20.319691    1507 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"gadget\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=gadget pod=gadget-c7f72_gadget(b9684e75-d070-484a-88cc-d6beddb698b3)\"" pod="gadget/gadget-c7f72" podUID="b9684e75-d070-484a-88cc-d6beddb698b3"
	Sep 06 18:36:25 addons-663433 kubelet[1507]: I0906 18:36:25.318511    1507 kubelet_pods.go:1007] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-6fb4cdfc84-f8b5k" secret="" err="secret \"gcp-auth\" not found"
	Sep 06 18:36:34 addons-663433 kubelet[1507]: I0906 18:36:34.319205    1507 scope.go:117] "RemoveContainer" containerID="ab9291d384a4bf12ce5b77c0e45c05002fd947f6d0a8c7f9ccb90b431d18756e"
	Sep 06 18:36:34 addons-663433 kubelet[1507]: E0906 18:36:34.319413    1507 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"gadget\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=gadget pod=gadget-c7f72_gadget(b9684e75-d070-484a-88cc-d6beddb698b3)\"" pod="gadget/gadget-c7f72" podUID="b9684e75-d070-484a-88cc-d6beddb698b3"
	
	
	==> storage-provisioner [3aa94d7e74a8a3a9fb838c1947cc61330c121af9c99aefddca62c396012892a5] <==
	I0906 18:30:44.360705       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0906 18:30:44.410552       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0906 18:30:44.410631       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0906 18:30:44.431751       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0906 18:30:44.431922       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_addons-663433_eebfc44c-567f-43de-b03a-d6b2547cd2ef!
	I0906 18:30:44.439682       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"62b99d9c-3ac5-45c4-ba24-30209dff6c89", APIVersion:"v1", ResourceVersion:"589", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' addons-663433_eebfc44c-567f-43de-b03a-d6b2547cd2ef became leader
	I0906 18:30:44.534716       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_addons-663433_eebfc44c-567f-43de-b03a-d6b2547cd2ef!
	

-- /stdout --
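
Aside from the scheduling failure under test, the kubelet log above shows the inspektor-gadget pod gadget-c7f72 cycling through CrashLoopBackOff for the entire capture window. A minimal follow-up sketch, assuming the addons-663433 cluster from this run is still reachable, is to pull the output of the previously crashed container (pod and namespace names are taken from the log above):

    kubectl --context addons-663433 -n gadget logs gadget-c7f72 --previous
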
helpers_test.go:254: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p addons-663433 -n addons-663433
helpers_test.go:261: (dbg) Run:  kubectl --context addons-663433 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: ingress-nginx-admission-create-bdsm6 ingress-nginx-admission-patch-r8p5w test-job-nginx-0
helpers_test.go:274: ======> post-mortem[TestAddons/serial/Volcano]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context addons-663433 describe pod ingress-nginx-admission-create-bdsm6 ingress-nginx-admission-patch-r8p5w test-job-nginx-0
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context addons-663433 describe pod ingress-nginx-admission-create-bdsm6 ingress-nginx-admission-patch-r8p5w test-job-nginx-0: exit status 1 (95.840682ms)

** stderr ** 
	Error from server (NotFound): pods "ingress-nginx-admission-create-bdsm6" not found
	Error from server (NotFound): pods "ingress-nginx-admission-patch-r8p5w" not found
	Error from server (NotFound): pods "test-job-nginx-0" not found

** /stderr **
helpers_test.go:279: kubectl --context addons-663433 describe pod ingress-nginx-admission-create-bdsm6 ingress-nginx-admission-patch-r8p5w test-job-nginx-0: exit status 1
--- FAIL: TestAddons/serial/Volcano (200.09s)
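
The failure above is a scheduling problem rather than a Volcano bug: test-job-nginx-0 stayed Pending with "0/1 nodes are unavailable: 1 Insufficient cpu." A minimal triage sketch while the pod is still pending (standard kubectl commands; the jsonpath assumes a single-container vcjob as in testdata/vcjob.yaml, which may differ):

    # Compare the node's remaining allocatable CPU with what the job requests.
    kubectl --context addons-663433 describe node addons-663433 | grep -A 8 'Allocated resources'
    kubectl --context addons-663433 -n my-volcano get pod test-job-nginx-0 \
      -o jsonpath='{.spec.containers[0].resources.requests.cpu}'

If the request exceeds what is left on this 2-CPU node (NCPU:2 per the docker info captured later in this report), the fix is either a smaller cpu request in the vcjob or a larger node (for example, minikube start --cpus=4 on a bigger host).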

x
+
TestFunctional/parallel/License (0.18s)

=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

=== CONT  TestFunctional/parallel/License
functional_test.go:2288: (dbg) Run:  out/minikube-linux-arm64 license
functional_test.go:2288: (dbg) Non-zero exit: out/minikube-linux-arm64 license: exit status 40 (179.232839ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	X Exiting due to INET_LICENSES: Failed to download licenses: download request did not return a 200, received: 404
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_license_42713f820c0ac68901ecf7b12bfdf24c2cafe65d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
functional_test.go:2289: command "\n\n" failed: exit status 40
--- FAIL: TestFunctional/parallel/License (0.18s)
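
The INET_LICENSES reason above means the minikube license download target answered 404 instead of 200. A quick way to see which URL failed is to re-run with verbose logging and read the log file the error box points at (both paths are taken verbatim from the output above):

    out/minikube-linux-arm64 license --alsologtostderr
    cat /tmp/minikube_license_42713f820c0ac68901ecf7b12bfdf24c2cafe65d_0.log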


Test pass (298/328)

Order passed test Duration
3 TestDownloadOnly/v1.20.0/json-events 8.72
4 TestDownloadOnly/v1.20.0/preload-exists 0
8 TestDownloadOnly/v1.20.0/LogsDuration 0.07
9 TestDownloadOnly/v1.20.0/DeleteAll 0.2
10 TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds 0.13
12 TestDownloadOnly/v1.31.0/json-events 6.69
13 TestDownloadOnly/v1.31.0/preload-exists 0
17 TestDownloadOnly/v1.31.0/LogsDuration 0.08
18 TestDownloadOnly/v1.31.0/DeleteAll 0.21
19 TestDownloadOnly/v1.31.0/DeleteAlwaysSucceeds 0.12
21 TestBinaryMirror 0.56
25 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.09
26 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.1
27 TestAddons/Setup 220.89
31 TestAddons/serial/GCPAuth/Namespaces 0.18
33 TestAddons/parallel/Registry 16.4
34 TestAddons/parallel/Ingress 19.06
35 TestAddons/parallel/InspektorGadget 11.93
36 TestAddons/parallel/MetricsServer 5.81
39 TestAddons/parallel/CSI 58.68
40 TestAddons/parallel/Headlamp 15.81
41 TestAddons/parallel/CloudSpanner 6.61
42 TestAddons/parallel/LocalPath 51.61
43 TestAddons/parallel/NvidiaDevicePlugin 5.5
44 TestAddons/parallel/Yakd 11.75
45 TestAddons/StoppedEnableDisable 12.26
46 TestCertOptions 36.19
47 TestCertExpiration 228.18
49 TestForceSystemdFlag 36.77
50 TestForceSystemdEnv 39.58
51 TestDockerEnvContainerd 44.46
56 TestErrorSpam/setup 29.54
57 TestErrorSpam/start 0.81
58 TestErrorSpam/status 1.14
59 TestErrorSpam/pause 1.83
60 TestErrorSpam/unpause 1.76
61 TestErrorSpam/stop 1.43
64 TestFunctional/serial/CopySyncFile 0
65 TestFunctional/serial/StartWithProxy 54.07
66 TestFunctional/serial/AuditLog 0
67 TestFunctional/serial/SoftStart 5.94
68 TestFunctional/serial/KubeContext 0.06
69 TestFunctional/serial/KubectlGetPods 0.12
72 TestFunctional/serial/CacheCmd/cache/add_remote 4.19
73 TestFunctional/serial/CacheCmd/cache/add_local 1.25
74 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.06
75 TestFunctional/serial/CacheCmd/cache/list 0.05
76 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.3
77 TestFunctional/serial/CacheCmd/cache/cache_reload 2
78 TestFunctional/serial/CacheCmd/cache/delete 0.1
79 TestFunctional/serial/MinikubeKubectlCmd 0.15
80 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.14
81 TestFunctional/serial/ExtraConfig 40.91
82 TestFunctional/serial/ComponentHealth 0.1
83 TestFunctional/serial/LogsCmd 1.72
84 TestFunctional/serial/LogsFileCmd 1.79
85 TestFunctional/serial/InvalidService 4.21
87 TestFunctional/parallel/ConfigCmd 0.46
88 TestFunctional/parallel/DashboardCmd 10.65
89 TestFunctional/parallel/DryRun 0.44
90 TestFunctional/parallel/InternationalLanguage 0.17
91 TestFunctional/parallel/StatusCmd 1.21
95 TestFunctional/parallel/ServiceCmdConnect 9.7
96 TestFunctional/parallel/AddonsCmd 0.18
97 TestFunctional/parallel/PersistentVolumeClaim 25.07
99 TestFunctional/parallel/SSHCmd 0.68
100 TestFunctional/parallel/CpCmd 2.37
102 TestFunctional/parallel/FileSync 0.33
103 TestFunctional/parallel/CertSync 2.03
107 TestFunctional/parallel/NodeLabels 0.1
109 TestFunctional/parallel/NonActiveRuntimeDisabled 0.62
113 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.56
114 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0
116 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 9.52
117 TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP 0.09
118 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0
122 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.11
123 TestFunctional/parallel/ServiceCmd/DeployApp 6.22
124 TestFunctional/parallel/ProfileCmd/profile_not_create 0.48
125 TestFunctional/parallel/ServiceCmd/List 0.59
126 TestFunctional/parallel/ProfileCmd/profile_list 0.45
127 TestFunctional/parallel/ServiceCmd/JSONOutput 0.58
128 TestFunctional/parallel/ProfileCmd/profile_json_output 0.45
129 TestFunctional/parallel/ServiceCmd/HTTPS 0.59
130 TestFunctional/parallel/MountCmd/any-port 8.57
131 TestFunctional/parallel/ServiceCmd/Format 0.38
132 TestFunctional/parallel/ServiceCmd/URL 0.38
133 TestFunctional/parallel/MountCmd/specific-port 2.24
134 TestFunctional/parallel/MountCmd/VerifyCleanup 2.31
135 TestFunctional/parallel/Version/short 0.07
136 TestFunctional/parallel/Version/components 1.33
137 TestFunctional/parallel/ImageCommands/ImageListShort 0.25
138 TestFunctional/parallel/ImageCommands/ImageListTable 0.29
139 TestFunctional/parallel/ImageCommands/ImageListJson 0.3
140 TestFunctional/parallel/ImageCommands/ImageListYaml 0.3
141 TestFunctional/parallel/ImageCommands/ImageBuild 3.62
142 TestFunctional/parallel/ImageCommands/Setup 0.94
143 TestFunctional/parallel/UpdateContextCmd/no_changes 0.19
144 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.23
145 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.19
146 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 1.48
147 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 1.38
148 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 1.5
149 TestFunctional/parallel/ImageCommands/ImageSaveToFile 0.36
150 TestFunctional/parallel/ImageCommands/ImageRemove 0.44
151 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 0.67
152 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 0.39
153 TestFunctional/delete_echo-server_images 0.04
154 TestFunctional/delete_my-image_image 0.02
155 TestFunctional/delete_minikube_cached_images 0.01
159 TestMultiControlPlane/serial/StartCluster 121.48
160 TestMultiControlPlane/serial/DeployApp 33.27
161 TestMultiControlPlane/serial/PingHostFromPods 2.14
162 TestMultiControlPlane/serial/AddWorkerNode 21.76
163 TestMultiControlPlane/serial/NodeLabels 0.1
164 TestMultiControlPlane/serial/HAppyAfterClusterStart 0.76
165 TestMultiControlPlane/serial/CopyFile 18.93
166 TestMultiControlPlane/serial/StopSecondaryNode 12.88
167 TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop 0.57
168 TestMultiControlPlane/serial/RestartSecondaryNode 17.95
169 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart 0.87
170 TestMultiControlPlane/serial/RestartClusterKeepsNodes 143.85
171 TestMultiControlPlane/serial/DeleteSecondaryNode 11.12
172 TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete 0.51
173 TestMultiControlPlane/serial/StopCluster 36.04
174 TestMultiControlPlane/serial/RestartCluster 78.39
175 TestMultiControlPlane/serial/DegradedAfterClusterRestart 0.61
176 TestMultiControlPlane/serial/AddSecondaryNode 38.75
177 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd 0.79
181 TestJSONOutput/start/Command 50.52
182 TestJSONOutput/start/Audit 0
184 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
185 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
187 TestJSONOutput/pause/Command 0.77
188 TestJSONOutput/pause/Audit 0
190 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
191 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
193 TestJSONOutput/unpause/Command 0.69
194 TestJSONOutput/unpause/Audit 0
196 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
197 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
199 TestJSONOutput/stop/Command 5.78
200 TestJSONOutput/stop/Audit 0
202 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
203 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
204 TestErrorJSONOutput 0.21
206 TestKicCustomNetwork/create_custom_network 39.5
207 TestKicCustomNetwork/use_default_bridge_network 30.24
208 TestKicExistingNetwork 34.05
209 TestKicCustomSubnet 34.4
210 TestKicStaticIP 32.21
211 TestMainNoArgs 0.05
212 TestMinikubeProfile 71.76
215 TestMountStart/serial/StartWithMountFirst 9.78
216 TestMountStart/serial/VerifyMountFirst 0.25
217 TestMountStart/serial/StartWithMountSecond 6.27
218 TestMountStart/serial/VerifyMountSecond 0.32
219 TestMountStart/serial/DeleteFirst 1.63
220 TestMountStart/serial/VerifyMountPostDelete 0.26
221 TestMountStart/serial/Stop 1.2
222 TestMountStart/serial/RestartStopped 8.34
223 TestMountStart/serial/VerifyMountPostStop 0.25
226 TestMultiNode/serial/FreshStart2Nodes 68.75
227 TestMultiNode/serial/DeployApp2Nodes 14.65
228 TestMultiNode/serial/PingHostFrom2Pods 1.03
229 TestMultiNode/serial/AddNode 16.5
230 TestMultiNode/serial/MultiNodeLabels 0.09
231 TestMultiNode/serial/ProfileList 0.36
232 TestMultiNode/serial/CopyFile 9.75
233 TestMultiNode/serial/StopNode 2.84
234 TestMultiNode/serial/StartAfterStop 9.77
235 TestMultiNode/serial/RestartKeepsNodes 89.03
236 TestMultiNode/serial/DeleteNode 5.41
237 TestMultiNode/serial/StopMultiNode 24.39
238 TestMultiNode/serial/RestartMultiNode 48.39
239 TestMultiNode/serial/ValidateNameConflict 37.52
244 TestPreload 114.88
246 TestScheduledStopUnix 108.37
249 TestInsufficientStorage 10.5
250 TestRunningBinaryUpgrade 77.63
252 TestKubernetesUpgrade 105.04
253 TestMissingContainerUpgrade 189.05
255 TestNoKubernetes/serial/StartNoK8sWithVersion 0.08
256 TestNoKubernetes/serial/StartWithK8s 38.9
257 TestNoKubernetes/serial/StartWithStopK8s 18.35
258 TestNoKubernetes/serial/Start 9.38
259 TestNoKubernetes/serial/VerifyK8sNotRunning 0.34
260 TestNoKubernetes/serial/ProfileList 0.96
261 TestNoKubernetes/serial/Stop 1.21
262 TestNoKubernetes/serial/StartNoArgs 6.82
263 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.28
264 TestStoppedBinaryUpgrade/Setup 1.29
265 TestStoppedBinaryUpgrade/Upgrade 121.41
274 TestPause/serial/Start 75.53
275 TestStoppedBinaryUpgrade/MinikubeLogs 1.64
276 TestPause/serial/SecondStartNoReconfiguration 7.17
280 TestPause/serial/Pause 0.93
281 TestPause/serial/VerifyStatus 0.46
286 TestNetworkPlugins/group/false 5.09
287 TestPause/serial/Unpause 0.79
288 TestPause/serial/PauseAgain 1.13
289 TestPause/serial/DeletePaused 2.99
293 TestPause/serial/VerifyDeletedResources 0.15
295 TestStartStop/group/old-k8s-version/serial/FirstStart 171.34
297 TestStartStop/group/no-preload/serial/FirstStart 70.1
298 TestStartStop/group/old-k8s-version/serial/DeployApp 8.76
299 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 1.77
300 TestStartStop/group/old-k8s-version/serial/Stop 12.42
301 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.26
302 TestStartStop/group/old-k8s-version/serial/SecondStart 377.01
303 TestStartStop/group/no-preload/serial/DeployApp 9.39
304 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 1.12
305 TestStartStop/group/no-preload/serial/Stop 12.14
306 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.21
307 TestStartStop/group/no-preload/serial/SecondStart 267.18
308 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 6
309 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 5.12
310 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.29
311 TestStartStop/group/no-preload/serial/Pause 3.12
313 TestStartStop/group/embed-certs/serial/FirstStart 49.55
314 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 6.01
315 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 5.11
316 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.25
317 TestStartStop/group/old-k8s-version/serial/Pause 3.24
318 TestStartStop/group/embed-certs/serial/DeployApp 9.37
320 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 68.18
321 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 1.54
322 TestStartStop/group/embed-certs/serial/Stop 12.67
323 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.33
324 TestStartStop/group/embed-certs/serial/SecondStart 281.85
325 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 10.37
326 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 1.13
327 TestStartStop/group/default-k8s-diff-port/serial/Stop 12.17
328 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.18
329 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 270.54
330 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 6
331 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 5.16
332 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.23
333 TestStartStop/group/embed-certs/serial/Pause 3.04
335 TestStartStop/group/newest-cni/serial/FirstStart 42.65
336 TestStartStop/group/newest-cni/serial/DeployApp 0
337 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 1.48
338 TestStartStop/group/newest-cni/serial/Stop 1.38
339 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 6.01
340 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.26
341 TestStartStop/group/newest-cni/serial/SecondStart 20.33
342 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 5.15
343 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.34
344 TestStartStop/group/default-k8s-diff-port/serial/Pause 4.69
345 TestNetworkPlugins/group/auto/Start 58.61
346 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
347 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
348 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.31
349 TestStartStop/group/newest-cni/serial/Pause 3.61
350 TestNetworkPlugins/group/kindnet/Start 58.67
351 TestNetworkPlugins/group/auto/KubeletFlags 0.29
352 TestNetworkPlugins/group/auto/NetCatPod 10.27
353 TestNetworkPlugins/group/kindnet/ControllerPod 6.01
354 TestNetworkPlugins/group/auto/DNS 0.18
355 TestNetworkPlugins/group/auto/Localhost 0.16
356 TestNetworkPlugins/group/auto/HairPin 0.16
357 TestNetworkPlugins/group/kindnet/KubeletFlags 0.28
358 TestNetworkPlugins/group/kindnet/NetCatPod 10.27
359 TestNetworkPlugins/group/kindnet/DNS 0.24
360 TestNetworkPlugins/group/kindnet/Localhost 0.29
361 TestNetworkPlugins/group/kindnet/HairPin 0.27
362 TestNetworkPlugins/group/flannel/Start 58.47
363 TestNetworkPlugins/group/enable-default-cni/Start 79.26
364 TestNetworkPlugins/group/flannel/ControllerPod 6.01
365 TestNetworkPlugins/group/flannel/KubeletFlags 0.28
366 TestNetworkPlugins/group/flannel/NetCatPod 9.25
367 TestNetworkPlugins/group/flannel/DNS 0.2
368 TestNetworkPlugins/group/flannel/Localhost 0.16
369 TestNetworkPlugins/group/flannel/HairPin 0.2
370 TestNetworkPlugins/group/bridge/Start 86.31
371 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.35
372 TestNetworkPlugins/group/enable-default-cni/NetCatPod 11.32
373 TestNetworkPlugins/group/enable-default-cni/DNS 0.22
374 TestNetworkPlugins/group/enable-default-cni/Localhost 0.21
375 TestNetworkPlugins/group/enable-default-cni/HairPin 0.22
376 TestNetworkPlugins/group/calico/Start 67.22
377 TestNetworkPlugins/group/bridge/KubeletFlags 0.36
378 TestNetworkPlugins/group/bridge/NetCatPod 10.41
379 TestNetworkPlugins/group/bridge/DNS 0.18
380 TestNetworkPlugins/group/bridge/Localhost 0.15
381 TestNetworkPlugins/group/bridge/HairPin 0.17
382 TestNetworkPlugins/group/calico/ControllerPod 6.01
383 TestNetworkPlugins/group/calico/KubeletFlags 0.34
384 TestNetworkPlugins/group/calico/NetCatPod 10.41
385 TestNetworkPlugins/group/custom-flannel/Start 61.43
386 TestNetworkPlugins/group/calico/DNS 0.32
387 TestNetworkPlugins/group/calico/Localhost 0.2
388 TestNetworkPlugins/group/calico/HairPin 0.2
389 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.28
390 TestNetworkPlugins/group/custom-flannel/NetCatPod 8.26
391 TestNetworkPlugins/group/custom-flannel/DNS 0.21
392 TestNetworkPlugins/group/custom-flannel/Localhost 0.18
393 TestNetworkPlugins/group/custom-flannel/HairPin 0.14
x
+
TestDownloadOnly/v1.20.0/json-events (8.72s)

=== RUN   TestDownloadOnly/v1.20.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-arm64 start -o=json --download-only -p download-only-998007 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=containerd --driver=docker  --container-runtime=containerd
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-arm64 start -o=json --download-only -p download-only-998007 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=containerd --driver=docker  --container-runtime=containerd: (8.7237872s)
--- PASS: TestDownloadOnly/v1.20.0/json-events (8.72s)

x
+
TestDownloadOnly/v1.20.0/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.20.0/preload-exists
--- PASS: TestDownloadOnly/v1.20.0/preload-exists (0.00s)

x
+
TestDownloadOnly/v1.20.0/LogsDuration (0.07s)

=== RUN   TestDownloadOnly/v1.20.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-arm64 logs -p download-only-998007
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-arm64 logs -p download-only-998007: exit status 85 (70.218342ms)

-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-998007 | jenkins | v1.34.0 | 06 Sep 24 18:29 UTC |          |
	|         | -p download-only-998007        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |          |
	|         | --container-runtime=containerd |                      |         |         |                     |          |
	|         | --driver=docker                |                      |         |         |                     |          |
	|         | --container-runtime=containerd |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/06 18:29:24
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.22.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0906 18:29:24.379299    7652 out.go:345] Setting OutFile to fd 1 ...
	I0906 18:29:24.379472    7652 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0906 18:29:24.379483    7652 out.go:358] Setting ErrFile to fd 2...
	I0906 18:29:24.379488    7652 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0906 18:29:24.379744    7652 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19576-2243/.minikube/bin
	W0906 18:29:24.379880    7652 root.go:314] Error reading config file at /home/jenkins/minikube-integration/19576-2243/.minikube/config/config.json: open /home/jenkins/minikube-integration/19576-2243/.minikube/config/config.json: no such file or directory
	I0906 18:29:24.380297    7652 out.go:352] Setting JSON to true
	I0906 18:29:24.381202    7652 start.go:129] hostinfo: {"hostname":"ip-172-31-31-251","uptime":713,"bootTime":1725646652,"procs":148,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1068-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I0906 18:29:24.381285    7652 start.go:139] virtualization:  
	I0906 18:29:24.384142    7652 out.go:97] [download-only-998007] minikube v1.34.0 on Ubuntu 20.04 (arm64)
	W0906 18:29:24.384399    7652 preload.go:293] Failed to list preload files: open /home/jenkins/minikube-integration/19576-2243/.minikube/cache/preloaded-tarball: no such file or directory
	I0906 18:29:24.384462    7652 notify.go:220] Checking for updates...
	I0906 18:29:24.386862    7652 out.go:169] MINIKUBE_LOCATION=19576
	I0906 18:29:24.389091    7652 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0906 18:29:24.390709    7652 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/19576-2243/kubeconfig
	I0906 18:29:24.392591    7652 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/19576-2243/.minikube
	I0906 18:29:24.394658    7652 out.go:169] MINIKUBE_BIN=out/minikube-linux-arm64
	W0906 18:29:24.398100    7652 out.go:321] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0906 18:29:24.398399    7652 driver.go:394] Setting default libvirt URI to qemu:///system
	I0906 18:29:24.428568    7652 docker.go:123] docker version: linux-27.2.0:Docker Engine - Community
	I0906 18:29:24.428666    7652 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0906 18:29:24.779970    7652 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:29 OomKillDisable:true NGoroutines:52 SystemTime:2024-09-06 18:29:24.76948375 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1068-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:27.2.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:472731909fa34bd7bc9c087e4c27943f9835f111 Expected:472731909fa34bd7bc9c087e4c27943f9835f111} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.2]] Warnings:<nil>}}
	I0906 18:29:24.780075    7652 docker.go:318] overlay module found
	I0906 18:29:24.782546    7652 out.go:97] Using the docker driver based on user configuration
	I0906 18:29:24.782570    7652 start.go:297] selected driver: docker
	I0906 18:29:24.782577    7652 start.go:901] validating driver "docker" against <nil>
	I0906 18:29:24.782676    7652 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0906 18:29:24.840363    7652 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:29 OomKillDisable:true NGoroutines:52 SystemTime:2024-09-06 18:29:24.831096264 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1068-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:27.2.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:472731909fa34bd7bc9c087e4c27943f9835f111 Expected:472731909fa34bd7bc9c087e4c27943f9835f111} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.2]] Warnings:<nil>}}
	I0906 18:29:24.840542    7652 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0906 18:29:24.840836    7652 start_flags.go:393] Using suggested 2200MB memory alloc based on sys=7834MB, container=7834MB
	I0906 18:29:24.840994    7652 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0906 18:29:24.843028    7652 out.go:169] Using Docker driver with root privileges
	I0906 18:29:24.845074    7652 cni.go:84] Creating CNI manager for ""
	I0906 18:29:24.845098    7652 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I0906 18:29:24.845110    7652 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0906 18:29:24.845214    7652 start.go:340] cluster config:
	{Name:download-only-998007 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.45@sha256:81df288595202a317b1a4dc2506ca2e4ed5f22373c19a441b88cfbf4b9867c85 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:download-only-998007 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0906 18:29:24.847153    7652 out.go:97] Starting "download-only-998007" primary control-plane node in "download-only-998007" cluster
	I0906 18:29:24.847188    7652 cache.go:121] Beginning downloading kic base image for docker with containerd
	I0906 18:29:24.849121    7652 out.go:97] Pulling base image v0.0.45 ...
	I0906 18:29:24.849148    7652 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime containerd
	I0906 18:29:24.849304    7652 image.go:79] Checking for gcr.io/k8s-minikube/kicbase:v0.0.45@sha256:81df288595202a317b1a4dc2506ca2e4ed5f22373c19a441b88cfbf4b9867c85 in local docker daemon
	I0906 18:29:24.864560    7652 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase:v0.0.45@sha256:81df288595202a317b1a4dc2506ca2e4ed5f22373c19a441b88cfbf4b9867c85 to local cache
	I0906 18:29:24.864729    7652 image.go:63] Checking for gcr.io/k8s-minikube/kicbase:v0.0.45@sha256:81df288595202a317b1a4dc2506ca2e4ed5f22373c19a441b88cfbf4b9867c85 in local cache directory
	I0906 18:29:24.864824    7652 image.go:148] Writing gcr.io/k8s-minikube/kicbase:v0.0.45@sha256:81df288595202a317b1a4dc2506ca2e4ed5f22373c19a441b88cfbf4b9867c85 to local cache
	I0906 18:29:24.910441    7652 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-containerd-overlay2-arm64.tar.lz4
	I0906 18:29:24.910468    7652 cache.go:56] Caching tarball of preloaded images
	I0906 18:29:24.910612    7652 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime containerd
	I0906 18:29:24.912618    7652 out.go:97] Downloading Kubernetes v1.20.0 preload ...
	I0906 18:29:24.912640    7652 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.20.0-containerd-overlay2-arm64.tar.lz4 ...
	I0906 18:29:24.998473    7652 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-containerd-overlay2-arm64.tar.lz4?checksum=md5:7e3d48ccb9f143791669d02e14ce1643 -> /home/jenkins/minikube-integration/19576-2243/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-containerd-overlay2-arm64.tar.lz4
	
	
	* The control-plane node download-only-998007 host does not exist
	  To start a cluster, run: "minikube start -p download-only-998007"

-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.20.0/LogsDuration (0.07s)
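
The exit status 85 here is expected rather than a regression: on a download-only profile the control-plane host was never created (see the last lines of the log above), so minikube logs has nothing to collect, and the test records the failure (aaa_download_only_test.go:185) and still passes. The preload download embeds an md5 checksum in its URL, so a manual spot-check of the cached tarball, with the path and checksum taken from the log above, would be:

    md5sum /home/jenkins/minikube-integration/19576-2243/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-containerd-overlay2-arm64.tar.lz4
    # expected: 7e3d48ccb9f143791669d02e14ce1643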

x
+
TestDownloadOnly/v1.20.0/DeleteAll (0.2s)

=== RUN   TestDownloadOnly/v1.20.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-arm64 delete --all
--- PASS: TestDownloadOnly/v1.20.0/DeleteAll (0.20s)

x
+
TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.13s)

=== RUN   TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-arm64 delete -p download-only-998007
--- PASS: TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.13s)

x
+
TestDownloadOnly/v1.31.0/json-events (6.69s)

=== RUN   TestDownloadOnly/v1.31.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-arm64 start -o=json --download-only -p download-only-582754 --force --alsologtostderr --kubernetes-version=v1.31.0 --container-runtime=containerd --driver=docker  --container-runtime=containerd
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-arm64 start -o=json --download-only -p download-only-582754 --force --alsologtostderr --kubernetes-version=v1.31.0 --container-runtime=containerd --driver=docker  --container-runtime=containerd: (6.686842706s)
--- PASS: TestDownloadOnly/v1.31.0/json-events (6.69s)

x
+
TestDownloadOnly/v1.31.0/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.31.0/preload-exists
--- PASS: TestDownloadOnly/v1.31.0/preload-exists (0.00s)

x
+
TestDownloadOnly/v1.31.0/LogsDuration (0.08s)

=== RUN   TestDownloadOnly/v1.31.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-arm64 logs -p download-only-582754
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-arm64 logs -p download-only-582754: exit status 85 (75.144173ms)

-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only        | download-only-998007 | jenkins | v1.34.0 | 06 Sep 24 18:29 UTC |                     |
	|         | -p download-only-998007        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |                     |
	|         | --container-runtime=containerd |                      |         |         |                     |                     |
	|         | --driver=docker                |                      |         |         |                     |                     |
	|         | --container-runtime=containerd |                      |         |         |                     |                     |
	| delete  | --all                          | minikube             | jenkins | v1.34.0 | 06 Sep 24 18:29 UTC | 06 Sep 24 18:29 UTC |
	| delete  | -p download-only-998007        | download-only-998007 | jenkins | v1.34.0 | 06 Sep 24 18:29 UTC | 06 Sep 24 18:29 UTC |
	| start   | -o=json --download-only        | download-only-582754 | jenkins | v1.34.0 | 06 Sep 24 18:29 UTC |                     |
	|         | -p download-only-582754        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0   |                      |         |         |                     |                     |
	|         | --container-runtime=containerd |                      |         |         |                     |                     |
	|         | --driver=docker                |                      |         |         |                     |                     |
	|         | --container-runtime=containerd |                      |         |         |                     |                     |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/06 18:29:33
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.22.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0906 18:29:33.498862    7848 out.go:345] Setting OutFile to fd 1 ...
	I0906 18:29:33.498996    7848 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0906 18:29:33.499006    7848 out.go:358] Setting ErrFile to fd 2...
	I0906 18:29:33.499012    7848 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0906 18:29:33.499247    7848 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19576-2243/.minikube/bin
	I0906 18:29:33.499744    7848 out.go:352] Setting JSON to true
	I0906 18:29:33.500482    7848 start.go:129] hostinfo: {"hostname":"ip-172-31-31-251","uptime":722,"bootTime":1725646652,"procs":146,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1068-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I0906 18:29:33.500544    7848 start.go:139] virtualization:  
	I0906 18:29:33.502847    7848 out.go:97] [download-only-582754] minikube v1.34.0 on Ubuntu 20.04 (arm64)
	I0906 18:29:33.502998    7848 notify.go:220] Checking for updates...
	I0906 18:29:33.505313    7848 out.go:169] MINIKUBE_LOCATION=19576
	I0906 18:29:33.507222    7848 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0906 18:29:33.509044    7848 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/19576-2243/kubeconfig
	I0906 18:29:33.510891    7848 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/19576-2243/.minikube
	I0906 18:29:33.512535    7848 out.go:169] MINIKUBE_BIN=out/minikube-linux-arm64
	W0906 18:29:33.516206    7848 out.go:321] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0906 18:29:33.516499    7848 driver.go:394] Setting default libvirt URI to qemu:///system
	I0906 18:29:33.550372    7848 docker.go:123] docker version: linux-27.2.0:Docker Engine - Community
	I0906 18:29:33.550485    7848 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0906 18:29:33.618576    7848 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:29 OomKillDisable:true NGoroutines:45 SystemTime:2024-09-06 18:29:33.608831998 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1068-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:27.2.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:472731909fa34bd7bc9c087e4c27943f9835f111 Expected:472731909fa34bd7bc9c087e4c27943f9835f111} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.2]] Warnings:<nil>}}
	I0906 18:29:33.618691    7848 docker.go:318] overlay module found
	I0906 18:29:33.621017    7848 out.go:97] Using the docker driver based on user configuration
	I0906 18:29:33.621045    7848 start.go:297] selected driver: docker
	I0906 18:29:33.621053    7848 start.go:901] validating driver "docker" against <nil>
	I0906 18:29:33.621156    7848 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0906 18:29:33.672844    7848 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:29 OomKillDisable:true NGoroutines:45 SystemTime:2024-09-06 18:29:33.664095469 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1068-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:27.2.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:472731909fa34bd7bc9c087e4c27943f9835f111 Expected:472731909fa34bd7bc9c087e4c27943f9835f111} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.2]] Warnings:<nil>}}
	I0906 18:29:33.673007    7848 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0906 18:29:33.673291    7848 start_flags.go:393] Using suggested 2200MB memory alloc based on sys=7834MB, container=7834MB
	I0906 18:29:33.673447    7848 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0906 18:29:33.676006    7848 out.go:169] Using Docker driver with root privileges
	I0906 18:29:33.678044    7848 cni.go:84] Creating CNI manager for ""
	I0906 18:29:33.678064    7848 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I0906 18:29:33.678075    7848 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0906 18:29:33.678145    7848 start.go:340] cluster config:
	{Name:download-only-582754 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.45@sha256:81df288595202a317b1a4dc2506ca2e4ed5f22373c19a441b88cfbf4b9867c85 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:download-only-582754 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0906 18:29:33.680176    7848 out.go:97] Starting "download-only-582754" primary control-plane node in "download-only-582754" cluster
	I0906 18:29:33.680199    7848 cache.go:121] Beginning downloading kic base image for docker with containerd
	I0906 18:29:33.682127    7848 out.go:97] Pulling base image v0.0.45 ...
	I0906 18:29:33.682170    7848 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime containerd
	I0906 18:29:33.682197    7848 image.go:79] Checking for gcr.io/k8s-minikube/kicbase:v0.0.45@sha256:81df288595202a317b1a4dc2506ca2e4ed5f22373c19a441b88cfbf4b9867c85 in local docker daemon
	I0906 18:29:33.697289    7848 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase:v0.0.45@sha256:81df288595202a317b1a4dc2506ca2e4ed5f22373c19a441b88cfbf4b9867c85 to local cache
	I0906 18:29:33.697426    7848 image.go:63] Checking for gcr.io/k8s-minikube/kicbase:v0.0.45@sha256:81df288595202a317b1a4dc2506ca2e4ed5f22373c19a441b88cfbf4b9867c85 in local cache directory
	I0906 18:29:33.697447    7848 image.go:66] Found gcr.io/k8s-minikube/kicbase:v0.0.45@sha256:81df288595202a317b1a4dc2506ca2e4ed5f22373c19a441b88cfbf4b9867c85 in local cache directory, skipping pull
	I0906 18:29:33.697457    7848 image.go:135] gcr.io/k8s-minikube/kicbase:v0.0.45@sha256:81df288595202a317b1a4dc2506ca2e4ed5f22373c19a441b88cfbf4b9867c85 exists in cache, skipping pull
	I0906 18:29:33.697465    7848 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase:v0.0.45@sha256:81df288595202a317b1a4dc2506ca2e4ed5f22373c19a441b88cfbf4b9867c85 as a tarball
	I0906 18:29:33.736833    7848 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.31.0/preloaded-images-k8s-v18-v1.31.0-containerd-overlay2-arm64.tar.lz4
	I0906 18:29:33.736877    7848 cache.go:56] Caching tarball of preloaded images
	I0906 18:29:33.737042    7848 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime containerd
	I0906 18:29:33.739720    7848 out.go:97] Downloading Kubernetes v1.31.0 preload ...
	I0906 18:29:33.739747    7848 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.31.0-containerd-overlay2-arm64.tar.lz4 ...
	I0906 18:29:33.826870    7848 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.31.0/preloaded-images-k8s-v18-v1.31.0-containerd-overlay2-arm64.tar.lz4?checksum=md5:ea65ad5fd42227e06b9323ff45647208 -> /home/jenkins/minikube-integration/19576-2243/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-containerd-overlay2-arm64.tar.lz4
	I0906 18:29:38.468107    7848 preload.go:247] saving checksum for preloaded-images-k8s-v18-v1.31.0-containerd-overlay2-arm64.tar.lz4 ...
	I0906 18:29:38.468220    7848 preload.go:254] verifying checksum of /home/jenkins/minikube-integration/19576-2243/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-containerd-overlay2-arm64.tar.lz4 ...
	
	
	* The control-plane node download-only-582754 host does not exist
	  To start a cluster, run: "minikube start -p download-only-582754"
-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.31.0/LogsDuration (0.08s)
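
The preload step above downloads the tarball with an md5 digest embedded in the URL's ?checksum= query parameter and verifies it after saving. A minimal sketch of redoing that check by hand, reusing the digest and cache path from the log:

    # hedged sketch: re-verify the cached preload tarball against the md5 from the download URL
    echo "ea65ad5fd42227e06b9323ff45647208  /home/jenkins/minikube-integration/19576-2243/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-containerd-overlay2-arm64.tar.lz4" | md5sum -c -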

TestDownloadOnly/v1.31.0/DeleteAll (0.21s)

=== RUN   TestDownloadOnly/v1.31.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-arm64 delete --all
--- PASS: TestDownloadOnly/v1.31.0/DeleteAll (0.21s)

TestDownloadOnly/v1.31.0/DeleteAlwaysSucceeds (0.12s)

=== RUN   TestDownloadOnly/v1.31.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-arm64 delete -p download-only-582754
--- PASS: TestDownloadOnly/v1.31.0/DeleteAlwaysSucceeds (0.12s)

TestBinaryMirror (0.56s)

=== RUN   TestBinaryMirror
aaa_download_only_test.go:314: (dbg) Run:  out/minikube-linux-arm64 start --download-only -p binary-mirror-873447 --alsologtostderr --binary-mirror http://127.0.0.1:36941 --driver=docker  --container-runtime=containerd
helpers_test.go:175: Cleaning up "binary-mirror-873447" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p binary-mirror-873447
--- PASS: TestBinaryMirror (0.56s)

TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.09s)

=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:1037: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p addons-663433
addons_test.go:1037: (dbg) Non-zero exit: out/minikube-linux-arm64 addons enable dashboard -p addons-663433: exit status 85 (84.784184ms)
-- stdout --
	* Profile "addons-663433" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-663433"
-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.09s)
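
The assertion here is only about the failure mode: enabling an addon against a profile that does not exist must fail with exit status 85 and the "Profile ... not found" hint. A hedged sketch of the same contract check in shell, with the profile name taken from the log:

    out/minikube-linux-arm64 addons enable dashboard -p addons-663433
    [ $? -eq 85 ] && echo "got the expected exit status 85"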

TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.1s)

=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:1048: (dbg) Run:  out/minikube-linux-arm64 addons disable dashboard -p addons-663433
addons_test.go:1048: (dbg) Non-zero exit: out/minikube-linux-arm64 addons disable dashboard -p addons-663433: exit status 85 (97.264226ms)
-- stdout --
	* Profile "addons-663433" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-663433"
-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.10s)

TestAddons/Setup (220.89s)

=== RUN   TestAddons/Setup
addons_test.go:110: (dbg) Run:  out/minikube-linux-arm64 start -p addons-663433 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --addons=volcano --driver=docker  --container-runtime=containerd --addons=ingress --addons=ingress-dns
addons_test.go:110: (dbg) Done: out/minikube-linux-arm64 start -p addons-663433 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --addons=volcano --driver=docker  --container-runtime=containerd --addons=ingress --addons=ingress-dns: (3m40.893734896s)
--- PASS: TestAddons/Setup (220.89s)

TestAddons/serial/GCPAuth/Namespaces (0.18s)

=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:656: (dbg) Run:  kubectl --context addons-663433 create ns new-namespace
addons_test.go:670: (dbg) Run:  kubectl --context addons-663433 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.18s)

TestAddons/parallel/Registry (16.4s)

=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry
=== CONT  TestAddons/parallel/Registry
addons_test.go:332: registry stabilized in 2.043079ms
addons_test.go:334: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-6fb4cdfc84-f8b5k" [26e1baf1-1b46-42bf-a4e7-b02a3ee2ca41] Running
addons_test.go:334: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 6.003746846s
addons_test.go:337: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-proxy-mhbhh" [9fe29033-57ef-4d7d-917e-044ec00706a3] Running
addons_test.go:337: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.003639765s
addons_test.go:342: (dbg) Run:  kubectl --context addons-663433 delete po -l run=registry-test --now
addons_test.go:347: (dbg) Run:  kubectl --context addons-663433 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:347: (dbg) Done: kubectl --context addons-663433 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (4.141239899s)
addons_test.go:361: (dbg) Run:  out/minikube-linux-arm64 -p addons-663433 ip
2024/09/06 18:37:17 [DEBUG] GET http://192.168.49.2:5000
addons_test.go:390: (dbg) Run:  out/minikube-linux-arm64 -p addons-663433 addons disable registry --alsologtostderr -v=1
--- PASS: TestAddons/parallel/Registry (16.40s)
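
Besides the in-cluster wget probe, the test fetches the registry through the node IP (the DEBUG GET against 192.168.49.2:5000 above). A hedged host-side equivalent; the /v2/ path is an assumption based on the standard Docker registry HTTP API, not something this test itself requests:

    curl -sSf "http://$(out/minikube-linux-arm64 -p addons-663433 ip):5000/v2/" && echo "registry reachable"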

TestAddons/parallel/Ingress (19.06s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress
=== CONT  TestAddons/parallel/Ingress
addons_test.go:209: (dbg) Run:  kubectl --context addons-663433 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:234: (dbg) Run:  kubectl --context addons-663433 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:247: (dbg) Run:  kubectl --context addons-663433 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [749dd0ff-8d0c-4149-b744-2c61d33576e1] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [749dd0ff-8d0c-4149-b744-2c61d33576e1] Running
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 8.065107626s
addons_test.go:264: (dbg) Run:  out/minikube-linux-arm64 -p addons-663433 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:288: (dbg) Run:  kubectl --context addons-663433 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:293: (dbg) Run:  out/minikube-linux-arm64 -p addons-663433 ip
addons_test.go:299: (dbg) Run:  nslookup hello-john.test 192.168.49.2
addons_test.go:308: (dbg) Run:  out/minikube-linux-arm64 -p addons-663433 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:308: (dbg) Done: out/minikube-linux-arm64 -p addons-663433 addons disable ingress-dns --alsologtostderr -v=1: (1.154950124s)
addons_test.go:313: (dbg) Run:  out/minikube-linux-arm64 -p addons-663433 addons disable ingress --alsologtostderr -v=1
addons_test.go:313: (dbg) Done: out/minikube-linux-arm64 -p addons-663433 addons disable ingress --alsologtostderr -v=1: (7.904300533s)
--- PASS: TestAddons/parallel/Ingress (19.06s)
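
Both ingress checks can be replayed by hand: curl the controller with a spoofed Host header, then resolve the test record against the ingress-dns responder at the node IP (hostname and IP as logged above):

    out/minikube-linux-arm64 -p addons-663433 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
    nslookup hello-john.test 192.168.49.2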

TestAddons/parallel/InspektorGadget (11.93s)

=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget
=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:848: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:344: "gadget-c7f72" [b9684e75-d070-484a-88cc-d6beddb698b3] Running / Ready:ContainersNotReady (containers with unready status: [gadget]) / ContainersReady:ContainersNotReady (containers with unready status: [gadget])
addons_test.go:848: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 6.00708801s
addons_test.go:851: (dbg) Run:  out/minikube-linux-arm64 addons disable inspektor-gadget -p addons-663433
addons_test.go:851: (dbg) Done: out/minikube-linux-arm64 addons disable inspektor-gadget -p addons-663433: (5.925243857s)
--- PASS: TestAddons/parallel/InspektorGadget (11.93s)

TestAddons/parallel/MetricsServer (5.81s)

=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer
=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:409: metrics-server stabilized in 4.138791ms
addons_test.go:411: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:344: "metrics-server-84c5f94fbc-8t4sb" [7f4455d9-98c4-4e73-a397-db01404c289b] Running
addons_test.go:411: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 5.005082262s
addons_test.go:417: (dbg) Run:  kubectl --context addons-663433 top pods -n kube-system
addons_test.go:434: (dbg) Run:  out/minikube-linux-arm64 -p addons-663433 addons disable metrics-server --alsologtostderr -v=1
--- PASS: TestAddons/parallel/MetricsServer (5.81s)

TestAddons/parallel/CSI (58.68s)

=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI
=== CONT  TestAddons/parallel/CSI
addons_test.go:567: csi-hostpath-driver pods stabilized in 8.305375ms
addons_test.go:570: (dbg) Run:  kubectl --context addons-663433 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:575: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-663433 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-663433 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-663433 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-663433 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-663433 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-663433 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-663433 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-663433 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-663433 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-663433 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-663433 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-663433 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-663433 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-663433 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:580: (dbg) Run:  kubectl --context addons-663433 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:585: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:344: "task-pv-pod" [dd874aa1-fdcc-43ab-92fe-84fabad88d05] Pending
helpers_test.go:344: "task-pv-pod" [dd874aa1-fdcc-43ab-92fe-84fabad88d05] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod" [dd874aa1-fdcc-43ab-92fe-84fabad88d05] Running
addons_test.go:585: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 11.003799339s
addons_test.go:590: (dbg) Run:  kubectl --context addons-663433 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:595: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:419: (dbg) Run:  kubectl --context addons-663433 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Run:  kubectl --context addons-663433 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:600: (dbg) Run:  kubectl --context addons-663433 delete pod task-pv-pod
addons_test.go:606: (dbg) Run:  kubectl --context addons-663433 delete pvc hpvc
addons_test.go:612: (dbg) Run:  kubectl --context addons-663433 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:617: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-663433 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-663433 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-663433 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-663433 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-663433 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-663433 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-663433 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-663433 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-663433 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-663433 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-663433 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-663433 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-663433 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-663433 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-663433 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-663433 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:622: (dbg) Run:  kubectl --context addons-663433 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:627: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:344: "task-pv-pod-restore" [838f498d-e5b7-419f-887c-43b0c55390b9] Pending
helpers_test.go:344: "task-pv-pod-restore" [838f498d-e5b7-419f-887c-43b0c55390b9] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod-restore" [838f498d-e5b7-419f-887c-43b0c55390b9] Running
addons_test.go:627: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 8.003747218s
addons_test.go:632: (dbg) Run:  kubectl --context addons-663433 delete pod task-pv-pod-restore
addons_test.go:636: (dbg) Run:  kubectl --context addons-663433 delete pvc hpvc-restore
addons_test.go:640: (dbg) Run:  kubectl --context addons-663433 delete volumesnapshot new-snapshot-demo
addons_test.go:644: (dbg) Run:  out/minikube-linux-arm64 -p addons-663433 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:644: (dbg) Done: out/minikube-linux-arm64 -p addons-663433 addons disable csi-hostpath-driver --alsologtostderr -v=1: (6.759904523s)
addons_test.go:648: (dbg) Run:  out/minikube-linux-arm64 -p addons-663433 addons disable volumesnapshots --alsologtostderr -v=1
addons_test.go:648: (dbg) Done: out/minikube-linux-arm64 -p addons-663433 addons disable volumesnapshots --alsologtostderr -v=1: (1.195721228s)
--- PASS: TestAddons/parallel/CSI (58.68s)
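
The helper above polls the PVC phase with repeated get calls; the same condition can be expressed as a single kubectl wait. A sketch, assuming a kubectl recent enough (v1.23+) to support --for=jsonpath:

    kubectl --context addons-663433 wait --for=jsonpath='{.status.phase}'=Bound pvc/hpvc --timeout=6m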

TestAddons/parallel/Headlamp (15.81s)

=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp
=== CONT  TestAddons/parallel/Headlamp
addons_test.go:830: (dbg) Run:  out/minikube-linux-arm64 addons enable headlamp -p addons-663433 --alsologtostderr -v=1
addons_test.go:835: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:344: "headlamp-57fb76fcdb-9jlrv" [20e49ebb-b325-44d8-96c0-eae970ffb28c] Pending
helpers_test.go:344: "headlamp-57fb76fcdb-9jlrv" [20e49ebb-b325-44d8-96c0-eae970ffb28c] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:344: "headlamp-57fb76fcdb-9jlrv" [20e49ebb-b325-44d8-96c0-eae970ffb28c] Running
addons_test.go:835: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 9.006046842s
addons_test.go:839: (dbg) Run:  out/minikube-linux-arm64 -p addons-663433 addons disable headlamp --alsologtostderr -v=1
addons_test.go:839: (dbg) Done: out/minikube-linux-arm64 -p addons-663433 addons disable headlamp --alsologtostderr -v=1: (5.81058538s)
--- PASS: TestAddons/parallel/Headlamp (15.81s)

TestAddons/parallel/CloudSpanner (6.61s)

=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner
=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:867: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:344: "cloud-spanner-emulator-769b77f747-snbcd" [2042c5dc-c8ed-45c6-b1ae-9c5dd9193fd6] Running
addons_test.go:867: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 6.003449807s
addons_test.go:870: (dbg) Run:  out/minikube-linux-arm64 addons disable cloud-spanner -p addons-663433
--- PASS: TestAddons/parallel/CloudSpanner (6.61s)

TestAddons/parallel/LocalPath (51.61s)

=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath
=== CONT  TestAddons/parallel/LocalPath
addons_test.go:982: (dbg) Run:  kubectl --context addons-663433 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:988: (dbg) Run:  kubectl --context addons-663433 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:992: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-663433 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-663433 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-663433 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-663433 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-663433 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:995: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:344: "test-local-path" [7c8c2379-f90f-4d3f-b1e1-ee914f05ad21] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "test-local-path" [7c8c2379-f90f-4d3f-b1e1-ee914f05ad21] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "test-local-path" [7c8c2379-f90f-4d3f-b1e1-ee914f05ad21] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:995: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 3.00367662s
addons_test.go:1000: (dbg) Run:  kubectl --context addons-663433 get pvc test-pvc -o=json
addons_test.go:1009: (dbg) Run:  out/minikube-linux-arm64 -p addons-663433 ssh "cat /opt/local-path-provisioner/pvc-23b76a52-13ae-4446-a9d7-87ae664606cc_default_test-pvc/file1"
addons_test.go:1021: (dbg) Run:  kubectl --context addons-663433 delete pod test-local-path
addons_test.go:1025: (dbg) Run:  kubectl --context addons-663433 delete pvc test-pvc
addons_test.go:1029: (dbg) Run:  out/minikube-linux-arm64 -p addons-663433 addons disable storage-provisioner-rancher --alsologtostderr -v=1
addons_test.go:1029: (dbg) Done: out/minikube-linux-arm64 -p addons-663433 addons disable storage-provisioner-rancher --alsologtostderr -v=1: (43.476043628s)
--- PASS: TestAddons/parallel/LocalPath (51.61s)

TestAddons/parallel/NvidiaDevicePlugin (5.5s)

=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin
=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:1061: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:344: "nvidia-device-plugin-daemonset-2qpzm" [7d6ff840-2efe-49b7-a74d-10e6df066685] Running
addons_test.go:1061: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 5.004369823s
addons_test.go:1064: (dbg) Run:  out/minikube-linux-arm64 addons disable nvidia-device-plugin -p addons-663433
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (5.50s)

TestAddons/parallel/Yakd (11.75s)

=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd
=== CONT  TestAddons/parallel/Yakd
addons_test.go:1072: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:344: "yakd-dashboard-67d98fc6b-ggxs9" [f9b9421e-cc7f-4084-8dea-b94bc71d7935] Running
addons_test.go:1072: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 6.003383184s
addons_test.go:1076: (dbg) Run:  out/minikube-linux-arm64 -p addons-663433 addons disable yakd --alsologtostderr -v=1
addons_test.go:1076: (dbg) Done: out/minikube-linux-arm64 -p addons-663433 addons disable yakd --alsologtostderr -v=1: (5.743763849s)
--- PASS: TestAddons/parallel/Yakd (11.75s)

TestAddons/StoppedEnableDisable (12.26s)

=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:174: (dbg) Run:  out/minikube-linux-arm64 stop -p addons-663433
addons_test.go:174: (dbg) Done: out/minikube-linux-arm64 stop -p addons-663433: (12.014105694s)
addons_test.go:178: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p addons-663433
addons_test.go:182: (dbg) Run:  out/minikube-linux-arm64 addons disable dashboard -p addons-663433
addons_test.go:187: (dbg) Run:  out/minikube-linux-arm64 addons disable gvisor -p addons-663433
--- PASS: TestAddons/StoppedEnableDisable (12.26s)

TestCertOptions (36.19s)

=== RUN   TestCertOptions
=== PAUSE TestCertOptions
=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-arm64 start -p cert-options-592457 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=containerd
cert_options_test.go:49: (dbg) Done: out/minikube-linux-arm64 start -p cert-options-592457 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=containerd: (33.456217232s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-arm64 -p cert-options-592457 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-592457 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-arm64 ssh -p cert-options-592457 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:175: Cleaning up "cert-options-592457" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p cert-options-592457
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p cert-options-592457: (2.039094112s)
--- PASS: TestCertOptions (36.19s)
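
To confirm that the extra --apiserver-ips/--apiserver-names and the custom port actually landed in the serving certificate, the openssl output above can be filtered for the SAN block (a sketch using the same in-node certificate path):

    out/minikube-linux-arm64 -p cert-options-592457 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt" | grep -A1 'Subject Alternative Name'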

TestCertExpiration (228.18s)

=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration
=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-arm64 start -p cert-expiration-616593 --memory=2048 --cert-expiration=3m --driver=docker  --container-runtime=containerd
cert_options_test.go:123: (dbg) Done: out/minikube-linux-arm64 start -p cert-expiration-616593 --memory=2048 --cert-expiration=3m --driver=docker  --container-runtime=containerd: (38.989139619s)
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-arm64 start -p cert-expiration-616593 --memory=2048 --cert-expiration=8760h --driver=docker  --container-runtime=containerd
cert_options_test.go:131: (dbg) Done: out/minikube-linux-arm64 start -p cert-expiration-616593 --memory=2048 --cert-expiration=8760h --driver=docker  --container-runtime=containerd: (6.865828072s)
helpers_test.go:175: Cleaning up "cert-expiration-616593" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p cert-expiration-616593
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p cert-expiration-616593: (2.323980135s)
--- PASS: TestCertExpiration (228.18s)

TestForceSystemdFlag (36.77s)

=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag
=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-arm64 start -p force-systemd-flag-587442 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd
docker_test.go:91: (dbg) Done: out/minikube-linux-arm64 start -p force-systemd-flag-587442 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd: (34.090813703s)
docker_test.go:121: (dbg) Run:  out/minikube-linux-arm64 -p force-systemd-flag-587442 ssh "cat /etc/containerd/config.toml"
helpers_test.go:175: Cleaning up "force-systemd-flag-587442" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p force-systemd-flag-587442
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p force-systemd-flag-587442: (2.252802902s)
--- PASS: TestForceSystemdFlag (36.77s)
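
The --force-systemd assertion reduces to one line of containerd's config: the runc option selecting the systemd cgroup driver. A hedged one-liner for the same check (SystemdCgroup is containerd's name for that knob; the grep target is an assumption, the cat command is the one the test runs):

    out/minikube-linux-arm64 -p force-systemd-flag-587442 ssh "cat /etc/containerd/config.toml" | grep SystemdCgroup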

TestForceSystemdEnv (39.58s)

=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv
=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-arm64 start -p force-systemd-env-198545 --memory=2048 --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd
docker_test.go:155: (dbg) Done: out/minikube-linux-arm64 start -p force-systemd-env-198545 --memory=2048 --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd: (36.829455695s)
docker_test.go:121: (dbg) Run:  out/minikube-linux-arm64 -p force-systemd-env-198545 ssh "cat /etc/containerd/config.toml"
helpers_test.go:175: Cleaning up "force-systemd-env-198545" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p force-systemd-env-198545
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p force-systemd-env-198545: (2.278942874s)
--- PASS: TestForceSystemdEnv (39.58s)

TestDockerEnvContainerd (44.46s)

=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with containerd true linux arm64
docker_test.go:181: (dbg) Run:  out/minikube-linux-arm64 start -p dockerenv-748041 --driver=docker  --container-runtime=containerd
docker_test.go:181: (dbg) Done: out/minikube-linux-arm64 start -p dockerenv-748041 --driver=docker  --container-runtime=containerd: (28.876001265s)
docker_test.go:189: (dbg) Run:  /bin/bash -c "out/minikube-linux-arm64 docker-env --ssh-host --ssh-add -p dockerenv-748041"
docker_test.go:220: (dbg) Run:  /bin/bash -c "SSH_AUTH_SOCK="/tmp/ssh-PWZBySHBx3gh/agent.27232" SSH_AGENT_PID="27233" DOCKER_HOST=ssh://docker@127.0.0.1:32773 docker version"
docker_test.go:243: (dbg) Run:  /bin/bash -c "SSH_AUTH_SOCK="/tmp/ssh-PWZBySHBx3gh/agent.27232" SSH_AGENT_PID="27233" DOCKER_HOST=ssh://docker@127.0.0.1:32773 DOCKER_BUILDKIT=0 docker build -t local/minikube-dockerenv-containerd-test:latest testdata/docker-env"
docker_test.go:243: (dbg) Done: /bin/bash -c "SSH_AUTH_SOCK="/tmp/ssh-PWZBySHBx3gh/agent.27232" SSH_AGENT_PID="27233" DOCKER_HOST=ssh://docker@127.0.0.1:32773 DOCKER_BUILDKIT=0 docker build -t local/minikube-dockerenv-containerd-test:latest testdata/docker-env": (1.106958853s)
docker_test.go:250: (dbg) Run:  /bin/bash -c "SSH_AUTH_SOCK="/tmp/ssh-PWZBySHBx3gh/agent.27232" SSH_AGENT_PID="27233" DOCKER_HOST=ssh://docker@127.0.0.1:32773 docker image ls"
docker_test.go:250: (dbg) Done: /bin/bash -c "SSH_AUTH_SOCK="/tmp/ssh-PWZBySHBx3gh/agent.27232" SSH_AGENT_PID="27233" DOCKER_HOST=ssh://docker@127.0.0.1:32773 docker image ls": (1.001280415s)
helpers_test.go:175: Cleaning up "dockerenv-748041" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p dockerenv-748041
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p dockerenv-748041: (1.918338482s)
--- PASS: TestDockerEnvContainerd (44.46s)
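
The test exports the docker-env variables inline for each docker call; interactively the same wiring is usually a single eval (a sketch with the profile name from the log):

    eval "$(out/minikube-linux-arm64 docker-env --ssh-host --ssh-add -p dockerenv-748041)"
    docker image ls   # now talks to the Docker daemon inside the minikube node over SSH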

TestErrorSpam/setup (29.54s)

=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-arm64 start -p nospam-812526 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-812526 --driver=docker  --container-runtime=containerd
error_spam_test.go:81: (dbg) Done: out/minikube-linux-arm64 start -p nospam-812526 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-812526 --driver=docker  --container-runtime=containerd: (29.543493366s)
--- PASS: TestErrorSpam/setup (29.54s)

TestErrorSpam/start (0.81s)

=== RUN   TestErrorSpam/start
error_spam_test.go:216: Cleaning up 1 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-812526 --log_dir /tmp/nospam-812526 start --dry-run
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-812526 --log_dir /tmp/nospam-812526 start --dry-run
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-812526 --log_dir /tmp/nospam-812526 start --dry-run
--- PASS: TestErrorSpam/start (0.81s)

TestErrorSpam/status (1.14s)

=== RUN   TestErrorSpam/status
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-812526 --log_dir /tmp/nospam-812526 status
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-812526 --log_dir /tmp/nospam-812526 status
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-812526 --log_dir /tmp/nospam-812526 status
--- PASS: TestErrorSpam/status (1.14s)

TestErrorSpam/pause (1.83s)

=== RUN   TestErrorSpam/pause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-812526 --log_dir /tmp/nospam-812526 pause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-812526 --log_dir /tmp/nospam-812526 pause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-812526 --log_dir /tmp/nospam-812526 pause
--- PASS: TestErrorSpam/pause (1.83s)

TestErrorSpam/unpause (1.76s)

=== RUN   TestErrorSpam/unpause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-812526 --log_dir /tmp/nospam-812526 unpause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-812526 --log_dir /tmp/nospam-812526 unpause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-812526 --log_dir /tmp/nospam-812526 unpause
--- PASS: TestErrorSpam/unpause (1.76s)

TestErrorSpam/stop (1.43s)

=== RUN   TestErrorSpam/stop
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-812526 --log_dir /tmp/nospam-812526 stop
error_spam_test.go:159: (dbg) Done: out/minikube-linux-arm64 -p nospam-812526 --log_dir /tmp/nospam-812526 stop: (1.247600486s)
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-812526 --log_dir /tmp/nospam-812526 stop
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-812526 --log_dir /tmp/nospam-812526 stop
--- PASS: TestErrorSpam/stop (1.43s)

TestFunctional/serial/CopySyncFile (0s)

=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1855: local sync path: /home/jenkins/minikube-integration/19576-2243/.minikube/files/etc/test/nested/copy/7647/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

TestFunctional/serial/StartWithProxy (54.07s)

=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2234: (dbg) Run:  out/minikube-linux-arm64 start -p functional-015911 --memory=4000 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=containerd
functional_test.go:2234: (dbg) Done: out/minikube-linux-arm64 start -p functional-015911 --memory=4000 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=containerd: (54.073492695s)
--- PASS: TestFunctional/serial/StartWithProxy (54.07s)

TestFunctional/serial/AuditLog (0s)

=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

TestFunctional/serial/SoftStart (5.94s)

=== RUN   TestFunctional/serial/SoftStart
functional_test.go:659: (dbg) Run:  out/minikube-linux-arm64 start -p functional-015911 --alsologtostderr -v=8
functional_test.go:659: (dbg) Done: out/minikube-linux-arm64 start -p functional-015911 --alsologtostderr -v=8: (5.929398401s)
functional_test.go:663: soft start took 5.938730361s for "functional-015911" cluster.
--- PASS: TestFunctional/serial/SoftStart (5.94s)

TestFunctional/serial/KubeContext (0.06s)

=== RUN   TestFunctional/serial/KubeContext
functional_test.go:681: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.06s)

TestFunctional/serial/KubectlGetPods (0.12s)

=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:696: (dbg) Run:  kubectl --context functional-015911 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.12s)

TestFunctional/serial/CacheCmd/cache/add_remote (4.19s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1049: (dbg) Run:  out/minikube-linux-arm64 -p functional-015911 cache add registry.k8s.io/pause:3.1
functional_test.go:1049: (dbg) Done: out/minikube-linux-arm64 -p functional-015911 cache add registry.k8s.io/pause:3.1: (1.558362218s)
functional_test.go:1049: (dbg) Run:  out/minikube-linux-arm64 -p functional-015911 cache add registry.k8s.io/pause:3.3
functional_test.go:1049: (dbg) Done: out/minikube-linux-arm64 -p functional-015911 cache add registry.k8s.io/pause:3.3: (1.347113658s)
functional_test.go:1049: (dbg) Run:  out/minikube-linux-arm64 -p functional-015911 cache add registry.k8s.io/pause:latest
functional_test.go:1049: (dbg) Done: out/minikube-linux-arm64 -p functional-015911 cache add registry.k8s.io/pause:latest: (1.280038655s)
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (4.19s)

TestFunctional/serial/CacheCmd/cache/add_local (1.25s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1077: (dbg) Run:  docker build -t minikube-local-cache-test:functional-015911 /tmp/TestFunctionalserialCacheCmdcacheadd_local3399663001/001
functional_test.go:1089: (dbg) Run:  out/minikube-linux-arm64 -p functional-015911 cache add minikube-local-cache-test:functional-015911
functional_test.go:1094: (dbg) Run:  out/minikube-linux-arm64 -p functional-015911 cache delete minikube-local-cache-test:functional-015911
functional_test.go:1083: (dbg) Run:  docker rmi minikube-local-cache-test:functional-015911
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (1.25s)

TestFunctional/serial/CacheCmd/cache/CacheDelete (0.06s)

=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1102: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.06s)

TestFunctional/serial/CacheCmd/cache/list (0.05s)

=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1110: (dbg) Run:  out/minikube-linux-arm64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.05s)

TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.3s)

=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1124: (dbg) Run:  out/minikube-linux-arm64 -p functional-015911 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.30s)

TestFunctional/serial/CacheCmd/cache/cache_reload (2s)

=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1147: (dbg) Run:  out/minikube-linux-arm64 -p functional-015911 ssh sudo crictl rmi registry.k8s.io/pause:latest
functional_test.go:1153: (dbg) Run:  out/minikube-linux-arm64 -p functional-015911 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1153: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-015911 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (277.347681ms)

-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:1158: (dbg) Run:  out/minikube-linux-arm64 -p functional-015911 cache reload
functional_test.go:1158: (dbg) Done: out/minikube-linux-arm64 -p functional-015911 cache reload: (1.035923134s)
functional_test.go:1163: (dbg) Run:  out/minikube-linux-arm64 -p functional-015911 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (2.00s)

TestFunctional/serial/CacheCmd/cache/delete (0.1s)

=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1172: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1172: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.10s)

TestFunctional/serial/MinikubeKubectlCmd (0.15s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:716: (dbg) Run:  out/minikube-linux-arm64 -p functional-015911 kubectl -- --context functional-015911 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.15s)

TestFunctional/serial/MinikubeKubectlCmdDirectly (0.14s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:741: (dbg) Run:  out/kubectl --context functional-015911 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.14s)

TestFunctional/serial/ExtraConfig (40.91s)

=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:757: (dbg) Run:  out/minikube-linux-arm64 start -p functional-015911 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
functional_test.go:757: (dbg) Done: out/minikube-linux-arm64 start -p functional-015911 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (40.908734956s)
functional_test.go:761: restart took 40.908836782s for "functional-015911" cluster.
--- PASS: TestFunctional/serial/ExtraConfig (40.91s)

TestFunctional/serial/ComponentHealth (0.1s)

=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:810: (dbg) Run:  kubectl --context functional-015911 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:825: etcd phase: Running
functional_test.go:835: etcd status: Ready
functional_test.go:825: kube-apiserver phase: Running
functional_test.go:835: kube-apiserver status: Ready
functional_test.go:825: kube-controller-manager phase: Running
functional_test.go:835: kube-controller-manager status: Ready
functional_test.go:825: kube-scheduler phase: Running
functional_test.go:835: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.10s)

TestFunctional/serial/LogsCmd (1.72s)

=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1236: (dbg) Run:  out/minikube-linux-arm64 -p functional-015911 logs
functional_test.go:1236: (dbg) Done: out/minikube-linux-arm64 -p functional-015911 logs: (1.718837629s)
--- PASS: TestFunctional/serial/LogsCmd (1.72s)

TestFunctional/serial/LogsFileCmd (1.79s)

=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1250: (dbg) Run:  out/minikube-linux-arm64 -p functional-015911 logs --file /tmp/TestFunctionalserialLogsFileCmd2323234710/001/logs.txt
functional_test.go:1250: (dbg) Done: out/minikube-linux-arm64 -p functional-015911 logs --file /tmp/TestFunctionalserialLogsFileCmd2323234710/001/logs.txt: (1.789591722s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.79s)

TestFunctional/serial/InvalidService (4.21s)

=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2321: (dbg) Run:  kubectl --context functional-015911 apply -f testdata/invalidsvc.yaml
functional_test.go:2335: (dbg) Run:  out/minikube-linux-arm64 service invalid-svc -p functional-015911
functional_test.go:2335: (dbg) Non-zero exit: out/minikube-linux-arm64 service invalid-svc -p functional-015911: exit status 115 (403.762834ms)

-- stdout --
	|-----------|-------------|-------------|---------------------------|
	| NAMESPACE |    NAME     | TARGET PORT |            URL            |
	|-----------|-------------|-------------|---------------------------|
	| default   | invalid-svc |          80 | http://192.168.49.2:30088 |
	|-----------|-------------|-------------|---------------------------|
	
	

-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
functional_test.go:2327: (dbg) Run:  kubectl --context functional-015911 delete -f testdata/invalidsvc.yaml
--- PASS: TestFunctional/serial/InvalidService (4.21s)

TestFunctional/parallel/ConfigCmd (0.46s)

=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1199: (dbg) Run:  out/minikube-linux-arm64 -p functional-015911 config unset cpus
functional_test.go:1199: (dbg) Run:  out/minikube-linux-arm64 -p functional-015911 config get cpus
functional_test.go:1199: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-015911 config get cpus: exit status 14 (70.951745ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
functional_test.go:1199: (dbg) Run:  out/minikube-linux-arm64 -p functional-015911 config set cpus 2
functional_test.go:1199: (dbg) Run:  out/minikube-linux-arm64 -p functional-015911 config get cpus
functional_test.go:1199: (dbg) Run:  out/minikube-linux-arm64 -p functional-015911 config unset cpus
functional_test.go:1199: (dbg) Run:  out/minikube-linux-arm64 -p functional-015911 config get cpus
functional_test.go:1199: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-015911 config get cpus: exit status 14 (88.645745ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.46s)

TestFunctional/parallel/DashboardCmd (10.65s)

=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:905: (dbg) daemon: [out/minikube-linux-arm64 dashboard --url --port 36195 -p functional-015911 --alsologtostderr -v=1]
functional_test.go:910: (dbg) stopping [out/minikube-linux-arm64 dashboard --url --port 36195 -p functional-015911 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to kill pid 42008: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (10.65s)

TestFunctional/parallel/DryRun (0.44s)

=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

=== CONT  TestFunctional/parallel/DryRun
functional_test.go:974: (dbg) Run:  out/minikube-linux-arm64 start -p functional-015911 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=containerd
functional_test.go:974: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p functional-015911 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=containerd: exit status 23 (206.426232ms)

-- stdout --
	* [functional-015911] minikube v1.34.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=19576
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19576-2243/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19576-2243/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	
	

-- /stdout --
** stderr ** 
	I0906 18:42:59.418514   41647 out.go:345] Setting OutFile to fd 1 ...
	I0906 18:42:59.418622   41647 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0906 18:42:59.418627   41647 out.go:358] Setting ErrFile to fd 2...
	I0906 18:42:59.418632   41647 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0906 18:42:59.418884   41647 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19576-2243/.minikube/bin
	I0906 18:42:59.419255   41647 out.go:352] Setting JSON to false
	I0906 18:42:59.420118   41647 start.go:129] hostinfo: {"hostname":"ip-172-31-31-251","uptime":1528,"bootTime":1725646652,"procs":210,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1068-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I0906 18:42:59.420189   41647 start.go:139] virtualization:  
	I0906 18:42:59.422956   41647 out.go:177] * [functional-015911] minikube v1.34.0 on Ubuntu 20.04 (arm64)
	I0906 18:42:59.425373   41647 out.go:177]   - MINIKUBE_LOCATION=19576
	I0906 18:42:59.425447   41647 notify.go:220] Checking for updates...
	I0906 18:42:59.430104   41647 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0906 18:42:59.432154   41647 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19576-2243/kubeconfig
	I0906 18:42:59.434309   41647 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19576-2243/.minikube
	I0906 18:42:59.436546   41647 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0906 18:42:59.438381   41647 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0906 18:42:59.441283   41647 config.go:182] Loaded profile config "functional-015911": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.0
	I0906 18:42:59.441892   41647 driver.go:394] Setting default libvirt URI to qemu:///system
	I0906 18:42:59.486239   41647 docker.go:123] docker version: linux-27.2.0:Docker Engine - Community
	I0906 18:42:59.486351   41647 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0906 18:42:59.557756   41647 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:32 OomKillDisable:true NGoroutines:52 SystemTime:2024-09-06 18:42:59.548112381 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1068-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aar
ch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:27.2.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:472731909fa34bd7bc9c087e4c27943f9835f111 Expected:472731909fa34bd7bc9c087e4c27943f9835f111} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErro
rs:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.2]] Warnings:<nil>}}
	I0906 18:42:59.557875   41647 docker.go:318] overlay module found
	I0906 18:42:59.560703   41647 out.go:177] * Using the docker driver based on existing profile
	I0906 18:42:59.562747   41647 start.go:297] selected driver: docker
	I0906 18:42:59.562769   41647 start.go:901] validating driver "docker" against &{Name:functional-015911 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.45@sha256:81df288595202a317b1a4dc2506ca2e4ed5f22373c19a441b88cfbf4b9867c85 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:functional-015911 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerN
ames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.31.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize
:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0906 18:42:59.562900   41647 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0906 18:42:59.565703   41647 out.go:201] 
	W0906 18:42:59.567513   41647 out.go:270] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I0906 18:42:59.569397   41647 out.go:201] 

** /stderr **
functional_test.go:991: (dbg) Run:  out/minikube-linux-arm64 start -p functional-015911 --dry-run --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
--- PASS: TestFunctional/parallel/DryRun (0.44s)

TestFunctional/parallel/InternationalLanguage (0.17s)

=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1020: (dbg) Run:  out/minikube-linux-arm64 start -p functional-015911 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=containerd
functional_test.go:1020: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p functional-015911 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=containerd: exit status 23 (171.36031ms)

-- stdout --
	* [functional-015911] minikube v1.34.0 sur Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=19576
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19576-2243/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19576-2243/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote docker basé sur le profil existant
	
	

-- /stdout --
** stderr ** 
	I0906 18:42:59.244563   41603 out.go:345] Setting OutFile to fd 1 ...
	I0906 18:42:59.244757   41603 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0906 18:42:59.244769   41603 out.go:358] Setting ErrFile to fd 2...
	I0906 18:42:59.244776   41603 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0906 18:42:59.245812   41603 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19576-2243/.minikube/bin
	I0906 18:42:59.246277   41603 out.go:352] Setting JSON to false
	I0906 18:42:59.247247   41603 start.go:129] hostinfo: {"hostname":"ip-172-31-31-251","uptime":1528,"bootTime":1725646652,"procs":208,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1068-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I0906 18:42:59.247323   41603 start.go:139] virtualization:  
	I0906 18:42:59.249959   41603 out.go:177] * [functional-015911] minikube v1.34.0 sur Ubuntu 20.04 (arm64)
	I0906 18:42:59.252455   41603 out.go:177]   - MINIKUBE_LOCATION=19576
	I0906 18:42:59.252610   41603 notify.go:220] Checking for updates...
	I0906 18:42:59.255938   41603 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0906 18:42:59.257765   41603 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19576-2243/kubeconfig
	I0906 18:42:59.259527   41603 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19576-2243/.minikube
	I0906 18:42:59.261128   41603 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0906 18:42:59.262884   41603 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0906 18:42:59.264913   41603 config.go:182] Loaded profile config "functional-015911": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.0
	I0906 18:42:59.265477   41603 driver.go:394] Setting default libvirt URI to qemu:///system
	I0906 18:42:59.296161   41603 docker.go:123] docker version: linux-27.2.0:Docker Engine - Community
	I0906 18:42:59.296270   41603 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0906 18:42:59.354012   41603 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:32 OomKillDisable:true NGoroutines:52 SystemTime:2024-09-06 18:42:59.343914232 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1068-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aar
ch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:27.2.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:472731909fa34bd7bc9c087e4c27943f9835f111 Expected:472731909fa34bd7bc9c087e4c27943f9835f111} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErro
rs:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.2]] Warnings:<nil>}}
	I0906 18:42:59.354130   41603 docker.go:318] overlay module found
	I0906 18:42:59.356049   41603 out.go:177] * Utilisation du pilote docker basé sur le profil existant
	I0906 18:42:59.357565   41603 start.go:297] selected driver: docker
	I0906 18:42:59.357586   41603 start.go:901] validating driver "docker" against &{Name:functional-015911 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.45@sha256:81df288595202a317b1a4dc2506ca2e4ed5f22373c19a441b88cfbf4b9867c85 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:functional-015911 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerN
ames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.31.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize
:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0906 18:42:59.357695   41603 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0906 18:42:59.360025   41603 out.go:201] 
	W0906 18:42:59.361562   41603 out.go:270] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I0906 18:42:59.363024   41603 out.go:201] 

** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.17s)

TestFunctional/parallel/StatusCmd (1.21s)

=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:854: (dbg) Run:  out/minikube-linux-arm64 -p functional-015911 status
functional_test.go:860: (dbg) Run:  out/minikube-linux-arm64 -p functional-015911 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:872: (dbg) Run:  out/minikube-linux-arm64 -p functional-015911 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (1.21s)

TestFunctional/parallel/ServiceCmdConnect (9.7s)

=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1627: (dbg) Run:  kubectl --context functional-015911 create deployment hello-node-connect --image=registry.k8s.io/echoserver-arm:1.8
functional_test.go:1635: (dbg) Run:  kubectl --context functional-015911 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1640: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:344: "hello-node-connect-65d86f57f4-54l5f" [42e281d4-15fe-4de1-a351-4c0344f967d8] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver-arm]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver-arm])
helpers_test.go:344: "hello-node-connect-65d86f57f4-54l5f" [42e281d4-15fe-4de1-a351-4c0344f967d8] Running
functional_test.go:1640: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 9.005368408s
functional_test.go:1649: (dbg) Run:  out/minikube-linux-arm64 -p functional-015911 service hello-node-connect --url
functional_test.go:1655: found endpoint for hello-node-connect: http://192.168.49.2:31410
functional_test.go:1675: http://192.168.49.2:31410: success! body:

Hostname: hello-node-connect-65d86f57f4-54l5f

Pod Information:
	-no pod information available-

Server values:
	server_version=nginx: 1.13.3 - lua: 10008

Request Information:
	client_address=10.244.0.1
	method=GET
	real path=/
	query=
	request_version=1.1
	request_uri=http://192.168.49.2:8080/

Request Headers:
	accept-encoding=gzip
	host=192.168.49.2:31410
	user-agent=Go-http-client/1.1

Request Body:
	-no body in request-

--- PASS: TestFunctional/parallel/ServiceCmdConnect (9.70s)

TestFunctional/parallel/AddonsCmd (0.18s)

=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1690: (dbg) Run:  out/minikube-linux-arm64 -p functional-015911 addons list
functional_test.go:1702: (dbg) Run:  out/minikube-linux-arm64 -p functional-015911 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.18s)

TestFunctional/parallel/PersistentVolumeClaim (25.07s)

=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:344: "storage-provisioner" [9c52abd3-e41e-4d7a-ab50-ae7a616b4910] Running
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 5.004167006s
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context functional-015911 get storageclass -o=json
functional_test_pvc_test.go:69: (dbg) Run:  kubectl --context functional-015911 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-015911 get pvc myclaim -o=json
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-015911 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [3fef55fc-a9aa-4abd-aaa6-96849500e75f] Pending
helpers_test.go:344: "sp-pod" [3fef55fc-a9aa-4abd-aaa6-96849500e75f] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [3fef55fc-a9aa-4abd-aaa6-96849500e75f] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 11.003674733s
functional_test_pvc_test.go:100: (dbg) Run:  kubectl --context functional-015911 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-015911 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:106: (dbg) Done: kubectl --context functional-015911 delete -f testdata/storage-provisioner/pod.yaml: (1.017156402s)
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-015911 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [cccc934c-42f3-485e-a002-dfe4d56984bf] Pending
helpers_test.go:344: "sp-pod" [cccc934c-42f3-485e-a002-dfe4d56984bf] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 7.004446856s
functional_test_pvc_test.go:114: (dbg) Run:  kubectl --context functional-015911 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (25.07s)

TestFunctional/parallel/SSHCmd (0.68s)

=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1725: (dbg) Run:  out/minikube-linux-arm64 -p functional-015911 ssh "echo hello"
functional_test.go:1742: (dbg) Run:  out/minikube-linux-arm64 -p functional-015911 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.68s)

TestFunctional/parallel/CpCmd (2.37s)

=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p functional-015911 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p functional-015911 ssh -n functional-015911 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p functional-015911 cp functional-015911:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd411215423/001/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p functional-015911 ssh -n functional-015911 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p functional-015911 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p functional-015911 ssh -n functional-015911 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (2.37s)

TestFunctional/parallel/FileSync (0.33s)

=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1929: Checking for existence of /etc/test/nested/copy/7647/hosts within VM
functional_test.go:1931: (dbg) Run:  out/minikube-linux-arm64 -p functional-015911 ssh "sudo cat /etc/test/nested/copy/7647/hosts"
functional_test.go:1936: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.33s)

TestFunctional/parallel/CertSync (2.03s)

=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1972: Checking for existence of /etc/ssl/certs/7647.pem within VM
functional_test.go:1973: (dbg) Run:  out/minikube-linux-arm64 -p functional-015911 ssh "sudo cat /etc/ssl/certs/7647.pem"
functional_test.go:1972: Checking for existence of /usr/share/ca-certificates/7647.pem within VM
functional_test.go:1973: (dbg) Run:  out/minikube-linux-arm64 -p functional-015911 ssh "sudo cat /usr/share/ca-certificates/7647.pem"
functional_test.go:1972: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1973: (dbg) Run:  out/minikube-linux-arm64 -p functional-015911 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:1999: Checking for existence of /etc/ssl/certs/76472.pem within VM
functional_test.go:2000: (dbg) Run:  out/minikube-linux-arm64 -p functional-015911 ssh "sudo cat /etc/ssl/certs/76472.pem"
functional_test.go:1999: Checking for existence of /usr/share/ca-certificates/76472.pem within VM
functional_test.go:2000: (dbg) Run:  out/minikube-linux-arm64 -p functional-015911 ssh "sudo cat /usr/share/ca-certificates/76472.pem"
functional_test.go:1999: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:2000: (dbg) Run:  out/minikube-linux-arm64 -p functional-015911 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (2.03s)

TestFunctional/parallel/NodeLabels (0.1s)

=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:219: (dbg) Run:  kubectl --context functional-015911 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.10s)

TestFunctional/parallel/NonActiveRuntimeDisabled (0.62s)

=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2027: (dbg) Run:  out/minikube-linux-arm64 -p functional-015911 ssh "sudo systemctl is-active docker"
2024/09/06 18:43:10 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
functional_test.go:2027: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-015911 ssh "sudo systemctl is-active docker": exit status 1 (335.415324ms)

-- stdout --
	inactive

-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

** /stderr **
functional_test.go:2027: (dbg) Run:  out/minikube-linux-arm64 -p functional-015911 ssh "sudo systemctl is-active crio"
functional_test.go:2027: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-015911 ssh "sudo systemctl is-active crio": exit status 1 (280.127097ms)

-- stdout --
	inactive

-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.62s)

TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.56s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-arm64 -p functional-015911 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-arm64 -p functional-015911 tunnel --alsologtostderr]
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-arm64 -p functional-015911 tunnel --alsologtostderr] ...
helpers_test.go:508: unable to kill pid 39173: os: process already finished
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-arm64 -p functional-015911 tunnel --alsologtostderr] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.56s)

TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-linux-arm64 -p functional-015911 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.00s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (9.52s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:212: (dbg) Run:  kubectl --context functional-015911 apply -f testdata/testsvc.yaml
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: waiting 4m0s for pods matching "run=nginx-svc" in namespace "default" ...
helpers_test.go:344: "nginx-svc" [c4cce750-cbfb-45e8-9a42-21524e7d5851] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx-svc" [c4cce750-cbfb-45e8-9a42-21524e7d5851] Running
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: run=nginx-svc healthy within 9.004092335s
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (9.52s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.09s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:234: (dbg) Run:  kubectl --context functional-015911 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.09s)

TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:299: tunnel at http://10.102.26.65 is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-linux-arm64 -p functional-015911 tunnel --alsologtostderr] ...
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

TestFunctional/parallel/ServiceCmd/DeployApp (6.22s)

=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1437: (dbg) Run:  kubectl --context functional-015911 create deployment hello-node --image=registry.k8s.io/echoserver-arm:1.8
functional_test.go:1445: (dbg) Run:  kubectl --context functional-015911 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1450: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:344: "hello-node-64b4f8f9ff-sp8p5" [689faf8e-b49c-4d0f-bcd2-2fc34dbc6bfa] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver-arm]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver-arm])
helpers_test.go:344: "hello-node-64b4f8f9ff-sp8p5" [689faf8e-b49c-4d0f-bcd2-2fc34dbc6bfa] Running
functional_test.go:1450: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 6.003786416s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (6.22s)

TestFunctional/parallel/ProfileCmd/profile_not_create (0.48s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1270: (dbg) Run:  out/minikube-linux-arm64 profile lis
functional_test.go:1275: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.48s)

TestFunctional/parallel/ServiceCmd/List (0.59s)

=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1459: (dbg) Run:  out/minikube-linux-arm64 -p functional-015911 service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.59s)

TestFunctional/parallel/ProfileCmd/profile_list (0.45s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1310: (dbg) Run:  out/minikube-linux-arm64 profile list
functional_test.go:1315: Took "383.555425ms" to run "out/minikube-linux-arm64 profile list"
functional_test.go:1324: (dbg) Run:  out/minikube-linux-arm64 profile list -l
functional_test.go:1329: Took "69.136958ms" to run "out/minikube-linux-arm64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.45s)

TestFunctional/parallel/ServiceCmd/JSONOutput (0.58s)

=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1489: (dbg) Run:  out/minikube-linux-arm64 -p functional-015911 service list -o json
functional_test.go:1494: Took "580.566553ms" to run "out/minikube-linux-arm64 -p functional-015911 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.58s)

TestFunctional/parallel/ProfileCmd/profile_json_output (0.45s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1361: (dbg) Run:  out/minikube-linux-arm64 profile list -o json
functional_test.go:1366: Took "352.599277ms" to run "out/minikube-linux-arm64 profile list -o json"
functional_test.go:1374: (dbg) Run:  out/minikube-linux-arm64 profile list -o json --light
functional_test.go:1379: Took "94.35181ms" to run "out/minikube-linux-arm64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.45s)

TestFunctional/parallel/ServiceCmd/HTTPS (0.59s)

=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1509: (dbg) Run:  out/minikube-linux-arm64 -p functional-015911 service --namespace=default --https --url hello-node
functional_test.go:1522: found endpoint: https://192.168.49.2:30911
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.59s)

TestFunctional/parallel/MountCmd/any-port (8.57s)
=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-015911 /tmp/TestFunctionalparallelMountCmdany-port2810761245/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1725648176752732969" to /tmp/TestFunctionalparallelMountCmdany-port2810761245/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1725648176752732969" to /tmp/TestFunctionalparallelMountCmdany-port2810761245/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1725648176752732969" to /tmp/TestFunctionalparallelMountCmdany-port2810761245/001/test-1725648176752732969
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-arm64 -p functional-015911 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-015911 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (471.923482ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-arm64 -p functional-015911 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-arm64 -p functional-015911 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Sep  6 18:42 created-by-test
-rw-r--r-- 1 docker docker 24 Sep  6 18:42 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Sep  6 18:42 test-1725648176752732969
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-arm64 -p functional-015911 ssh cat /mount-9p/test-1725648176752732969
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-015911 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:344: "busybox-mount" [4ba319a3-5daa-4593-9a35-bdedc2cd8d60] Pending
helpers_test.go:344: "busybox-mount" [4ba319a3-5daa-4593-9a35-bdedc2cd8d60] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:344: "busybox-mount" [4ba319a3-5daa-4593-9a35-bdedc2cd8d60] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "busybox-mount" [4ba319a3-5daa-4593-9a35-bdedc2cd8d60] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 5.003603703s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-015911 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p functional-015911 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p functional-015911 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-arm64 -p functional-015911 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-015911 /tmp/TestFunctionalparallelMountCmdany-port2810761245/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (8.57s)
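The same 9p round trip can be reproduced by hand; a minimal sketch using the commands exercised above (the host path is a placeholder, and the first findmnt probe may fail once while the mount settles, exactly as the retry above shows):

  out/minikube-linux-arm64 mount -p functional-015911 /tmp/mydir:/mount-9p &
  out/minikube-linux-arm64 -p functional-015911 ssh "findmnt -T /mount-9p | grep 9p"
  out/minikube-linux-arm64 -p functional-015911 ssh "sudo umount -f /mount-9p"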

TestFunctional/parallel/ServiceCmd/Format (0.38s)
=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1540: (dbg) Run:  out/minikube-linux-arm64 -p functional-015911 service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.38s)

TestFunctional/parallel/ServiceCmd/URL (0.38s)
=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1559: (dbg) Run:  out/minikube-linux-arm64 -p functional-015911 service hello-node --url
functional_test.go:1565: found endpoint for hello-node: http://192.168.49.2:30911
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.38s)
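The reported endpoint is a NodePort on the kic node's Docker network, so it is reachable from the host that runs the cluster; a quick probe (sketch):

  curl -s http://192.168.49.2:30911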

TestFunctional/parallel/MountCmd/specific-port (2.24s)
=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-015911 /tmp/TestFunctionalparallelMountCmdspecific-port3988238917/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-arm64 -p functional-015911 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-015911 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (513.694464ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-arm64 -p functional-015911 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-arm64 -p functional-015911 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-015911 /tmp/TestFunctionalparallelMountCmdspecific-port3988238917/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-arm64 -p functional-015911 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-015911 ssh "sudo umount -f /mount-9p": exit status 1 (340.487423ms)

-- stdout --
	umount: /mount-9p: not mounted.

-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-arm64 -p functional-015911 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-015911 /tmp/TestFunctionalparallelMountCmdspecific-port3988238917/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (2.24s)

TestFunctional/parallel/MountCmd/VerifyCleanup (2.31s)
=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-015911 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1488291445/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-015911 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1488291445/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-015911 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1488291445/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-015911 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-015911 ssh "findmnt -T" /mount1: exit status 1 (927.665042ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-015911 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-015911 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-015911 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-arm64 mount -p functional-015911 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-015911 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1488291445/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-015911 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1488291445/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-015911 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1488291445/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (2.31s)
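Cleanup here goes through the single kill switch seen above rather than per-mount unmounts; the equivalent manual teardown for all mounts of the profile (sketch):

  out/minikube-linux-arm64 mount -p functional-015911 --kill=true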

TestFunctional/parallel/Version/short (0.07s)
=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2256: (dbg) Run:  out/minikube-linux-arm64 -p functional-015911 version --short
--- PASS: TestFunctional/parallel/Version/short (0.07s)

TestFunctional/parallel/Version/components (1.33s)
=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2270: (dbg) Run:  out/minikube-linux-arm64 -p functional-015911 version -o=json --components
functional_test.go:2270: (dbg) Done: out/minikube-linux-arm64 -p functional-015911 version -o=json --components: (1.334258409s)
--- PASS: TestFunctional/parallel/Version/components (1.33s)

TestFunctional/parallel/ImageCommands/ImageListShort (0.25s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:261: (dbg) Run:  out/minikube-linux-arm64 -p functional-015911 image ls --format short --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-arm64 -p functional-015911 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.10
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.31.0
registry.k8s.io/kube-proxy:v1.31.0
registry.k8s.io/kube-controller-manager:v1.31.0
registry.k8s.io/kube-apiserver:v1.31.0
registry.k8s.io/etcd:3.5.15-0
registry.k8s.io/echoserver-arm:1.8
registry.k8s.io/coredns/coredns:v1.11.1
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
docker.io/library/nginx:latest
docker.io/library/nginx:alpine
docker.io/library/minikube-local-cache-test:functional-015911
docker.io/kindest/kindnetd:v20240813-c6f155d6
docker.io/kindest/kindnetd:v20240730-75a5af0c
docker.io/kicbase/echo-server:functional-015911
functional_test.go:269: (dbg) Stderr: out/minikube-linux-arm64 -p functional-015911 image ls --format short --alsologtostderr:
I0906 18:43:17.659387   44655 out.go:345] Setting OutFile to fd 1 ...
I0906 18:43:17.659530   44655 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0906 18:43:17.659541   44655 out.go:358] Setting ErrFile to fd 2...
I0906 18:43:17.659546   44655 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0906 18:43:17.659913   44655 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19576-2243/.minikube/bin
I0906 18:43:17.661028   44655 config.go:182] Loaded profile config "functional-015911": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.0
I0906 18:43:17.661191   44655 config.go:182] Loaded profile config "functional-015911": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.0
I0906 18:43:17.661926   44655 cli_runner.go:164] Run: docker container inspect functional-015911 --format={{.State.Status}}
I0906 18:43:17.690710   44655 ssh_runner.go:195] Run: systemctl --version
I0906 18:43:17.690821   44655 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-015911
I0906 18:43:17.712327   44655 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/19576-2243/.minikube/machines/functional-015911/id_rsa Username:docker}
I0906 18:43:17.800650   44655 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.25s)

TestFunctional/parallel/ImageCommands/ImageListTable (0.29s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:261: (dbg) Run:  out/minikube-linux-arm64 -p functional-015911 image ls --format table --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-arm64 -p functional-015911 image ls --format table --alsologtostderr:
|---------------------------------------------|--------------------|---------------|--------|
|                    Image                    |        Tag         |   Image ID    |  Size  |
|---------------------------------------------|--------------------|---------------|--------|
| registry.k8s.io/etcd                        | 3.5.15-0           | sha256:27e383 | 66.5MB |
| docker.io/kindest/kindnetd                  | v20240813-c6f155d6 | sha256:6a23fa | 33.3MB |
| gcr.io/k8s-minikube/busybox                 | 1.28.4-glibc       | sha256:1611cd | 1.94MB |
| registry.k8s.io/coredns/coredns             | v1.11.1            | sha256:2437cf | 16.5MB |
| registry.k8s.io/kube-apiserver              | v1.31.0            | sha256:cd0f0a | 25.7MB |
| registry.k8s.io/kube-scheduler              | v1.31.0            | sha256:fbbbd4 | 18.5MB |
| registry.k8s.io/pause                       | latest             | sha256:8cb209 | 71.3kB |
| docker.io/kindest/kindnetd                  | v20240730-75a5af0c | sha256:d5e283 | 33.3MB |
| docker.io/kicbase/echo-server               | functional-015911  | sha256:ce2d2c | 2.17MB |
| docker.io/library/nginx                     | alpine             | sha256:70594c | 19.6MB |
| gcr.io/k8s-minikube/storage-provisioner     | v5                 | sha256:ba04bb | 8.03MB |
| registry.k8s.io/echoserver-arm              | 1.8                | sha256:72565b | 45.3MB |
| registry.k8s.io/kube-controller-manager     | v1.31.0            | sha256:fcb068 | 23.9MB |
| registry.k8s.io/pause                       | 3.1                | sha256:8057e0 | 262kB  |
| docker.io/library/nginx                     | latest             | sha256:195245 | 67.7MB |
| registry.k8s.io/kube-proxy                  | v1.31.0            | sha256:71d55d | 26.8MB |
| registry.k8s.io/pause                       | 3.10               | sha256:afb617 | 268kB  |
| registry.k8s.io/pause                       | 3.3                | sha256:3d1873 | 249kB  |
| docker.io/library/minikube-local-cache-test | functional-015911  | sha256:a7f089 | 990B   |
|---------------------------------------------|--------------------|---------------|--------|
functional_test.go:269: (dbg) Stderr: out/minikube-linux-arm64 -p functional-015911 image ls --format table --alsologtostderr:
I0906 18:43:17.973197   44725 out.go:345] Setting OutFile to fd 1 ...
I0906 18:43:17.973424   44725 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0906 18:43:17.973451   44725 out.go:358] Setting ErrFile to fd 2...
I0906 18:43:17.973470   44725 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0906 18:43:17.973765   44725 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19576-2243/.minikube/bin
I0906 18:43:17.974419   44725 config.go:182] Loaded profile config "functional-015911": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.0
I0906 18:43:17.974606   44725 config.go:182] Loaded profile config "functional-015911": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.0
I0906 18:43:17.975138   44725 cli_runner.go:164] Run: docker container inspect functional-015911 --format={{.State.Status}}
I0906 18:43:18.003040   44725 ssh_runner.go:195] Run: systemctl --version
I0906 18:43:18.003094   44725 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-015911
I0906 18:43:18.031899   44725 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/19576-2243/.minikube/machines/functional-015911/id_rsa Username:docker}
I0906 18:43:18.125852   44725 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.29s)

TestFunctional/parallel/ImageCommands/ImageListJson (0.3s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:261: (dbg) Run:  out/minikube-linux-arm64 -p functional-015911 image ls --format json --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-arm64 -p functional-015911 image ls --format json --alsologtostderr:
[{"id":"sha256:3d18732f8686cc3c878055d99a05fa80289502fa496b36b6a0fe0f77206a7300","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.3"],"size":"249461"},{"id":"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93","repoDigests":["registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1"],"repoTags":["registry.k8s.io/coredns/coredns:v1.11.1"],"size":"16482581"},{"id":"sha256:71d55d66fd4eec8986225089a135fadd96bc6624d987096808772ce1e1924d89","repoDigests":["registry.k8s.io/kube-proxy@sha256:c727efb1c6f15a68060bf7f207f5c7a765355b7e3340c513e582ec819c5cd2fe"],"repoTags":["registry.k8s.io/kube-proxy:v1.31.0"],"size":"26752334"},{"id":"sha256:a7f0891095482d5d1de486303f1f1f9835506a6be386ea795ef54ac6b2343bde","repoDigests":[],"repoTags":["docker.io/library/minikube-local-cache-test:functional-015911"],"size":"990"},{"id":"sha256:70594c812316a9bc20dd5d679982c6322dc7cf0128687ae9f849d0207783e753","repoDigests":["docker.io/library/nginx@sha256:c04
c18adc2a407740a397c8407c011fc6c90026a9b65cceddef7ae5484360158"],"repoTags":["docker.io/library/nginx:alpine"],"size":"19627164"},{"id":"sha256:1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e"],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"1935750"},{"id":"sha256:72565bf5bbedfb62e9d21afa2b1221b2c7a5e05b746dae33430bc550d3f87beb","repoDigests":["registry.k8s.io/echoserver-arm@sha256:b33d4cdf6ed097f4e9b77b135d83a596ab73c6268b0342648818eb85f5edfdb5"],"repoTags":["registry.k8s.io/echoserver-arm:1.8"],"size":"45324675"},{"id":"sha256:cd0f0ae0ec9e0cdc092079156c122bf034ba3f24d31c1b1dd1b52a42ecf9b388","repoDigests":["registry.k8s.io/kube-apiserver@sha256:470179274deb9dc3a81df55cfc24823ce153147d4ebf2ed649a4f271f51eaddf"],"repoTags":["registry.k8s.io/kube-apiserver:v1.31.0"],"size":"25688321"},{"id":"sha256:d5e283bc63d431d0446af8b48a1618696def3b777347a97b8b3553d2c989c806"
,"repoDigests":["docker.io/kindest/kindnetd@sha256:4067b91686869e19bac601aec305ba55d2e74cdcb91347869bfb4fd3a26cd3c3"],"repoTags":["docker.io/kindest/kindnetd:v20240730-75a5af0c"],"size":"33305789"},{"id":"sha256:6a23fa8fd2b78ab58e42ba273808edc936a9c53d8ac4a919f6337be094843a51","repoDigests":["docker.io/kindest/kindnetd@sha256:e59a687ca28ae274a2fc92f1e2f5f1c739f353178a43a23aafc71adb802ed166"],"repoTags":["docker.io/kindest/kindnetd:v20240813-c6f155d6"],"size":"33309097"},{"id":"sha256:ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6","repoDigests":["gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944"],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"8034419"},{"id":"sha256:27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da","repoDigests":["registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a"],"repoTags":["registry.k8s.io/etcd:3.5.15-0"],"size":"66535646"},{"id":"s
ha256:fcb0683e6bdbd083710cf2d6fd7eb699c77fe4994c38a5c82d059e2e3cb4c2fd","repoDigests":["registry.k8s.io/kube-controller-manager@sha256:f6f3c33dda209e8434b83dacf5244c03b59b0018d93325ff21296a142b68497d"],"repoTags":["registry.k8s.io/kube-controller-manager:v1.31.0"],"size":"23947353"},{"id":"sha256:fbbbd428abb4dae52ab3018797d00d5840a739f0cc5697b662791831a60b0adb","repoDigests":["registry.k8s.io/kube-scheduler@sha256:96ddae9c9b2e79342e0551e2d2ec422c0c02629a74d928924aaa069706619808"],"repoTags":["registry.k8s.io/kube-scheduler:v1.31.0"],"size":"18505843"},{"id":"sha256:8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.1"],"size":"262191"},{"id":"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8","repoDigests":["registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a"],"repoTags":["registry.k8s.io/pause:3.10"],"size":"267933"},{"id":"sha256:20b332c9a70d8516d849d1ac23eff5800cbb2f263d379f
0ec11ee908db6b25a8","repoDigests":["docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93"],"repoTags":[],"size":"74084559"},{"id":"sha256:ce2d2cda2d858fdaea84129deb86d18e5dbf1c548f230b79fdca74cc91729d17","repoDigests":[],"repoTags":["docker.io/kicbase/echo-server:functional-015911"],"size":"2173567"},{"id":"sha256:8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a","repoDigests":[],"repoTags":["registry.k8s.io/pause:latest"],"size":"71300"},{"id":"sha256:a422e0e982356f6c1cf0e5bb7b733363caae3992a07c99951fbcc73e58ed656a","repoDigests":["docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c"],"repoTags":[],"size":"18306114"},{"id":"sha256:195245f0c79279e8b8e012efa02c91dad4cf7d0e44c0f4382fea68cd93088e6c","repoDigests":["docker.io/library/nginx@sha256:04ba374043ccd2fc5c593885c0eacddebabd5ca375f9323666f28dfd5a9710e3"],"repoTags":["docker.io/library/nginx:latest"],"size":"67695038"}]
functional_test.go:269: (dbg) Stderr: out/minikube-linux-arm64 -p functional-015911 image ls --format json --alsologtostderr:
I0906 18:43:17.926891   44719 out.go:345] Setting OutFile to fd 1 ...
I0906 18:43:17.927277   44719 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0906 18:43:17.927293   44719 out.go:358] Setting ErrFile to fd 2...
I0906 18:43:17.927300   44719 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0906 18:43:17.928748   44719 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19576-2243/.minikube/bin
I0906 18:43:17.929423   44719 config.go:182] Loaded profile config "functional-015911": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.0
I0906 18:43:17.929548   44719 config.go:182] Loaded profile config "functional-015911": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.0
I0906 18:43:17.930062   44719 cli_runner.go:164] Run: docker container inspect functional-015911 --format={{.State.Status}}
I0906 18:43:17.956482   44719 ssh_runner.go:195] Run: systemctl --version
I0906 18:43:17.956540   44719 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-015911
I0906 18:43:17.983527   44719 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/19576-2243/.minikube/machines/functional-015911/id_rsa Username:docker}
I0906 18:43:18.093817   44719 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.30s)
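The JSON form is the most convenient for post-processing; for example, listing every tag known to the runtime (sketch; entries with empty repoTags, such as the dashboard images above, are simply skipped by the iterator):

  out/minikube-linux-arm64 -p functional-015911 image ls --format json | jq -r '.[].repoTags[]'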

TestFunctional/parallel/ImageCommands/ImageListYaml (0.3s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:261: (dbg) Run:  out/minikube-linux-arm64 -p functional-015911 image ls --format yaml --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-arm64 -p functional-015911 image ls --format yaml --alsologtostderr:
- id: sha256:27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da
repoDigests:
- registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a
repoTags:
- registry.k8s.io/etcd:3.5.15-0
size: "66535646"
- id: sha256:cd0f0ae0ec9e0cdc092079156c122bf034ba3f24d31c1b1dd1b52a42ecf9b388
repoDigests:
- registry.k8s.io/kube-apiserver@sha256:470179274deb9dc3a81df55cfc24823ce153147d4ebf2ed649a4f271f51eaddf
repoTags:
- registry.k8s.io/kube-apiserver:v1.31.0
size: "25688321"
- id: sha256:fcb0683e6bdbd083710cf2d6fd7eb699c77fe4994c38a5c82d059e2e3cb4c2fd
repoDigests:
- registry.k8s.io/kube-controller-manager@sha256:f6f3c33dda209e8434b83dacf5244c03b59b0018d93325ff21296a142b68497d
repoTags:
- registry.k8s.io/kube-controller-manager:v1.31.0
size: "23947353"
- id: sha256:3d18732f8686cc3c878055d99a05fa80289502fa496b36b6a0fe0f77206a7300
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.3
size: "249461"
- id: sha256:20b332c9a70d8516d849d1ac23eff5800cbb2f263d379f0ec11ee908db6b25a8
repoDigests:
- docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93
repoTags: []
size: "74084559"
- id: sha256:a7f0891095482d5d1de486303f1f1f9835506a6be386ea795ef54ac6b2343bde
repoDigests: []
repoTags:
- docker.io/library/minikube-local-cache-test:functional-015911
size: "990"
- id: sha256:72565bf5bbedfb62e9d21afa2b1221b2c7a5e05b746dae33430bc550d3f87beb
repoDigests:
- registry.k8s.io/echoserver-arm@sha256:b33d4cdf6ed097f4e9b77b135d83a596ab73c6268b0342648818eb85f5edfdb5
repoTags:
- registry.k8s.io/echoserver-arm:1.8
size: "45324675"
- id: sha256:ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6
repoDigests:
- gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "8034419"
- id: sha256:fbbbd428abb4dae52ab3018797d00d5840a739f0cc5697b662791831a60b0adb
repoDigests:
- registry.k8s.io/kube-scheduler@sha256:96ddae9c9b2e79342e0551e2d2ec422c0c02629a74d928924aaa069706619808
repoTags:
- registry.k8s.io/kube-scheduler:v1.31.0
size: "18505843"
- id: sha256:8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a
repoDigests: []
repoTags:
- registry.k8s.io/pause:latest
size: "71300"
- id: sha256:a422e0e982356f6c1cf0e5bb7b733363caae3992a07c99951fbcc73e58ed656a
repoDigests:
- docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c
repoTags: []
size: "18306114"
- id: sha256:ce2d2cda2d858fdaea84129deb86d18e5dbf1c548f230b79fdca74cc91729d17
repoDigests: []
repoTags:
- docker.io/kicbase/echo-server:functional-015911
size: "2173567"
- id: sha256:6a23fa8fd2b78ab58e42ba273808edc936a9c53d8ac4a919f6337be094843a51
repoDigests:
- docker.io/kindest/kindnetd@sha256:e59a687ca28ae274a2fc92f1e2f5f1c739f353178a43a23aafc71adb802ed166
repoTags:
- docker.io/kindest/kindnetd:v20240813-c6f155d6
size: "33309097"
- id: sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8
repoDigests:
- registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a
repoTags:
- registry.k8s.io/pause:3.10
size: "267933"
- id: sha256:d5e283bc63d431d0446af8b48a1618696def3b777347a97b8b3553d2c989c806
repoDigests:
- docker.io/kindest/kindnetd@sha256:4067b91686869e19bac601aec305ba55d2e74cdcb91347869bfb4fd3a26cd3c3
repoTags:
- docker.io/kindest/kindnetd:v20240730-75a5af0c
size: "33305789"
- id: sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93
repoDigests:
- registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1
repoTags:
- registry.k8s.io/coredns/coredns:v1.11.1
size: "16482581"
- id: sha256:71d55d66fd4eec8986225089a135fadd96bc6624d987096808772ce1e1924d89
repoDigests:
- registry.k8s.io/kube-proxy@sha256:c727efb1c6f15a68060bf7f207f5c7a765355b7e3340c513e582ec819c5cd2fe
repoTags:
- registry.k8s.io/kube-proxy:v1.31.0
size: "26752334"
- id: sha256:8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.1
size: "262191"
- id: sha256:70594c812316a9bc20dd5d679982c6322dc7cf0128687ae9f849d0207783e753
repoDigests:
- docker.io/library/nginx@sha256:c04c18adc2a407740a397c8407c011fc6c90026a9b65cceddef7ae5484360158
repoTags:
- docker.io/library/nginx:alpine
size: "19627164"
- id: sha256:195245f0c79279e8b8e012efa02c91dad4cf7d0e44c0f4382fea68cd93088e6c
repoDigests:
- docker.io/library/nginx@sha256:04ba374043ccd2fc5c593885c0eacddebabd5ca375f9323666f28dfd5a9710e3
repoTags:
- docker.io/library/nginx:latest
size: "67695038"
- id: sha256:1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c
repoDigests:
- gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "1935750"

functional_test.go:269: (dbg) Stderr: out/minikube-linux-arm64 -p functional-015911 image ls --format yaml --alsologtostderr:
I0906 18:43:17.673153   44656 out.go:345] Setting OutFile to fd 1 ...
I0906 18:43:17.673353   44656 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0906 18:43:17.673365   44656 out.go:358] Setting ErrFile to fd 2...
I0906 18:43:17.673371   44656 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0906 18:43:17.673643   44656 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19576-2243/.minikube/bin
I0906 18:43:17.674282   44656 config.go:182] Loaded profile config "functional-015911": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.0
I0906 18:43:17.674442   44656 config.go:182] Loaded profile config "functional-015911": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.0
I0906 18:43:17.674946   44656 cli_runner.go:164] Run: docker container inspect functional-015911 --format={{.State.Status}}
I0906 18:43:17.697344   44656 ssh_runner.go:195] Run: systemctl --version
I0906 18:43:17.697410   44656 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-015911
I0906 18:43:17.731169   44656 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/19576-2243/.minikube/machines/functional-015911/id_rsa Username:docker}
I0906 18:43:17.821009   44656 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.30s)

TestFunctional/parallel/ImageCommands/ImageBuild (3.62s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:308: (dbg) Run:  out/minikube-linux-arm64 -p functional-015911 ssh pgrep buildkitd
functional_test.go:308: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-015911 ssh pgrep buildkitd: exit status 1 (263.587142ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:315: (dbg) Run:  out/minikube-linux-arm64 -p functional-015911 image build -t localhost/my-image:functional-015911 testdata/build --alsologtostderr
functional_test.go:315: (dbg) Done: out/minikube-linux-arm64 -p functional-015911 image build -t localhost/my-image:functional-015911 testdata/build --alsologtostderr: (3.126710737s)
functional_test.go:323: (dbg) Stderr: out/minikube-linux-arm64 -p functional-015911 image build -t localhost/my-image:functional-015911 testdata/build --alsologtostderr:
I0906 18:43:18.458591   44842 out.go:345] Setting OutFile to fd 1 ...
I0906 18:43:18.458821   44842 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0906 18:43:18.458949   44842 out.go:358] Setting ErrFile to fd 2...
I0906 18:43:18.458972   44842 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0906 18:43:18.459477   44842 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19576-2243/.minikube/bin
I0906 18:43:18.460135   44842 config.go:182] Loaded profile config "functional-015911": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.0
I0906 18:43:18.460821   44842 config.go:182] Loaded profile config "functional-015911": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.0
I0906 18:43:18.461324   44842 cli_runner.go:164] Run: docker container inspect functional-015911 --format={{.State.Status}}
I0906 18:43:18.479360   44842 ssh_runner.go:195] Run: systemctl --version
I0906 18:43:18.479416   44842 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-015911
I0906 18:43:18.496701   44842 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/19576-2243/.minikube/machines/functional-015911/id_rsa Username:docker}
I0906 18:43:18.584728   44842 build_images.go:161] Building image from path: /tmp/build.231605222.tar
I0906 18:43:18.584800   44842 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I0906 18:43:18.593990   44842 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.231605222.tar
I0906 18:43:18.597783   44842 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.231605222.tar: stat -c "%s %y" /var/lib/minikube/build/build.231605222.tar: Process exited with status 1
stdout:

stderr:
stat: cannot statx '/var/lib/minikube/build/build.231605222.tar': No such file or directory
I0906 18:43:18.597869   44842 ssh_runner.go:362] scp /tmp/build.231605222.tar --> /var/lib/minikube/build/build.231605222.tar (3072 bytes)
I0906 18:43:18.623337   44842 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.231605222
I0906 18:43:18.632046   44842 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.231605222 -xf /var/lib/minikube/build/build.231605222.tar
I0906 18:43:18.641329   44842 containerd.go:394] Building image: /var/lib/minikube/build/build.231605222
I0906 18:43:18.641405   44842 ssh_runner.go:195] Run: sudo buildctl build --frontend dockerfile.v0 --local context=/var/lib/minikube/build/build.231605222 --local dockerfile=/var/lib/minikube/build/build.231605222 --output type=image,name=localhost/my-image:functional-015911
#1 [internal] load build definition from Dockerfile
#1 transferring dockerfile: 97B done
#1 DONE 0.0s

#2 [internal] load metadata for gcr.io/k8s-minikube/busybox:latest
#2 DONE 1.1s

#3 [internal] load .dockerignore
#3 transferring context: 2B done
#3 DONE 0.0s

#4 [internal] load build context
#4 transferring context: 62B done
#4 DONE 0.0s

#5 [1/3] FROM gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b
#5 resolve gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b 0.0s done
#5 DONE 0.1s

#5 [1/3] FROM gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b
#5 sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34 0B / 828.50kB 0.2s
#5 sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34 155.65kB / 828.50kB 0.5s
#5 sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34 828.50kB / 828.50kB 0.5s done
#5 extracting sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34 0.1s done
#5 DONE 0.6s

#6 [2/3] RUN true
#6 DONE 0.6s

#7 [3/3] ADD content.txt /
#7 DONE 0.0s

#8 exporting to image
#8 exporting layers 0.1s done
#8 exporting manifest sha256:ae6b682c416d0f665820b0ee14031696ca3b6089b5f13b631d1baa43b38bc2b6
#8 exporting manifest sha256:ae6b682c416d0f665820b0ee14031696ca3b6089b5f13b631d1baa43b38bc2b6 0.0s done
#8 exporting config sha256:2918bd764dd02976b689c9b5b6155ac10797741ce92a3c28ce1eba7741a0aeb9 0.0s done
#8 naming to localhost/my-image:functional-015911 done
#8 DONE 0.2s
I0906 18:43:21.515413   44842 ssh_runner.go:235] Completed: sudo buildctl build --frontend dockerfile.v0 --local context=/var/lib/minikube/build/build.231605222 --local dockerfile=/var/lib/minikube/build/build.231605222 --output type=image,name=localhost/my-image:functional-015911: (2.873981469s)
I0906 18:43:21.515490   44842 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.231605222
I0906 18:43:21.526219   44842 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.231605222.tar
I0906 18:43:21.535858   44842 build_images.go:217] Built localhost/my-image:functional-015911 from /tmp/build.231605222.tar
I0906 18:43:21.535898   44842 build_images.go:133] succeeded building to: functional-015911
I0906 18:43:21.535906   44842 build_images.go:134] failed building to: 
functional_test.go:451: (dbg) Run:  out/minikube-linux-arm64 -p functional-015911 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (3.62s)
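From BuildKit steps #5-#7 above, testdata/build evidently amounts to a three-instruction Dockerfile equivalent to the following (reconstructed from the log, not copied from the repository):

  FROM gcr.io/k8s-minikube/busybox:latest
  RUN true
  ADD content.txt /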

TestFunctional/parallel/ImageCommands/Setup (0.94s)
=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:342: (dbg) Run:  docker pull kicbase/echo-server:1.0
functional_test.go:347: (dbg) Run:  docker tag kicbase/echo-server:1.0 kicbase/echo-server:functional-015911
--- PASS: TestFunctional/parallel/ImageCommands/Setup (0.94s)

TestFunctional/parallel/UpdateContextCmd/no_changes (0.19s)
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2119: (dbg) Run:  out/minikube-linux-arm64 -p functional-015911 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.19s)

TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.23s)
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2119: (dbg) Run:  out/minikube-linux-arm64 -p functional-015911 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.23s)

TestFunctional/parallel/UpdateContextCmd/no_clusters (0.19s)
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2119: (dbg) Run:  out/minikube-linux-arm64 -p functional-015911 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.19s)

TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.48s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:355: (dbg) Run:  out/minikube-linux-arm64 -p functional-015911 image load --daemon kicbase/echo-server:functional-015911 --alsologtostderr
functional_test.go:355: (dbg) Done: out/minikube-linux-arm64 -p functional-015911 image load --daemon kicbase/echo-server:functional-015911 --alsologtostderr: (1.189484062s)
functional_test.go:451: (dbg) Run:  out/minikube-linux-arm64 -p functional-015911 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.48s)

TestFunctional/parallel/ImageCommands/ImageReloadDaemon (1.38s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:365: (dbg) Run:  out/minikube-linux-arm64 -p functional-015911 image load --daemon kicbase/echo-server:functional-015911 --alsologtostderr
functional_test.go:365: (dbg) Done: out/minikube-linux-arm64 -p functional-015911 image load --daemon kicbase/echo-server:functional-015911 --alsologtostderr: (1.062658327s)
functional_test.go:451: (dbg) Run:  out/minikube-linux-arm64 -p functional-015911 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (1.38s)

TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.5s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:235: (dbg) Run:  docker pull kicbase/echo-server:latest
functional_test.go:240: (dbg) Run:  docker tag kicbase/echo-server:latest kicbase/echo-server:functional-015911
functional_test.go:245: (dbg) Run:  out/minikube-linux-arm64 -p functional-015911 image load --daemon kicbase/echo-server:functional-015911 --alsologtostderr
functional_test.go:245: (dbg) Done: out/minikube-linux-arm64 -p functional-015911 image load --daemon kicbase/echo-server:functional-015911 --alsologtostderr: (1.00960564s)
functional_test.go:451: (dbg) Run:  out/minikube-linux-arm64 -p functional-015911 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.50s)

TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.36s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:380: (dbg) Run:  out/minikube-linux-arm64 -p functional-015911 image save kicbase/echo-server:functional-015911 /home/jenkins/workspace/Docker_Linux_containerd_arm64/echo-server-save.tar --alsologtostderr
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.36s)

TestFunctional/parallel/ImageCommands/ImageRemove (0.44s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:392: (dbg) Run:  out/minikube-linux-arm64 -p functional-015911 image rm kicbase/echo-server:functional-015911 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-linux-arm64 -p functional-015911 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.44s)

TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.67s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:409: (dbg) Run:  out/minikube-linux-arm64 -p functional-015911 image load /home/jenkins/workspace/Docker_Linux_containerd_arm64/echo-server-save.tar --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-linux-arm64 -p functional-015911 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.67s)
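Together with ImageSaveToFile above this closes the archive round trip; the equivalent manual sequence (sketch, with the archive path replaced by a placeholder):

  out/minikube-linux-arm64 -p functional-015911 image save kicbase/echo-server:functional-015911 /tmp/echo-server.tar
  out/minikube-linux-arm64 -p functional-015911 image load /tmp/echo-server.tar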

TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.39s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:419: (dbg) Run:  docker rmi kicbase/echo-server:functional-015911
functional_test.go:424: (dbg) Run:  out/minikube-linux-arm64 -p functional-015911 image save --daemon kicbase/echo-server:functional-015911 --alsologtostderr
functional_test.go:432: (dbg) Run:  docker image inspect kicbase/echo-server:functional-015911
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.39s)

TestFunctional/delete_echo-server_images (0.04s)
=== RUN   TestFunctional/delete_echo-server_images
functional_test.go:190: (dbg) Run:  docker rmi -f kicbase/echo-server:1.0
functional_test.go:190: (dbg) Run:  docker rmi -f kicbase/echo-server:functional-015911
--- PASS: TestFunctional/delete_echo-server_images (0.04s)

TestFunctional/delete_my-image_image (0.02s)
=== RUN   TestFunctional/delete_my-image_image
functional_test.go:198: (dbg) Run:  docker rmi -f localhost/my-image:functional-015911
--- PASS: TestFunctional/delete_my-image_image (0.02s)

TestFunctional/delete_minikube_cached_images (0.01s)
=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:206: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-015911
--- PASS: TestFunctional/delete_minikube_cached_images (0.01s)

TestMultiControlPlane/serial/StartCluster (121.48s)
=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-linux-arm64 start -p ha-292867 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=docker  --container-runtime=containerd
E0906 18:43:25.554145    7647 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19576-2243/.minikube/profiles/addons-663433/client.crt: no such file or directory" logger="UnhandledError"
E0906 18:43:28.115760    7647 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19576-2243/.minikube/profiles/addons-663433/client.crt: no such file or directory" logger="UnhandledError"
E0906 18:43:33.237028    7647 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19576-2243/.minikube/profiles/addons-663433/client.crt: no such file or directory" logger="UnhandledError"
E0906 18:43:43.479192    7647 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19576-2243/.minikube/profiles/addons-663433/client.crt: no such file or directory" logger="UnhandledError"
E0906 18:44:03.961424    7647 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19576-2243/.minikube/profiles/addons-663433/client.crt: no such file or directory" logger="UnhandledError"
E0906 18:44:44.922942    7647 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19576-2243/.minikube/profiles/addons-663433/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:101: (dbg) Done: out/minikube-linux-arm64 start -p ha-292867 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=docker  --container-runtime=containerd: (2m0.626542465s)
ha_test.go:107: (dbg) Run:  out/minikube-linux-arm64 -p ha-292867 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/StartCluster (121.48s)
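The --ha flag is what makes this a multi-control-plane start (minikube provisions three control-plane nodes under --ha at the time of writing, hence the roughly two-minute startup); reduced to its essentials (sketch):

  out/minikube-linux-arm64 start -p ha-292867 --ha --wait=true --memory=2200 --driver=docker --container-runtime=containerd
  out/minikube-linux-arm64 -p ha-292867 status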

TestMultiControlPlane/serial/DeployApp (33.27s)
=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-292867 -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:133: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-292867 -- rollout status deployment/busybox
ha_test.go:133: (dbg) Done: out/minikube-linux-arm64 kubectl -p ha-292867 -- rollout status deployment/busybox: (30.43192354s)
ha_test.go:140: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-292867 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:163: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-292867 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:171: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-292867 -- exec busybox-7dff88458-qmrx2 -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-292867 -- exec busybox-7dff88458-tqx4q -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-292867 -- exec busybox-7dff88458-xlxch -- nslookup kubernetes.io
ha_test.go:181: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-292867 -- exec busybox-7dff88458-qmrx2 -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-292867 -- exec busybox-7dff88458-tqx4q -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-292867 -- exec busybox-7dff88458-xlxch -- nslookup kubernetes.default
ha_test.go:189: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-292867 -- exec busybox-7dff88458-qmrx2 -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-292867 -- exec busybox-7dff88458-tqx4q -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-292867 -- exec busybox-7dff88458-xlxch -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiControlPlane/serial/DeployApp (33.27s)
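Note: the DNS checks above follow a fixed pattern; a sketch, assuming the busybox deployment from testdata/ha/ha-pod-dns-test.yaml is already applied, with <pod> standing in for one of the names returned by the get pods query (pod names differ per run; the harness goes through "minikube kubectl -p ha-292867 --", which targets the same cluster as kubectl with the profile's context):

    $ kubectl --context ha-292867 rollout status deployment/busybox
    $ kubectl --context ha-292867 get pods -o jsonpath='{.items[*].metadata.name}'
    # each pod must resolve an external name and the in-cluster service name
    $ kubectl --context ha-292867 exec <pod> -- nslookup kubernetes.io
    $ kubectl --context ha-292867 exec <pod> -- nslookup kubernetes.default.svc.cluster.local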
TestMultiControlPlane/serial/PingHostFromPods (2.14s)
=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-292867 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:207: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-292867 -- exec busybox-7dff88458-qmrx2 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-292867 -- exec busybox-7dff88458-qmrx2 -- sh -c "ping -c 1 192.168.49.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-292867 -- exec busybox-7dff88458-tqx4q -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-292867 -- exec busybox-7dff88458-tqx4q -- sh -c "ping -c 1 192.168.49.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-292867 -- exec busybox-7dff88458-xlxch -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-292867 -- exec busybox-7dff88458-xlxch -- sh -c "ping -c 1 192.168.49.1"
--- PASS: TestMultiControlPlane/serial/PingHostFromPods (2.14s)
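Note: the pipeline above pulls the resolved address for host.minikube.internal out of busybox nslookup output (awk 'NR==5' takes the fifth line, cut the third field) and then pings it to confirm pod-to-host connectivity; in this run the pinged address is 192.168.49.1, the host gateway on the default docker network. A sketch, with <pod> as a placeholder pod name:

    $ kubectl --context ha-292867 exec <pod> -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
    $ kubectl --context ha-292867 exec <pod> -- sh -c "ping -c 1 192.168.49.1"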
TestMultiControlPlane/serial/AddWorkerNode (21.76s)
=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-linux-arm64 node add -p ha-292867 -v=7 --alsologtostderr
E0906 18:46:06.845871    7647 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19576-2243/.minikube/profiles/addons-663433/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:228: (dbg) Done: out/minikube-linux-arm64 node add -p ha-292867 -v=7 --alsologtostderr: (20.793698175s)
ha_test.go:234: (dbg) Run:  out/minikube-linux-arm64 -p ha-292867 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/AddWorkerNode (21.76s)
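Note: node add with no extra flags joins a worker (ha-292867-m04 here); the same command with --control-plane, exercised in AddSecondaryNode below, joins another control-plane node instead. A sketch:

    $ minikube node add -p ha-292867                    # join a worker node
    $ minikube node add -p ha-292867 --control-plane    # join a control-plane node
    $ minikube -p ha-292867 status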
TestMultiControlPlane/serial/NodeLabels (0.1s)
=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-292867 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiControlPlane/serial/NodeLabels (0.10s)
TestMultiControlPlane/serial/HAppyAfterClusterStart (0.76s)
=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterClusterStart (0.76s)
TestMultiControlPlane/serial/CopyFile (18.93s)
=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:326: (dbg) Run:  out/minikube-linux-arm64 -p ha-292867 status --output json -v=7 --alsologtostderr
ha_test.go:326: (dbg) Done: out/minikube-linux-arm64 -p ha-292867 status --output json -v=7 --alsologtostderr: (1.017562794s)
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-292867 cp testdata/cp-test.txt ha-292867:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-292867 ssh -n ha-292867 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-292867 cp ha-292867:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile3560130288/001/cp-test_ha-292867.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-292867 ssh -n ha-292867 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-292867 cp ha-292867:/home/docker/cp-test.txt ha-292867-m02:/home/docker/cp-test_ha-292867_ha-292867-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-292867 ssh -n ha-292867 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-292867 ssh -n ha-292867-m02 "sudo cat /home/docker/cp-test_ha-292867_ha-292867-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-292867 cp ha-292867:/home/docker/cp-test.txt ha-292867-m03:/home/docker/cp-test_ha-292867_ha-292867-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-292867 ssh -n ha-292867 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-292867 ssh -n ha-292867-m03 "sudo cat /home/docker/cp-test_ha-292867_ha-292867-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-292867 cp ha-292867:/home/docker/cp-test.txt ha-292867-m04:/home/docker/cp-test_ha-292867_ha-292867-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-292867 ssh -n ha-292867 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-292867 ssh -n ha-292867-m04 "sudo cat /home/docker/cp-test_ha-292867_ha-292867-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-292867 cp testdata/cp-test.txt ha-292867-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-292867 ssh -n ha-292867-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-292867 cp ha-292867-m02:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile3560130288/001/cp-test_ha-292867-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-292867 ssh -n ha-292867-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-292867 cp ha-292867-m02:/home/docker/cp-test.txt ha-292867:/home/docker/cp-test_ha-292867-m02_ha-292867.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-292867 ssh -n ha-292867-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-292867 ssh -n ha-292867 "sudo cat /home/docker/cp-test_ha-292867-m02_ha-292867.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-292867 cp ha-292867-m02:/home/docker/cp-test.txt ha-292867-m03:/home/docker/cp-test_ha-292867-m02_ha-292867-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-292867 ssh -n ha-292867-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-292867 ssh -n ha-292867-m03 "sudo cat /home/docker/cp-test_ha-292867-m02_ha-292867-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-292867 cp ha-292867-m02:/home/docker/cp-test.txt ha-292867-m04:/home/docker/cp-test_ha-292867-m02_ha-292867-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-292867 ssh -n ha-292867-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-292867 ssh -n ha-292867-m04 "sudo cat /home/docker/cp-test_ha-292867-m02_ha-292867-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-292867 cp testdata/cp-test.txt ha-292867-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-292867 ssh -n ha-292867-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-292867 cp ha-292867-m03:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile3560130288/001/cp-test_ha-292867-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-292867 ssh -n ha-292867-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-292867 cp ha-292867-m03:/home/docker/cp-test.txt ha-292867:/home/docker/cp-test_ha-292867-m03_ha-292867.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-292867 ssh -n ha-292867-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-292867 ssh -n ha-292867 "sudo cat /home/docker/cp-test_ha-292867-m03_ha-292867.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-292867 cp ha-292867-m03:/home/docker/cp-test.txt ha-292867-m02:/home/docker/cp-test_ha-292867-m03_ha-292867-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-292867 ssh -n ha-292867-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-292867 ssh -n ha-292867-m02 "sudo cat /home/docker/cp-test_ha-292867-m03_ha-292867-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-292867 cp ha-292867-m03:/home/docker/cp-test.txt ha-292867-m04:/home/docker/cp-test_ha-292867-m03_ha-292867-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-292867 ssh -n ha-292867-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-292867 ssh -n ha-292867-m04 "sudo cat /home/docker/cp-test_ha-292867-m03_ha-292867-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-292867 cp testdata/cp-test.txt ha-292867-m04:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-292867 ssh -n ha-292867-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-292867 cp ha-292867-m04:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile3560130288/001/cp-test_ha-292867-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-292867 ssh -n ha-292867-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-292867 cp ha-292867-m04:/home/docker/cp-test.txt ha-292867:/home/docker/cp-test_ha-292867-m04_ha-292867.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-292867 ssh -n ha-292867-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-292867 ssh -n ha-292867 "sudo cat /home/docker/cp-test_ha-292867-m04_ha-292867.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-292867 cp ha-292867-m04:/home/docker/cp-test.txt ha-292867-m02:/home/docker/cp-test_ha-292867-m04_ha-292867-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-292867 ssh -n ha-292867-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-292867 ssh -n ha-292867-m02 "sudo cat /home/docker/cp-test_ha-292867-m04_ha-292867-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-292867 cp ha-292867-m04:/home/docker/cp-test.txt ha-292867-m03:/home/docker/cp-test_ha-292867-m04_ha-292867-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-292867 ssh -n ha-292867-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-292867 ssh -n ha-292867-m03 "sudo cat /home/docker/cp-test_ha-292867-m04_ha-292867-m03.txt"
--- PASS: TestMultiControlPlane/serial/CopyFile (18.93s)
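Note: the matrix above exercises every copy direction of minikube cp; condensed to one example per direction, with the round trip verified over ssh (paths kept from the run):

    $ minikube -p ha-292867 cp testdata/cp-test.txt ha-292867:/home/docker/cp-test.txt    # host -> node
    $ minikube -p ha-292867 cp ha-292867:/home/docker/cp-test.txt /tmp/cp-test.txt        # node -> host
    $ minikube -p ha-292867 cp ha-292867:/home/docker/cp-test.txt ha-292867-m02:/home/docker/cp-test.txt   # node -> node
    $ minikube -p ha-292867 ssh -n ha-292867-m02 "sudo cat /home/docker/cp-test.txt"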
TestMultiControlPlane/serial/StopSecondaryNode (12.88s)
=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:363: (dbg) Run:  out/minikube-linux-arm64 -p ha-292867 node stop m02 -v=7 --alsologtostderr
ha_test.go:363: (dbg) Done: out/minikube-linux-arm64 -p ha-292867 node stop m02 -v=7 --alsologtostderr: (12.150193746s)
ha_test.go:369: (dbg) Run:  out/minikube-linux-arm64 -p ha-292867 status -v=7 --alsologtostderr
ha_test.go:369: (dbg) Non-zero exit: out/minikube-linux-arm64 -p ha-292867 status -v=7 --alsologtostderr: exit status 7 (726.602739ms)
-- stdout --
	ha-292867
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-292867-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-292867-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-292867-m04
	type: Worker
	host: Running
	kubelet: Running
-- /stdout --
** stderr ** 
	I0906 18:46:55.218570   61105 out.go:345] Setting OutFile to fd 1 ...
	I0906 18:46:55.218726   61105 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0906 18:46:55.218738   61105 out.go:358] Setting ErrFile to fd 2...
	I0906 18:46:55.218744   61105 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0906 18:46:55.219011   61105 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19576-2243/.minikube/bin
	I0906 18:46:55.219226   61105 out.go:352] Setting JSON to false
	I0906 18:46:55.219264   61105 mustload.go:65] Loading cluster: ha-292867
	I0906 18:46:55.219439   61105 notify.go:220] Checking for updates...
	I0906 18:46:55.219683   61105 config.go:182] Loaded profile config "ha-292867": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.0
	I0906 18:46:55.219702   61105 status.go:255] checking status of ha-292867 ...
	I0906 18:46:55.220302   61105 cli_runner.go:164] Run: docker container inspect ha-292867 --format={{.State.Status}}
	I0906 18:46:55.239225   61105 status.go:330] ha-292867 host status = "Running" (err=<nil>)
	I0906 18:46:55.239251   61105 host.go:66] Checking if "ha-292867" exists ...
	I0906 18:46:55.239551   61105 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-292867
	I0906 18:46:55.265643   61105 host.go:66] Checking if "ha-292867" exists ...
	I0906 18:46:55.265952   61105 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0906 18:46:55.266005   61105 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-292867
	I0906 18:46:55.294320   61105 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32788 SSHKeyPath:/home/jenkins/minikube-integration/19576-2243/.minikube/machines/ha-292867/id_rsa Username:docker}
	I0906 18:46:55.381543   61105 ssh_runner.go:195] Run: systemctl --version
	I0906 18:46:55.385858   61105 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0906 18:46:55.397709   61105 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0906 18:46:55.471781   61105 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:48 OomKillDisable:true NGoroutines:71 SystemTime:2024-09-06 18:46:55.461711617 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1068-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:27.2.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:472731909fa34bd7bc9c087e4c27943f9835f111 Expected:472731909fa34bd7bc9c087e4c27943f9835f111} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.2]] Warnings:<nil>}}
	I0906 18:46:55.472357   61105 kubeconfig.go:125] found "ha-292867" server: "https://192.168.49.254:8443"
	I0906 18:46:55.472389   61105 api_server.go:166] Checking apiserver status ...
	I0906 18:46:55.472482   61105 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 18:46:55.485361   61105 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1465/cgroup
	I0906 18:46:55.494854   61105 api_server.go:182] apiserver freezer: "7:freezer:/docker/9aad3e663c034454fb8683eff693492d31431602e6cd5479d5342153523143dd/kubepods/burstable/pod7153e5c719d065a2e19f7047a072bc1c/63b482ba3d7ca93c0a4109a1c00b7a4c8469995e6a2cd7c163fd737964bc97bc"
	I0906 18:46:55.494930   61105 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/9aad3e663c034454fb8683eff693492d31431602e6cd5479d5342153523143dd/kubepods/burstable/pod7153e5c719d065a2e19f7047a072bc1c/63b482ba3d7ca93c0a4109a1c00b7a4c8469995e6a2cd7c163fd737964bc97bc/freezer.state
	I0906 18:46:55.505443   61105 api_server.go:204] freezer state: "THAWED"
	I0906 18:46:55.505478   61105 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I0906 18:46:55.513222   61105 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I0906 18:46:55.513251   61105 status.go:422] ha-292867 apiserver status = Running (err=<nil>)
	I0906 18:46:55.513263   61105 status.go:257] ha-292867 status: &{Name:ha-292867 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0906 18:46:55.513280   61105 status.go:255] checking status of ha-292867-m02 ...
	I0906 18:46:55.513600   61105 cli_runner.go:164] Run: docker container inspect ha-292867-m02 --format={{.State.Status}}
	I0906 18:46:55.530984   61105 status.go:330] ha-292867-m02 host status = "Stopped" (err=<nil>)
	I0906 18:46:55.531008   61105 status.go:343] host is not running, skipping remaining checks
	I0906 18:46:55.531029   61105 status.go:257] ha-292867-m02 status: &{Name:ha-292867-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0906 18:46:55.531048   61105 status.go:255] checking status of ha-292867-m03 ...
	I0906 18:46:55.531389   61105 cli_runner.go:164] Run: docker container inspect ha-292867-m03 --format={{.State.Status}}
	I0906 18:46:55.547931   61105 status.go:330] ha-292867-m03 host status = "Running" (err=<nil>)
	I0906 18:46:55.547986   61105 host.go:66] Checking if "ha-292867-m03" exists ...
	I0906 18:46:55.548307   61105 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-292867-m03
	I0906 18:46:55.565626   61105 host.go:66] Checking if "ha-292867-m03" exists ...
	I0906 18:46:55.565967   61105 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0906 18:46:55.566014   61105 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-292867-m03
	I0906 18:46:55.583614   61105 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32798 SSHKeyPath:/home/jenkins/minikube-integration/19576-2243/.minikube/machines/ha-292867-m03/id_rsa Username:docker}
	I0906 18:46:55.669792   61105 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0906 18:46:55.681857   61105 kubeconfig.go:125] found "ha-292867" server: "https://192.168.49.254:8443"
	I0906 18:46:55.681884   61105 api_server.go:166] Checking apiserver status ...
	I0906 18:46:55.681929   61105 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 18:46:55.692650   61105 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1400/cgroup
	I0906 18:46:55.702037   61105 api_server.go:182] apiserver freezer: "7:freezer:/docker/7e1aa3300c45021c24582c6724b9995958203bde5fa7c7d44f81b2bb64cd2e13/kubepods/burstable/pode7fa749bbfbe9592364ca10bddb13fbd/2754a597cfb230b5c6a43c6411e0248f22691eb279d9215fe1a9139033577cac"
	I0906 18:46:55.702139   61105 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/7e1aa3300c45021c24582c6724b9995958203bde5fa7c7d44f81b2bb64cd2e13/kubepods/burstable/pode7fa749bbfbe9592364ca10bddb13fbd/2754a597cfb230b5c6a43c6411e0248f22691eb279d9215fe1a9139033577cac/freezer.state
	I0906 18:46:55.713858   61105 api_server.go:204] freezer state: "THAWED"
	I0906 18:46:55.713892   61105 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I0906 18:46:55.721750   61105 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I0906 18:46:55.721778   61105 status.go:422] ha-292867-m03 apiserver status = Running (err=<nil>)
	I0906 18:46:55.721788   61105 status.go:257] ha-292867-m03 status: &{Name:ha-292867-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0906 18:46:55.721805   61105 status.go:255] checking status of ha-292867-m04 ...
	I0906 18:46:55.722129   61105 cli_runner.go:164] Run: docker container inspect ha-292867-m04 --format={{.State.Status}}
	I0906 18:46:55.738004   61105 status.go:330] ha-292867-m04 host status = "Running" (err=<nil>)
	I0906 18:46:55.738030   61105 host.go:66] Checking if "ha-292867-m04" exists ...
	I0906 18:46:55.738374   61105 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-292867-m04
	I0906 18:46:55.754766   61105 host.go:66] Checking if "ha-292867-m04" exists ...
	I0906 18:46:55.755086   61105 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0906 18:46:55.755131   61105 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-292867-m04
	I0906 18:46:55.781804   61105 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32803 SSHKeyPath:/home/jenkins/minikube-integration/19576-2243/.minikube/machines/ha-292867-m04/id_rsa Username:docker}
	I0906 18:46:55.869566   61105 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0906 18:46:55.881686   61105 status.go:257] ha-292867-m04 status: &{Name:ha-292867-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
** /stderr **
--- PASS: TestMultiControlPlane/serial/StopSecondaryNode (12.88s)
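Note: with one control plane stopped, status exits non-zero (exit status 7 above) while stdout still lists per-node state, so scripts can both read the text and branch on the exit code. A sketch:

    $ minikube -p ha-292867 node stop m02
    $ minikube -p ha-292867 status ; echo "exit=$?"   # per-node state, then exit=7 with m02 stopped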
TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.57s)
=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:390: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.57s)
TestMultiControlPlane/serial/RestartSecondaryNode (17.95s)
=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:420: (dbg) Run:  out/minikube-linux-arm64 -p ha-292867 node start m02 -v=7 --alsologtostderr
ha_test.go:420: (dbg) Done: out/minikube-linux-arm64 -p ha-292867 node start m02 -v=7 --alsologtostderr: (16.822188849s)
ha_test.go:428: (dbg) Run:  out/minikube-linux-arm64 -p ha-292867 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Done: out/minikube-linux-arm64 -p ha-292867 status -v=7 --alsologtostderr: (1.039673654s)
ha_test.go:448: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiControlPlane/serial/RestartSecondaryNode (17.95s)
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.87s)
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.87s)
TestMultiControlPlane/serial/RestartClusterKeepsNodes (143.85s)
=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:456: (dbg) Run:  out/minikube-linux-arm64 node list -p ha-292867 -v=7 --alsologtostderr
ha_test.go:462: (dbg) Run:  out/minikube-linux-arm64 stop -p ha-292867 -v=7 --alsologtostderr
E0906 18:47:29.638326    7647 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19576-2243/.minikube/profiles/functional-015911/client.crt: no such file or directory" logger="UnhandledError"
E0906 18:47:29.644805    7647 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19576-2243/.minikube/profiles/functional-015911/client.crt: no such file or directory" logger="UnhandledError"
E0906 18:47:29.656237    7647 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19576-2243/.minikube/profiles/functional-015911/client.crt: no such file or directory" logger="UnhandledError"
E0906 18:47:29.677703    7647 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19576-2243/.minikube/profiles/functional-015911/client.crt: no such file or directory" logger="UnhandledError"
E0906 18:47:29.719160    7647 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19576-2243/.minikube/profiles/functional-015911/client.crt: no such file or directory" logger="UnhandledError"
E0906 18:47:29.800700    7647 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19576-2243/.minikube/profiles/functional-015911/client.crt: no such file or directory" logger="UnhandledError"
E0906 18:47:29.962050    7647 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19576-2243/.minikube/profiles/functional-015911/client.crt: no such file or directory" logger="UnhandledError"
E0906 18:47:30.283967    7647 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19576-2243/.minikube/profiles/functional-015911/client.crt: no such file or directory" logger="UnhandledError"
E0906 18:47:30.925401    7647 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19576-2243/.minikube/profiles/functional-015911/client.crt: no such file or directory" logger="UnhandledError"
E0906 18:47:32.207056    7647 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19576-2243/.minikube/profiles/functional-015911/client.crt: no such file or directory" logger="UnhandledError"
E0906 18:47:34.768997    7647 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19576-2243/.minikube/profiles/functional-015911/client.crt: no such file or directory" logger="UnhandledError"
E0906 18:47:39.890351    7647 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19576-2243/.minikube/profiles/functional-015911/client.crt: no such file or directory" logger="UnhandledError"
E0906 18:47:50.132338    7647 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19576-2243/.minikube/profiles/functional-015911/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:462: (dbg) Done: out/minikube-linux-arm64 stop -p ha-292867 -v=7 --alsologtostderr: (37.565558223s)
ha_test.go:467: (dbg) Run:  out/minikube-linux-arm64 start -p ha-292867 --wait=true -v=7 --alsologtostderr
E0906 18:48:10.613747    7647 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19576-2243/.minikube/profiles/functional-015911/client.crt: no such file or directory" logger="UnhandledError"
E0906 18:48:22.974316    7647 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19576-2243/.minikube/profiles/addons-663433/client.crt: no such file or directory" logger="UnhandledError"
E0906 18:48:50.687598    7647 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19576-2243/.minikube/profiles/addons-663433/client.crt: no such file or directory" logger="UnhandledError"
E0906 18:48:51.575190    7647 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19576-2243/.minikube/profiles/functional-015911/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:467: (dbg) Done: out/minikube-linux-arm64 start -p ha-292867 --wait=true -v=7 --alsologtostderr: (1m46.137821784s)
ha_test.go:472: (dbg) Run:  out/minikube-linux-arm64 node list -p ha-292867
--- PASS: TestMultiControlPlane/serial/RestartClusterKeepsNodes (143.85s)
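Note: the stop/start cycle above verifies that a full restart preserves the node set; a sketch of the same sequence:

    $ minikube node list -p ha-292867       # record the node set
    $ minikube stop -p ha-292867
    $ minikube start -p ha-292867 --wait=true
    $ minikube node list -p ha-292867       # the same four nodes should be listed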
TestMultiControlPlane/serial/DeleteSecondaryNode (11.12s)
=== RUN   TestMultiControlPlane/serial/DeleteSecondaryNode
ha_test.go:487: (dbg) Run:  out/minikube-linux-arm64 -p ha-292867 node delete m03 -v=7 --alsologtostderr
ha_test.go:487: (dbg) Done: out/minikube-linux-arm64 -p ha-292867 node delete m03 -v=7 --alsologtostderr: (10.179085761s)
ha_test.go:493: (dbg) Run:  out/minikube-linux-arm64 -p ha-292867 status -v=7 --alsologtostderr
ha_test.go:511: (dbg) Run:  kubectl get nodes
ha_test.go:519: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/DeleteSecondaryNode (11.12s)
TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.51s)
=== RUN   TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:390: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.51s)
TestMultiControlPlane/serial/StopCluster (36.04s)
=== RUN   TestMultiControlPlane/serial/StopCluster
ha_test.go:531: (dbg) Run:  out/minikube-linux-arm64 -p ha-292867 stop -v=7 --alsologtostderr
E0906 18:50:13.496999    7647 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19576-2243/.minikube/profiles/functional-015911/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:531: (dbg) Done: out/minikube-linux-arm64 -p ha-292867 stop -v=7 --alsologtostderr: (35.930549371s)
ha_test.go:537: (dbg) Run:  out/minikube-linux-arm64 -p ha-292867 status -v=7 --alsologtostderr
ha_test.go:537: (dbg) Non-zero exit: out/minikube-linux-arm64 -p ha-292867 status -v=7 --alsologtostderr: exit status 7 (112.469657ms)
-- stdout --
	ha-292867
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-292867-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-292867-m04
	type: Worker
	host: Stopped
	kubelet: Stopped
-- /stdout --
** stderr ** 
	I0906 18:50:26.723637   75399 out.go:345] Setting OutFile to fd 1 ...
	I0906 18:50:26.723795   75399 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0906 18:50:26.723805   75399 out.go:358] Setting ErrFile to fd 2...
	I0906 18:50:26.723812   75399 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0906 18:50:26.724131   75399 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19576-2243/.minikube/bin
	I0906 18:50:26.724363   75399 out.go:352] Setting JSON to false
	I0906 18:50:26.724413   75399 mustload.go:65] Loading cluster: ha-292867
	I0906 18:50:26.725077   75399 notify.go:220] Checking for updates...
	I0906 18:50:26.725215   75399 config.go:182] Loaded profile config "ha-292867": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.0
	I0906 18:50:26.725239   75399 status.go:255] checking status of ha-292867 ...
	I0906 18:50:26.725734   75399 cli_runner.go:164] Run: docker container inspect ha-292867 --format={{.State.Status}}
	I0906 18:50:26.745115   75399 status.go:330] ha-292867 host status = "Stopped" (err=<nil>)
	I0906 18:50:26.745137   75399 status.go:343] host is not running, skipping remaining checks
	I0906 18:50:26.745144   75399 status.go:257] ha-292867 status: &{Name:ha-292867 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0906 18:50:26.745172   75399 status.go:255] checking status of ha-292867-m02 ...
	I0906 18:50:26.745491   75399 cli_runner.go:164] Run: docker container inspect ha-292867-m02 --format={{.State.Status}}
	I0906 18:50:26.769466   75399 status.go:330] ha-292867-m02 host status = "Stopped" (err=<nil>)
	I0906 18:50:26.769489   75399 status.go:343] host is not running, skipping remaining checks
	I0906 18:50:26.769495   75399 status.go:257] ha-292867-m02 status: &{Name:ha-292867-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0906 18:50:26.769513   75399 status.go:255] checking status of ha-292867-m04 ...
	I0906 18:50:26.769798   75399 cli_runner.go:164] Run: docker container inspect ha-292867-m04 --format={{.State.Status}}
	I0906 18:50:26.787549   75399 status.go:330] ha-292867-m04 host status = "Stopped" (err=<nil>)
	I0906 18:50:26.787572   75399 status.go:343] host is not running, skipping remaining checks
	I0906 18:50:26.787579   75399 status.go:257] ha-292867-m04 status: &{Name:ha-292867-m04 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}
** /stderr **
--- PASS: TestMultiControlPlane/serial/StopCluster (36.04s)
TestMultiControlPlane/serial/RestartCluster (78.39s)
=== RUN   TestMultiControlPlane/serial/RestartCluster
ha_test.go:560: (dbg) Run:  out/minikube-linux-arm64 start -p ha-292867 --wait=true -v=7 --alsologtostderr --driver=docker  --container-runtime=containerd
ha_test.go:560: (dbg) Done: out/minikube-linux-arm64 start -p ha-292867 --wait=true -v=7 --alsologtostderr --driver=docker  --container-runtime=containerd: (1m17.345335341s)
ha_test.go:566: (dbg) Run:  out/minikube-linux-arm64 -p ha-292867 status -v=7 --alsologtostderr
ha_test.go:584: (dbg) Run:  kubectl get nodes
ha_test.go:592: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/RestartCluster (78.39s)
TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.61s)
=== RUN   TestMultiControlPlane/serial/DegradedAfterClusterRestart
ha_test.go:390: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.61s)
TestMultiControlPlane/serial/AddSecondaryNode (38.75s)
=== RUN   TestMultiControlPlane/serial/AddSecondaryNode
ha_test.go:605: (dbg) Run:  out/minikube-linux-arm64 node add -p ha-292867 --control-plane -v=7 --alsologtostderr
ha_test.go:605: (dbg) Done: out/minikube-linux-arm64 node add -p ha-292867 --control-plane -v=7 --alsologtostderr: (37.739274778s)
ha_test.go:611: (dbg) Run:  out/minikube-linux-arm64 -p ha-292867 status -v=7 --alsologtostderr
ha_test.go:611: (dbg) Done: out/minikube-linux-arm64 -p ha-292867 status -v=7 --alsologtostderr: (1.007241135s)
--- PASS: TestMultiControlPlane/serial/AddSecondaryNode (38.75s)
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.79s)
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd
ha_test.go:281: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.79s)
TestJSONOutput/start/Command (50.52s)
=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 start -p json-output-434997 --output=json --user=testUser --memory=2200 --wait=true --driver=docker  --container-runtime=containerd
E0906 18:52:57.340547    7647 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19576-2243/.minikube/profiles/functional-015911/client.crt: no such file or directory" logger="UnhandledError"
E0906 18:53:22.974036    7647 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19576-2243/.minikube/profiles/addons-663433/client.crt: no such file or directory" logger="UnhandledError"
json_output_test.go:63: (dbg) Done: out/minikube-linux-arm64 start -p json-output-434997 --output=json --user=testUser --memory=2200 --wait=true --driver=docker  --container-runtime=containerd: (50.519588194s)
--- PASS: TestJSONOutput/start/Command (50.52s)
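Note: with --output=json, each line minikube writes to stdout is a single CloudEvents-style JSON object (see the TestErrorJSONOutput stdout below for the shape), so the stream can be filtered line by line; a sketch, assuming jq is available:

    $ minikube start -p json-output-434997 --output=json --user=testUser --memory=2200 --wait=true \
        --driver=docker --container-runtime=containerd \
      | jq -r 'select(.type == "io.k8s.sigs.minikube.step") | .data.message'   # progress steps only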
TestJSONOutput/start/Audit (0s)
=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)
TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)
=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)
TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)
=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)
TestJSONOutput/pause/Command (0.77s)
=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 pause -p json-output-434997 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.77s)
TestJSONOutput/pause/Audit (0s)
=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)
TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)
=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)
TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)
=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)
TestJSONOutput/unpause/Command (0.69s)
=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 unpause -p json-output-434997 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.69s)
TestJSONOutput/unpause/Audit (0s)
=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)
TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)
=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)
TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)
=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)
TestJSONOutput/stop/Command (5.78s)
=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 stop -p json-output-434997 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-linux-arm64 stop -p json-output-434997 --output=json --user=testUser: (5.784791369s)
--- PASS: TestJSONOutput/stop/Command (5.78s)
TestJSONOutput/stop/Audit (0s)
=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)
TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)
=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)
TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)
=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)
TestErrorJSONOutput (0.21s)
=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-arm64 start -p json-output-error-466141 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p json-output-error-466141 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (71.362943ms)
-- stdout --
	{"specversion":"1.0","id":"6dd6368e-7346-4410-ab4d-11ec07c9d005","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-466141] minikube v1.34.0 on Ubuntu 20.04 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"337b14ba-c3a7-4b1c-8f00-bda70d136f29","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=19576"}}
	{"specversion":"1.0","id":"3f14a529-2d7d-46f8-bce7-08de51260511","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"10558d16-3ca2-4c3a-9f25-e503e890b2a1","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/19576-2243/kubeconfig"}}
	{"specversion":"1.0","id":"1ce828b6-a148-486a-851d-e4dd65ad660b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/19576-2243/.minikube"}}
	{"specversion":"1.0","id":"2eb54584-0f2b-4758-9dad-fac78b48b51f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-arm64"}}
	{"specversion":"1.0","id":"fd33ba62-5aaf-4ed8-803f-09614141fc10","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"a92415df-511d-4f3d-a324-894d9762b8f2","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/arm64","name":"DRV_UNSUPPORTED_OS","url":""}}
-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-466141" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p json-output-error-466141
--- PASS: TestErrorJSONOutput (0.21s)
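Note: the stdout above shows the failure path of the same JSON stream: the run ends with an io.k8s.sigs.minikube.error event whose data carries exitcode, name, and message. A sketch pulling those fields, again assuming jq:

    $ minikube start -p json-output-error-466141 --memory=2200 --output=json --wait=true --driver=fail \
      | jq -r 'select(.type == "io.k8s.sigs.minikube.error") | .data.name + ": " + .data.message'
    DRV_UNSUPPORTED_OS: The driver 'fail' is not supported on linux/arm64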
TestKicCustomNetwork/create_custom_network (39.5s)
=== RUN   TestKicCustomNetwork/create_custom_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-arm64 start -p docker-network-009644 --network=
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-arm64 start -p docker-network-009644 --network=: (37.350578377s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-009644" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p docker-network-009644
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p docker-network-009644: (2.131965573s)
--- PASS: TestKicCustomNetwork/create_custom_network (39.50s)
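Note: --network= with an empty value lets minikube create a dedicated docker network for the cluster, which is what the docker network ls check above confirms; passing bridge (next test) reuses docker's default bridge instead. A sketch:

    $ minikube start -p docker-network-009644 --network=
    $ docker network ls --format {{.Name}}      # the created network should appear here
    $ minikube delete -p docker-network-009644  # cleanup should remove it again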
TestKicCustomNetwork/use_default_bridge_network (30.24s)
=== RUN   TestKicCustomNetwork/use_default_bridge_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-arm64 start -p docker-network-994670 --network=bridge
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-arm64 start -p docker-network-994670 --network=bridge: (28.299322839s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-994670" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p docker-network-994670
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p docker-network-994670: (1.914920165s)
--- PASS: TestKicCustomNetwork/use_default_bridge_network (30.24s)
TestKicExistingNetwork (34.05s)
=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
kic_custom_network_test.go:93: (dbg) Run:  out/minikube-linux-arm64 start -p existing-network-256418 --network=existing-network
kic_custom_network_test.go:93: (dbg) Done: out/minikube-linux-arm64 start -p existing-network-256418 --network=existing-network: (32.006181847s)
helpers_test.go:175: Cleaning up "existing-network-256418" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p existing-network-256418
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p existing-network-256418: (1.896750775s)
--- PASS: TestKicExistingNetwork (34.05s)
TestKicCustomSubnet (34.4s)
=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p custom-subnet-104045 --subnet=192.168.60.0/24
kic_custom_network_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p custom-subnet-104045 --subnet=192.168.60.0/24: (32.315363712s)
kic_custom_network_test.go:161: (dbg) Run:  docker network inspect custom-subnet-104045 --format "{{(index .IPAM.Config 0).Subnet}}"
helpers_test.go:175: Cleaning up "custom-subnet-104045" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p custom-subnet-104045
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p custom-subnet-104045: (2.061640022s)
--- PASS: TestKicCustomSubnet (34.40s)
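Note: --subnet pins the docker network minikube creates to a given CIDR, and the inspect command above reads it back; a sketch:

    $ minikube start -p custom-subnet-104045 --subnet=192.168.60.0/24
    $ docker network inspect custom-subnet-104045 --format "{{(index .IPAM.Config 0).Subnet}}"
    192.168.60.0/24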
TestKicStaticIP (32.21s)
=== RUN   TestKicStaticIP
kic_custom_network_test.go:132: (dbg) Run:  out/minikube-linux-arm64 start -p static-ip-621562 --static-ip=192.168.200.200
kic_custom_network_test.go:132: (dbg) Done: out/minikube-linux-arm64 start -p static-ip-621562 --static-ip=192.168.200.200: (29.933279457s)
kic_custom_network_test.go:138: (dbg) Run:  out/minikube-linux-arm64 -p static-ip-621562 ip
helpers_test.go:175: Cleaning up "static-ip-621562" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p static-ip-621562
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p static-ip-621562: (2.133293499s)
--- PASS: TestKicStaticIP (32.21s)
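
Note: the static-IP check is simply `minikube ip` compared against the address requested via --static-ip. A rough equivalent of the assertion behind kic_custom_network_test.go:138 (values from this run; a sketch, not the test's code):

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func main() {
		// The node IP reported by minikube should equal the --static-ip argument.
		out, err := exec.Command("out/minikube-linux-arm64", "-p", "static-ip-621562", "ip").Output()
		if err != nil {
			panic(err)
		}
		if ip := strings.TrimSpace(string(out)); ip == "192.168.200.200" {
			fmt.Println("static IP honored:", ip)
		} else {
			fmt.Println("static IP not honored, got:", ip)
		}
	}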

TestMainNoArgs (0.05s)

=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-linux-arm64
--- PASS: TestMainNoArgs (0.05s)

TestMinikubeProfile (71.76s)

=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-arm64 start -p first-380165 --driver=docker  --container-runtime=containerd
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-arm64 start -p first-380165 --driver=docker  --container-runtime=containerd: (33.084495345s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-arm64 start -p second-383013 --driver=docker  --container-runtime=containerd
E0906 18:57:29.638952    7647 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19576-2243/.minikube/profiles/functional-015911/client.crt: no such file or directory" logger="UnhandledError"
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-arm64 start -p second-383013 --driver=docker  --container-runtime=containerd: (33.016476819s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-arm64 profile first-380165
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-arm64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-arm64 profile second-383013
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-arm64 profile list -ojson
helpers_test.go:175: Cleaning up "second-383013" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p second-383013
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p second-383013: (2.039257833s)
helpers_test.go:175: Cleaning up "first-380165" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p first-380165
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p first-380165: (2.239249059s)
--- PASS: TestMinikubeProfile (71.76s)
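
Note: `profile list -ojson` is what the test inspects after each `profile` switch above. The exact schema is not shown in this log, so the sketch below decodes loosely and only assumes a top-level JSON object whose values are arrays of profile objects carrying a Name field; treat that shape as an assumption:

	package main

	import (
		"encoding/json"
		"fmt"
		"os/exec"
	)

	func main() {
		out, err := exec.Command("out/minikube-linux-arm64", "profile", "list", "-ojson").Output()
		if err != nil {
			panic(err)
		}
		// Decode loosely instead of committing to the full schema.
		var payload map[string][]struct {
			Name string
		}
		if err := json.Unmarshal(out, &payload); err != nil {
			panic(err)
		}
		for group, profiles := range payload {
			for _, p := range profiles {
				fmt.Printf("%s: %s\n", group, p.Name)
			}
		}
	}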

TestMountStart/serial/StartWithMountFirst (9.78s)

=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-1-774461 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=containerd
mount_start_test.go:98: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-1-774461 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=containerd: (8.775580419s)
--- PASS: TestMountStart/serial/StartWithMountFirst (9.78s)

TestMountStart/serial/VerifyMountFirst (0.25s)

=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-1-774461 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountFirst (0.25s)

TestMountStart/serial/StartWithMountSecond (6.27s)

=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-2-788385 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=containerd
mount_start_test.go:98: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-2-788385 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=containerd: (5.270719595s)
--- PASS: TestMountStart/serial/StartWithMountSecond (6.27s)

TestMountStart/serial/VerifyMountSecond (0.32s)

=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-788385 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountSecond (0.32s)

TestMountStart/serial/DeleteFirst (1.63s)

=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-arm64 delete -p mount-start-1-774461 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-arm64 delete -p mount-start-1-774461 --alsologtostderr -v=5: (1.628975437s)
--- PASS: TestMountStart/serial/DeleteFirst (1.63s)

TestMountStart/serial/VerifyMountPostDelete (0.26s)

=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-788385 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.26s)

TestMountStart/serial/Stop (1.20s)

=== RUN   TestMountStart/serial/Stop
mount_start_test.go:155: (dbg) Run:  out/minikube-linux-arm64 stop -p mount-start-2-788385
mount_start_test.go:155: (dbg) Done: out/minikube-linux-arm64 stop -p mount-start-2-788385: (1.203630448s)
--- PASS: TestMountStart/serial/Stop (1.20s)

TestMountStart/serial/RestartStopped (8.34s)

=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:166: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-2-788385
mount_start_test.go:166: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-2-788385: (7.335627637s)
--- PASS: TestMountStart/serial/RestartStopped (8.34s)

TestMountStart/serial/VerifyMountPostStop (0.25s)

=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-788385 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.25s)
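
Note: every VerifyMount* step in this series is the same probe: an `ssh -- ls` against the 9p mount point, repeated after deleting the sibling profile, stopping, and restarting. A sketch of that probe (profile name and the /minikube-host mount point from this run):

	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		// List the 9p mount point over ssh; a non-zero exit means the host
		// directory is no longer mounted inside the node.
		out, err := exec.Command("out/minikube-linux-arm64", "-p", "mount-start-2-788385",
			"ssh", "--", "ls", "/minikube-host").CombinedOutput()
		if err != nil {
			fmt.Printf("mount check failed: %v\n%s", err, out)
			return
		}
		fmt.Printf("mount contents:\n%s", out)
	}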

TestMultiNode/serial/FreshStart2Nodes (68.75s)

=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-101803 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker  --container-runtime=containerd
E0906 18:58:22.974238    7647 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19576-2243/.minikube/profiles/addons-663433/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:96: (dbg) Done: out/minikube-linux-arm64 start -p multinode-101803 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker  --container-runtime=containerd: (1m8.227220091s)
multinode_test.go:102: (dbg) Run:  out/minikube-linux-arm64 -p multinode-101803 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (68.75s)

TestMultiNode/serial/DeployApp2Nodes (14.65s)

=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-101803 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:498: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-101803 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Done: out/minikube-linux-arm64 kubectl -p multinode-101803 -- rollout status deployment/busybox: (12.833428789s)
multinode_test.go:505: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-101803 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:528: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-101803 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:536: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-101803 -- exec busybox-7dff88458-dg9ck -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-101803 -- exec busybox-7dff88458-hbxrx -- nslookup kubernetes.io
multinode_test.go:546: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-101803 -- exec busybox-7dff88458-dg9ck -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-101803 -- exec busybox-7dff88458-hbxrx -- nslookup kubernetes.default
multinode_test.go:554: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-101803 -- exec busybox-7dff88458-dg9ck -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-101803 -- exec busybox-7dff88458-hbxrx -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (14.65s)
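
Note: the jsonpath queries above are how the test collects pod IPs and names before the nslookup checks; `{.items[*].status.podIP}` prints all IPs space-separated on one line. A sketch of consuming that output (a standalone rendering, not the test's code):

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func main() {
		out, err := exec.Command("out/minikube-linux-arm64", "kubectl", "-p", "multinode-101803",
			"--", "get", "pods", "-o", "jsonpath={.items[*].status.podIP}").Output()
		if err != nil {
			panic(err)
		}
		// Two busybox replicas spread over two nodes should yield two distinct IPs.
		ips := strings.Fields(string(out))
		fmt.Println("pod IPs:", ips)
	}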

TestMultiNode/serial/PingHostFrom2Pods (1.03s)

=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-101803 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:572: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-101803 -- exec busybox-7dff88458-dg9ck -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-101803 -- exec busybox-7dff88458-dg9ck -- sh -c "ping -c 1 192.168.67.1"
multinode_test.go:572: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-101803 -- exec busybox-7dff88458-hbxrx -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-101803 -- exec busybox-7dff88458-hbxrx -- sh -c "ping -c 1 192.168.67.1"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (1.03s)
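
Note: the `nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3` pipeline above extracts the host IP (the 192.168.67.1 pinged next) as line 5, third space-separated field of busybox's nslookup output. A Go rendering of the same parse; the sample text is illustrative of that output shape, not captured from this run:

	package main

	import (
		"fmt"
		"strings"
	)

	func main() {
		// Illustrative busybox-style nslookup output; the resolved address for
		// host.minikube.internal lands on line 5, field 3.
		sample := "Server:    10.96.0.10\n" +
			"Address 1: 10.96.0.10 kube-dns.kube-system.svc.cluster.local\n" +
			"\n" +
			"Name:      host.minikube.internal\n" +
			"Address 1: 192.168.67.1 host.minikube.internal\n"
		lines := strings.Split(sample, "\n")
		fields := strings.Split(lines[4], " ") // awk 'NR==5' -> index 4
		fmt.Println("host IP:", fields[2])     // cut -d' ' -f3 -> index 2
	}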

TestMultiNode/serial/AddNode (16.50s)

=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-linux-arm64 node add -p multinode-101803 -v 3 --alsologtostderr
E0906 18:59:46.058917    7647 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19576-2243/.minikube/profiles/addons-663433/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:121: (dbg) Done: out/minikube-linux-arm64 node add -p multinode-101803 -v 3 --alsologtostderr: (15.816754247s)
multinode_test.go:127: (dbg) Run:  out/minikube-linux-arm64 -p multinode-101803 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (16.50s)

TestMultiNode/serial/MultiNodeLabels (0.09s)

=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-101803 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.09s)

TestMultiNode/serial/ProfileList (0.36s)

=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.36s)

TestMultiNode/serial/CopyFile (9.75s)

=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-linux-arm64 -p multinode-101803 status --output json --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-101803 cp testdata/cp-test.txt multinode-101803:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-101803 ssh -n multinode-101803 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-101803 cp multinode-101803:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile3372239729/001/cp-test_multinode-101803.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-101803 ssh -n multinode-101803 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-101803 cp multinode-101803:/home/docker/cp-test.txt multinode-101803-m02:/home/docker/cp-test_multinode-101803_multinode-101803-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-101803 ssh -n multinode-101803 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-101803 ssh -n multinode-101803-m02 "sudo cat /home/docker/cp-test_multinode-101803_multinode-101803-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-101803 cp multinode-101803:/home/docker/cp-test.txt multinode-101803-m03:/home/docker/cp-test_multinode-101803_multinode-101803-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-101803 ssh -n multinode-101803 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-101803 ssh -n multinode-101803-m03 "sudo cat /home/docker/cp-test_multinode-101803_multinode-101803-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-101803 cp testdata/cp-test.txt multinode-101803-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-101803 ssh -n multinode-101803-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-101803 cp multinode-101803-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile3372239729/001/cp-test_multinode-101803-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-101803 ssh -n multinode-101803-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-101803 cp multinode-101803-m02:/home/docker/cp-test.txt multinode-101803:/home/docker/cp-test_multinode-101803-m02_multinode-101803.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-101803 ssh -n multinode-101803-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-101803 ssh -n multinode-101803 "sudo cat /home/docker/cp-test_multinode-101803-m02_multinode-101803.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-101803 cp multinode-101803-m02:/home/docker/cp-test.txt multinode-101803-m03:/home/docker/cp-test_multinode-101803-m02_multinode-101803-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-101803 ssh -n multinode-101803-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-101803 ssh -n multinode-101803-m03 "sudo cat /home/docker/cp-test_multinode-101803-m02_multinode-101803-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-101803 cp testdata/cp-test.txt multinode-101803-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-101803 ssh -n multinode-101803-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-101803 cp multinode-101803-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile3372239729/001/cp-test_multinode-101803-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-101803 ssh -n multinode-101803-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-101803 cp multinode-101803-m03:/home/docker/cp-test.txt multinode-101803:/home/docker/cp-test_multinode-101803-m03_multinode-101803.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-101803 ssh -n multinode-101803-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-101803 ssh -n multinode-101803 "sudo cat /home/docker/cp-test_multinode-101803-m03_multinode-101803.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-101803 cp multinode-101803-m03:/home/docker/cp-test.txt multinode-101803-m02:/home/docker/cp-test_multinode-101803-m03_multinode-101803-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-101803 ssh -n multinode-101803-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-101803 ssh -n multinode-101803-m02 "sudo cat /home/docker/cp-test_multinode-101803-m03_multinode-101803-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (9.75s)
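
Note: the CopyFile matrix above is one two-step pattern repeated for every source/destination pair: `minikube cp` to place the file, then `ssh -n <node> "sudo cat ..."` to read it back. One leg, extracted into a standalone sketch with names from this run:

	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		const minikube = "out/minikube-linux-arm64"
		// Copy a local file into the control-plane node...
		if err := exec.Command(minikube, "-p", "multinode-101803", "cp",
			"testdata/cp-test.txt", "multinode-101803:/home/docker/cp-test.txt").Run(); err != nil {
			panic(err)
		}
		// ...then read it back over ssh to confirm the transfer.
		out, err := exec.Command(minikube, "-p", "multinode-101803", "ssh",
			"-n", "multinode-101803", "sudo cat /home/docker/cp-test.txt").Output()
		if err != nil {
			panic(err)
		}
		fmt.Printf("round-tripped contents:\n%s", out)
	}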

TestMultiNode/serial/StopNode (2.84s)

=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-linux-arm64 -p multinode-101803 node stop m03
multinode_test.go:248: (dbg) Done: out/minikube-linux-arm64 -p multinode-101803 node stop m03: (1.581303529s)
multinode_test.go:254: (dbg) Run:  out/minikube-linux-arm64 -p multinode-101803 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-101803 status: exit status 7 (708.399248ms)
-- stdout --
	multinode-101803
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-101803-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-101803-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	
-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-linux-arm64 -p multinode-101803 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-101803 status --alsologtostderr: exit status 7 (549.069153ms)
-- stdout --
	multinode-101803
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-101803-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-101803-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	
-- /stdout --
** stderr ** 
	I0906 19:00:02.214950  128801 out.go:345] Setting OutFile to fd 1 ...
	I0906 19:00:02.215192  128801 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0906 19:00:02.215220  128801 out.go:358] Setting ErrFile to fd 2...
	I0906 19:00:02.215239  128801 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0906 19:00:02.215525  128801 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19576-2243/.minikube/bin
	I0906 19:00:02.215805  128801 out.go:352] Setting JSON to false
	I0906 19:00:02.215874  128801 mustload.go:65] Loading cluster: multinode-101803
	I0906 19:00:02.215977  128801 notify.go:220] Checking for updates...
	I0906 19:00:02.216398  128801 config.go:182] Loaded profile config "multinode-101803": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.0
	I0906 19:00:02.216474  128801 status.go:255] checking status of multinode-101803 ...
	I0906 19:00:02.217324  128801 cli_runner.go:164] Run: docker container inspect multinode-101803 --format={{.State.Status}}
	I0906 19:00:02.239899  128801 status.go:330] multinode-101803 host status = "Running" (err=<nil>)
	I0906 19:00:02.239924  128801 host.go:66] Checking if "multinode-101803" exists ...
	I0906 19:00:02.240247  128801 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-101803
	I0906 19:00:02.272554  128801 host.go:66] Checking if "multinode-101803" exists ...
	I0906 19:00:02.272876  128801 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0906 19:00:02.272933  128801 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-101803
	I0906 19:00:02.292134  128801 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32908 SSHKeyPath:/home/jenkins/minikube-integration/19576-2243/.minikube/machines/multinode-101803/id_rsa Username:docker}
	I0906 19:00:02.385983  128801 ssh_runner.go:195] Run: systemctl --version
	I0906 19:00:02.391060  128801 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0906 19:00:02.404652  128801 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0906 19:00:02.464552  128801 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:40 OomKillDisable:true NGoroutines:61 SystemTime:2024-09-06 19:00:02.452516874 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1068-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:27.2.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:472731909fa34bd7bc9c087e4c27943f9835f111 Expected:472731909fa34bd7bc9c087e4c27943f9835f111} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.2]] Warnings:<nil>}}
	I0906 19:00:02.465591  128801 kubeconfig.go:125] found "multinode-101803" server: "https://192.168.67.2:8443"
	I0906 19:00:02.465626  128801 api_server.go:166] Checking apiserver status ...
	I0906 19:00:02.465688  128801 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0906 19:00:02.480905  128801 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1458/cgroup
	I0906 19:00:02.491868  128801 api_server.go:182] apiserver freezer: "7:freezer:/docker/4d5397a3569008fc0b1ec0de070c799e1982ca843a827b70c5025ed75add327b/kubepods/burstable/pod534b0aaa172aa3f28f550e612ff3a914/41c1ff620f300580a8cf86ff716a8abf430008eb9699c1c67ecdd400808aaab4"
	I0906 19:00:02.491958  128801 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/4d5397a3569008fc0b1ec0de070c799e1982ca843a827b70c5025ed75add327b/kubepods/burstable/pod534b0aaa172aa3f28f550e612ff3a914/41c1ff620f300580a8cf86ff716a8abf430008eb9699c1c67ecdd400808aaab4/freezer.state
	I0906 19:00:02.504353  128801 api_server.go:204] freezer state: "THAWED"
	I0906 19:00:02.504398  128801 api_server.go:253] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
	I0906 19:00:02.512395  128801 api_server.go:279] https://192.168.67.2:8443/healthz returned 200:
	ok
	I0906 19:00:02.512434  128801 status.go:422] multinode-101803 apiserver status = Running (err=<nil>)
	I0906 19:00:02.512447  128801 status.go:257] multinode-101803 status: &{Name:multinode-101803 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0906 19:00:02.512465  128801 status.go:255] checking status of multinode-101803-m02 ...
	I0906 19:00:02.512795  128801 cli_runner.go:164] Run: docker container inspect multinode-101803-m02 --format={{.State.Status}}
	I0906 19:00:02.531311  128801 status.go:330] multinode-101803-m02 host status = "Running" (err=<nil>)
	I0906 19:00:02.531339  128801 host.go:66] Checking if "multinode-101803-m02" exists ...
	I0906 19:00:02.531662  128801 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-101803-m02
	I0906 19:00:02.563944  128801 host.go:66] Checking if "multinode-101803-m02" exists ...
	I0906 19:00:02.564275  128801 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0906 19:00:02.564328  128801 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-101803-m02
	I0906 19:00:02.583312  128801 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32913 SSHKeyPath:/home/jenkins/minikube-integration/19576-2243/.minikube/machines/multinode-101803-m02/id_rsa Username:docker}
	I0906 19:00:02.674085  128801 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0906 19:00:02.688259  128801 status.go:257] multinode-101803-m02 status: &{Name:multinode-101803-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I0906 19:00:02.688292  128801 status.go:255] checking status of multinode-101803-m03 ...
	I0906 19:00:02.688658  128801 cli_runner.go:164] Run: docker container inspect multinode-101803-m03 --format={{.State.Status}}
	I0906 19:00:02.707568  128801 status.go:330] multinode-101803-m03 host status = "Stopped" (err=<nil>)
	I0906 19:00:02.707593  128801 status.go:343] host is not running, skipping remaining checks
	I0906 19:00:02.707602  128801 status.go:257] multinode-101803-m03 status: &{Name:multinode-101803-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}
** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.84s)
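
Note: the stderr trace above shows how the status probe validates a running control plane: it locates the apiserver's freezer cgroup via /proc/<pid>/cgroup, requires freezer.state to read THAWED, and only then polls /healthz. A minimal sketch of the freezer check; the cgroup path is machine- and pod-specific, so it is taken as an argument here:

	package main

	import (
		"fmt"
		"os"
		"strings"
	)

	func main() {
		// Usage: freezercheck /sys/fs/cgroup/freezer/<...>/freezer.state
		if len(os.Args) < 2 {
			fmt.Println("usage: freezercheck <path-to-freezer.state>")
			return
		}
		b, err := os.ReadFile(os.Args[1])
		if err != nil {
			fmt.Println("cannot read freezer state:", err)
			return
		}
		state := strings.TrimSpace(string(b))
		// THAWED means the process group is runnable, not frozen.
		fmt.Println("freezer state:", state, "runnable:", state == "THAWED")
	}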

TestMultiNode/serial/StartAfterStop (9.77s)

=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-linux-arm64 -p multinode-101803 node start m03 -v=7 --alsologtostderr
multinode_test.go:282: (dbg) Done: out/minikube-linux-arm64 -p multinode-101803 node start m03 -v=7 --alsologtostderr: (9.005956302s)
multinode_test.go:290: (dbg) Run:  out/minikube-linux-arm64 -p multinode-101803 status -v=7 --alsologtostderr
multinode_test.go:306: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (9.77s)

TestMultiNode/serial/RestartKeepsNodes (89.03s)

=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-101803
multinode_test.go:321: (dbg) Run:  out/minikube-linux-arm64 stop -p multinode-101803
multinode_test.go:321: (dbg) Done: out/minikube-linux-arm64 stop -p multinode-101803: (25.075461653s)
multinode_test.go:326: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-101803 --wait=true -v=8 --alsologtostderr
multinode_test.go:326: (dbg) Done: out/minikube-linux-arm64 start -p multinode-101803 --wait=true -v=8 --alsologtostderr: (1m3.828928965s)
multinode_test.go:331: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-101803
--- PASS: TestMultiNode/serial/RestartKeepsNodes (89.03s)

TestMultiNode/serial/DeleteNode (5.41s)

=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-linux-arm64 -p multinode-101803 node delete m03
multinode_test.go:416: (dbg) Done: out/minikube-linux-arm64 -p multinode-101803 node delete m03: (4.776292628s)
multinode_test.go:422: (dbg) Run:  out/minikube-linux-arm64 -p multinode-101803 status --alsologtostderr
multinode_test.go:436: (dbg) Run:  kubectl get nodes
multinode_test.go:444: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (5.41s)

TestMultiNode/serial/StopMultiNode (24.39s)

=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-linux-arm64 -p multinode-101803 stop
multinode_test.go:345: (dbg) Done: out/minikube-linux-arm64 -p multinode-101803 stop: (24.212060183s)
multinode_test.go:351: (dbg) Run:  out/minikube-linux-arm64 -p multinode-101803 status
multinode_test.go:351: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-101803 status: exit status 7 (92.928685ms)
-- stdout --
	multinode-101803
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-101803-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	
-- /stdout --
multinode_test.go:358: (dbg) Run:  out/minikube-linux-arm64 -p multinode-101803 status --alsologtostderr
multinode_test.go:358: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-101803 status --alsologtostderr: exit status 7 (86.434717ms)
-- stdout --
	multinode-101803
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-101803-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	
-- /stdout --
** stderr ** 
	I0906 19:02:11.277789  137245 out.go:345] Setting OutFile to fd 1 ...
	I0906 19:02:11.277914  137245 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0906 19:02:11.277926  137245 out.go:358] Setting ErrFile to fd 2...
	I0906 19:02:11.277931  137245 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0906 19:02:11.278163  137245 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19576-2243/.minikube/bin
	I0906 19:02:11.278335  137245 out.go:352] Setting JSON to false
	I0906 19:02:11.278377  137245 mustload.go:65] Loading cluster: multinode-101803
	I0906 19:02:11.278487  137245 notify.go:220] Checking for updates...
	I0906 19:02:11.278780  137245 config.go:182] Loaded profile config "multinode-101803": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.0
	I0906 19:02:11.278799  137245 status.go:255] checking status of multinode-101803 ...
	I0906 19:02:11.280311  137245 cli_runner.go:164] Run: docker container inspect multinode-101803 --format={{.State.Status}}
	I0906 19:02:11.297499  137245 status.go:330] multinode-101803 host status = "Stopped" (err=<nil>)
	I0906 19:02:11.297525  137245 status.go:343] host is not running, skipping remaining checks
	I0906 19:02:11.297532  137245 status.go:257] multinode-101803 status: &{Name:multinode-101803 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0906 19:02:11.297563  137245 status.go:255] checking status of multinode-101803-m02 ...
	I0906 19:02:11.297874  137245 cli_runner.go:164] Run: docker container inspect multinode-101803-m02 --format={{.State.Status}}
	I0906 19:02:11.319248  137245 status.go:330] multinode-101803-m02 host status = "Stopped" (err=<nil>)
	I0906 19:02:11.319281  137245 status.go:343] host is not running, skipping remaining checks
	I0906 19:02:11.319288  137245 status.go:257] multinode-101803-m02 status: &{Name:multinode-101803-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}
** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (24.39s)

TestMultiNode/serial/RestartMultiNode (48.39s)

=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-101803 --wait=true -v=8 --alsologtostderr --driver=docker  --container-runtime=containerd
E0906 19:02:29.638570    7647 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19576-2243/.minikube/profiles/functional-015911/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:376: (dbg) Done: out/minikube-linux-arm64 start -p multinode-101803 --wait=true -v=8 --alsologtostderr --driver=docker  --container-runtime=containerd: (47.74800022s)
multinode_test.go:382: (dbg) Run:  out/minikube-linux-arm64 -p multinode-101803 status --alsologtostderr
multinode_test.go:396: (dbg) Run:  kubectl get nodes
multinode_test.go:404: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (48.39s)

TestMultiNode/serial/ValidateNameConflict (37.52s)

=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-101803
multinode_test.go:464: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-101803-m02 --driver=docker  --container-runtime=containerd
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p multinode-101803-m02 --driver=docker  --container-runtime=containerd: exit status 14 (76.722425ms)
-- stdout --
	* [multinode-101803-m02] minikube v1.34.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=19576
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19576-2243/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19576-2243/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	
-- /stdout --
** stderr ** 
	! Profile name 'multinode-101803-m02' is duplicated with machine name 'multinode-101803-m02' in profile 'multinode-101803'
	X Exiting due to MK_USAGE: Profile name should be unique
** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-101803-m03 --driver=docker  --container-runtime=containerd
E0906 19:03:22.974053    7647 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19576-2243/.minikube/profiles/addons-663433/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:472: (dbg) Done: out/minikube-linux-arm64 start -p multinode-101803-m03 --driver=docker  --container-runtime=containerd: (34.71811875s)
multinode_test.go:479: (dbg) Run:  out/minikube-linux-arm64 node add -p multinode-101803
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-linux-arm64 node add -p multinode-101803: exit status 80 (700.995792ms)
-- stdout --
	* Adding node m03 to cluster multinode-101803 as [worker]
	
	
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-101803-m03 already exists in multinode-101803-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
** /stderr **
multinode_test.go:484: (dbg) Run:  out/minikube-linux-arm64 delete -p multinode-101803-m03
multinode_test.go:484: (dbg) Done: out/minikube-linux-arm64 delete -p multinode-101803-m03: (1.974445835s)
--- PASS: TestMultiNode/serial/ValidateNameConflict (37.52s)
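
Note: both failure paths in this test are asserted through process exit codes (14 for the MK_USAGE name clash, 80 for GUEST_NODE_ADD). In Go the code is available from exec.ExitError; a sketch reproducing the first check with the arguments logged above:

	package main

	import (
		"errors"
		"fmt"
		"os/exec"
	)

	func main() {
		// Starting a profile whose name collides with an existing machine name
		// should fail fast; this run exited 14 (MK_USAGE).
		err := exec.Command("out/minikube-linux-arm64", "start", "-p", "multinode-101803-m02",
			"--driver=docker", "--container-runtime=containerd").Run()
		var exitErr *exec.ExitError
		if errors.As(err, &exitErr) {
			fmt.Println("exit code:", exitErr.ExitCode())
			return
		}
		fmt.Println("command did not fail as expected:", err)
	}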

TestPreload (114.88s)

=== RUN   TestPreload
preload_test.go:44: (dbg) Run:  out/minikube-linux-arm64 start -p test-preload-736149 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.24.4
E0906 19:03:52.702335    7647 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19576-2243/.minikube/profiles/functional-015911/client.crt: no such file or directory" logger="UnhandledError"
preload_test.go:44: (dbg) Done: out/minikube-linux-arm64 start -p test-preload-736149 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.24.4: (1m12.3947861s)
preload_test.go:52: (dbg) Run:  out/minikube-linux-arm64 -p test-preload-736149 image pull gcr.io/k8s-minikube/busybox
preload_test.go:52: (dbg) Done: out/minikube-linux-arm64 -p test-preload-736149 image pull gcr.io/k8s-minikube/busybox: (1.85684697s)
preload_test.go:58: (dbg) Run:  out/minikube-linux-arm64 stop -p test-preload-736149
preload_test.go:58: (dbg) Done: out/minikube-linux-arm64 stop -p test-preload-736149: (12.061737332s)
preload_test.go:66: (dbg) Run:  out/minikube-linux-arm64 start -p test-preload-736149 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=containerd
preload_test.go:66: (dbg) Done: out/minikube-linux-arm64 start -p test-preload-736149 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=containerd: (25.666955419s)
preload_test.go:71: (dbg) Run:  out/minikube-linux-arm64 -p test-preload-736149 image list
helpers_test.go:175: Cleaning up "test-preload-736149" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p test-preload-736149
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p test-preload-736149: (2.545409949s)
--- PASS: TestPreload (114.88s)
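
Note: the shape of this test is start pinned to an older Kubernetes with --preload=false, pull an extra image, stop, restart (now able to use a preload), and confirm the manually pulled image survived. The closing assertion is just an image listing; a sketch of that last check (names from this run):

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func main() {
		// After the restart, the image pulled before the stop should still be
		// in the node's containerd image store.
		out, err := exec.Command("out/minikube-linux-arm64",
			"-p", "test-preload-736149", "image", "list").Output()
		if err != nil {
			panic(err)
		}
		if strings.Contains(string(out), "gcr.io/k8s-minikube/busybox") {
			fmt.Println("pulled image survived the restart")
		} else {
			fmt.Println("pulled image missing after restart")
		}
	}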

TestScheduledStopUnix (108.37s)

=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-arm64 start -p scheduled-stop-783582 --memory=2048 --driver=docker  --container-runtime=containerd
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-arm64 start -p scheduled-stop-783582 --memory=2048 --driver=docker  --container-runtime=containerd: (32.300704993s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-783582 --schedule 5m
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-linux-arm64 status --format={{.TimeToStop}} -p scheduled-stop-783582 -n scheduled-stop-783582
scheduled_stop_test.go:169: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-783582 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-783582 --cancel-scheduled
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-783582 -n scheduled-stop-783582
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-arm64 status -p scheduled-stop-783582
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-783582 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-arm64 status -p scheduled-stop-783582
scheduled_stop_test.go:205: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p scheduled-stop-783582: exit status 7 (63.518762ms)
-- stdout --
	scheduled-stop-783582
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
-- /stdout --
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-783582 -n scheduled-stop-783582
scheduled_stop_test.go:176: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-783582 -n scheduled-stop-783582: exit status 7 (61.121803ms)
-- stdout --
	Stopped
-- /stdout --
scheduled_stop_test.go:176: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-783582" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p scheduled-stop-783582
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p scheduled-stop-783582: (4.639455991s)
--- PASS: TestScheduledStopUnix (108.37s)
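
Note: scheduled stops are driven by `minikube stop --schedule <duration>` and observed through the status fields exercised above ({{.TimeToStop}} while pending, {{.Host}} once fired). A small polling sketch of that flow (profile name from this run; intervals chosen arbitrarily):

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
		"time"
	)

	func main() {
		const (
			minikube = "out/minikube-linux-arm64"
			profile  = "scheduled-stop-783582"
		)
		// Schedule a stop 15 seconds out...
		if err := exec.Command(minikube, "stop", "-p", profile, "--schedule", "15s").Run(); err != nil {
			panic(err)
		}
		// ...then poll until the host reports Stopped. status exits non-zero
		// for a stopped host (exit status 7 above), so the error is ignored.
		for i := 0; i < 12; i++ {
			out, _ := exec.Command(minikube, "status", "--format={{.Host}}", "-p", profile).Output()
			if strings.TrimSpace(string(out)) == "Stopped" {
				fmt.Println("scheduled stop fired")
				return
			}
			time.Sleep(5 * time.Second)
		}
		fmt.Println("timed out waiting for scheduled stop")
	}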

TestInsufficientStorage (10.50s)

=== RUN   TestInsufficientStorage
status_test.go:50: (dbg) Run:  out/minikube-linux-arm64 start -p insufficient-storage-714242 --memory=2048 --output=json --wait=true --driver=docker  --container-runtime=containerd
E0906 19:07:29.639093    7647 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19576-2243/.minikube/profiles/functional-015911/client.crt: no such file or directory" logger="UnhandledError"
status_test.go:50: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p insufficient-storage-714242 --memory=2048 --output=json --wait=true --driver=docker  --container-runtime=containerd: exit status 26 (8.023804607s)
-- stdout --
	{"specversion":"1.0","id":"5ddbf474-2446-4890-bf6e-708e4c290086","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[insufficient-storage-714242] minikube v1.34.0 on Ubuntu 20.04 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"ab0ef328-7eef-43df-b046-d43f076ebc3d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=19576"}}
	{"specversion":"1.0","id":"c3906a91-be32-45c0-827a-b1402a632148","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"ab78b941-c526-423e-9f41-08fbdc5e94ff","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/19576-2243/kubeconfig"}}
	{"specversion":"1.0","id":"9133a922-7128-4d98-959c-421ea5d70898","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/19576-2243/.minikube"}}
	{"specversion":"1.0","id":"d1e25abb-500c-4060-b79b-4c3771b3145d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-arm64"}}
	{"specversion":"1.0","id":"4e933034-3972-487b-9977-3daf9ceab882","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"4e151703-672f-4a4a-8cfe-f800bdf4447d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_STORAGE_CAPACITY=100"}}
	{"specversion":"1.0","id":"64a4a776-3299-41a0-b3f4-7179f6e0aa52","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_AVAILABLE_STORAGE=19"}}
	{"specversion":"1.0","id":"0fa00acc-c45b-492e-a2eb-af7223d2e466","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the docker driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"ee47f2d9-7a85-4cb9-8522-72cf1650dd52","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Using Docker driver with root privileges"}}
	{"specversion":"1.0","id":"91f44c3c-3df3-434d-a14d-6fe4b33b3277","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting \"insufficient-storage-714242\" primary control-plane node in \"insufficient-storage-714242\" cluster","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"ea3b19cc-2dd1-4438-938c-7ca97444c161","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"5","message":"Pulling base image v0.0.45 ...","name":"Pulling Base Image","totalsteps":"19"}}
	{"specversion":"1.0","id":"320b17c5-64b6-48a1-9aa4-08d2277f06d7","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"8","message":"Creating docker container (CPUs=2, Memory=2048MB) ...","name":"Creating Container","totalsteps":"19"}}
	{"specversion":"1.0","id":"7094f575-bbc4-4b80-b37c-50f7d5444c5f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"Try one or more of the following to free up space on the device:\n\t\n\t\t\t1. Run \"docker system prune\" to remove unused Docker data (optionally with \"-a\")\n\t\t\t2. Increase the storage allocated to Docker for Desktop by clicking on:\n\t\t\t\tDocker icon \u003e Preferences \u003e Resources \u003e Disk Image Size\n\t\t\t3. Run \"minikube ssh -- docker system prune\" if using the Docker container runtime","exitcode":"26","issues":"https://github.com/kubernetes/minikube/issues/9024","message":"Docker is out of disk space! (/var is at 100% of capacity). You can pass '--force' to skip this check.","name":"RSRC_DOCKER_STORAGE","url":""}}
-- /stdout --
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p insufficient-storage-714242 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p insufficient-storage-714242 --output=json --layout=cluster: exit status 7 (283.881925ms)
-- stdout --
	{"Name":"insufficient-storage-714242","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","Step":"Creating Container","StepDetail":"Creating docker container (CPUs=2, Memory=2048MB) ...","BinaryVersion":"v1.34.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-714242","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}
-- /stdout --
** stderr ** 
	E0906 19:07:32.720358  155845 status.go:417] kubeconfig endpoint: get endpoint: "insufficient-storage-714242" does not appear in /home/jenkins/minikube-integration/19576-2243/kubeconfig
** /stderr **
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p insufficient-storage-714242 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p insufficient-storage-714242 --output=json --layout=cluster: exit status 7 (274.726983ms)
-- stdout --
	{"Name":"insufficient-storage-714242","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","BinaryVersion":"v1.34.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-714242","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}
-- /stdout --
** stderr ** 
	E0906 19:07:32.996737  155907 status.go:417] kubeconfig endpoint: get endpoint: "insufficient-storage-714242" does not appear in /home/jenkins/minikube-integration/19576-2243/kubeconfig
	E0906 19:07:33.007194  155907 status.go:560] unable to read event log: stat: stat /home/jenkins/minikube-integration/19576-2243/.minikube/profiles/insufficient-storage-714242/events.json: no such file or directory
** /stderr **
helpers_test.go:175: Cleaning up "insufficient-storage-714242" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p insufficient-storage-714242
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p insufficient-storage-714242: (1.915199323s)
--- PASS: TestInsufficientStorage (10.50s)
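
Note: with --output=json, minikube emits one CloudEvents-style JSON object per line, and the test keys off the io.k8s.sigs.minikube.error event (exitcode 26, RSRC_DOCKER_STORAGE). A decoding sketch over two abbreviated event lines copied from the output above (fields trimmed for brevity):

	package main

	import (
		"bufio"
		"encoding/json"
		"fmt"
		"strings"
	)

	func main() {
		// Abbreviated versions of two event lines from the run above.
		events := `{"specversion":"1.0","type":"io.k8s.sigs.minikube.step","data":{"currentstep":"0","name":"Initial Minikube Setup"}}
	{"specversion":"1.0","type":"io.k8s.sigs.minikube.error","data":{"exitcode":"26","name":"RSRC_DOCKER_STORAGE"}}`
		sc := bufio.NewScanner(strings.NewReader(events))
		for sc.Scan() {
			var ev struct {
				Type string            `json:"type"`
				Data map[string]string `json:"data"`
			}
			if err := json.Unmarshal([]byte(strings.TrimSpace(sc.Text())), &ev); err != nil {
				continue // skip non-JSON lines
			}
			if ev.Type == "io.k8s.sigs.minikube.error" {
				fmt.Println("error event:", ev.Data["name"], "exit code:", ev.Data["exitcode"])
			}
		}
	}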

TestRunningBinaryUpgrade (77.63s)

=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade
=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /tmp/minikube-v1.26.0.3285099498 start -p running-upgrade-102048 --memory=2200 --vm-driver=docker  --container-runtime=containerd
version_upgrade_test.go:120: (dbg) Done: /tmp/minikube-v1.26.0.3285099498 start -p running-upgrade-102048 --memory=2200 --vm-driver=docker  --container-runtime=containerd: (40.754413577s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-linux-arm64 start -p running-upgrade-102048 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:130: (dbg) Done: out/minikube-linux-arm64 start -p running-upgrade-102048 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (33.312705917s)
helpers_test.go:175: Cleaning up "running-upgrade-102048" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p running-upgrade-102048
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p running-upgrade-102048: (2.56270255s)
--- PASS: TestRunningBinaryUpgrade (77.63s)

TestKubernetesUpgrade (105.04s)

=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-773887 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:222: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-773887 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (58.756273373s)
version_upgrade_test.go:227: (dbg) Run:  out/minikube-linux-arm64 stop -p kubernetes-upgrade-773887
version_upgrade_test.go:227: (dbg) Done: out/minikube-linux-arm64 stop -p kubernetes-upgrade-773887: (1.590731642s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-linux-arm64 -p kubernetes-upgrade-773887 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-linux-arm64 -p kubernetes-upgrade-773887 status --format={{.Host}}: exit status 7 (75.969467ms)

-- stdout --
	Stopped

-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
version_upgrade_test.go:243: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-773887 --memory=2200 --kubernetes-version=v1.31.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:243: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-773887 --memory=2200 --kubernetes-version=v1.31.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (31.677689687s)
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-773887 version --output=json
version_upgrade_test.go:267: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:269: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-773887 --memory=2200 --kubernetes-version=v1.20.0 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:269: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p kubernetes-upgrade-773887 --memory=2200 --kubernetes-version=v1.20.0 --driver=docker  --container-runtime=containerd: exit status 106 (101.700358ms)

-- stdout --
	* [kubernetes-upgrade-773887] minikube v1.34.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=19576
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19576-2243/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19576-2243/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.31.0 cluster to v1.20.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.20.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-773887
	    minikube start -p kubernetes-upgrade-773887 --kubernetes-version=v1.20.0
	    
	    2) Create a second cluster with Kubernetes 1.20.0, by running:
	    
	    minikube start -p kubernetes-upgrade-7738872 --kubernetes-version=v1.20.0
	    
	    3) Use the existing cluster at version Kubernetes 1.31.0, by running:
	    
	    minikube start -p kubernetes-upgrade-773887 --kubernetes-version=v1.31.0
	    

** /stderr **
version_upgrade_test.go:273: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:275: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-773887 --memory=2200 --kubernetes-version=v1.31.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:275: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-773887 --memory=2200 --kubernetes-version=v1.31.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (9.922569536s)
helpers_test.go:175: Cleaning up "kubernetes-upgrade-773887" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p kubernetes-upgrade-773887
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p kubernetes-upgrade-773887: (2.700623027s)
--- PASS: TestKubernetesUpgrade (105.04s)
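
The downgrade step above is expected to fail: minikube refuses to move an existing v1.31.0 cluster back to v1.20.0 and exits with 106 (K8S_DOWNGRADE_UNSUPPORTED). Illustrative only — not minikube's actual code — a version-comparison guard of that shape can be sketched with golang.org/x/mod/semver:

package main

import (
	"fmt"
	"os"

	"golang.org/x/mod/semver"
)

func main() {
	existing, requested := "v1.31.0", "v1.20.0"
	// semver.Compare returns -1 when requested sorts before existing.
	if semver.Compare(requested, existing) < 0 {
		fmt.Fprintf(os.Stderr,
			"X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: cannot downgrade %s to %s\n",
			existing, requested)
		os.Exit(106)
	}
}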

TestMissingContainerUpgrade (189.05s)

=== RUN   TestMissingContainerUpgrade
=== PAUSE TestMissingContainerUpgrade

=== CONT  TestMissingContainerUpgrade
version_upgrade_test.go:309: (dbg) Run:  /tmp/minikube-v1.26.0.1299442406 start -p missing-upgrade-772889 --memory=2200 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:309: (dbg) Done: /tmp/minikube-v1.26.0.1299442406 start -p missing-upgrade-772889 --memory=2200 --driver=docker  --container-runtime=containerd: (1m40.731309734s)
version_upgrade_test.go:318: (dbg) Run:  docker stop missing-upgrade-772889
version_upgrade_test.go:318: (dbg) Done: docker stop missing-upgrade-772889: (10.290495429s)
version_upgrade_test.go:323: (dbg) Run:  docker rm missing-upgrade-772889
version_upgrade_test.go:329: (dbg) Run:  out/minikube-linux-arm64 start -p missing-upgrade-772889 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:329: (dbg) Done: out/minikube-linux-arm64 start -p missing-upgrade-772889 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (1m10.071369532s)
helpers_test.go:175: Cleaning up "missing-upgrade-772889" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p missing-upgrade-772889
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p missing-upgrade-772889: (3.131547665s)
--- PASS: TestMissingContainerUpgrade (189.05s)

TestNoKubernetes/serial/StartNoK8sWithVersion (0.08s)

=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-824696 --no-kubernetes --kubernetes-version=1.20 --driver=docker  --container-runtime=containerd
no_kubernetes_test.go:83: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p NoKubernetes-824696 --no-kubernetes --kubernetes-version=1.20 --driver=docker  --container-runtime=containerd: exit status 14 (76.763049ms)

-- stdout --
	* [NoKubernetes-824696] minikube v1.34.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=19576
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19576-2243/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19576-2243/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.08s)
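
This subtest only checks argument validation: passing both --no-kubernetes and --kubernetes-version is rejected up front with exit code 14 (MK_USAGE) before any cluster work starts. A sketch of that mutual-exclusion check with the stdlib flag package (flag names copied from the log, the rest illustrative):

package main

import (
	"flag"
	"fmt"
	"os"
)

func main() {
	noK8s := flag.Bool("no-kubernetes", false, "start without Kubernetes")
	k8sVersion := flag.String("kubernetes-version", "", "Kubernetes version to run")
	flag.Parse()

	if *noK8s && *k8sVersion != "" {
		fmt.Fprintln(os.Stderr, "cannot specify --kubernetes-version with --no-kubernetes")
		os.Exit(14) // MK_USAGE, matching the exit status asserted above
	}
}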

TestNoKubernetes/serial/StartWithK8s (38.9s)

=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-824696 --driver=docker  --container-runtime=containerd
no_kubernetes_test.go:95: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-824696 --driver=docker  --container-runtime=containerd: (38.309339648s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-arm64 -p NoKubernetes-824696 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (38.90s)

TestNoKubernetes/serial/StartWithStopK8s (18.35s)

=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-824696 --no-kubernetes --driver=docker  --container-runtime=containerd
E0906 19:08:22.974494    7647 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19576-2243/.minikube/profiles/addons-663433/client.crt: no such file or directory" logger="UnhandledError"
no_kubernetes_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-824696 --no-kubernetes --driver=docker  --container-runtime=containerd: (16.055771714s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-arm64 -p NoKubernetes-824696 status -o json
no_kubernetes_test.go:200: (dbg) Non-zero exit: out/minikube-linux-arm64 -p NoKubernetes-824696 status -o json: exit status 2 (398.40395ms)

-- stdout --
	{"Name":"NoKubernetes-824696","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

-- /stdout --
no_kubernetes_test.go:124: (dbg) Run:  out/minikube-linux-arm64 delete -p NoKubernetes-824696
no_kubernetes_test.go:124: (dbg) Done: out/minikube-linux-arm64 delete -p NoKubernetes-824696: (1.891932818s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (18.35s)

TestNoKubernetes/serial/Start (9.38s)

=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:136: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-824696 --no-kubernetes --driver=docker  --container-runtime=containerd
no_kubernetes_test.go:136: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-824696 --no-kubernetes --driver=docker  --container-runtime=containerd: (9.375032506s)
--- PASS: TestNoKubernetes/serial/Start (9.38s)

TestNoKubernetes/serial/VerifyK8sNotRunning (0.34s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-arm64 ssh -p NoKubernetes-824696 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-arm64 ssh -p NoKubernetes-824696 "sudo systemctl is-active --quiet service kubelet": exit status 1 (341.916026ms)

** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.34s)
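
The probe here relies on systemd's exit-code contract: `systemctl is-active` exits 0 only when the unit is active, and 3 when it is inactive — exactly what a no-Kubernetes profile should report for kubelet. A local sketch of the same probe with os/exec (unit name taken from the command above):

package main

import (
	"errors"
	"fmt"
	"os/exec"
)

func main() {
	err := exec.Command("systemctl", "is-active", "--quiet", "kubelet").Run()
	var exitErr *exec.ExitError
	switch {
	case err == nil:
		fmt.Println("kubelet is active")
	case errors.As(err, &exitErr):
		fmt.Printf("kubelet not active (exit %d)\n", exitErr.ExitCode()) // 3 = inactive
	default:
		fmt.Println("could not run systemctl:", err)
	}
}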

TestNoKubernetes/serial/ProfileList (0.96s)

=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:169: (dbg) Run:  out/minikube-linux-arm64 profile list
no_kubernetes_test.go:179: (dbg) Run:  out/minikube-linux-arm64 profile list --output=json
--- PASS: TestNoKubernetes/serial/ProfileList (0.96s)

TestNoKubernetes/serial/Stop (1.21s)

=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:158: (dbg) Run:  out/minikube-linux-arm64 stop -p NoKubernetes-824696
no_kubernetes_test.go:158: (dbg) Done: out/minikube-linux-arm64 stop -p NoKubernetes-824696: (1.214143627s)
--- PASS: TestNoKubernetes/serial/Stop (1.21s)

TestNoKubernetes/serial/StartNoArgs (6.82s)

=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:191: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-824696 --driver=docker  --container-runtime=containerd
no_kubernetes_test.go:191: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-824696 --driver=docker  --container-runtime=containerd: (6.822317175s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (6.82s)

TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.28s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-arm64 ssh -p NoKubernetes-824696 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-arm64 ssh -p NoKubernetes-824696 "sudo systemctl is-active --quiet service kubelet": exit status 1 (279.171964ms)

** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.28s)

TestStoppedBinaryUpgrade/Setup (1.29s)

=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (1.29s)

TestStoppedBinaryUpgrade/Upgrade (121.41s)

=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /tmp/minikube-v1.26.0.436912397 start -p stopped-upgrade-109813 --memory=2200 --vm-driver=docker  --container-runtime=containerd
version_upgrade_test.go:183: (dbg) Done: /tmp/minikube-v1.26.0.436912397 start -p stopped-upgrade-109813 --memory=2200 --vm-driver=docker  --container-runtime=containerd: (43.917077554s)
version_upgrade_test.go:192: (dbg) Run:  /tmp/minikube-v1.26.0.436912397 -p stopped-upgrade-109813 stop
version_upgrade_test.go:192: (dbg) Done: /tmp/minikube-v1.26.0.436912397 -p stopped-upgrade-109813 stop: (22.01198056s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-linux-arm64 start -p stopped-upgrade-109813 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:198: (dbg) Done: out/minikube-linux-arm64 start -p stopped-upgrade-109813 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (55.479899728s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (121.41s)
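
The upgrade path exercised here is: provision with a released v1.26.0 binary, stop the cluster, then restart it with the binary built from this commit, which must adopt the existing profile without recreating it. Condensed into a Go sketch with os/exec (binary paths taken from the log lines above):

package main

import (
	"log"
	"os/exec"
)

// run executes a command and aborts with its combined output on failure.
func run(bin string, args ...string) {
	if out, err := exec.Command(bin, args...).CombinedOutput(); err != nil {
		log.Fatalf("%s %v: %v\n%s", bin, args, err, out)
	}
}

func main() {
	oldBin := "/tmp/minikube-v1.26.0.436912397" // released binary
	newBin := "out/minikube-linux-arm64"        // binary under test
	run(oldBin, "start", "-p", "stopped-upgrade-109813", "--memory=2200")
	run(oldBin, "-p", "stopped-upgrade-109813", "stop")
	run(newBin, "start", "-p", "stopped-upgrade-109813") // adopts the stopped cluster
}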

TestPause/serial/Start (75.53s)

=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-arm64 start -p pause-253053 --memory=2048 --install-addons=false --wait=all --driver=docker  --container-runtime=containerd
E0906 19:12:29.638445    7647 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19576-2243/.minikube/profiles/functional-015911/client.crt: no such file or directory" logger="UnhandledError"
pause_test.go:80: (dbg) Done: out/minikube-linux-arm64 start -p pause-253053 --memory=2048 --install-addons=false --wait=all --driver=docker  --container-runtime=containerd: (1m15.530566187s)
--- PASS: TestPause/serial/Start (75.53s)

TestStoppedBinaryUpgrade/MinikubeLogs (1.64s)

=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-linux-arm64 logs -p stopped-upgrade-109813
version_upgrade_test.go:206: (dbg) Done: out/minikube-linux-arm64 logs -p stopped-upgrade-109813: (1.642431105s)
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (1.64s)

TestPause/serial/SecondStartNoReconfiguration (7.17s)

=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-arm64 start -p pause-253053 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
pause_test.go:92: (dbg) Done: out/minikube-linux-arm64 start -p pause-253053 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (7.148827145s)
--- PASS: TestPause/serial/SecondStartNoReconfiguration (7.17s)

TestPause/serial/Pause (0.93s)

=== RUN   TestPause/serial/Pause
pause_test.go:110: (dbg) Run:  out/minikube-linux-arm64 pause -p pause-253053 --alsologtostderr -v=5
--- PASS: TestPause/serial/Pause (0.93s)

TestPause/serial/VerifyStatus (0.46s)

=== RUN   TestPause/serial/VerifyStatus
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p pause-253053 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p pause-253053 --output=json --layout=cluster: exit status 2 (457.245473ms)

-- stdout --
	{"Name":"pause-253053","StatusCode":418,"StatusName":"Paused","Step":"Done","StepDetail":"* Paused 7 containers in: kube-system, kubernetes-dashboard, storage-gluster, istio-operator","BinaryVersion":"v1.34.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":200,"StatusName":"OK"}},"Nodes":[{"Name":"pause-253053","StatusCode":200,"StatusName":"OK","Components":{"apiserver":{"Name":"apiserver","StatusCode":418,"StatusName":"Paused"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

-- /stdout --
--- PASS: TestPause/serial/VerifyStatus (0.46s)
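
Note the StatusCode switch from 200 to 418 once the cluster is paused; the layout reuses HTTP status semantics throughout this report. The observed mapping, as a small Go table (only codes that actually appear in this log):

package main

import "fmt"

func main() {
	statusNames := map[int]string{
		200: "OK",
		405: "Stopped",
		418: "Paused", // HTTP 418, repurposed for paused components
		500: "Error",
		507: "InsufficientStorage", // /var almost out of disk space
	}
	fmt.Println(statusNames[418])
}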

TestNetworkPlugins/group/false (5.09s)

=== RUN   TestNetworkPlugins/group/false
net_test.go:246: (dbg) Run:  out/minikube-linux-arm64 start -p false-631107 --memory=2048 --alsologtostderr --cni=false --driver=docker  --container-runtime=containerd
net_test.go:246: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p false-631107 --memory=2048 --alsologtostderr --cni=false --driver=docker  --container-runtime=containerd: exit status 14 (215.390795ms)

-- stdout --
	* [false-631107] minikube v1.34.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=19576
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19576-2243/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19576-2243/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on user configuration
	
	

-- /stdout --
** stderr ** 
	I0906 19:13:25.415228  191880 out.go:345] Setting OutFile to fd 1 ...
	I0906 19:13:25.415456  191880 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0906 19:13:25.415488  191880 out.go:358] Setting ErrFile to fd 2...
	I0906 19:13:25.415507  191880 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0906 19:13:25.415776  191880 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19576-2243/.minikube/bin
	I0906 19:13:25.416227  191880 out.go:352] Setting JSON to false
	I0906 19:13:25.417214  191880 start.go:129] hostinfo: {"hostname":"ip-172-31-31-251","uptime":3354,"bootTime":1725646652,"procs":197,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1068-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I0906 19:13:25.417308  191880 start.go:139] virtualization:  
	I0906 19:13:25.420170  191880 out.go:177] * [false-631107] minikube v1.34.0 on Ubuntu 20.04 (arm64)
	I0906 19:13:25.422799  191880 out.go:177]   - MINIKUBE_LOCATION=19576
	I0906 19:13:25.422861  191880 notify.go:220] Checking for updates...
	I0906 19:13:25.427947  191880 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0906 19:13:25.430009  191880 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19576-2243/kubeconfig
	I0906 19:13:25.432009  191880 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19576-2243/.minikube
	I0906 19:13:25.435116  191880 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0906 19:13:25.437361  191880 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0906 19:13:25.440807  191880 config.go:182] Loaded profile config "pause-253053": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.0
	I0906 19:13:25.440902  191880 driver.go:394] Setting default libvirt URI to qemu:///system
	I0906 19:13:25.477665  191880 docker.go:123] docker version: linux-27.2.0:Docker Engine - Community
	I0906 19:13:25.477777  191880 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0906 19:13:25.548829  191880 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:33 OomKillDisable:true NGoroutines:51 SystemTime:2024-09-06 19:13:25.533308157 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1068-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:27.2.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:472731909fa34bd7bc9c087e4c27943f9835f111 Expected:472731909fa34bd7bc9c087e4c27943f9835f111} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.2]] Warnings:<nil>}}
	I0906 19:13:25.548951  191880 docker.go:318] overlay module found
	I0906 19:13:25.550972  191880 out.go:177] * Using the docker driver based on user configuration
	I0906 19:13:25.552560  191880 start.go:297] selected driver: docker
	I0906 19:13:25.552575  191880 start.go:901] validating driver "docker" against <nil>
	I0906 19:13:25.552588  191880 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0906 19:13:25.554878  191880 out.go:201] 
	W0906 19:13:25.556868  191880 out.go:270] X Exiting due to MK_USAGE: The "containerd" container runtime requires CNI
	X Exiting due to MK_USAGE: The "containerd" container runtime requires CNI
	I0906 19:13:25.558715  191880 out.go:201] 

** /stderr **
net_test.go:88: 
----------------------- debugLogs start: false-631107 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: false-631107

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: false-631107

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: false-631107

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: false-631107

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: false-631107

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: false-631107

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: false-631107

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: false-631107

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: false-631107

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: false-631107

>>> host: /etc/nsswitch.conf:
* Profile "false-631107" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-631107"

>>> host: /etc/hosts:
* Profile "false-631107" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-631107"

>>> host: /etc/resolv.conf:
* Profile "false-631107" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-631107"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: false-631107

>>> host: crictl pods:
* Profile "false-631107" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-631107"

>>> host: crictl containers:
* Profile "false-631107" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-631107"

>>> k8s: describe netcat deployment:
error: context "false-631107" does not exist

>>> k8s: describe netcat pod(s):
error: context "false-631107" does not exist

>>> k8s: netcat logs:
error: context "false-631107" does not exist

>>> k8s: describe coredns deployment:
error: context "false-631107" does not exist

>>> k8s: describe coredns pods:
error: context "false-631107" does not exist

>>> k8s: coredns logs:
error: context "false-631107" does not exist

>>> k8s: describe api server pod(s):
error: context "false-631107" does not exist

>>> k8s: api server logs:
error: context "false-631107" does not exist

>>> host: /etc/cni:
* Profile "false-631107" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-631107"

>>> host: ip a s:
* Profile "false-631107" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-631107"

>>> host: ip r s:
* Profile "false-631107" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-631107"

>>> host: iptables-save:
* Profile "false-631107" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-631107"

>>> host: iptables table nat:
* Profile "false-631107" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-631107"

>>> k8s: describe kube-proxy daemon set:
error: context "false-631107" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "false-631107" does not exist

>>> k8s: kube-proxy logs:
error: context "false-631107" does not exist

>>> host: kubelet daemon status:
* Profile "false-631107" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-631107"

>>> host: kubelet daemon config:
* Profile "false-631107" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-631107"

>>> k8s: kubelet logs:
* Profile "false-631107" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-631107"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "false-631107" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-631107"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "false-631107" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-631107"

>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/19576-2243/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Fri, 06 Sep 2024 19:13:22 UTC
        provider: minikube.sigs.k8s.io
        version: v1.34.0
      name: cluster_info
    server: https://192.168.76.2:8443
  name: pause-253053
contexts:
- context:
    cluster: pause-253053
    extensions:
    - extension:
        last-update: Fri, 06 Sep 2024 19:13:22 UTC
        provider: minikube.sigs.k8s.io
        version: v1.34.0
      name: context_info
    namespace: default
    user: pause-253053
  name: pause-253053
current-context: pause-253053
kind: Config
preferences: {}
users:
- name: pause-253053
  user:
    client-certificate: /home/jenkins/minikube-integration/19576-2243/.minikube/profiles/pause-253053/client.crt
    client-key: /home/jenkins/minikube-integration/19576-2243/.minikube/profiles/pause-253053/client.key

>>> k8s: cms:
Error in configuration: context was not found for specified context: false-631107

>>> host: docker daemon status:
* Profile "false-631107" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-631107"

>>> host: docker daemon config:
* Profile "false-631107" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-631107"

>>> host: /etc/docker/daemon.json:
* Profile "false-631107" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-631107"

>>> host: docker system info:
* Profile "false-631107" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-631107"

>>> host: cri-docker daemon status:
* Profile "false-631107" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-631107"

>>> host: cri-docker daemon config:
* Profile "false-631107" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-631107"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "false-631107" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-631107"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "false-631107" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-631107"

>>> host: cri-dockerd version:
* Profile "false-631107" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-631107"

>>> host: containerd daemon status:
* Profile "false-631107" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-631107"

>>> host: containerd daemon config:
* Profile "false-631107" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-631107"

>>> host: /lib/systemd/system/containerd.service:
* Profile "false-631107" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-631107"

>>> host: /etc/containerd/config.toml:
* Profile "false-631107" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-631107"

>>> host: containerd config dump:
* Profile "false-631107" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-631107"

>>> host: crio daemon status:
* Profile "false-631107" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-631107"

>>> host: crio daemon config:
* Profile "false-631107" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-631107"

>>> host: /etc/crio:
* Profile "false-631107" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-631107"

>>> host: crio config:
* Profile "false-631107" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-631107"

----------------------- debugLogs end: false-631107 [took: 4.315490515s] --------------------------------
helpers_test.go:175: Cleaning up "false-631107" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p false-631107
--- PASS: TestNetworkPlugins/group/false (5.09s)
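
The point of this group is the early rejection: with --container-runtime=containerd, --cni=false fails validation (MK_USAGE, exit 14) because only the Docker runtime can run without a CNI plugin. A plain-Go sketch of a rule of that shape — names illustrative, not minikube's implementation:

package main

import (
	"fmt"
	"os"
)

// validateCNI mimics the constraint reported above: non-docker runtimes need CNI.
func validateCNI(runtime, cni string) error {
	if cni == "false" && runtime != "docker" {
		return fmt.Errorf("the %q container runtime requires CNI", runtime)
	}
	return nil
}

func main() {
	if err := validateCNI("containerd", "false"); err != nil {
		fmt.Fprintln(os.Stderr, "X Exiting due to MK_USAGE:", err)
		os.Exit(14)
	}
}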

TestPause/serial/Unpause (0.79s)

=== RUN   TestPause/serial/Unpause
pause_test.go:121: (dbg) Run:  out/minikube-linux-arm64 unpause -p pause-253053 --alsologtostderr -v=5
--- PASS: TestPause/serial/Unpause (0.79s)

TestPause/serial/PauseAgain (1.13s)

=== RUN   TestPause/serial/PauseAgain
pause_test.go:110: (dbg) Run:  out/minikube-linux-arm64 pause -p pause-253053 --alsologtostderr -v=5
pause_test.go:110: (dbg) Done: out/minikube-linux-arm64 pause -p pause-253053 --alsologtostderr -v=5: (1.128638083s)
--- PASS: TestPause/serial/PauseAgain (1.13s)

TestPause/serial/DeletePaused (2.99s)

=== RUN   TestPause/serial/DeletePaused
pause_test.go:132: (dbg) Run:  out/minikube-linux-arm64 delete -p pause-253053 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-arm64 delete -p pause-253053 --alsologtostderr -v=5: (2.985487471s)
--- PASS: TestPause/serial/DeletePaused (2.99s)

TestPause/serial/VerifyDeletedResources (0.15s)

=== RUN   TestPause/serial/VerifyDeletedResources
pause_test.go:142: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
pause_test.go:168: (dbg) Run:  docker ps -a
pause_test.go:173: (dbg) Run:  docker volume inspect pause-253053
pause_test.go:173: (dbg) Non-zero exit: docker volume inspect pause-253053: exit status 1 (22.00207ms)

-- stdout --
	[]

-- /stdout --
** stderr ** 
	Error response from daemon: get pause-253053: no such volume

** /stderr **
pause_test.go:178: (dbg) Run:  docker network ls
--- PASS: TestPause/serial/VerifyDeletedResources (0.15s)

TestStartStop/group/old-k8s-version/serial/FirstStart (171.34s)

=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p old-k8s-version-057553 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.20.0
E0906 19:16:26.061099    7647 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19576-2243/.minikube/profiles/addons-663433/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p old-k8s-version-057553 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.20.0: (2m51.343337817s)
--- PASS: TestStartStop/group/old-k8s-version/serial/FirstStart (171.34s)

TestStartStop/group/no-preload/serial/FirstStart (70.1s)

=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p no-preload-985607 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.31.0
E0906 19:17:29.638526    7647 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19576-2243/.minikube/profiles/functional-015911/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p no-preload-985607 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.31.0: (1m10.100436796s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (70.10s)

TestStartStop/group/old-k8s-version/serial/DeployApp (8.76s)

=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-057553 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [e6315dbd-801d-4345-9e88-24435ca0b73e] Pending
helpers_test.go:344: "busybox" [e6315dbd-801d-4345-9e88-24435ca0b73e] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [e6315dbd-801d-4345-9e88-24435ca0b73e] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: integration-test=busybox healthy within 8.006051889s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-057553 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/old-k8s-version/serial/DeployApp (8.76s)

TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (1.77s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p old-k8s-version-057553 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p old-k8s-version-057553 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.543438379s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context old-k8s-version-057553 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (1.77s)

TestStartStop/group/old-k8s-version/serial/Stop (12.42s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p old-k8s-version-057553 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p old-k8s-version-057553 --alsologtostderr -v=3: (12.416389153s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (12.42s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.26s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-057553 -n old-k8s-version-057553
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-057553 -n old-k8s-version-057553: exit status 7 (124.99356ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p old-k8s-version-057553 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.26s)
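Two behaviors worth noting here: minikube status exits non-zero while the host is down (exit status 7 above, which the test explicitly tolerates), and addons can be enabled against a stopped cluster, with the change recorded in the profile and applied on the next start. A sketch for scripting the same check without tripping set -e:

# Capture the state and the exit code explicitly; 0 means everything is up.
host_state=$(minikube status --format='{{.Host}}' -p old-k8s-version-057553) || \
  echo "minikube status exited $? (host state: ${host_state:-unknown})"
# Enabling an addon while stopped only records it for the next start.
minikube addons enable dashboard -p old-k8s-version-057553 \
  --images=MetricsScraper=registry.k8s.io/echoserver:1.4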

                                                
                                    
TestStartStop/group/old-k8s-version/serial/SecondStart (377.01s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p old-k8s-version-057553 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.20.0
E0906 19:18:22.974007    7647 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19576-2243/.minikube/profiles/addons-663433/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-arm64 start -p old-k8s-version-057553 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.20.0: (6m16.648450066s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-057553 -n old-k8s-version-057553
--- PASS: TestStartStop/group/old-k8s-version/serial/SecondStart (377.01s)
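A second start is the same start command aimed at an existing profile: minikube reuses the saved profile configuration, and state set earlier (the dashboard addon enabled while stopped, the deployed busybox pod) carries over into the new boot, which is what the AfterStop checks below rely on. The kvm-network/kvm-qemu-uri flags in the logged command presumably have no effect under --driver=docker. Reduced to its essentials:

minikube start -p old-k8s-version-057553 \
  --memory=2200 --wait=true \
  --driver=docker --container-runtime=containerd \
  --kubernetes-version=v1.20.0
minikube status --format='{{.Host}}' -p old-k8s-version-057553   # expect: Running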

                                                
                                    
TestStartStop/group/no-preload/serial/DeployApp (9.39s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-985607 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [50d07f5c-4989-45bd-a5b7-c822391f7b1a] Pending
helpers_test.go:344: "busybox" [50d07f5c-4989-45bd-a5b7-c822391f7b1a] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [50d07f5c-4989-45bd-a5b7-c822391f7b1a] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 9.003680057s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-985607 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (9.39s)

                                                
                                    
TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.12s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p no-preload-985607 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context no-preload-985607 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.12s)

                                                
                                    
TestStartStop/group/no-preload/serial/Stop (12.14s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p no-preload-985607 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p no-preload-985607 --alsologtostderr -v=3: (12.137289181s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (12.14s)

                                                
                                    
TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.21s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-985607 -n no-preload-985607
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-985607 -n no-preload-985607: exit status 7 (90.286839ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p no-preload-985607 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.21s)

                                                
                                    
TestStartStop/group/no-preload/serial/SecondStart (267.18s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p no-preload-985607 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.31.0
E0906 19:20:32.704645    7647 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19576-2243/.minikube/profiles/functional-015911/client.crt: no such file or directory" logger="UnhandledError"
E0906 19:22:29.638404    7647 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19576-2243/.minikube/profiles/functional-015911/client.crt: no such file or directory" logger="UnhandledError"
E0906 19:23:22.974147    7647 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19576-2243/.minikube/profiles/addons-663433/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-arm64 start -p no-preload-985607 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.31.0: (4m26.824029094s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-985607 -n no-preload-985607
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (267.18s)

                                                
                                    
TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6.00s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-s8pm8" [de99ac9a-e21e-4d54-8f6f-95fd627248a4] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003596811s
--- PASS: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6.00s)
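Done by hand, this check is a label-selector wait against the dashboard namespace; the deployment only exists because the addon was enabled while the cluster was stopped:

kubectl --context no-preload-985607 -n kubernetes-dashboard \
  wait pod -l k8s-app=kubernetes-dashboard --for=condition=Ready --timeout=9m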

                                                
                                    
TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.12s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-s8pm8" [de99ac9a-e21e-4d54-8f6f-95fd627248a4] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.004530833s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context no-preload-985607 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.12s)

                                                
                                    
TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.29s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 -p no-preload-985607 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20240813-c6f155d6
--- PASS: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.29s)
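The image audit lists everything loaded in the node and flags images that are not part of a stock minikube cluster. Roughly the same can be done by hand; jq and the JSON shape of image list --format=json are assumptions here and may vary across minikube versions:

minikube -p no-preload-985607 image list --format=json \
  | jq -r '.[].repoTags[]?' \
  | grep -v '^registry\.k8s\.io/'   # crude filter: drop the stock control-plane images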

                                                
                                    
TestStartStop/group/no-preload/serial/Pause (3.12s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p no-preload-985607 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-985607 -n no-preload-985607
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-985607 -n no-preload-985607: exit status 2 (332.927868ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p no-preload-985607 -n no-preload-985607
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p no-preload-985607 -n no-preload-985607: exit status 2 (314.219666ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p no-preload-985607 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-985607 -n no-preload-985607
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p no-preload-985607 -n no-preload-985607
--- PASS: TestStartStop/group/no-preload/serial/Pause (3.12s)
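pause freezes the control plane without tearing the node down. While paused, status reports the API server as Paused and the kubelet as Stopped and exits with status 2, so scripted checks have to tolerate the non-zero exit, exactly as the test does. The round trip by hand:

minikube pause -p no-preload-985607
minikube status --format='{{.APIServer}}' -p no-preload-985607 || true   # Paused (exit 2)
minikube status --format='{{.Kubelet}}' -p no-preload-985607 || true     # Stopped (exit 2)
minikube unpause -p no-preload-985607
minikube status --format='{{.APIServer}}' -p no-preload-985607           # exit 0 again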

                                                
                                    
TestStartStop/group/embed-certs/serial/FirstStart (49.55s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p embed-certs-238999 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.31.0
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p embed-certs-238999 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.31.0: (49.552146797s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (49.55s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6.01s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-cd95d586-l564q" [410b5657-7d1e-44dd-9c50-4f9ac6bce42c] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003945065s
--- PASS: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6.01s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.11s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-cd95d586-l564q" [410b5657-7d1e-44dd-9c50-4f9ac6bce42c] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.003825396s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context old-k8s-version-057553 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.11s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.25s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 -p old-k8s-version-057553 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20240202-8f1494ea
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20240813-c6f155d6
--- PASS: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.25s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/Pause (3.24s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p old-k8s-version-057553 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-057553 -n old-k8s-version-057553
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-057553 -n old-k8s-version-057553: exit status 2 (326.33561ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p old-k8s-version-057553 -n old-k8s-version-057553
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p old-k8s-version-057553 -n old-k8s-version-057553: exit status 2 (378.513325ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p old-k8s-version-057553 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-057553 -n old-k8s-version-057553
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p old-k8s-version-057553 -n old-k8s-version-057553
--- PASS: TestStartStop/group/old-k8s-version/serial/Pause (3.24s)

                                                
                                    
TestStartStop/group/embed-certs/serial/DeployApp (9.37s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-238999 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [a3a38b4d-57da-44a5-8ca4-deffeb7ce53b] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [a3a38b4d-57da-44a5-8ca4-deffeb7ce53b] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 9.003949171s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-238999 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (9.37s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/FirstStart (68.18s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p default-k8s-diff-port-358550 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.31.0
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p default-k8s-diff-port-358550 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.31.0: (1m8.183885283s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (68.18s)
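The one variation in this group is the non-default API server port. With the docker driver the port is bound inside the node container and typically forwarded to an ephemeral localhost port, so the kubeconfig entry will not literally read 8444; one way to see the address kubectl actually uses:

minikube start -p default-k8s-diff-port-358550 \
  --memory=2200 --wait=true --apiserver-port=8444 \
  --driver=docker --container-runtime=containerd \
  --kubernetes-version=v1.31.0
kubectl config view -o jsonpath='{.clusters[?(@.name=="default-k8s-diff-port-358550")].cluster.server}'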

                                                
                                    
TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.54s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p embed-certs-238999 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p embed-certs-238999 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.369784712s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context embed-certs-238999 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.54s)

                                                
                                    
TestStartStop/group/embed-certs/serial/Stop (12.67s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p embed-certs-238999 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p embed-certs-238999 --alsologtostderr -v=3: (12.671065679s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (12.67s)

                                                
                                    
TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.33s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-238999 -n embed-certs-238999
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-238999 -n embed-certs-238999: exit status 7 (91.299303ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p embed-certs-238999 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.33s)

                                                
                                    
TestStartStop/group/embed-certs/serial/SecondStart (281.85s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p embed-certs-238999 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.31.0
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-arm64 start -p embed-certs-238999 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.31.0: (4m41.508938954s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-238999 -n embed-certs-238999
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (281.85s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/DeployApp (10.37s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-358550 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [debd4841-d91b-446a-ba16-40a28767de9b] Pending
helpers_test.go:344: "busybox" [debd4841-d91b-446a-ba16-40a28767de9b] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [debd4841-d91b-446a-ba16-40a28767de9b] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 10.003628502s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-358550 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (10.37s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.13s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p default-k8s-diff-port-358550 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context default-k8s-diff-port-358550 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.13s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/Stop (12.17s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p default-k8s-diff-port-358550 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p default-k8s-diff-port-358550 --alsologtostderr -v=3: (12.173166443s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (12.17s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.18s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-358550 -n default-k8s-diff-port-358550
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-358550 -n default-k8s-diff-port-358550: exit status 7 (76.619609ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p default-k8s-diff-port-358550 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.18s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/SecondStart (270.54s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p default-k8s-diff-port-358550 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.31.0
E0906 19:27:29.639061    7647 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19576-2243/.minikube/profiles/functional-015911/client.crt: no such file or directory" logger="UnhandledError"
E0906 19:27:38.523194    7647 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19576-2243/.minikube/profiles/old-k8s-version-057553/client.crt: no such file or directory" logger="UnhandledError"
E0906 19:27:38.529675    7647 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19576-2243/.minikube/profiles/old-k8s-version-057553/client.crt: no such file or directory" logger="UnhandledError"
E0906 19:27:38.541136    7647 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19576-2243/.minikube/profiles/old-k8s-version-057553/client.crt: no such file or directory" logger="UnhandledError"
E0906 19:27:38.562742    7647 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19576-2243/.minikube/profiles/old-k8s-version-057553/client.crt: no such file or directory" logger="UnhandledError"
E0906 19:27:38.604165    7647 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19576-2243/.minikube/profiles/old-k8s-version-057553/client.crt: no such file or directory" logger="UnhandledError"
E0906 19:27:38.685571    7647 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19576-2243/.minikube/profiles/old-k8s-version-057553/client.crt: no such file or directory" logger="UnhandledError"
E0906 19:27:38.847153    7647 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19576-2243/.minikube/profiles/old-k8s-version-057553/client.crt: no such file or directory" logger="UnhandledError"
E0906 19:27:39.168795    7647 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19576-2243/.minikube/profiles/old-k8s-version-057553/client.crt: no such file or directory" logger="UnhandledError"
E0906 19:27:39.811225    7647 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19576-2243/.minikube/profiles/old-k8s-version-057553/client.crt: no such file or directory" logger="UnhandledError"
E0906 19:27:41.092580    7647 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19576-2243/.minikube/profiles/old-k8s-version-057553/client.crt: no such file or directory" logger="UnhandledError"
E0906 19:27:43.654241    7647 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19576-2243/.minikube/profiles/old-k8s-version-057553/client.crt: no such file or directory" logger="UnhandledError"
E0906 19:27:48.775985    7647 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19576-2243/.minikube/profiles/old-k8s-version-057553/client.crt: no such file or directory" logger="UnhandledError"
E0906 19:27:59.017522    7647 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19576-2243/.minikube/profiles/old-k8s-version-057553/client.crt: no such file or directory" logger="UnhandledError"
E0906 19:28:19.498917    7647 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19576-2243/.minikube/profiles/old-k8s-version-057553/client.crt: no such file or directory" logger="UnhandledError"
E0906 19:28:22.974105    7647 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19576-2243/.minikube/profiles/addons-663433/client.crt: no such file or directory" logger="UnhandledError"
E0906 19:28:33.992281    7647 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19576-2243/.minikube/profiles/no-preload-985607/client.crt: no such file or directory" logger="UnhandledError"
E0906 19:28:33.998780    7647 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19576-2243/.minikube/profiles/no-preload-985607/client.crt: no such file or directory" logger="UnhandledError"
E0906 19:28:34.010223    7647 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19576-2243/.minikube/profiles/no-preload-985607/client.crt: no such file or directory" logger="UnhandledError"
E0906 19:28:34.033088    7647 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19576-2243/.minikube/profiles/no-preload-985607/client.crt: no such file or directory" logger="UnhandledError"
E0906 19:28:34.075973    7647 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19576-2243/.minikube/profiles/no-preload-985607/client.crt: no such file or directory" logger="UnhandledError"
E0906 19:28:34.157373    7647 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19576-2243/.minikube/profiles/no-preload-985607/client.crt: no such file or directory" logger="UnhandledError"
E0906 19:28:34.319201    7647 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19576-2243/.minikube/profiles/no-preload-985607/client.crt: no such file or directory" logger="UnhandledError"
E0906 19:28:34.641596    7647 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19576-2243/.minikube/profiles/no-preload-985607/client.crt: no such file or directory" logger="UnhandledError"
E0906 19:28:35.283077    7647 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19576-2243/.minikube/profiles/no-preload-985607/client.crt: no such file or directory" logger="UnhandledError"
E0906 19:28:36.564610    7647 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19576-2243/.minikube/profiles/no-preload-985607/client.crt: no such file or directory" logger="UnhandledError"
E0906 19:28:39.126803    7647 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19576-2243/.minikube/profiles/no-preload-985607/client.crt: no such file or directory" logger="UnhandledError"
E0906 19:28:44.248957    7647 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19576-2243/.minikube/profiles/no-preload-985607/client.crt: no such file or directory" logger="UnhandledError"
E0906 19:28:54.490700    7647 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19576-2243/.minikube/profiles/no-preload-985607/client.crt: no such file or directory" logger="UnhandledError"
E0906 19:29:00.460382    7647 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19576-2243/.minikube/profiles/old-k8s-version-057553/client.crt: no such file or directory" logger="UnhandledError"
E0906 19:29:14.973876    7647 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19576-2243/.minikube/profiles/no-preload-985607/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-arm64 start -p default-k8s-diff-port-358550 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.31.0: (4m29.959978926s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-358550 -n default-k8s-diff-port-358550
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (270.54s)
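An aside on the E0906 cert_rotation.go lines interleaved through this and the surrounding blocks: they appear to come from client-go's certificate watcher inside the long-running test binary, which still references client.crt files of profiles created by earlier tests and since removed from the .minikube directory. They are logged at error level, but every test they interleave with in this run passes.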

                                                
                                    
TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6.00s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-75pv7" [53c8ba8d-58dc-4606-9357-b3d1ee6885d5] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003631322s
--- PASS: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6.00s)

                                                
                                    
TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.16s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-75pv7" [53c8ba8d-58dc-4606-9357-b3d1ee6885d5] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.003424332s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context embed-certs-238999 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.16s)

                                                
                                    
TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.23s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 -p embed-certs-238999 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20240730-75a5af0c
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20240813-c6f155d6
--- PASS: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.23s)

                                                
                                    
TestStartStop/group/embed-certs/serial/Pause (3.04s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p embed-certs-238999 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-238999 -n embed-certs-238999
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-238999 -n embed-certs-238999: exit status 2 (300.389975ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p embed-certs-238999 -n embed-certs-238999
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p embed-certs-238999 -n embed-certs-238999: exit status 2 (312.484298ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p embed-certs-238999 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-238999 -n embed-certs-238999
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p embed-certs-238999 -n embed-certs-238999
--- PASS: TestStartStop/group/embed-certs/serial/Pause (3.04s)

                                                
                                    
TestStartStop/group/newest-cni/serial/FirstStart (42.65s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p newest-cni-498117 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.31.0
E0906 19:29:55.935818    7647 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19576-2243/.minikube/profiles/no-preload-985607/client.crt: no such file or directory" logger="UnhandledError"
E0906 19:30:22.382186    7647 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19576-2243/.minikube/profiles/old-k8s-version-057553/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p newest-cni-498117 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.31.0: (42.645529236s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (42.65s)
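This start is worth annotating, because its flags explain the warnings in the following newest-cni steps: with --network-plugin=cni the suite treats ordinary pods as unschedulable until a CNI is configured (the "requires additional setup" warnings below), so --wait is narrowed to the apiserver, system pods, and the default service account, and a custom pod CIDR is handed to kubeadm through --extra-config:

minikube start -p newest-cni-498117 \
  --memory=2200 \
  --wait=apiserver,system_pods,default_sa \
  --feature-gates ServerSideApply=true \
  --network-plugin=cni \
  --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 \
  --driver=docker --container-runtime=containerd \
  --kubernetes-version=v1.31.0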

                                                
                                    
TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

                                                
                                    
TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.48s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p newest-cni-498117 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p newest-cni-498117 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.483549108s)
start_stop_delete_test.go:211: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.48s)

                                                
                                    
TestStartStop/group/newest-cni/serial/Stop (1.38s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p newest-cni-498117 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p newest-cni-498117 --alsologtostderr -v=3: (1.379834337s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (1.38s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6.01s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-t9rrz" [8ab782a2-dfe1-48a9-8aa5-10c9f04660af] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.004955508s
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6.01s)

                                                
                                    
TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.26s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-498117 -n newest-cni-498117
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-498117 -n newest-cni-498117: exit status 7 (135.432711ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p newest-cni-498117 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.26s)

                                                
                                    
TestStartStop/group/newest-cni/serial/SecondStart (20.33s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p newest-cni-498117 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.31.0
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-arm64 start -p newest-cni-498117 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.31.0: (19.577774705s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-498117 -n newest-cni-498117
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (20.33s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.15s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-t9rrz" [8ab782a2-dfe1-48a9-8aa5-10c9f04660af] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.005037255s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context default-k8s-diff-port-358550 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.15s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.34s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 -p default-k8s-diff-port-358550 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20240813-c6f155d6
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20240730-75a5af0c
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.34s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/Pause (4.69s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p default-k8s-diff-port-358550 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Done: out/minikube-linux-arm64 pause -p default-k8s-diff-port-358550 --alsologtostderr -v=1: (1.177550092s)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-358550 -n default-k8s-diff-port-358550
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-358550 -n default-k8s-diff-port-358550: exit status 2 (492.607497ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p default-k8s-diff-port-358550 -n default-k8s-diff-port-358550
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p default-k8s-diff-port-358550 -n default-k8s-diff-port-358550: exit status 2 (512.400547ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p default-k8s-diff-port-358550 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Done: out/minikube-linux-arm64 unpause -p default-k8s-diff-port-358550 --alsologtostderr -v=1: (1.185881971s)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-358550 -n default-k8s-diff-port-358550
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p default-k8s-diff-port-358550 -n default-k8s-diff-port-358550
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Pause (4.69s)

                                                
                                    
TestNetworkPlugins/group/auto/Start (58.61s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p auto-631107 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p auto-631107 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=containerd: (58.61330753s)
--- PASS: TestNetworkPlugins/group/auto/Start (58.61s)
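In the network-plugins matrix, auto means no explicit CNI selection: minikube picks one itself, which for the docker driver with containerd is typically kindnet (the kindest/kindnetd images recurring in the image lists above). A sketch of the start plus a quick check of what was actually deployed:

minikube start -p auto-631107 --memory=3072 \
  --wait=true --wait-timeout=15m \
  --driver=docker --container-runtime=containerd
# kindnet runs as a DaemonSet in kube-system when it is the selected CNI.
kubectl --context auto-631107 -n kube-system get daemonsets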

                                                
                                    
TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:273: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

                                                
                                    
TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:284: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

                                                
                                    
TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.31s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 -p newest-cni-498117 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20240730-75a5af0c
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20240813-c6f155d6
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.31s)

                                                
                                    
TestStartStop/group/newest-cni/serial/Pause (3.61s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p newest-cni-498117 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-498117 -n newest-cni-498117
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-498117 -n newest-cni-498117: exit status 2 (417.766925ms)
-- stdout --
	Paused
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p newest-cni-498117 -n newest-cni-498117
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p newest-cni-498117 -n newest-cni-498117: exit status 2 (371.721278ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p newest-cni-498117 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-498117 -n newest-cni-498117
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p newest-cni-498117 -n newest-cni-498117
--- PASS: TestStartStop/group/newest-cni/serial/Pause (3.61s)
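
The Pause subtest above is worth decoding: while the profile is paused, minikube status exits non-zero (status 2 here) and reports the apiserver as Paused and the kubelet as Stopped, which the test tolerates ("may be ok"); after unpause the same checks succeed. A sketch of the sequence, assuming the same profile:

    out/minikube-linux-arm64 pause -p newest-cni-498117 --alsologtostderr -v=1
    # while paused, each status query exits 2 rather than 0
    out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-498117 || echo "exit $?"
    out/minikube-linux-arm64 status --format={{.Kubelet}} -p newest-cni-498117 || echo "exit $?"
    # unpause restores both components, after which the same checks exit 0
    out/minikube-linux-arm64 unpause -p newest-cni-498117 --alsologtostderr -v=1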
E0906 19:36:57.783796    7647 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19576-2243/.minikube/profiles/auto-631107/client.crt: no such file or directory" logger="UnhandledError"
E0906 19:36:57.790129    7647 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19576-2243/.minikube/profiles/auto-631107/client.crt: no such file or directory" logger="UnhandledError"
E0906 19:36:57.801545    7647 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19576-2243/.minikube/profiles/auto-631107/client.crt: no such file or directory" logger="UnhandledError"
E0906 19:36:57.823015    7647 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19576-2243/.minikube/profiles/auto-631107/client.crt: no such file or directory" logger="UnhandledError"
E0906 19:36:57.864420    7647 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19576-2243/.minikube/profiles/auto-631107/client.crt: no such file or directory" logger="UnhandledError"
E0906 19:36:57.945878    7647 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19576-2243/.minikube/profiles/auto-631107/client.crt: no such file or directory" logger="UnhandledError"
E0906 19:36:58.107563    7647 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19576-2243/.minikube/profiles/auto-631107/client.crt: no such file or directory" logger="UnhandledError"
E0906 19:36:58.429442    7647 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19576-2243/.minikube/profiles/auto-631107/client.crt: no such file or directory" logger="UnhandledError"
E0906 19:36:59.071839    7647 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19576-2243/.minikube/profiles/auto-631107/client.crt: no such file or directory" logger="UnhandledError"
E0906 19:37:00.353692    7647 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19576-2243/.minikube/profiles/auto-631107/client.crt: no such file or directory" logger="UnhandledError"
E0906 19:37:02.915246    7647 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19576-2243/.minikube/profiles/auto-631107/client.crt: no such file or directory" logger="UnhandledError"
E0906 19:37:06.168238    7647 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19576-2243/.minikube/profiles/kindnet-631107/client.crt: no such file or directory" logger="UnhandledError"
E0906 19:37:06.174715    7647 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19576-2243/.minikube/profiles/kindnet-631107/client.crt: no such file or directory" logger="UnhandledError"
E0906 19:37:06.186162    7647 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19576-2243/.minikube/profiles/kindnet-631107/client.crt: no such file or directory" logger="UnhandledError"
E0906 19:37:06.207662    7647 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19576-2243/.minikube/profiles/kindnet-631107/client.crt: no such file or directory" logger="UnhandledError"
E0906 19:37:06.249054    7647 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19576-2243/.minikube/profiles/kindnet-631107/client.crt: no such file or directory" logger="UnhandledError"
E0906 19:37:06.331100    7647 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19576-2243/.minikube/profiles/kindnet-631107/client.crt: no such file or directory" logger="UnhandledError"
E0906 19:37:06.492728    7647 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19576-2243/.minikube/profiles/kindnet-631107/client.crt: no such file or directory" logger="UnhandledError"

TestNetworkPlugins/group/kindnet/Start (58.67s)

=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p kindnet-631107 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=containerd
E0906 19:31:17.858447    7647 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19576-2243/.minikube/profiles/no-preload-985607/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p kindnet-631107 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=containerd: (58.670626316s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (58.67s)

TestNetworkPlugins/group/auto/KubeletFlags (0.29s)

=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p auto-631107 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.29s)
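
Every KubeletFlags subtest in this group runs the same probe: print the kubelet process with its full argument list inside the node over SSH, then assert on those flags (the exact expectations live in net_test.go). Reproduced by hand:

    # list the kubelet process and its complete command line inside the node container
    out/minikube-linux-arm64 ssh -p auto-631107 "pgrep -a kubelet"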

TestNetworkPlugins/group/auto/NetCatPod (10.27s)

=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-631107 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-8wgcn" [74759372-6268-4b77-a504-d728a9ea0157] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-8wgcn" [74759372-6268-4b77-a504-d728a9ea0157] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 10.003524235s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (10.27s)
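
The NetCatPod subtests deploy the same netcat workload into each cluster and wait for it to become Ready. The framework polls the pod list itself; a rough manual equivalent (the kubectl wait form is my substitution, not what the test runs) would be:

    # (re)deploy the netcat workload; --force deletes and recreates on conflict
    kubectl --context auto-631107 replace --force -f testdata/netcat-deployment.yaml
    # block until the pod behind the app=netcat label reports Ready
    kubectl --context auto-631107 wait --for=condition=Ready pod -l app=netcat --timeout=15m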

TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:344: "kindnet-2q7cc" [58543fd6-53b2-42bf-ad01-550cae554ccb] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 6.003705618s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)
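
ControllerPod subtests verify that the CNI's own agent pod is healthy before any traffic tests run; for kindnet that is the app=kindnet pod in kube-system (flannel and calico use their own labels and namespaces, as later entries show). A manual spot-check might look like:

    # confirm the kindnet agent pod is Running and Ready on the node
    kubectl --context kindnet-631107 -n kube-system get pods -l app=kindnet
    kubectl --context kindnet-631107 -n kube-system wait --for=condition=Ready pod -l app=kindnet --timeout=10m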

TestNetworkPlugins/group/auto/DNS (0.18s)

=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-631107 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.18s)

TestNetworkPlugins/group/auto/Localhost (0.16s)

=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-631107 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.16s)

TestNetworkPlugins/group/auto/HairPin (0.16s)

=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-631107 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.16s)
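
DNS, Localhost, and HairPin form a fixed trio that repeats for every plugin below: resolve an in-cluster service name, connect to the pod's own port over localhost, and connect back to the pod through its own Service ("hairpin" traffic). The commands are exactly the ones logged above; in nc, -w 5 is a 5-second timeout, -i 5 an interval between probes, and -z a connect-only scan that sends no data:

    # service DNS must resolve inside the pod
    kubectl --context auto-631107 exec deployment/netcat -- nslookup kubernetes.default
    # the pod must reach its own listening port via localhost
    kubectl --context auto-631107 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
    # hairpin: the pod must reach itself through the netcat Service
    kubectl --context auto-631107 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"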

TestNetworkPlugins/group/kindnet/KubeletFlags (0.28s)

=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p kindnet-631107 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.28s)

TestNetworkPlugins/group/kindnet/NetCatPod (10.27s)

=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-631107 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-6qbkt" [4dbb7151-8422-4411-be94-f9bd69919da7] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-6qbkt" [4dbb7151-8422-4411-be94-f9bd69919da7] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 10.003980336s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (10.27s)

TestNetworkPlugins/group/kindnet/DNS (0.24s)

=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kindnet-631107 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.24s)

TestNetworkPlugins/group/kindnet/Localhost (0.29s)

=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kindnet-631107 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.29s)

TestNetworkPlugins/group/kindnet/HairPin (0.27s)

=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kindnet-631107 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.27s)

TestNetworkPlugins/group/flannel/Start (58.47s)

=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p flannel-631107 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=containerd
E0906 19:32:38.522231    7647 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19576-2243/.minikube/profiles/old-k8s-version-057553/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p flannel-631107 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=containerd: (58.466774314s)
--- PASS: TestNetworkPlugins/group/flannel/Start (58.47s)

TestNetworkPlugins/group/enable-default-cni/Start (79.26s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p enable-default-cni-631107 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=containerd
E0906 19:33:06.062817    7647 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19576-2243/.minikube/profiles/addons-663433/client.crt: no such file or directory" logger="UnhandledError"
E0906 19:33:06.223622    7647 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19576-2243/.minikube/profiles/old-k8s-version-057553/client.crt: no such file or directory" logger="UnhandledError"
E0906 19:33:22.973873    7647 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19576-2243/.minikube/profiles/addons-663433/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p enable-default-cni-631107 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=containerd: (1m19.261549642s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (79.26s)

TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:344: "kube-flannel-ds-t96m4" [08b4cd50-150b-465a-974e-7d78c2ddd267] Running
E0906 19:33:33.993040    7647 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19576-2243/.minikube/profiles/no-preload-985607/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 6.005071055s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

TestNetworkPlugins/group/flannel/KubeletFlags (0.28s)

=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p flannel-631107 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.28s)

TestNetworkPlugins/group/flannel/NetCatPod (9.25s)

=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-631107 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-65hcc" [77a38024-0bce-4991-a6cf-cea471de2c0e] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-65hcc" [77a38024-0bce-4991-a6cf-cea471de2c0e] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 9.004074102s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (9.25s)

TestNetworkPlugins/group/flannel/DNS (0.2s)

=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context flannel-631107 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.20s)

TestNetworkPlugins/group/flannel/Localhost (0.16s)

=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context flannel-631107 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.16s)

TestNetworkPlugins/group/flannel/HairPin (0.2s)

=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context flannel-631107 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.20s)

TestNetworkPlugins/group/bridge/Start (86.31s)

=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p bridge-631107 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p bridge-631107 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=containerd: (1m26.307205944s)
--- PASS: TestNetworkPlugins/group/bridge/Start (86.31s)

TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.35s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p enable-default-cni-631107 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.35s)

TestNetworkPlugins/group/enable-default-cni/NetCatPod (11.32s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-631107 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-ktj2l" [8e5e3d00-d038-4d13-a8a4-d7f19fcf386d] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-ktj2l" [8e5e3d00-d038-4d13-a8a4-d7f19fcf386d] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 11.004084698s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (11.32s)

TestNetworkPlugins/group/enable-default-cni/DNS (0.22s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-631107 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.22s)

TestNetworkPlugins/group/enable-default-cni/Localhost (0.21s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:194: (dbg) Run:  kubectl --context enable-default-cni-631107 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.21s)

TestNetworkPlugins/group/enable-default-cni/HairPin (0.22s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:264: (dbg) Run:  kubectl --context enable-default-cni-631107 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.22s)

TestNetworkPlugins/group/calico/Start (67.22s)

=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p calico-631107 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p calico-631107 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=containerd: (1m7.217220237s)
--- PASS: TestNetworkPlugins/group/calico/Start (67.22s)

TestNetworkPlugins/group/bridge/KubeletFlags (0.36s)

=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p bridge-631107 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.36s)

TestNetworkPlugins/group/bridge/NetCatPod (10.41s)

=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context bridge-631107 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-899lp" [39e589af-b128-43a8-911b-0ca5e513e3bd] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-899lp" [39e589af-b128-43a8-911b-0ca5e513e3bd] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 10.004725967s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (10.41s)

TestNetworkPlugins/group/bridge/DNS (0.18s)

=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:175: (dbg) Run:  kubectl --context bridge-631107 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.18s)

TestNetworkPlugins/group/bridge/Localhost (0.15s)

=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:194: (dbg) Run:  kubectl --context bridge-631107 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.15s)

TestNetworkPlugins/group/bridge/HairPin (0.17s)

=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:264: (dbg) Run:  kubectl --context bridge-631107 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.17s)

TestNetworkPlugins/group/calico/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:344: "calico-node-vkb9v" [1ce3f458-44eb-4b4b-87ec-b21d1904cd56] Running
E0906 19:35:54.948877    7647 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19576-2243/.minikube/profiles/default-k8s-diff-port-358550/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 6.004786328s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (6.01s)

TestNetworkPlugins/group/calico/KubeletFlags (0.34s)

=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p calico-631107 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.34s)

TestNetworkPlugins/group/calico/NetCatPod (10.41s)

=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context calico-631107 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-zb4zl" [d69ee6d1-3e49-469e-84cb-1e823fa9688f] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-zb4zl" [d69ee6d1-3e49-469e-84cb-1e823fa9688f] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 10.004201531s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (10.41s)

TestNetworkPlugins/group/custom-flannel/Start (61.43s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p custom-flannel-631107 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=containerd
E0906 19:36:05.190514    7647 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19576-2243/.minikube/profiles/default-k8s-diff-port-358550/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p custom-flannel-631107 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=containerd: (1m1.429772903s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (61.43s)
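
Note the --cni value here: unlike the built-in plugin names used elsewhere in this group (kindnet, flannel, calico, bridge), custom-flannel points --cni at a manifest on disk, so any CNI deployable from YAML can be exercised the same way. A sketch with an illustrative profile name:

    # --cni accepts a path to a CNI manifest as well as a built-in plugin name
    out/minikube-linux-arm64 start -p custom-cni-example --memory=3072 --alsologtostderr \
      --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml \
      --driver=docker --container-runtime=containerd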

TestNetworkPlugins/group/calico/DNS (0.32s)

=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:175: (dbg) Run:  kubectl --context calico-631107 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.32s)

TestNetworkPlugins/group/calico/Localhost (0.2s)

=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:194: (dbg) Run:  kubectl --context calico-631107 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.20s)

TestNetworkPlugins/group/calico/HairPin (0.2s)

=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:264: (dbg) Run:  kubectl --context calico-631107 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.20s)

TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.28s)

=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p custom-flannel-631107 "pgrep -a kubelet"
E0906 19:37:06.633433    7647 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19576-2243/.minikube/profiles/default-k8s-diff-port-358550/client.crt: no such file or directory" logger="UnhandledError"
E0906 19:37:06.814388    7647 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19576-2243/.minikube/profiles/kindnet-631107/client.crt: no such file or directory" logger="UnhandledError"
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.28s)

TestNetworkPlugins/group/custom-flannel/NetCatPod (8.26s)

=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-631107 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-gwkwd" [4ea4a772-7c3f-458c-ba88-14525f3ea383] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E0906 19:37:07.456310    7647 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19576-2243/.minikube/profiles/kindnet-631107/client.crt: no such file or directory" logger="UnhandledError"
E0906 19:37:08.037842    7647 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19576-2243/.minikube/profiles/auto-631107/client.crt: no such file or directory" logger="UnhandledError"
E0906 19:37:08.738386    7647 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19576-2243/.minikube/profiles/kindnet-631107/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:344: "netcat-6fc964789b-gwkwd" [4ea4a772-7c3f-458c-ba88-14525f3ea383] Running
E0906 19:37:11.300375    7647 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19576-2243/.minikube/profiles/kindnet-631107/client.crt: no such file or directory" logger="UnhandledError"
E0906 19:37:12.706461    7647 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19576-2243/.minikube/profiles/functional-015911/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 8.00471737s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (8.26s)

TestNetworkPlugins/group/custom-flannel/DNS (0.21s)

=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-631107 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.21s)

TestNetworkPlugins/group/custom-flannel/Localhost (0.18s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-631107 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.18s)

TestNetworkPlugins/group/custom-flannel/HairPin (0.14s)

=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-631107 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.14s)
Test skip (28/328)

TestDownloadOnly/v1.20.0/cached-images (0s)

=== RUN   TestDownloadOnly/v1.20.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.20.0/cached-images (0.00s)

TestDownloadOnly/v1.20.0/binaries (0s)

=== RUN   TestDownloadOnly/v1.20.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.20.0/binaries (0.00s)

TestDownloadOnly/v1.20.0/kubectl (0s)

=== RUN   TestDownloadOnly/v1.20.0/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.20.0/kubectl (0.00s)

TestDownloadOnly/v1.31.0/cached-images (0s)

=== RUN   TestDownloadOnly/v1.31.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.31.0/cached-images (0.00s)

TestDownloadOnly/v1.31.0/binaries (0s)

=== RUN   TestDownloadOnly/v1.31.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.31.0/binaries (0.00s)

TestDownloadOnly/v1.31.0/kubectl (0s)

=== RUN   TestDownloadOnly/v1.31.0/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.31.0/kubectl (0.00s)

TestDownloadOnlyKic (0.56s)

=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:232: (dbg) Run:  out/minikube-linux-arm64 start --download-only -p download-docker-126447 --alsologtostderr --driver=docker  --container-runtime=containerd
aaa_download_only_test.go:244: Skip for arm64 platform. See https://github.com/kubernetes/minikube/issues/10144
helpers_test.go:175: Cleaning up "download-docker-126447" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p download-docker-126447
--- SKIP: TestDownloadOnlyKic (0.56s)

TestOffline (0s)

=== RUN   TestOffline
=== PAUSE TestOffline
=== CONT  TestOffline
aab_offline_test.go:35: skipping TestOffline - only docker runtime supported on arm64. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestOffline (0.00s)

TestAddons/parallel/HelmTiller (0s)

=== RUN   TestAddons/parallel/HelmTiller
=== PAUSE TestAddons/parallel/HelmTiller
=== CONT  TestAddons/parallel/HelmTiller
addons_test.go:446: skip Helm test on arm64
--- SKIP: TestAddons/parallel/HelmTiller (0.00s)

TestAddons/parallel/Olm (0s)

=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm
=== CONT  TestAddons/parallel/Olm
addons_test.go:500: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

TestDockerFlags (0s)

=== RUN   TestDockerFlags
docker_test.go:41: skipping: only runs with docker container runtime, currently testing containerd
--- SKIP: TestDockerFlags (0.00s)

TestKVMDriverInstallOrUpdate (0s)

=== RUN   TestKVMDriverInstallOrUpdate
driver_install_or_update_test.go:45: Skip if arm64. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestKVMDriverInstallOrUpdate (0.00s)

TestHyperKitDriverInstallOrUpdate (0s)

=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:105: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

TestHyperkitDriverSkipUpgrade (0s)

=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:169: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

TestFunctional/parallel/MySQL (0s)

=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1787: arm64 is not supported by mysql. Skip the test. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestFunctional/parallel/MySQL (0.00s)

TestFunctional/parallel/DockerEnv (0s)

=== RUN   TestFunctional/parallel/DockerEnv
=== PAUSE TestFunctional/parallel/DockerEnv
=== CONT  TestFunctional/parallel/DockerEnv
functional_test.go:463: only validate docker env with docker container runtime, currently testing containerd
--- SKIP: TestFunctional/parallel/DockerEnv (0.00s)

TestFunctional/parallel/PodmanEnv (0s)

=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv
=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:550: only validate podman env with docker container runtime, currently testing containerd
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.00s)

TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.00s)

TestGvisorAddon (0s)

=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

TestImageBuild (0s)

=== RUN   TestImageBuild
image_test.go:33: 
--- SKIP: TestImageBuild (0.00s)

TestChangeNoneUser (0s)

=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

TestScheduledStopWindows (0s)

=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

TestSkaffold (0s)

=== RUN   TestSkaffold
skaffold_test.go:45: skaffold requires docker-env, currently testing containerd container runtime
--- SKIP: TestSkaffold (0.00s)

TestStartStop/group/disable-driver-mounts (0.14s)

=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts
=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:103: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-805199" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p disable-driver-mounts-805199
--- SKIP: TestStartStop/group/disable-driver-mounts (0.14s)

TestNetworkPlugins/group/kubenet (4.14s)

=== RUN   TestNetworkPlugins/group/kubenet
net_test.go:93: Skipping the test as containerd container runtimes requires CNI
E0906 19:13:22.974083    7647 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19576-2243/.minikube/profiles/addons-663433/client.crt: no such file or directory" logger="UnhandledError"
panic.go:626: 
----------------------- debugLogs start: kubenet-631107 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-631107

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: kubenet-631107

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-631107

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: kubenet-631107

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: kubenet-631107

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: kubenet-631107

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: kubenet-631107

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: kubenet-631107

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: kubenet-631107

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: kubenet-631107

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "kubenet-631107" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-631107"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "kubenet-631107" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-631107"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "kubenet-631107" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-631107"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: kubenet-631107

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "kubenet-631107" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-631107"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "kubenet-631107" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-631107"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "kubenet-631107" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "kubenet-631107" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "kubenet-631107" does not exist

>>> k8s: describe coredns deployment:
error: context "kubenet-631107" does not exist

>>> k8s: describe coredns pods:
error: context "kubenet-631107" does not exist

>>> k8s: coredns logs:
error: context "kubenet-631107" does not exist

>>> k8s: describe api server pod(s):
error: context "kubenet-631107" does not exist

>>> k8s: api server logs:
error: context "kubenet-631107" does not exist

>>> host: /etc/cni:
* Profile "kubenet-631107" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-631107"

>>> host: ip a s:
* Profile "kubenet-631107" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-631107"

>>> host: ip r s:
* Profile "kubenet-631107" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-631107"

>>> host: iptables-save:
* Profile "kubenet-631107" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-631107"

>>> host: iptables table nat:
* Profile "kubenet-631107" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-631107"

>>> k8s: describe kube-proxy daemon set:
error: context "kubenet-631107" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "kubenet-631107" does not exist

>>> k8s: kube-proxy logs:
error: context "kubenet-631107" does not exist

>>> host: kubelet daemon status:
* Profile "kubenet-631107" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-631107"

>>> host: kubelet daemon config:
* Profile "kubenet-631107" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-631107"

>>> k8s: kubelet logs:
* Profile "kubenet-631107" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-631107"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "kubenet-631107" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-631107"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "kubenet-631107" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-631107"

>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/19576-2243/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Fri, 06 Sep 2024 19:13:22 UTC
        provider: minikube.sigs.k8s.io
        version: v1.34.0
      name: cluster_info
    server: https://192.168.76.2:8443
  name: pause-253053
contexts:
- context:
    cluster: pause-253053
    extensions:
    - extension:
        last-update: Fri, 06 Sep 2024 19:13:22 UTC
        provider: minikube.sigs.k8s.io
        version: v1.34.0
      name: context_info
    namespace: default
    user: pause-253053
  name: pause-253053
current-context: pause-253053
kind: Config
preferences: {}
users:
- name: pause-253053
  user:
    client-certificate: /home/jenkins/minikube-integration/19576-2243/.minikube/profiles/pause-253053/client.crt
    client-key: /home/jenkins/minikube-integration/19576-2243/.minikube/profiles/pause-253053/client.key

>>> k8s: cms:
Error in configuration: context was not found for specified context: kubenet-631107

>>> host: docker daemon status:
* Profile "kubenet-631107" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-631107"

>>> host: docker daemon config:
* Profile "kubenet-631107" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-631107"

>>> host: /etc/docker/daemon.json:
* Profile "kubenet-631107" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-631107"

>>> host: docker system info:
* Profile "kubenet-631107" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-631107"

>>> host: cri-docker daemon status:
* Profile "kubenet-631107" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-631107"

>>> host: cri-docker daemon config:
* Profile "kubenet-631107" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-631107"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "kubenet-631107" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-631107"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "kubenet-631107" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-631107"

>>> host: cri-dockerd version:
* Profile "kubenet-631107" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-631107"

>>> host: containerd daemon status:
* Profile "kubenet-631107" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-631107"

>>> host: containerd daemon config:
* Profile "kubenet-631107" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-631107"

>>> host: /lib/systemd/system/containerd.service:
* Profile "kubenet-631107" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-631107"

>>> host: /etc/containerd/config.toml:
* Profile "kubenet-631107" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-631107"

>>> host: containerd config dump:
* Profile "kubenet-631107" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-631107"

>>> host: crio daemon status:
* Profile "kubenet-631107" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-631107"

>>> host: crio daemon config:
* Profile "kubenet-631107" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-631107"

>>> host: /etc/crio:
* Profile "kubenet-631107" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-631107"

>>> host: crio config:
* Profile "kubenet-631107" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-631107"

----------------------- debugLogs end: kubenet-631107 [took: 3.901902915s] --------------------------------
helpers_test.go:175: Cleaning up "kubenet-631107" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p kubenet-631107
--- SKIP: TestNetworkPlugins/group/kubenet (4.14s)

x
+
TestNetworkPlugins/group/cilium (5.06s)

=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:626: 
----------------------- debugLogs start: cilium-631107 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-631107

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-631107

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-631107

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-631107

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-631107

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-631107

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-631107

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-631107

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-631107

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-631107

>>> host: /etc/nsswitch.conf:
* Profile "cilium-631107" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-631107"

>>> host: /etc/hosts:
* Profile "cilium-631107" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-631107"

>>> host: /etc/resolv.conf:
* Profile "cilium-631107" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-631107"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods:
Error in configuration: context was not found for specified context: cilium-631107

>>> host: crictl pods:
* Profile "cilium-631107" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-631107"

>>> host: crictl containers:
* Profile "cilium-631107" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-631107"

>>> k8s: describe netcat deployment:
error: context "cilium-631107" does not exist

>>> k8s: describe netcat pod(s):
error: context "cilium-631107" does not exist

>>> k8s: netcat logs:
error: context "cilium-631107" does not exist

>>> k8s: describe coredns deployment:
error: context "cilium-631107" does not exist

>>> k8s: describe coredns pods:
error: context "cilium-631107" does not exist

>>> k8s: coredns logs:
error: context "cilium-631107" does not exist

>>> k8s: describe api server pod(s):
error: context "cilium-631107" does not exist

>>> k8s: api server logs:
error: context "cilium-631107" does not exist

>>> host: /etc/cni:
* Profile "cilium-631107" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-631107"

>>> host: ip a s:
* Profile "cilium-631107" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-631107"

>>> host: ip r s:
* Profile "cilium-631107" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-631107"

>>> host: iptables-save:
* Profile "cilium-631107" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-631107"

>>> host: iptables table nat:
* Profile "cilium-631107" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-631107"

>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-631107

>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-631107

>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-631107" does not exist

>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-631107" does not exist

>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-631107

>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-631107

>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-631107" does not exist

>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-631107" does not exist

>>> k8s: describe kube-proxy daemon set:
error: context "cilium-631107" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "cilium-631107" does not exist

>>> k8s: kube-proxy logs:
error: context "cilium-631107" does not exist

>>> host: kubelet daemon status:
* Profile "cilium-631107" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-631107"

>>> host: kubelet daemon config:
* Profile "cilium-631107" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-631107"

>>> k8s: kubelet logs:
* Profile "cilium-631107" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-631107"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-631107" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-631107"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-631107" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-631107"

>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-631107

>>> host: docker daemon status:
* Profile "cilium-631107" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-631107"

>>> host: docker daemon config:
* Profile "cilium-631107" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-631107"

>>> host: /etc/docker/daemon.json:
* Profile "cilium-631107" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-631107"

>>> host: docker system info:
* Profile "cilium-631107" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-631107"

>>> host: cri-docker daemon status:
* Profile "cilium-631107" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-631107"

>>> host: cri-docker daemon config:
* Profile "cilium-631107" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-631107"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-631107" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-631107"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-631107" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-631107"

>>> host: cri-dockerd version:
* Profile "cilium-631107" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-631107"

>>> host: containerd daemon status:
* Profile "cilium-631107" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-631107"

>>> host: containerd daemon config:
* Profile "cilium-631107" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-631107"

>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-631107" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-631107"

>>> host: /etc/containerd/config.toml:
* Profile "cilium-631107" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-631107"

>>> host: containerd config dump:
* Profile "cilium-631107" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-631107"

>>> host: crio daemon status:
* Profile "cilium-631107" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-631107"

>>> host: crio daemon config:
* Profile "cilium-631107" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-631107"

>>> host: /etc/crio:
* Profile "cilium-631107" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-631107"

>>> host: crio config:
* Profile "cilium-631107" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-631107"

----------------------- debugLogs end: cilium-631107 [took: 4.878503417s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-631107" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p cilium-631107
--- SKIP: TestNetworkPlugins/group/cilium (5.06s)
