Test Report: Docker_Linux_containerd_arm64 19478

cdbac7a92b6ef0941d2ffc9877dc4d64cf2ec5e1:2024-08-19:35858

Failed tests (1/328)

| Order | Failed Test               | Duration (s) |
|-------|---------------------------|--------------|
| 29    | TestAddons/serial/Volcano | 199.91       |
TestAddons/serial/Volcano (199.91s)
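The failing step applies testdata/vcjob.yaml and waits for the resulting pod, which never schedules. For reference, here is a hypothetical reconstruction of that job, assuming the standard Volcano batch/v1alpha1 schema; the names, queue, image, command, and CPU figures come from the kubectl describe output further down, and the real testdata/vcjob.yaml may differ:

    # Hypothetical sketch reconstructed from the pod description below.
    kubectl --context addons-726932 apply -f - <<'EOF'
    apiVersion: batch.volcano.sh/v1alpha1
    kind: Job
    metadata:
      name: test-job
      namespace: my-volcano
    spec:
      schedulerName: volcano
      queue: test
      tasks:
        - replicas: 1
          name: nginx
          template:
            spec:
              restartPolicy: Never
              containers:
                - name: nginx
                  image: nginx:latest
                  command: ["sleep", "10m"]
                  resources:
                    requests:
                      cpu: "1"
                    limits:
                      cpu: "1"
    EOF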

=== RUN   TestAddons/serial/Volcano
addons_test.go:905: volcano-admission stabilized in 36.738559ms
addons_test.go:897: volcano-scheduler stabilized in 36.929771ms
addons_test.go:913: volcano-controller stabilized in 36.99684ms
addons_test.go:919: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-scheduler" in namespace "volcano-system" ...
helpers_test.go:344: "volcano-scheduler-576bc46687-n9f8l" [3c628a79-0772-40b4-9038-44968c862c17] Running
addons_test.go:919: (dbg) TestAddons/serial/Volcano: app=volcano-scheduler healthy within 6.003953261s
addons_test.go:923: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-admission" in namespace "volcano-system" ...
helpers_test.go:344: "volcano-admission-77d7d48b68-b87q7" [c8c198d0-ed58-430c-a357-1f75dd324f6c] Running
addons_test.go:923: (dbg) TestAddons/serial/Volcano: app=volcano-admission healthy within 5.003847667s
addons_test.go:927: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-controller" in namespace "volcano-system" ...
helpers_test.go:344: "volcano-controllers-56675bb4d5-z6zwg" [fe360269-618e-4cb7-8fbf-a4bada8ea3b1] Running
addons_test.go:927: (dbg) TestAddons/serial/Volcano: app=volcano-controller healthy within 5.004367557s
addons_test.go:932: (dbg) Run:  kubectl --context addons-726932 delete -n volcano-system job volcano-admission-init
addons_test.go:938: (dbg) Run:  kubectl --context addons-726932 create -f testdata/vcjob.yaml
addons_test.go:946: (dbg) Run:  kubectl --context addons-726932 get vcjob -n my-volcano
addons_test.go:964: (dbg) TestAddons/serial/Volcano: waiting 3m0s for pods matching "volcano.sh/job-name=test-job" in namespace "my-volcano" ...
helpers_test.go:344: "test-job-nginx-0" [97085f91-82d4-45e6-b3c8-1791a1835254] Pending: PodScheduled:Unschedulable (0/1 nodes are unavailable: 1 Insufficient cpu.)
helpers_test.go:329: TestAddons/serial/Volcano: WARNING: pod list for "my-volcano" "volcano.sh/job-name=test-job" returned: client rate limiter Wait returned an error: context deadline exceeded
addons_test.go:964: ***** TestAddons/serial/Volcano: pod "volcano.sh/job-name=test-job" failed to start within 3m0s: context deadline exceeded ****
addons_test.go:964: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p addons-726932 -n addons-726932
addons_test.go:964: TestAddons/serial/Volcano: showing logs for failed pods as of 2024-08-19 17:56:53.440694447 +0000 UTC m=+429.345003402
addons_test.go:964: (dbg) Run:  kubectl --context addons-726932 describe po test-job-nginx-0 -n my-volcano
addons_test.go:964: (dbg) kubectl --context addons-726932 describe po test-job-nginx-0 -n my-volcano:
Name:             test-job-nginx-0
Namespace:        my-volcano
Priority:         0
Service Account:  default
Node:             <none>
Labels:           volcano.sh/job-name=test-job
                  volcano.sh/job-namespace=my-volcano
                  volcano.sh/queue-name=test
                  volcano.sh/task-index=0
                  volcano.sh/task-spec=nginx
Annotations:      scheduling.k8s.io/group-name: test-job-ce91e926-5881-4dcd-9e9e-adb6a1879cc0
                  volcano.sh/job-name: test-job
                  volcano.sh/job-version: 0
                  volcano.sh/queue-name: test
                  volcano.sh/task-index: 0
                  volcano.sh/task-spec: nginx
                  volcano.sh/template-uid: test-job-nginx
Status:           Pending
IP:
IPs:              <none>
Controlled By:    Job/test-job
Containers:
  nginx:
    Image:      nginx:latest
    Port:       <none>
    Host Port:  <none>
    Command:
      sleep
      10m
    Limits:
      cpu:  1
    Requests:
      cpu:  1
    Environment:
      GOOGLE_APPLICATION_CREDENTIALS:  /google-app-creds.json
      PROJECT_ID:                      this_is_fake
      GCP_PROJECT:                     this_is_fake
      GCLOUD_PROJECT:                  this_is_fake
      GOOGLE_CLOUD_PROJECT:            this_is_fake
      CLOUDSDK_CORE_PROJECT:           this_is_fake
    Mounts:
      /google-app-creds.json from gcp-creds (ro)
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-flj5q (ro)
Conditions:
  Type           Status
  PodScheduled   False
Volumes:
  kube-api-access-flj5q:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    ConfigMapOptional:       <nil>
    DownwardAPI:             true
  gcp-creds:
    Type:          HostPath (bare host directory volume)
    Path:          /var/lib/minikube/google_application_credentials.json
    HostPathType:  File
QoS Class:         Burstable
Node-Selectors:    <none>
Tolerations:       node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                   node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type     Reason            Age    From     Message
  ----     ------            ----   ----     -------
  Warning  FailedScheduling  2m59s  volcano  0/1 nodes are unavailable: 1 Insufficient cpu.
addons_test.go:964: (dbg) Run:  kubectl --context addons-726932 logs test-job-nginx-0 -n my-volcano
addons_test.go:964: (dbg) kubectl --context addons-726932 logs test-job-nginx-0 -n my-volcano:
addons_test.go:965: failed waiting for test-local-path pod: volcano.sh/job-name=test-job within 3m0s: context deadline exceeded
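The scheduling failure is consistent with the cluster sizing: the node is created with 2 CPUs (NanoCpus: 2000000000 in the docker inspect output below), and this run enables roughly a dozen addons whose pods carry CPU requests, so less than one whole CPU remains allocatable by the time test-job asks for cpu: 1. A quick way to confirm, assuming the standard kubectl describe layout:

    # Compare allocatable CPU against what running pods already request.
    kubectl --context addons-726932 describe node addons-726932 \
      | grep -A 8 'Allocated resources'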
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestAddons/serial/Volcano]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect addons-726932
helpers_test.go:235: (dbg) docker inspect addons-726932:

-- stdout --
	[
	    {
	        "Id": "2739ce21c4c59eabf03e1e12e96561fad2f92e240358f64f9e18c2a94c336551",
	        "Created": "2024-08-19T17:50:26.092819106Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 301295,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2024-08-19T17:50:26.235234835Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:1082065554095668b21dfc58cfca3febbc96bb8424fcaec6e38d6ee040df84c8",
	        "ResolvConfPath": "/var/lib/docker/containers/2739ce21c4c59eabf03e1e12e96561fad2f92e240358f64f9e18c2a94c336551/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/2739ce21c4c59eabf03e1e12e96561fad2f92e240358f64f9e18c2a94c336551/hostname",
	        "HostsPath": "/var/lib/docker/containers/2739ce21c4c59eabf03e1e12e96561fad2f92e240358f64f9e18c2a94c336551/hosts",
	        "LogPath": "/var/lib/docker/containers/2739ce21c4c59eabf03e1e12e96561fad2f92e240358f64f9e18c2a94c336551/2739ce21c4c59eabf03e1e12e96561fad2f92e240358f64f9e18c2a94c336551-json.log",
	        "Name": "/addons-726932",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "addons-726932:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "addons-726932",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4194304000,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8388608000,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/376e851233574916b8b8dd44238620ac97245a3eee4f368780fd1eda3c5b832c-init/diff:/var/lib/docker/overlay2/94ffea5601e2a1ebddf2686ff6b40550a3058c3ee58cccc28c80433b442a091b/diff",
	                "MergedDir": "/var/lib/docker/overlay2/376e851233574916b8b8dd44238620ac97245a3eee4f368780fd1eda3c5b832c/merged",
	                "UpperDir": "/var/lib/docker/overlay2/376e851233574916b8b8dd44238620ac97245a3eee4f368780fd1eda3c5b832c/diff",
	                "WorkDir": "/var/lib/docker/overlay2/376e851233574916b8b8dd44238620ac97245a3eee4f368780fd1eda3c5b832c/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "addons-726932",
	                "Source": "/var/lib/docker/volumes/addons-726932/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "addons-726932",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724062045-19478@sha256:18a6788f22059eb28b337d2ac1f60d157ba1f4188844194d9df40beae3c7e41b",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "addons-726932",
	                "name.minikube.sigs.k8s.io": "addons-726932",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "2d703be58c7808f15feb8de882afd26465eb7d940ee41d943fe80d0e01027a32",
	            "SandboxKey": "/var/run/docker/netns/2d703be58c78",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33141"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33142"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33145"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33143"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33144"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "addons-726932": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "DriverOpts": null,
	                    "NetworkID": "1330cba0d21dfde2abd68d926d40183602d2457e99fff9378e241add55847fb9",
	                    "EndpointID": "dd4faf108ab69f62f4fce4eb5df3a9f88efde31ea49c9585c1d665e910bbb229",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "addons-726932",
	                        "2739ce21c4c5"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
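The HostConfig above matches that sizing: NanoCpus 2000000000 is 2 CPUs, and Memory 4194304000 bytes is exactly 4000 MiB, i.e. the --memory=4000 start flag. Standard docker templating extracts just those fields:

    docker inspect -f '{{.HostConfig.NanoCpus}} nanoCPUs, {{.HostConfig.Memory}} bytes' addons-726932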
helpers_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p addons-726932 -n addons-726932
helpers_test.go:244: <<< TestAddons/serial/Volcano FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestAddons/serial/Volcano]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 -p addons-726932 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-arm64 -p addons-726932 logs -n 25: (1.638217224s)
helpers_test.go:252: TestAddons/serial/Volcano logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	| Command |                 Args                 |        Profile         |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only              | download-only-731537   | jenkins | v1.33.1 | 19 Aug 24 17:49 UTC |                     |
	|         | -p download-only-731537              |                        |         |         |                     |                     |
	|         | --force --alsologtostderr            |                        |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0         |                        |         |         |                     |                     |
	|         | --container-runtime=containerd       |                        |         |         |                     |                     |
	|         | --driver=docker                      |                        |         |         |                     |                     |
	|         | --container-runtime=containerd       |                        |         |         |                     |                     |
	| delete  | --all                                | minikube               | jenkins | v1.33.1 | 19 Aug 24 17:49 UTC | 19 Aug 24 17:49 UTC |
	| delete  | -p download-only-731537              | download-only-731537   | jenkins | v1.33.1 | 19 Aug 24 17:49 UTC | 19 Aug 24 17:49 UTC |
	| start   | -o=json --download-only              | download-only-221522   | jenkins | v1.33.1 | 19 Aug 24 17:49 UTC |                     |
	|         | -p download-only-221522              |                        |         |         |                     |                     |
	|         | --force --alsologtostderr            |                        |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0         |                        |         |         |                     |                     |
	|         | --container-runtime=containerd       |                        |         |         |                     |                     |
	|         | --driver=docker                      |                        |         |         |                     |                     |
	|         | --container-runtime=containerd       |                        |         |         |                     |                     |
	| delete  | --all                                | minikube               | jenkins | v1.33.1 | 19 Aug 24 17:50 UTC | 19 Aug 24 17:50 UTC |
	| delete  | -p download-only-221522              | download-only-221522   | jenkins | v1.33.1 | 19 Aug 24 17:50 UTC | 19 Aug 24 17:50 UTC |
	| delete  | -p download-only-731537              | download-only-731537   | jenkins | v1.33.1 | 19 Aug 24 17:50 UTC | 19 Aug 24 17:50 UTC |
	| delete  | -p download-only-221522              | download-only-221522   | jenkins | v1.33.1 | 19 Aug 24 17:50 UTC | 19 Aug 24 17:50 UTC |
	| start   | --download-only -p                   | download-docker-696457 | jenkins | v1.33.1 | 19 Aug 24 17:50 UTC |                     |
	|         | download-docker-696457               |                        |         |         |                     |                     |
	|         | --alsologtostderr                    |                        |         |         |                     |                     |
	|         | --driver=docker                      |                        |         |         |                     |                     |
	|         | --container-runtime=containerd       |                        |         |         |                     |                     |
	| delete  | -p download-docker-696457            | download-docker-696457 | jenkins | v1.33.1 | 19 Aug 24 17:50 UTC | 19 Aug 24 17:50 UTC |
	| start   | --download-only -p                   | binary-mirror-936896   | jenkins | v1.33.1 | 19 Aug 24 17:50 UTC |                     |
	|         | binary-mirror-936896                 |                        |         |         |                     |                     |
	|         | --alsologtostderr                    |                        |         |         |                     |                     |
	|         | --binary-mirror                      |                        |         |         |                     |                     |
	|         | http://127.0.0.1:38531               |                        |         |         |                     |                     |
	|         | --driver=docker                      |                        |         |         |                     |                     |
	|         | --container-runtime=containerd       |                        |         |         |                     |                     |
	| delete  | -p binary-mirror-936896              | binary-mirror-936896   | jenkins | v1.33.1 | 19 Aug 24 17:50 UTC | 19 Aug 24 17:50 UTC |
	| addons  | enable dashboard -p                  | addons-726932          | jenkins | v1.33.1 | 19 Aug 24 17:50 UTC |                     |
	|         | addons-726932                        |                        |         |         |                     |                     |
	| addons  | disable dashboard -p                 | addons-726932          | jenkins | v1.33.1 | 19 Aug 24 17:50 UTC |                     |
	|         | addons-726932                        |                        |         |         |                     |                     |
	| start   | -p addons-726932 --wait=true         | addons-726932          | jenkins | v1.33.1 | 19 Aug 24 17:50 UTC | 19 Aug 24 17:53 UTC |
	|         | --memory=4000 --alsologtostderr      |                        |         |         |                     |                     |
	|         | --addons=registry                    |                        |         |         |                     |                     |
	|         | --addons=metrics-server              |                        |         |         |                     |                     |
	|         | --addons=volumesnapshots             |                        |         |         |                     |                     |
	|         | --addons=csi-hostpath-driver         |                        |         |         |                     |                     |
	|         | --addons=gcp-auth                    |                        |         |         |                     |                     |
	|         | --addons=cloud-spanner               |                        |         |         |                     |                     |
	|         | --addons=inspektor-gadget            |                        |         |         |                     |                     |
	|         | --addons=storage-provisioner-rancher |                        |         |         |                     |                     |
	|         | --addons=nvidia-device-plugin        |                        |         |         |                     |                     |
	|         | --addons=yakd --addons=volcano       |                        |         |         |                     |                     |
	|         | --driver=docker                      |                        |         |         |                     |                     |
	|         | --container-runtime=containerd       |                        |         |         |                     |                     |
	|         | --addons=ingress                     |                        |         |         |                     |                     |
	|         | --addons=ingress-dns                 |                        |         |         |                     |                     |
	|---------|--------------------------------------|------------------------|---------|---------|---------------------|---------------------|
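	# Not part of the log: the failing profile's start invocation, reassembled
	# from the Audit rows above into one runnable command (the table wraps each
	# flag onto its own row).
	out/minikube-linux-arm64 start -p addons-726932 --wait=true --memory=4000 \
	  --alsologtostderr --addons=registry --addons=metrics-server \
	  --addons=volumesnapshots --addons=csi-hostpath-driver \
	  --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget \
	  --addons=storage-provisioner-rancher --addons=nvidia-device-plugin \
	  --addons=yakd --addons=volcano --driver=docker \
	  --container-runtime=containerd --addons=ingress --addons=ingress-dns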
	
	
	==> Last Start <==
	Log file created at: 2024/08/19 17:50:02
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.22.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0819 17:50:02.526556  300803 out.go:345] Setting OutFile to fd 1 ...
	I0819 17:50:02.526743  300803 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 17:50:02.526755  300803 out.go:358] Setting ErrFile to fd 2...
	I0819 17:50:02.526761  300803 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 17:50:02.527018  300803 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19478-294620/.minikube/bin
	I0819 17:50:02.527496  300803 out.go:352] Setting JSON to false
	I0819 17:50:02.528389  300803 start.go:129] hostinfo: {"hostname":"ip-172-31-31-251","uptime":5543,"bootTime":1724084260,"procs":166,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1067-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I0819 17:50:02.528464  300803 start.go:139] virtualization:  
	I0819 17:50:02.531190  300803 out.go:177] * [addons-726932] minikube v1.33.1 on Ubuntu 20.04 (arm64)
	I0819 17:50:02.534150  300803 out.go:177]   - MINIKUBE_LOCATION=19478
	I0819 17:50:02.534291  300803 notify.go:220] Checking for updates...
	I0819 17:50:02.538664  300803 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0819 17:50:02.540719  300803 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19478-294620/kubeconfig
	I0819 17:50:02.542807  300803 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19478-294620/.minikube
	I0819 17:50:02.544534  300803 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0819 17:50:02.546592  300803 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0819 17:50:02.548797  300803 driver.go:392] Setting default libvirt URI to qemu:///system
	I0819 17:50:02.569826  300803 docker.go:123] docker version: linux-27.1.2:Docker Engine - Community
	I0819 17:50:02.569951  300803 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0819 17:50:02.638271  300803 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:29 OomKillDisable:true NGoroutines:44 SystemTime:2024-08-19 17:50:02.628971289 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1067-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214904832 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:27.1.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:8fc6bcff51318944179630522a095cc9dbf9f353 Expected:8fc6bcff51318944179630522a095cc9dbf9f353} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.1]] Warnings:<nil>}}
	I0819 17:50:02.638394  300803 docker.go:307] overlay module found
	I0819 17:50:02.640997  300803 out.go:177] * Using the docker driver based on user configuration
	I0819 17:50:02.642827  300803 start.go:297] selected driver: docker
	I0819 17:50:02.642850  300803 start.go:901] validating driver "docker" against <nil>
	I0819 17:50:02.642870  300803 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0819 17:50:02.643645  300803 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0819 17:50:02.700421  300803 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:29 OomKillDisable:true NGoroutines:44 SystemTime:2024-08-19 17:50:02.691413667 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1067-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214904832 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:27.1.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:8fc6bcff51318944179630522a095cc9dbf9f353 Expected:8fc6bcff51318944179630522a095cc9dbf9f353} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.1]] Warnings:<nil>}}
	I0819 17:50:02.700588  300803 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0819 17:50:02.700819  300803 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0819 17:50:02.703001  300803 out.go:177] * Using Docker driver with root privileges
	I0819 17:50:02.705167  300803 cni.go:84] Creating CNI manager for ""
	I0819 17:50:02.705185  300803 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I0819 17:50:02.705196  300803 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0819 17:50:02.705276  300803 start.go:340] cluster config:
	{Name:addons-726932 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724062045-19478@sha256:18a6788f22059eb28b337d2ac1f60d157ba1f4188844194d9df40beae3c7e41b Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:addons-726932 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0819 17:50:02.707540  300803 out.go:177] * Starting "addons-726932" primary control-plane node in "addons-726932" cluster
	I0819 17:50:02.709558  300803 cache.go:121] Beginning downloading kic base image for docker with containerd
	I0819 17:50:02.711507  300803 out.go:177] * Pulling base image v0.0.44-1724062045-19478 ...
	I0819 17:50:02.713963  300803 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime containerd
	I0819 17:50:02.714025  300803 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19478-294620/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-containerd-overlay2-arm64.tar.lz4
	I0819 17:50:02.714040  300803 cache.go:56] Caching tarball of preloaded images
	I0819 17:50:02.714048  300803 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724062045-19478@sha256:18a6788f22059eb28b337d2ac1f60d157ba1f4188844194d9df40beae3c7e41b in local docker daemon
	I0819 17:50:02.714131  300803 preload.go:172] Found /home/jenkins/minikube-integration/19478-294620/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-containerd-overlay2-arm64.tar.lz4 in cache, skipping download
	I0819 17:50:02.714141  300803 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on containerd
	I0819 17:50:02.714501  300803 profile.go:143] Saving config to /home/jenkins/minikube-integration/19478-294620/.minikube/profiles/addons-726932/config.json ...
	I0819 17:50:02.714522  300803 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19478-294620/.minikube/profiles/addons-726932/config.json: {Name:mkb75df185f0a45993e6f04604cab52efba65732 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 17:50:02.729047  300803 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724062045-19478@sha256:18a6788f22059eb28b337d2ac1f60d157ba1f4188844194d9df40beae3c7e41b to local cache
	I0819 17:50:02.729167  300803 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724062045-19478@sha256:18a6788f22059eb28b337d2ac1f60d157ba1f4188844194d9df40beae3c7e41b in local cache directory
	I0819 17:50:02.729191  300803 image.go:66] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724062045-19478@sha256:18a6788f22059eb28b337d2ac1f60d157ba1f4188844194d9df40beae3c7e41b in local cache directory, skipping pull
	I0819 17:50:02.729199  300803 image.go:135] gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724062045-19478@sha256:18a6788f22059eb28b337d2ac1f60d157ba1f4188844194d9df40beae3c7e41b exists in cache, skipping pull
	I0819 17:50:02.729215  300803 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724062045-19478@sha256:18a6788f22059eb28b337d2ac1f60d157ba1f4188844194d9df40beae3c7e41b as a tarball
	I0819 17:50:02.729227  300803 cache.go:162] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724062045-19478@sha256:18a6788f22059eb28b337d2ac1f60d157ba1f4188844194d9df40beae3c7e41b from local cache
	I0819 17:50:19.556417  300803 cache.go:164] successfully loaded and using gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724062045-19478@sha256:18a6788f22059eb28b337d2ac1f60d157ba1f4188844194d9df40beae3c7e41b from cached tarball
	I0819 17:50:19.556456  300803 cache.go:194] Successfully downloaded all kic artifacts
	I0819 17:50:19.556499  300803 start.go:360] acquireMachinesLock for addons-726932: {Name:mk8cc8937da19528b87ce63fd8a9a9aec5a0707b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0819 17:50:19.556628  300803 start.go:364] duration metric: took 104.533µs to acquireMachinesLock for "addons-726932"
	I0819 17:50:19.556666  300803 start.go:93] Provisioning new machine with config: &{Name:addons-726932 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724062045-19478@sha256:18a6788f22059eb28b337d2ac1f60d157ba1f4188844194d9df40beae3c7e41b Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:addons-726932 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0819 17:50:19.556794  300803 start.go:125] createHost starting for "" (driver="docker")
	I0819 17:50:19.558977  300803 out.go:235] * Creating docker container (CPUs=2, Memory=4000MB) ...
	I0819 17:50:19.559218  300803 start.go:159] libmachine.API.Create for "addons-726932" (driver="docker")
	I0819 17:50:19.559251  300803 client.go:168] LocalClient.Create starting
	I0819 17:50:19.559353  300803 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/19478-294620/.minikube/certs/ca.pem
	I0819 17:50:20.005162  300803 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/19478-294620/.minikube/certs/cert.pem
	I0819 17:50:20.221722  300803 cli_runner.go:164] Run: docker network inspect addons-726932 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0819 17:50:20.243765  300803 cli_runner.go:211] docker network inspect addons-726932 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0819 17:50:20.243863  300803 network_create.go:284] running [docker network inspect addons-726932] to gather additional debugging logs...
	I0819 17:50:20.243885  300803 cli_runner.go:164] Run: docker network inspect addons-726932
	W0819 17:50:20.258081  300803 cli_runner.go:211] docker network inspect addons-726932 returned with exit code 1
	I0819 17:50:20.258115  300803 network_create.go:287] error running [docker network inspect addons-726932]: docker network inspect addons-726932: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network addons-726932 not found
	I0819 17:50:20.258128  300803 network_create.go:289] output of [docker network inspect addons-726932]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network addons-726932 not found
	
	** /stderr **
	I0819 17:50:20.258229  300803 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0819 17:50:20.273552  300803 network.go:206] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x40017d5e50}
	I0819 17:50:20.273602  300803 network_create.go:124] attempt to create docker network addons-726932 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I0819 17:50:20.273662  300803 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=addons-726932 addons-726932
	I0819 17:50:20.343909  300803 network_create.go:108] docker network addons-726932 192.168.49.0/24 created
	I0819 17:50:20.343947  300803 kic.go:121] calculated static IP "192.168.49.2" for the "addons-726932" container
	I0819 17:50:20.344048  300803 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0819 17:50:20.358404  300803 cli_runner.go:164] Run: docker volume create addons-726932 --label name.minikube.sigs.k8s.io=addons-726932 --label created_by.minikube.sigs.k8s.io=true
	I0819 17:50:20.375633  300803 oci.go:103] Successfully created a docker volume addons-726932
	I0819 17:50:20.375733  300803 cli_runner.go:164] Run: docker run --rm --name addons-726932-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-726932 --entrypoint /usr/bin/test -v addons-726932:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724062045-19478@sha256:18a6788f22059eb28b337d2ac1f60d157ba1f4188844194d9df40beae3c7e41b -d /var/lib
	I0819 17:50:21.941541  300803 cli_runner.go:217] Completed: docker run --rm --name addons-726932-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-726932 --entrypoint /usr/bin/test -v addons-726932:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724062045-19478@sha256:18a6788f22059eb28b337d2ac1f60d157ba1f4188844194d9df40beae3c7e41b -d /var/lib: (1.565766164s)
	I0819 17:50:21.941573  300803 oci.go:107] Successfully prepared a docker volume addons-726932
	I0819 17:50:21.941591  300803 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime containerd
	I0819 17:50:21.941612  300803 kic.go:194] Starting extracting preloaded images to volume ...
	I0819 17:50:21.941730  300803 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/19478-294620/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-containerd-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v addons-726932:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724062045-19478@sha256:18a6788f22059eb28b337d2ac1f60d157ba1f4188844194d9df40beae3c7e41b -I lz4 -xf /preloaded.tar -C /extractDir
	I0819 17:50:26.025386  300803 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/19478-294620/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-containerd-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v addons-726932:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724062045-19478@sha256:18a6788f22059eb28b337d2ac1f60d157ba1f4188844194d9df40beae3c7e41b -I lz4 -xf /preloaded.tar -C /extractDir: (4.083612644s)
	I0819 17:50:26.025419  300803 kic.go:203] duration metric: took 4.083804487s to extract preloaded images to volume ...
	W0819 17:50:26.025586  300803 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I0819 17:50:26.025776  300803 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0819 17:50:26.078260  300803 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname addons-726932 --name addons-726932 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-726932 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=addons-726932 --network addons-726932 --ip 192.168.49.2 --volume addons-726932:/var --security-opt apparmor=unconfined --memory=4000mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724062045-19478@sha256:18a6788f22059eb28b337d2ac1f60d157ba1f4188844194d9df40beae3c7e41b
	I0819 17:50:26.418048  300803 cli_runner.go:164] Run: docker container inspect addons-726932 --format={{.State.Running}}
	I0819 17:50:26.439539  300803 cli_runner.go:164] Run: docker container inspect addons-726932 --format={{.State.Status}}
	I0819 17:50:26.465503  300803 cli_runner.go:164] Run: docker exec addons-726932 stat /var/lib/dpkg/alternatives/iptables
	I0819 17:50:26.544206  300803 oci.go:144] the created container "addons-726932" has a running status.
	I0819 17:50:26.544239  300803 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/19478-294620/.minikube/machines/addons-726932/id_rsa...
	I0819 17:50:27.461191  300803 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/19478-294620/.minikube/machines/addons-726932/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0819 17:50:27.482232  300803 cli_runner.go:164] Run: docker container inspect addons-726932 --format={{.State.Status}}
	I0819 17:50:27.501444  300803 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0819 17:50:27.501521  300803 kic_runner.go:114] Args: [docker exec --privileged addons-726932 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0819 17:50:27.560388  300803 cli_runner.go:164] Run: docker container inspect addons-726932 --format={{.State.Status}}
	I0819 17:50:27.582571  300803 machine.go:93] provisionDockerMachine start ...
	I0819 17:50:27.582667  300803 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-726932
	I0819 17:50:27.601057  300803 main.go:141] libmachine: Using SSH client type: native
	I0819 17:50:27.601327  300803 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e49d0] 0x3e7230 <nil>  [] 0s} 127.0.0.1 33141 <nil> <nil>}
	I0819 17:50:27.601335  300803 main.go:141] libmachine: About to run SSH command:
	hostname
	I0819 17:50:27.733106  300803 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-726932
	
	I0819 17:50:27.733133  300803 ubuntu.go:169] provisioning hostname "addons-726932"
	I0819 17:50:27.733199  300803 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-726932
	I0819 17:50:27.752967  300803 main.go:141] libmachine: Using SSH client type: native
	I0819 17:50:27.753216  300803 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e49d0] 0x3e7230 <nil>  [] 0s} 127.0.0.1 33141 <nil> <nil>}
	I0819 17:50:27.753234  300803 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-726932 && echo "addons-726932" | sudo tee /etc/hostname
	I0819 17:50:27.901952  300803 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-726932
	
	I0819 17:50:27.902040  300803 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-726932
	I0819 17:50:27.919191  300803 main.go:141] libmachine: Using SSH client type: native
	I0819 17:50:27.919453  300803 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3e49d0] 0x3e7230 <nil>  [] 0s} 127.0.0.1 33141 <nil> <nil>}
	I0819 17:50:27.919474  300803 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-726932' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-726932/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-726932' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0819 17:50:28.054184  300803 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0819 17:50:28.054219  300803 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/19478-294620/.minikube CaCertPath:/home/jenkins/minikube-integration/19478-294620/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19478-294620/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19478-294620/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19478-294620/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19478-294620/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19478-294620/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19478-294620/.minikube}
	I0819 17:50:28.054241  300803 ubuntu.go:177] setting up certificates
	I0819 17:50:28.054251  300803 provision.go:84] configureAuth start
	I0819 17:50:28.054319  300803 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-726932
	I0819 17:50:28.071059  300803 provision.go:143] copyHostCerts
	I0819 17:50:28.071158  300803 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19478-294620/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19478-294620/.minikube/key.pem (1675 bytes)
	I0819 17:50:28.071288  300803 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19478-294620/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19478-294620/.minikube/ca.pem (1078 bytes)
	I0819 17:50:28.071356  300803 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19478-294620/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19478-294620/.minikube/cert.pem (1123 bytes)
	I0819 17:50:28.071412  300803 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19478-294620/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19478-294620/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19478-294620/.minikube/certs/ca-key.pem org=jenkins.addons-726932 san=[127.0.0.1 192.168.49.2 addons-726932 localhost minikube]
	I0819 17:50:28.599911  300803 provision.go:177] copyRemoteCerts
	I0819 17:50:28.599986  300803 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0819 17:50:28.600032  300803 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-726932
	I0819 17:50:28.617396  300803 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33141 SSHKeyPath:/home/jenkins/minikube-integration/19478-294620/.minikube/machines/addons-726932/id_rsa Username:docker}
	I0819 17:50:28.714642  300803 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19478-294620/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0819 17:50:28.740539  300803 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19478-294620/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0819 17:50:28.765154  300803 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19478-294620/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0819 17:50:28.789781  300803 provision.go:87] duration metric: took 735.515619ms to configureAuth
	I0819 17:50:28.789810  300803 ubuntu.go:193] setting minikube options for container-runtime
	I0819 17:50:28.790010  300803 config.go:182] Loaded profile config "addons-726932": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.0
	I0819 17:50:28.790023  300803 machine.go:96] duration metric: took 1.207433475s to provisionDockerMachine
	I0819 17:50:28.790035  300803 client.go:171] duration metric: took 9.230769909s to LocalClient.Create
	I0819 17:50:28.790054  300803 start.go:167] duration metric: took 9.230836362s to libmachine.API.Create "addons-726932"
	I0819 17:50:28.790066  300803 start.go:293] postStartSetup for "addons-726932" (driver="docker")
	I0819 17:50:28.790076  300803 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0819 17:50:28.790126  300803 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0819 17:50:28.790170  300803 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-726932
	I0819 17:50:28.807068  300803 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33141 SSHKeyPath:/home/jenkins/minikube-integration/19478-294620/.minikube/machines/addons-726932/id_rsa Username:docker}
	I0819 17:50:28.906945  300803 ssh_runner.go:195] Run: cat /etc/os-release
	I0819 17:50:28.910152  300803 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0819 17:50:28.910197  300803 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0819 17:50:28.910214  300803 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0819 17:50:28.910221  300803 info.go:137] Remote host: Ubuntu 22.04.4 LTS
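The three "Couldn't set key ..." warnings are benign: /etc/os-release carries keys that libmachine's struct does not model. Since the file is plain KEY=value shell syntax, the skipped keys can be read directly; a small sketch:

    # /etc/os-release is valid shell; source it in a subshell so the
    # variables don't leak into the calling environment.
    ( . /etc/os-release
      echo "PRETTY_NAME=$PRETTY_NAME"            # "Ubuntu 22.04.4 LTS" above
      echo "VERSION_CODENAME=$VERSION_CODENAME"  # keys libmachine skipped
      echo "UBUNTU_CODENAME=$UBUNTU_CODENAME" )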
	I0819 17:50:28.910232  300803 filesync.go:126] Scanning /home/jenkins/minikube-integration/19478-294620/.minikube/addons for local assets ...
	I0819 17:50:28.910302  300803 filesync.go:126] Scanning /home/jenkins/minikube-integration/19478-294620/.minikube/files for local assets ...
	I0819 17:50:28.910331  300803 start.go:296] duration metric: took 120.258793ms for postStartSetup
	I0819 17:50:28.910649  300803 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-726932
	I0819 17:50:28.926646  300803 profile.go:143] Saving config to /home/jenkins/minikube-integration/19478-294620/.minikube/profiles/addons-726932/config.json ...
	I0819 17:50:28.926944  300803 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0819 17:50:28.927002  300803 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-726932
	I0819 17:50:28.943204  300803 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33141 SSHKeyPath:/home/jenkins/minikube-integration/19478-294620/.minikube/machines/addons-726932/id_rsa Username:docker}
	I0819 17:50:29.034570  300803 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0819 17:50:29.038975  300803 start.go:128] duration metric: took 9.482164899s to createHost
	I0819 17:50:29.039004  300803 start.go:83] releasing machines lock for "addons-726932", held for 9.482364479s
	I0819 17:50:29.039077  300803 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-726932
	I0819 17:50:29.055255  300803 ssh_runner.go:195] Run: cat /version.json
	I0819 17:50:29.055278  300803 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0819 17:50:29.055307  300803 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-726932
	I0819 17:50:29.055350  300803 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-726932
	I0819 17:50:29.072476  300803 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33141 SSHKeyPath:/home/jenkins/minikube-integration/19478-294620/.minikube/machines/addons-726932/id_rsa Username:docker}
	I0819 17:50:29.077794  300803 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33141 SSHKeyPath:/home/jenkins/minikube-integration/19478-294620/.minikube/machines/addons-726932/id_rsa Username:docker}
	I0819 17:50:29.169908  300803 ssh_runner.go:195] Run: systemctl --version
	I0819 17:50:29.301581  300803 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0819 17:50:29.305784  300803 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
	I0819 17:50:29.330739  300803 cni.go:230] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
	I0819 17:50:29.330829  300803 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0819 17:50:29.360317  300803 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
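The two find/sed pipelines above do the CNI housekeeping: the loopback config gets a "name" field (if missing) and a pinned cniVersion of 1.0.0, and any bridge/podman configs are parked out of containerd's way by renaming them to *.mk_disabled. A more readable sketch of the same two steps, assuming the standard /etc/cni/net.d layout:

    # 1) Patch loopback configs: add "name" if absent, pin cniVersion.
    for f in /etc/cni/net.d/*loopback.conf*; do
      [ -e "$f" ] || continue
      grep -q '"name"' "$f" || \
        sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' "$f"
      sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|' "$f"
    done
    # 2) Disable conflicting bridge/podman configs by renaming them.
    for f in /etc/cni/net.d/*bridge* /etc/cni/net.d/*podman*; do
      [ -e "$f" ] || continue
      case "$f" in *.mk_disabled) ;; *) sudo mv "$f" "$f.mk_disabled" ;; esac
    done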
	I0819 17:50:29.360392  300803 start.go:495] detecting cgroup driver to use...
	I0819 17:50:29.360443  300803 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I0819 17:50:29.360520  300803 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0819 17:50:29.373365  300803 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0819 17:50:29.385198  300803 docker.go:217] disabling cri-docker service (if available) ...
	I0819 17:50:29.385260  300803 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0819 17:50:29.398677  300803 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0819 17:50:29.413011  300803 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0819 17:50:29.492830  300803 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0819 17:50:29.594513  300803 docker.go:233] disabling docker service ...
	I0819 17:50:29.594604  300803 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0819 17:50:29.615506  300803 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0819 17:50:29.627681  300803 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0819 17:50:29.714103  300803 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0819 17:50:29.809344  300803 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
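Because this cluster runs containerd directly, both cri-docker and docker are stopped and masked so that socket activation cannot bring them back to claim the CRI endpoint. The equivalent sequence by hand (sockets first, for exactly that reason):

    sudo systemctl stop -f cri-docker.socket cri-docker.service
    sudo systemctl disable cri-docker.socket
    sudo systemctl mask cri-docker.service
    sudo systemctl stop -f docker.socket docker.service
    sudo systemctl disable docker.socket
    sudo systemctl mask docker.service
    systemctl is-active docker cri-docker.service   # expect "inactive"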
	I0819 17:50:29.820619  300803 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0819 17:50:29.836421  300803 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
	I0819 17:50:29.846939  300803 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0819 17:50:29.857268  300803 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0819 17:50:29.857339  300803 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0819 17:50:29.867736  300803 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0819 17:50:29.878286  300803 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0819 17:50:29.888465  300803 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0819 17:50:29.900677  300803 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0819 17:50:29.909514  300803 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0819 17:50:29.919245  300803 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0819 17:50:29.929042  300803 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0819 17:50:29.939041  300803 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0819 17:50:29.948840  300803 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0819 17:50:29.957435  300803 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0819 17:50:30.092327  300803 ssh_runner.go:195] Run: sudo systemctl restart containerd
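The run of sed edits above rewrites /etc/containerd/config.toml in place: pin the pause image, switch any v1 runtimes to the runc v2 shim, select cgroupfs to match the driver detected on the host (SystemdCgroup = false), point conf_dir at /etc/cni/net.d, and re-add enable_unprivileged_ports under the CRI plugin. Collapsed into a single invocation (a sketch mirroring the log; expression order matters for the enable_unprivileged_ports pair):

    CFG=/etc/containerd/config.toml
    sudo sed -i -r \
      -e 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' \
      -e 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' \
      -e 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' \
      -e 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' \
      -e '/systemd_cgroup/d' \
      -e 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' \
      -e 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' \
      -e '/^ *enable_unprivileged_ports = /d' \
      -e 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' \
      "$CFG"
    sudo systemctl daemon-reload && sudo systemctl restart containerd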
	I0819 17:50:30.244368  300803 start.go:542] Will wait 60s for socket path /run/containerd/containerd.sock
	I0819 17:50:30.244470  300803 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I0819 17:50:30.248569  300803 start.go:563] Will wait 60s for crictl version
	I0819 17:50:30.248641  300803 ssh_runner.go:195] Run: which crictl
	I0819 17:50:30.252395  300803 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0819 17:50:30.293920  300803 start.go:579] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  1.7.20
	RuntimeApiVersion:  v1
	I0819 17:50:30.294062  300803 ssh_runner.go:195] Run: containerd --version
	I0819 17:50:30.316537  300803 ssh_runner.go:195] Run: containerd --version
	I0819 17:50:30.343254  300803 out.go:177] * Preparing Kubernetes v1.31.0 on containerd 1.7.20 ...
	I0819 17:50:30.345416  300803 cli_runner.go:164] Run: docker network inspect addons-726932 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0819 17:50:30.361379  300803 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I0819 17:50:30.365169  300803 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
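The grep-then-rewrite pair above is an idempotent /etc/hosts update: strip any stale entry for the name, append the fresh mapping, and copy the result back in one sudo step. The same pattern in general form (IP and NAME match the values in the log):

    IP=192.168.49.1
    NAME=host.minikube.internal
    # Drop any line ending in "<tab>$NAME", then append the new entry.
    { grep -v $'\t'"$NAME"'$' /etc/hosts; printf '%s\t%s\n' "$IP" "$NAME"; } > /tmp/hosts.$$
    sudo cp /tmp/hosts.$$ /etc/hosts && rm -f /tmp/hosts.$$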
	I0819 17:50:30.376405  300803 kubeadm.go:883] updating cluster {Name:addons-726932 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724062045-19478@sha256:18a6788f22059eb28b337d2ac1f60d157ba1f4188844194d9df40beae3c7e41b Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:addons-726932 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0819 17:50:30.376536  300803 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime containerd
	I0819 17:50:30.376605  300803 ssh_runner.go:195] Run: sudo crictl images --output json
	I0819 17:50:30.414273  300803 containerd.go:627] all images are preloaded for containerd runtime.
	I0819 17:50:30.414302  300803 containerd.go:534] Images already preloaded, skipping extraction
	I0819 17:50:30.414365  300803 ssh_runner.go:195] Run: sudo crictl images --output json
	I0819 17:50:30.457817  300803 containerd.go:627] all images are preloaded for containerd runtime.
	I0819 17:50:30.457840  300803 cache_images.go:84] Images are preloaded, skipping loading
	I0819 17:50:30.457848  300803 kubeadm.go:934] updating node { 192.168.49.2 8443 v1.31.0 containerd true true} ...
	I0819 17:50:30.457954  300803 kubeadm.go:946] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=addons-726932 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0 ClusterName:addons-726932 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0819 17:50:30.458020  300803 ssh_runner.go:195] Run: sudo crictl info
	I0819 17:50:30.494363  300803 cni.go:84] Creating CNI manager for ""
	I0819 17:50:30.494388  300803 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I0819 17:50:30.494398  300803 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0819 17:50:30.494422  300803 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.31.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-726932 NodeName:addons-726932 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0819 17:50:30.494565  300803 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "addons-726932"
	  kubeletExtraArgs:
	    node-ip: 192.168.49.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0819 17:50:30.494640  300803 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0
	I0819 17:50:30.503162  300803 binaries.go:44] Found k8s binaries, skipping transfer
	I0819 17:50:30.503255  300803 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0819 17:50:30.512214  300803 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (317 bytes)
	I0819 17:50:30.529624  300803 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0819 17:50:30.548169  300803 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2167 bytes)
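The file just written (/var/tmp/minikube/kubeadm.yaml.new, promoted to kubeadm.yaml before init) stacks four documents: InitConfiguration, ClusterConfiguration, KubeletConfiguration, and KubeProxyConfiguration. Recent kubeadm can sanity-check such a file offline; a sketch, assuming kubeadm >= v1.26 where the validate subcommand exists:

    # Static check of the multi-document config; no running cluster needed.
    sudo /var/lib/minikube/binaries/v1.31.0/kubeadm config validate \
      --config /var/tmp/minikube/kubeadm.yaml.new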
	I0819 17:50:30.565943  300803 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I0819 17:50:30.569118  300803 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0819 17:50:30.579266  300803 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0819 17:50:30.660251  300803 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0819 17:50:30.676089  300803 certs.go:68] Setting up /home/jenkins/minikube-integration/19478-294620/.minikube/profiles/addons-726932 for IP: 192.168.49.2
	I0819 17:50:30.676113  300803 certs.go:194] generating shared ca certs ...
	I0819 17:50:30.676130  300803 certs.go:226] acquiring lock for ca certs: {Name:mk194d7dd711c221104fedc68783d938981c915d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 17:50:30.676334  300803 certs.go:240] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/19478-294620/.minikube/ca.key
	I0819 17:50:30.878889  300803 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19478-294620/.minikube/ca.crt ...
	I0819 17:50:30.878921  300803 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19478-294620/.minikube/ca.crt: {Name:mkb0848bfebcf5742612b9371bdd52e5f40b52d3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 17:50:30.879621  300803 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19478-294620/.minikube/ca.key ...
	I0819 17:50:30.879638  300803 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19478-294620/.minikube/ca.key: {Name:mk79193e31be744b0641bb5242c3099c1cc16ec8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 17:50:30.879730  300803 certs.go:240] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19478-294620/.minikube/proxy-client-ca.key
	I0819 17:50:31.178073  300803 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19478-294620/.minikube/proxy-client-ca.crt ...
	I0819 17:50:31.178102  300803 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19478-294620/.minikube/proxy-client-ca.crt: {Name:mk77cae077592f47af8986884f8905653fff7bd5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 17:50:31.178282  300803 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19478-294620/.minikube/proxy-client-ca.key ...
	I0819 17:50:31.178295  300803 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19478-294620/.minikube/proxy-client-ca.key: {Name:mk1e06f7e1065a6ed85328332ff8286768039c19 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 17:50:31.178878  300803 certs.go:256] generating profile certs ...
	I0819 17:50:31.178947  300803 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19478-294620/.minikube/profiles/addons-726932/client.key
	I0819 17:50:31.178965  300803 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19478-294620/.minikube/profiles/addons-726932/client.crt with IP's: []
	I0819 17:50:31.498539  300803 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19478-294620/.minikube/profiles/addons-726932/client.crt ...
	I0819 17:50:31.498571  300803 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19478-294620/.minikube/profiles/addons-726932/client.crt: {Name:mk5dd380c1f680dc23b4cb396680edb47b1c077e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 17:50:31.498780  300803 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19478-294620/.minikube/profiles/addons-726932/client.key ...
	I0819 17:50:31.498794  300803 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19478-294620/.minikube/profiles/addons-726932/client.key: {Name:mk917c7002623925e88f94ef68bcd41489f39964 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 17:50:31.499451  300803 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19478-294620/.minikube/profiles/addons-726932/apiserver.key.316e34e9
	I0819 17:50:31.499480  300803 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19478-294620/.minikube/profiles/addons-726932/apiserver.crt.316e34e9 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2]
	I0819 17:50:31.699845  300803 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19478-294620/.minikube/profiles/addons-726932/apiserver.crt.316e34e9 ...
	I0819 17:50:31.699880  300803 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19478-294620/.minikube/profiles/addons-726932/apiserver.crt.316e34e9: {Name:mk139e1d90910327dc5401add07d84bedf983405 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 17:50:31.700064  300803 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19478-294620/.minikube/profiles/addons-726932/apiserver.key.316e34e9 ...
	I0819 17:50:31.700077  300803 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19478-294620/.minikube/profiles/addons-726932/apiserver.key.316e34e9: {Name:mk116348887b7eecfd68649c810bbbc1b100ae8c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 17:50:31.700650  300803 certs.go:381] copying /home/jenkins/minikube-integration/19478-294620/.minikube/profiles/addons-726932/apiserver.crt.316e34e9 -> /home/jenkins/minikube-integration/19478-294620/.minikube/profiles/addons-726932/apiserver.crt
	I0819 17:50:31.700741  300803 certs.go:385] copying /home/jenkins/minikube-integration/19478-294620/.minikube/profiles/addons-726932/apiserver.key.316e34e9 -> /home/jenkins/minikube-integration/19478-294620/.minikube/profiles/addons-726932/apiserver.key
	I0819 17:50:31.700795  300803 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19478-294620/.minikube/profiles/addons-726932/proxy-client.key
	I0819 17:50:31.700817  300803 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19478-294620/.minikube/profiles/addons-726932/proxy-client.crt with IP's: []
	I0819 17:50:32.162615  300803 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19478-294620/.minikube/profiles/addons-726932/proxy-client.crt ...
	I0819 17:50:32.162645  300803 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19478-294620/.minikube/profiles/addons-726932/proxy-client.crt: {Name:mk172305f3ded5f7dd1d0776518e2f6f54d75007 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 17:50:32.163417  300803 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19478-294620/.minikube/profiles/addons-726932/proxy-client.key ...
	I0819 17:50:32.163437  300803 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19478-294620/.minikube/profiles/addons-726932/proxy-client.key: {Name:mk3025f3d028b34164935cc10a709264d51b1486 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 17:50:32.163652  300803 certs.go:484] found cert: /home/jenkins/minikube-integration/19478-294620/.minikube/certs/ca-key.pem (1679 bytes)
	I0819 17:50:32.163696  300803 certs.go:484] found cert: /home/jenkins/minikube-integration/19478-294620/.minikube/certs/ca.pem (1078 bytes)
	I0819 17:50:32.163725  300803 certs.go:484] found cert: /home/jenkins/minikube-integration/19478-294620/.minikube/certs/cert.pem (1123 bytes)
	I0819 17:50:32.163757  300803 certs.go:484] found cert: /home/jenkins/minikube-integration/19478-294620/.minikube/certs/key.pem (1675 bytes)
	I0819 17:50:32.164415  300803 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19478-294620/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0819 17:50:32.189063  300803 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19478-294620/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0819 17:50:32.215977  300803 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19478-294620/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0819 17:50:32.239861  300803 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19478-294620/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0819 17:50:32.264110  300803 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19478-294620/.minikube/profiles/addons-726932/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0819 17:50:32.290138  300803 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19478-294620/.minikube/profiles/addons-726932/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0819 17:50:32.313975  300803 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19478-294620/.minikube/profiles/addons-726932/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0819 17:50:32.338383  300803 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19478-294620/.minikube/profiles/addons-726932/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0819 17:50:32.363462  300803 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19478-294620/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0819 17:50:32.388424  300803 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0819 17:50:32.406715  300803 ssh_runner.go:195] Run: openssl version
	I0819 17:50:32.412285  300803 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0819 17:50:32.421921  300803 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0819 17:50:32.425414  300803 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug 19 17:50 /usr/share/ca-certificates/minikubeCA.pem
	I0819 17:50:32.425525  300803 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0819 17:50:32.432590  300803 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
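The b5213941.0 name is not arbitrary: it is the OpenSSL subject hash of the minikube CA, which is how openssl and most TLS stacks locate trust anchors in /etc/ssl/certs. The two steps above can be reproduced directly:

    # Prints the symlink basename ("b5213941" per the log).
    openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
    # Create the hashed symlink exactly as the step above does:
    sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0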
	I0819 17:50:32.442230  300803 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0819 17:50:32.445778  300803 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0819 17:50:32.445828  300803 kubeadm.go:392] StartCluster: {Name:addons-726932 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724062045-19478@sha256:18a6788f22059eb28b337d2ac1f60d157ba1f4188844194d9df40beae3c7e41b Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:addons-726932 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0819 17:50:32.445943  300803 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I0819 17:50:32.446006  300803 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0819 17:50:32.488566  300803 cri.go:89] found id: ""
	I0819 17:50:32.488638  300803 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0819 17:50:32.497842  300803 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0819 17:50:32.506919  300803 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I0819 17:50:32.507031  300803 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0819 17:50:32.516216  300803 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0819 17:50:32.516239  300803 kubeadm.go:157] found existing configuration files:
	
	I0819 17:50:32.516315  300803 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0819 17:50:32.525646  300803 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0819 17:50:32.525735  300803 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0819 17:50:32.535111  300803 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0819 17:50:32.544288  300803 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0819 17:50:32.544452  300803 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0819 17:50:32.553289  300803 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0819 17:50:32.562620  300803 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0819 17:50:32.562752  300803 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0819 17:50:32.571583  300803 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0819 17:50:32.580846  300803 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0819 17:50:32.580936  300803 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0819 17:50:32.589785  300803 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0819 17:50:32.631074  300803 kubeadm.go:310] [init] Using Kubernetes version: v1.31.0
	I0819 17:50:32.631310  300803 kubeadm.go:310] [preflight] Running pre-flight checks
	I0819 17:50:32.647994  300803 kubeadm.go:310] [preflight] The system verification failed. Printing the output from the verification:
	I0819 17:50:32.648104  300803 kubeadm.go:310] KERNEL_VERSION: 5.15.0-1067-aws
	I0819 17:50:32.648171  300803 kubeadm.go:310] OS: Linux
	I0819 17:50:32.648235  300803 kubeadm.go:310] CGROUPS_CPU: enabled
	I0819 17:50:32.648299  300803 kubeadm.go:310] CGROUPS_CPUACCT: enabled
	I0819 17:50:32.648372  300803 kubeadm.go:310] CGROUPS_CPUSET: enabled
	I0819 17:50:32.648444  300803 kubeadm.go:310] CGROUPS_DEVICES: enabled
	I0819 17:50:32.648505  300803 kubeadm.go:310] CGROUPS_FREEZER: enabled
	I0819 17:50:32.648607  300803 kubeadm.go:310] CGROUPS_MEMORY: enabled
	I0819 17:50:32.648689  300803 kubeadm.go:310] CGROUPS_PIDS: enabled
	I0819 17:50:32.648772  300803 kubeadm.go:310] CGROUPS_HUGETLB: enabled
	I0819 17:50:32.648851  300803 kubeadm.go:310] CGROUPS_BLKIO: enabled
	I0819 17:50:32.713744  300803 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0819 17:50:32.713893  300803 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0819 17:50:32.713989  300803 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0819 17:50:32.719013  300803 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0819 17:50:32.721659  300803 out.go:235]   - Generating certificates and keys ...
	I0819 17:50:32.721825  300803 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0819 17:50:32.721888  300803 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0819 17:50:33.207568  300803 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0819 17:50:33.515368  300803 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0819 17:50:34.175012  300803 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0819 17:50:35.205108  300803 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0819 17:50:35.328825  300803 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0819 17:50:35.329148  300803 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [addons-726932 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0819 17:50:35.790478  300803 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0819 17:50:35.790801  300803 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [addons-726932 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0819 17:50:36.183345  300803 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0819 17:50:37.320162  300803 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0819 17:50:37.816714  300803 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0819 17:50:37.817032  300803 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0819 17:50:38.176828  300803 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0819 17:50:38.612272  300803 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0819 17:50:38.748684  300803 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0819 17:50:39.233006  300803 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0819 17:50:40.349301  300803 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0819 17:50:40.350021  300803 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0819 17:50:40.353611  300803 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0819 17:50:40.355912  300803 out.go:235]   - Booting up control plane ...
	I0819 17:50:40.356043  300803 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0819 17:50:40.356128  300803 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0819 17:50:40.359071  300803 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0819 17:50:40.372875  300803 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0819 17:50:40.379265  300803 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0819 17:50:40.379532  300803 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0819 17:50:40.474555  300803 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0819 17:50:40.474911  300803 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0819 17:50:41.977042  300803 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.501728592s
	I0819 17:50:41.977136  300803 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0819 17:50:47.977858  300803 kubeadm.go:310] [api-check] The API server is healthy after 6.001139531s
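Both gates kubeadm waited on here are plain HTTP(S) endpoints, so a hung boot can be probed by hand from inside the node; a sketch (the /readyz path is readable anonymously under the default RBAC bindings):

    # Kubelet health (plain HTTP on localhost:10248, as in the log):
    curl -sf http://127.0.0.1:10248/healthz && echo kubelet ok
    # API server readiness (TLS on the 8443 port configured above;
    # -k skips CA verification for brevity only):
    curl -skf https://127.0.0.1:8443/readyz && echo apiserver ok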
	I0819 17:50:47.997191  300803 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0819 17:50:48.022968  300803 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0819 17:50:48.047415  300803 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0819 17:50:48.047618  300803 kubeadm.go:310] [mark-control-plane] Marking the node addons-726932 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0819 17:50:48.061533  300803 kubeadm.go:310] [bootstrap-token] Using token: 5djhtl.2ewyxt5azfnsmgoq
	I0819 17:50:48.063572  300803 out.go:235]   - Configuring RBAC rules ...
	I0819 17:50:48.063699  300803 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0819 17:50:48.070052  300803 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0819 17:50:48.079085  300803 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0819 17:50:48.083289  300803 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0819 17:50:48.088283  300803 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0819 17:50:48.094455  300803 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0819 17:50:48.384877  300803 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0819 17:50:48.824646  300803 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0819 17:50:49.384469  300803 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0819 17:50:49.385648  300803 kubeadm.go:310] 
	I0819 17:50:49.385753  300803 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0819 17:50:49.385762  300803 kubeadm.go:310] 
	I0819 17:50:49.385837  300803 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0819 17:50:49.385842  300803 kubeadm.go:310] 
	I0819 17:50:49.385867  300803 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0819 17:50:49.385923  300803 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0819 17:50:49.385972  300803 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0819 17:50:49.385977  300803 kubeadm.go:310] 
	I0819 17:50:49.386029  300803 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0819 17:50:49.386033  300803 kubeadm.go:310] 
	I0819 17:50:49.386082  300803 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0819 17:50:49.386087  300803 kubeadm.go:310] 
	I0819 17:50:49.386137  300803 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0819 17:50:49.386219  300803 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0819 17:50:49.386286  300803 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0819 17:50:49.386290  300803 kubeadm.go:310] 
	I0819 17:50:49.386372  300803 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0819 17:50:49.386445  300803 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0819 17:50:49.386451  300803 kubeadm.go:310] 
	I0819 17:50:49.386531  300803 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token 5djhtl.2ewyxt5azfnsmgoq \
	I0819 17:50:49.386639  300803 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:6201bf9a3a6b63add1a1a1230389027105452ef6e2da7adf933b175951954da2 \
	I0819 17:50:49.386893  300803 kubeadm.go:310] 	--control-plane 
	I0819 17:50:49.386920  300803 kubeadm.go:310] 
	I0819 17:50:49.387022  300803 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0819 17:50:49.387033  300803 kubeadm.go:310] 
	I0819 17:50:49.387131  300803 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token 5djhtl.2ewyxt5azfnsmgoq \
	I0819 17:50:49.387254  300803 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:6201bf9a3a6b63add1a1a1230389027105452ef6e2da7adf933b175951954da2 
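The --discovery-token-ca-cert-hash value in both join commands is a SHA-256 pin of the cluster CA's public key. It can be recomputed on the control plane with the standard openssl recipe, here pointed at minikube's cert directory:

    openssl x509 -pubkey -in /var/lib/minikube/certs/ca.crt \
      | openssl pkey -pubin -outform der \
      | openssl dgst -sha256 -hex | sed 's/^.* //'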
	I0819 17:50:49.390730  300803 kubeadm.go:310] W0819 17:50:32.627513    1019 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0819 17:50:49.391024  300803 kubeadm.go:310] W0819 17:50:32.628547    1019 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0819 17:50:49.391238  300803 kubeadm.go:310] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1067-aws\n", err: exit status 1
	I0819 17:50:49.391342  300803 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
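The first two warnings share one fix: migrating the kubeadm.k8s.io/v1beta3 documents to the current API version, which kubeadm does mechanically. A sketch using the paths from this run (the output file name is illustrative):

    sudo /var/lib/minikube/binaries/v1.31.0/kubeadm config migrate \
      --old-config /var/tmp/minikube/kubeadm.yaml \
      --new-config /var/tmp/minikube/kubeadm-migrated.yaml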
	I0819 17:50:49.391366  300803 cni.go:84] Creating CNI manager for ""
	I0819 17:50:49.391379  300803 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I0819 17:50:49.394985  300803 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0819 17:50:49.396931  300803 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0819 17:50:49.400686  300803 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.31.0/kubectl ...
	I0819 17:50:49.400707  300803 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I0819 17:50:49.425872  300803 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0819 17:50:49.694033  300803 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0819 17:50:49.694172  300803 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0819 17:50:49.694257  300803 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-726932 minikube.k8s.io/updated_at=2024_08_19T17_50_49_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=3ced979f820d64d411dd5d7b1cb520be3c85a517 minikube.k8s.io/name=addons-726932 minikube.k8s.io/primary=true
	I0819 17:50:49.708240  300803 ops.go:34] apiserver oom_adj: -16
	I0819 17:50:49.826440  300803 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0819 17:50:50.326556  300803 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0819 17:50:50.826816  300803 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0819 17:50:51.326576  300803 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0819 17:50:51.827554  300803 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0819 17:50:52.326551  300803 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0819 17:50:52.826898  300803 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0819 17:50:53.326798  300803 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0819 17:50:53.827437  300803 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0819 17:50:54.030249  300803 kubeadm.go:1113] duration metric: took 4.33612452s to wait for elevateKubeSystemPrivileges
	I0819 17:50:54.030285  300803 kubeadm.go:394] duration metric: took 21.584460646s to StartCluster
	I0819 17:50:54.030303  300803 settings.go:142] acquiring lock: {Name:mk14a479dbf0fef5ca06f1b54566a9669f07c89c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 17:50:54.030430  300803 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19478-294620/kubeconfig
	I0819 17:50:54.030804  300803 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19478-294620/kubeconfig: {Name:mkbe5edb27c567be1e28dd456fba3d0e47a85699 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 17:50:54.031011  300803 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0819 17:50:54.031197  300803 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0819 17:50:54.031506  300803 config.go:182] Loaded profile config "addons-726932": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.0
	I0819 17:50:54.031574  300803 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
	I0819 17:50:54.031652  300803 addons.go:69] Setting yakd=true in profile "addons-726932"
	I0819 17:50:54.031691  300803 addons.go:234] Setting addon yakd=true in "addons-726932"
	I0819 17:50:54.031718  300803 host.go:66] Checking if "addons-726932" exists ...
	I0819 17:50:54.032183  300803 cli_runner.go:164] Run: docker container inspect addons-726932 --format={{.State.Status}}
	I0819 17:50:54.032816  300803 addons.go:69] Setting metrics-server=true in profile "addons-726932"
	I0819 17:50:54.032847  300803 addons.go:234] Setting addon metrics-server=true in "addons-726932"
	I0819 17:50:54.032885  300803 host.go:66] Checking if "addons-726932" exists ...
	I0819 17:50:54.033318  300803 cli_runner.go:164] Run: docker container inspect addons-726932 --format={{.State.Status}}
	I0819 17:50:54.034943  300803 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-726932"
	I0819 17:50:54.035413  300803 addons.go:234] Setting addon nvidia-device-plugin=true in "addons-726932"
	I0819 17:50:54.035489  300803 host.go:66] Checking if "addons-726932" exists ...
	I0819 17:50:54.035202  300803 addons.go:69] Setting registry=true in profile "addons-726932"
	I0819 17:50:54.036236  300803 addons.go:234] Setting addon registry=true in "addons-726932"
	I0819 17:50:54.036282  300803 host.go:66] Checking if "addons-726932" exists ...
	I0819 17:50:54.036718  300803 cli_runner.go:164] Run: docker container inspect addons-726932 --format={{.State.Status}}
	I0819 17:50:54.037083  300803 cli_runner.go:164] Run: docker container inspect addons-726932 --format={{.State.Status}}
	I0819 17:50:54.035212  300803 addons.go:69] Setting storage-provisioner=true in profile "addons-726932"
	I0819 17:50:54.042388  300803 addons.go:234] Setting addon storage-provisioner=true in "addons-726932"
	I0819 17:50:54.042439  300803 host.go:66] Checking if "addons-726932" exists ...
	I0819 17:50:54.042953  300803 cli_runner.go:164] Run: docker container inspect addons-726932 --format={{.State.Status}}
	I0819 17:50:54.035226  300803 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-726932"
	I0819 17:50:54.044114  300803 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-726932"
	I0819 17:50:54.044489  300803 cli_runner.go:164] Run: docker container inspect addons-726932 --format={{.State.Status}}
	I0819 17:50:54.035230  300803 addons.go:69] Setting volcano=true in profile "addons-726932"
	I0819 17:50:54.061827  300803 addons.go:234] Setting addon volcano=true in "addons-726932"
	I0819 17:50:54.061872  300803 host.go:66] Checking if "addons-726932" exists ...
	I0819 17:50:54.062346  300803 cli_runner.go:164] Run: docker container inspect addons-726932 --format={{.State.Status}}
	I0819 17:50:54.035234  300803 addons.go:69] Setting volumesnapshots=true in profile "addons-726932"
	I0819 17:50:54.083982  300803 addons.go:234] Setting addon volumesnapshots=true in "addons-726932"
	I0819 17:50:54.084048  300803 host.go:66] Checking if "addons-726932" exists ...
	I0819 17:50:54.084567  300803 cli_runner.go:164] Run: docker container inspect addons-726932 --format={{.State.Status}}
	I0819 17:50:54.035293  300803 out.go:177] * Verifying Kubernetes components...
	I0819 17:50:54.035302  300803 addons.go:69] Setting ingress=true in profile "addons-726932"
	I0819 17:50:54.139400  300803 addons.go:234] Setting addon ingress=true in "addons-726932"
	I0819 17:50:54.139461  300803 host.go:66] Checking if "addons-726932" exists ...
	I0819 17:50:54.139930  300803 cli_runner.go:164] Run: docker container inspect addons-726932 --format={{.State.Status}}
	I0819 17:50:54.035306  300803 addons.go:69] Setting cloud-spanner=true in profile "addons-726932"
	I0819 17:50:54.157221  300803 addons.go:234] Setting addon cloud-spanner=true in "addons-726932"
	I0819 17:50:54.157295  300803 host.go:66] Checking if "addons-726932" exists ...
	I0819 17:50:54.157949  300803 cli_runner.go:164] Run: docker container inspect addons-726932 --format={{.State.Status}}
	I0819 17:50:54.035310  300803 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-726932"
	I0819 17:50:54.178555  300803 addons.go:234] Setting addon csi-hostpath-driver=true in "addons-726932"
	I0819 17:50:54.178641  300803 host.go:66] Checking if "addons-726932" exists ...
	I0819 17:50:54.179319  300803 cli_runner.go:164] Run: docker container inspect addons-726932 --format={{.State.Status}}
	I0819 17:50:54.035313  300803 addons.go:69] Setting default-storageclass=true in profile "addons-726932"
	I0819 17:50:54.193396  300803 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-726932"
	I0819 17:50:54.193867  300803 cli_runner.go:164] Run: docker container inspect addons-726932 --format={{.State.Status}}
	I0819 17:50:54.195102  300803 out.go:177]   - Using image docker.io/marcnuri/yakd:0.0.5
	I0819 17:50:54.035316  300803 addons.go:69] Setting gcp-auth=true in profile "addons-726932"
	I0819 17:50:54.197064  300803 mustload.go:65] Loading cluster: addons-726932
	I0819 17:50:54.197312  300803 config.go:182] Loaded profile config "addons-726932": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.0
	I0819 17:50:54.208830  300803 cli_runner.go:164] Run: docker container inspect addons-726932 --format={{.State.Status}}
	I0819 17:50:54.214093  300803 addons.go:234] Setting addon storage-provisioner-rancher=true in "addons-726932"
	I0819 17:50:54.214147  300803 host.go:66] Checking if "addons-726932" exists ...
	I0819 17:50:54.215347  300803 cli_runner.go:164] Run: docker container inspect addons-726932 --format={{.State.Status}}
	I0819 17:50:54.197611  300803 addons.go:431] installing /etc/kubernetes/addons/yakd-ns.yaml
	I0819 17:50:54.224470  300803 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I0819 17:50:54.224561  300803 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-726932
	I0819 17:50:54.035330  300803 addons.go:69] Setting ingress-dns=true in profile "addons-726932"
	I0819 17:50:54.247538  300803 addons.go:234] Setting addon ingress-dns=true in "addons-726932"
	I0819 17:50:54.247601  300803 host.go:66] Checking if "addons-726932" exists ...
	I0819 17:50:54.248064  300803 cli_runner.go:164] Run: docker container inspect addons-726932 --format={{.State.Status}}
	I0819 17:50:54.197690  300803 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0819 17:50:54.035327  300803 addons.go:69] Setting inspektor-gadget=true in profile "addons-726932"
	I0819 17:50:54.262087  300803 addons.go:234] Setting addon inspektor-gadget=true in "addons-726932"
	I0819 17:50:54.262140  300803 host.go:66] Checking if "addons-726932" exists ...
	I0819 17:50:54.262608  300803 cli_runner.go:164] Run: docker container inspect addons-726932 --format={{.State.Status}}
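
Each addon toggle above is bracketed by a `docker container inspect` call that confirms the "addons-726932" node container is still running before that addon is configured. A minimal Go sketch of the same status probe, assuming the Docker CLI is on PATH (the helper name is illustrative, not minikube's cli_runner):

    // containerState returns Docker's state string ("running", "exited", ...)
    // for a named container -- the same query the cli_runner lines above log.
    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    func containerState(name string) (string, error) {
        out, err := exec.Command("docker", "container", "inspect",
            name, "--format", "{{.State.Status}}").Output()
        if err != nil {
            return "", fmt.Errorf("inspect %s: %w", name, err)
        }
        return strings.TrimSpace(string(out)), nil
    }

    func main() {
        state, err := containerState("addons-726932")
        if err != nil {
            fmt.Println("error:", err)
            return
        }
        fmt.Println("container state:", state)
    }
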
	I0819 17:50:54.269395  300803 out.go:177]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.16.2
	I0819 17:50:54.276174  300803 out.go:177]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.7.1
	I0819 17:50:54.247367  300803 out.go:177]   - Using image docker.io/registry:2.8.3
	I0819 17:50:54.286210  300803 addons.go:431] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0819 17:50:54.286232  300803 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I0819 17:50:54.286298  300803 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-726932
	I0819 17:50:54.308206  300803 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I0819 17:50:54.313781  300803 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I0819 17:50:54.313805  300803 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I0819 17:50:54.313886  300803 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-726932
	I0819 17:50:54.322195  300803 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0819 17:50:54.323156  300803 out.go:177]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.22
	I0819 17:50:54.351372  300803 out.go:177]   - Using image docker.io/volcanosh/vc-webhook-manager:v1.9.0
	I0819 17:50:54.356521  300803 out.go:177]   - Using image docker.io/volcanosh/vc-controller-manager:v1.9.0
	I0819 17:50:54.356751  300803 addons.go:431] installing /etc/kubernetes/addons/deployment.yaml
	I0819 17:50:54.356789  300803 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I0819 17:50:54.356891  300803 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-726932
	I0819 17:50:54.370081  300803 host.go:66] Checking if "addons-726932" exists ...
	I0819 17:50:54.351074  300803 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0819 17:50:54.374015  300803 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0819 17:50:54.374114  300803 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-726932
	I0819 17:50:54.391735  300803 out.go:177]   - Using image docker.io/volcanosh/vc-scheduler:v1.9.0
	I0819 17:50:54.396483  300803 addons.go:431] installing /etc/kubernetes/addons/volcano-deployment.yaml
	I0819 17:50:54.396562  300803 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volcano-deployment.yaml (434001 bytes)
	I0819 17:50:54.396665  300803 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-726932
	I0819 17:50:54.351279  300803 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0819 17:50:54.402707  300803 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0819 17:50:54.402808  300803 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-726932
	I0819 17:50:54.417807  300803 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33141 SSHKeyPath:/home/jenkins/minikube-integration/19478-294620/.minikube/machines/addons-726932/id_rsa Username:docker}
	I0819 17:50:54.422438  300803 out.go:177]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I0819 17:50:54.427197  300803 out.go:177]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I0819 17:50:54.428904  300803 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I0819 17:50:54.351288  300803 out.go:177]   - Using image registry.k8s.io/ingress-nginx/controller:v1.11.2
	I0819 17:50:54.431201  300803 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I0819 17:50:54.351292  300803 out.go:177]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.6
	I0819 17:50:54.446828  300803 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I0819 17:50:54.447343  300803 addons.go:431] installing /etc/kubernetes/addons/registry-rc.yaml
	I0819 17:50:54.447359  300803 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I0819 17:50:54.447458  300803 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-726932
	I0819 17:50:54.455758  300803 out.go:177]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.31.0
	I0819 17:50:54.456005  300803 out.go:177]   - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.3
	I0819 17:50:54.456012  300803 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I0819 17:50:54.457780  300803 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I0819 17:50:54.461318  300803 addons.go:234] Setting addon default-storageclass=true in "addons-726932"
	I0819 17:50:54.461784  300803 host.go:66] Checking if "addons-726932" exists ...
	I0819 17:50:54.462283  300803 cli_runner.go:164] Run: docker container inspect addons-726932 --format={{.State.Status}}
	I0819 17:50:54.464793  300803 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33141 SSHKeyPath:/home/jenkins/minikube-integration/19478-294620/.minikube/machines/addons-726932/id_rsa Username:docker}
	I0819 17:50:54.461333  300803 addons.go:431] installing /etc/kubernetes/addons/ig-namespace.yaml
	I0819 17:50:54.468413  300803 ssh_runner.go:362] scp inspektor-gadget/ig-namespace.yaml --> /etc/kubernetes/addons/ig-namespace.yaml (55 bytes)
	I0819 17:50:54.468523  300803 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-726932
	I0819 17:50:54.476089  300803 addons.go:431] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0819 17:50:54.476112  300803 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2442 bytes)
	I0819 17:50:54.476174  300803 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-726932
	I0819 17:50:54.500969  300803 addons.go:431] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I0819 17:50:54.500990  300803 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I0819 17:50:54.502253  300803 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-726932
	I0819 17:50:54.506365  300803 out.go:177]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I0819 17:50:54.506974  300803 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33141 SSHKeyPath:/home/jenkins/minikube-integration/19478-294620/.minikube/machines/addons-726932/id_rsa Username:docker}
	I0819 17:50:54.513799  300803 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I0819 17:50:54.513867  300803 out.go:177]   - Using image docker.io/busybox:stable
	I0819 17:50:54.516062  300803 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0819 17:50:54.516085  300803 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I0819 17:50:54.516154  300803 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-726932
	I0819 17:50:54.526061  300803 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I0819 17:50:54.530062  300803 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I0819 17:50:54.533355  300803 addons.go:431] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I0819 17:50:54.533375  300803 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I0819 17:50:54.533435  300803 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-726932
	I0819 17:50:54.560369  300803 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33141 SSHKeyPath:/home/jenkins/minikube-integration/19478-294620/.minikube/machines/addons-726932/id_rsa Username:docker}
	I0819 17:50:54.571894  300803 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33141 SSHKeyPath:/home/jenkins/minikube-integration/19478-294620/.minikube/machines/addons-726932/id_rsa Username:docker}
	I0819 17:50:54.576147  300803 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33141 SSHKeyPath:/home/jenkins/minikube-integration/19478-294620/.minikube/machines/addons-726932/id_rsa Username:docker}
	I0819 17:50:54.601858  300803 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33141 SSHKeyPath:/home/jenkins/minikube-integration/19478-294620/.minikube/machines/addons-726932/id_rsa Username:docker}
	I0819 17:50:54.631965  300803 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0819 17:50:54.631986  300803 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0819 17:50:54.632055  300803 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-726932
	I0819 17:50:54.634865  300803 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33141 SSHKeyPath:/home/jenkins/minikube-integration/19478-294620/.minikube/machines/addons-726932/id_rsa Username:docker}
	I0819 17:50:54.639935  300803 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33141 SSHKeyPath:/home/jenkins/minikube-integration/19478-294620/.minikube/machines/addons-726932/id_rsa Username:docker}
	I0819 17:50:54.648068  300803 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33141 SSHKeyPath:/home/jenkins/minikube-integration/19478-294620/.minikube/machines/addons-726932/id_rsa Username:docker}
	I0819 17:50:54.657842  300803 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33141 SSHKeyPath:/home/jenkins/minikube-integration/19478-294620/.minikube/machines/addons-726932/id_rsa Username:docker}
	I0819 17:50:54.683990  300803 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33141 SSHKeyPath:/home/jenkins/minikube-integration/19478-294620/.minikube/machines/addons-726932/id_rsa Username:docker}
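
The `new ssh client` lines record how the addon manifests and commands reach the node: an SSH connection to the forwarded port 33141 on 127.0.0.1, authenticating as user "docker" with the machine's id_rsa key. A hedged sketch of that dial using golang.org/x/crypto/ssh (not minikube's actual sshutil; the host-key handling is simplified for a local dev VM):

    // dialSSH opens an SSH client the way the sshutil lines above describe:
    // key-based auth as user "docker" against the forwarded local port.
    package main

    import (
        "fmt"
        "os"

        "golang.org/x/crypto/ssh"
    )

    func dialSSH(addr, user, keyPath string) (*ssh.Client, error) {
        pem, err := os.ReadFile(keyPath)
        if err != nil {
            return nil, err
        }
        signer, err := ssh.ParsePrivateKey(pem)
        if err != nil {
            return nil, err
        }
        cfg := &ssh.ClientConfig{
            User:            user,
            Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
            HostKeyCallback: ssh.InsecureIgnoreHostKey(), // local dev VM only
        }
        return ssh.Dial("tcp", addr, cfg)
    }

    func main() {
        client, err := dialSSH("127.0.0.1:33141", "docker",
            "/home/jenkins/minikube-integration/19478-294620/.minikube/machines/addons-726932/id_rsa")
        if err != nil {
            fmt.Println("dial failed:", err) // e.g. "ssh: handshake failed: EOF"
            return
        }
        defer client.Close()
        fmt.Println("connected")
    }
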
	W0819 17:50:54.693138  300803 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I0819 17:50:54.693230  300803 retry.go:31] will retry after 335.045136ms: ssh: handshake failed: EOF
	I0819 17:50:54.698346  300803 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33141 SSHKeyPath:/home/jenkins/minikube-integration/19478-294620/.minikube/machines/addons-726932/id_rsa Username:docker}
	I0819 17:50:54.709796  300803 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33141 SSHKeyPath:/home/jenkins/minikube-integration/19478-294620/.minikube/machines/addons-726932/id_rsa Username:docker}
	W0819 17:50:54.713695  300803 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I0819 17:50:54.713720  300803 retry.go:31] will retry after 186.524852ms: ssh: handshake failed: EOF
	I0819 17:50:54.794239  300803 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	W0819 17:50:54.902098  300803 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I0819 17:50:54.902175  300803 retry.go:31] will retry after 217.267712ms: ssh: handshake failed: EOF
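
The handshake failures just above are transient: sshd inside the freshly started container is not yet accepting connections, so retry.go schedules another attempt after a short randomized delay instead of failing the whole addon install. A generic sketch of that retry-with-backoff pattern (illustrative only, not minikube's retry.go):

    // retryWithBackoff re-runs fn until it succeeds or attempts run out,
    // sleeping a little longer (with jitter) between tries -- the pattern
    // behind the "will retry after 335.045136ms" lines above.
    package main

    import (
        "fmt"
        "math/rand"
        "time"
    )

    func retryWithBackoff(attempts int, base time.Duration, fn func() error) error {
        var err error
        for i := 0; i < attempts; i++ {
            if err = fn(); err == nil {
                return nil
            }
            // Exponential backoff with jitter: ~base, ~2*base, ~4*base, ...
            sleep := base<<uint(i) + time.Duration(rand.Int63n(int64(base)))
            fmt.Printf("will retry after %s: %v\n", sleep, err)
            time.Sleep(sleep)
        }
        return err
    }

    func main() {
        err := retryWithBackoff(3, 200*time.Millisecond, func() error {
            return fmt.Errorf("ssh: handshake failed: EOF")
        })
        fmt.Println("gave up:", err)
    }
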
	I0819 17:50:54.904036  300803 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0819 17:50:55.201552  300803 addons.go:431] installing /etc/kubernetes/addons/yakd-sa.yaml
	I0819 17:50:55.201574  300803 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I0819 17:50:55.354805  300803 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I0819 17:50:55.354878  300803 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I0819 17:50:55.355074  300803 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0819 17:50:55.357449  300803 addons.go:431] installing /etc/kubernetes/addons/yakd-crb.yaml
	I0819 17:50:55.357507  300803 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I0819 17:50:55.402722  300803 addons.go:431] installing /etc/kubernetes/addons/registry-svc.yaml
	I0819 17:50:55.402787  300803 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I0819 17:50:55.430322  300803 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0819 17:50:55.457508  300803 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I0819 17:50:55.457581  300803 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I0819 17:50:55.472250  300803 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0819 17:50:55.474279  300803 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/volcano-deployment.yaml
	I0819 17:50:55.493309  300803 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0819 17:50:55.527659  300803 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0819 17:50:55.553030  300803 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0819 17:50:55.553130  300803 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I0819 17:50:55.564545  300803 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I0819 17:50:55.590789  300803 addons.go:431] installing /etc/kubernetes/addons/yakd-svc.yaml
	I0819 17:50:55.590865  300803 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I0819 17:50:55.603254  300803 addons.go:431] installing /etc/kubernetes/addons/ig-serviceaccount.yaml
	I0819 17:50:55.603334  300803 ssh_runner.go:362] scp inspektor-gadget/ig-serviceaccount.yaml --> /etc/kubernetes/addons/ig-serviceaccount.yaml (80 bytes)
	I0819 17:50:55.759238  300803 addons.go:431] installing /etc/kubernetes/addons/registry-proxy.yaml
	I0819 17:50:55.759260  300803 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I0819 17:50:55.812971  300803 addons.go:431] installing /etc/kubernetes/addons/ig-role.yaml
	I0819 17:50:55.812993  300803 ssh_runner.go:362] scp inspektor-gadget/ig-role.yaml --> /etc/kubernetes/addons/ig-role.yaml (210 bytes)
	I0819 17:50:55.851233  300803 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I0819 17:50:55.851277  300803 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I0819 17:50:55.893001  300803 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I0819 17:50:55.909841  300803 addons.go:431] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I0819 17:50:55.909914  300803 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I0819 17:50:55.954906  300803 addons.go:431] installing /etc/kubernetes/addons/yakd-dp.yaml
	I0819 17:50:55.954970  300803 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I0819 17:50:56.052717  300803 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0819 17:50:56.052781  300803 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0819 17:50:56.180320  300803 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I0819 17:50:56.281246  300803 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I0819 17:50:56.336998  300803 addons.go:431] installing /etc/kubernetes/addons/ig-rolebinding.yaml
	I0819 17:50:56.337023  300803 ssh_runner.go:362] scp inspektor-gadget/ig-rolebinding.yaml --> /etc/kubernetes/addons/ig-rolebinding.yaml (244 bytes)
	I0819 17:50:56.493590  300803 addons.go:431] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I0819 17:50:56.493617  300803 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I0819 17:50:56.506068  300803 addons.go:431] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I0819 17:50:56.506103  300803 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I0819 17:50:56.584169  300803 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0819 17:50:56.584196  300803 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0819 17:50:56.657386  300803 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrole.yaml
	I0819 17:50:56.657413  300803 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrole.yaml --> /etc/kubernetes/addons/ig-clusterrole.yaml (1485 bytes)
	I0819 17:50:56.769640  300803 addons.go:431] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I0819 17:50:56.769681  300803 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I0819 17:50:56.940870  300803 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0819 17:50:56.972275  300803 addons.go:431] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0819 17:50:56.972300  300803 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I0819 17:50:56.984067  300803 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (2.189732583s)
	I0819 17:50:56.984097  300803 start.go:971] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS's ConfigMap
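
The pipeline that just completed edits the coredns ConfigMap rather than any file on disk: it dumps the ConfigMap as YAML, uses sed to splice two directives into the embedded Corefile, and feeds the result back through `kubectl replace`. Afterwards the Corefile carries a `log` directive ahead of `errors` and, ahead of the `forward` directive, the `hosts` block that makes host.minikube.internal resolve to the Docker network gateway:

    hosts {
       192.168.49.1 host.minikube.internal
       fallthrough
    }
    forward . /etc/resolv.conf
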
	I0819 17:50:56.985091  300803 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (2.080997468s)
	I0819 17:50:56.986001  300803 node_ready.go:35] waiting up to 6m0s for node "addons-726932" to be "Ready" ...
	I0819 17:50:56.993478  300803 node_ready.go:49] node "addons-726932" has status "Ready":"True"
	I0819 17:50:56.993505  300803 node_ready.go:38] duration metric: took 7.476709ms for node "addons-726932" to be "Ready" ...
	I0819 17:50:56.993516  300803 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0819 17:50:57.009066  300803 pod_ready.go:79] waiting up to 6m0s for pod "coredns-6f6b679f8f-mxkbc" in "kube-system" namespace to be "Ready" ...
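
node_ready and pod_ready poll the API server until the node and each system-critical pod report the Ready condition. A minimal client-go sketch of the per-pod half, assuming a standard kubeconfig in the default location (minikube's own helper lives in pod_ready.go; this only shows the shape of the check):

    // waitPodReady polls until the named pod reports condition Ready=True,
    // roughly what the pod_ready.go lines above are doing for system pods.
    package main

    import (
        "context"
        "fmt"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/apimachinery/pkg/util/wait"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func waitPodReady(cs *kubernetes.Clientset, ns, name string, timeout time.Duration) error {
        return wait.PollImmediate(2*time.Second, timeout, func() (bool, error) {
            pod, err := cs.CoreV1().Pods(ns).Get(context.TODO(), name, metav1.GetOptions{})
            if err != nil {
                return false, nil // not found yet; keep polling
            }
            for _, c := range pod.Status.Conditions {
                if c.Type == corev1.PodReady {
                    return c.Status == corev1.ConditionTrue, nil
                }
            }
            return false, nil
        })
    }

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
        if err != nil {
            panic(err)
        }
        cs := kubernetes.NewForConfigOrDie(cfg)
        fmt.Println(waitPodReady(cs, "kube-system", "coredns-6f6b679f8f-mxkbc", 6*time.Minute))
    }
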
	I0819 17:50:57.083860  300803 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrolebinding.yaml
	I0819 17:50:57.083884  300803 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrolebinding.yaml --> /etc/kubernetes/addons/ig-clusterrolebinding.yaml (274 bytes)
	I0819 17:50:57.222721  300803 addons.go:431] installing /etc/kubernetes/addons/ig-crd.yaml
	I0819 17:50:57.222748  300803 ssh_runner.go:362] scp inspektor-gadget/ig-crd.yaml --> /etc/kubernetes/addons/ig-crd.yaml (5216 bytes)
	I0819 17:50:57.245061  300803 addons.go:431] installing /etc/kubernetes/addons/ig-daemonset.yaml
	I0819 17:50:57.245089  300803 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-daemonset.yaml (7735 bytes)
	I0819 17:50:57.282246  300803 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml
	I0819 17:50:57.303557  300803 addons.go:431] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I0819 17:50:57.303587  300803 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I0819 17:50:57.393085  300803 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0819 17:50:57.481135  300803 addons.go:431] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I0819 17:50:57.481163  300803 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I0819 17:50:57.487757  300803 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-726932" context rescaled to 1 replicas
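
The rescale above trims coredns from the two replicas kubeadm deploys by default down to one, which is plenty for a single-node cluster. A sketch of the same scale-down through the Deployment's scale subresource (client-go, assuming a default kubeconfig; not kapi.go's implementation):

    // Set the coredns Deployment's replica count to 1 via the scale
    // subresource -- a sketch of the "rescaled to 1 replicas" step above.
    package main

    import (
        "context"
        "fmt"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
        if err != nil {
            panic(err)
        }
        cs := kubernetes.NewForConfigOrDie(cfg)
        ctx := context.TODO()

        scale, err := cs.AppsV1().Deployments("kube-system").GetScale(ctx, "coredns", metav1.GetOptions{})
        if err != nil {
            panic(err)
        }
        scale.Spec.Replicas = 1
        if _, err := cs.AppsV1().Deployments("kube-system").UpdateScale(ctx, "coredns", scale, metav1.UpdateOptions{}); err != nil {
            panic(err)
        }
        fmt.Println("coredns rescaled to 1 replica")
    }
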
	I0819 17:50:57.722092  300803 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I0819 17:50:57.722116  300803 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I0819 17:50:58.064082  300803 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I0819 17:50:58.064110  300803 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I0819 17:50:58.376823  300803 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I0819 17:50:58.376847  300803 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I0819 17:50:58.585731  300803 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (3.155322177s)
	I0819 17:50:58.585737  300803 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (3.230610934s)
	I0819 17:50:58.585780  300803 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (3.11347021s)
	I0819 17:50:58.752299  300803 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I0819 17:50:58.752324  300803 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I0819 17:50:59.066839  300803 pod_ready.go:103] pod "coredns-6f6b679f8f-mxkbc" in "kube-system" namespace has status "Ready":"False"
	I0819 17:50:59.233288  300803 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0819 17:50:59.233314  300803 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I0819 17:50:59.620556  300803 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0819 17:51:01.555641  300803 pod_ready.go:103] pod "coredns-6f6b679f8f-mxkbc" in "kube-system" namespace has status "Ready":"False"
	I0819 17:51:01.584431  300803 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I0819 17:51:01.584580  300803 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-726932
	I0819 17:51:01.614835  300803 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33141 SSHKeyPath:/home/jenkins/minikube-integration/19478-294620/.minikube/machines/addons-726932/id_rsa Username:docker}
	I0819 17:51:02.118772  300803 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I0819 17:51:02.264621  300803 addons.go:234] Setting addon gcp-auth=true in "addons-726932"
	I0819 17:51:02.264678  300803 host.go:66] Checking if "addons-726932" exists ...
	I0819 17:51:02.265171  300803 cli_runner.go:164] Run: docker container inspect addons-726932 --format={{.State.Status}}
	I0819 17:51:02.297776  300803 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I0819 17:51:02.297837  300803 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-726932
	I0819 17:51:02.338762  300803 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33141 SSHKeyPath:/home/jenkins/minikube-integration/19478-294620/.minikube/machines/addons-726932/id_rsa Username:docker}
	I0819 17:51:04.043979  300803 pod_ready.go:103] pod "coredns-6f6b679f8f-mxkbc" in "kube-system" namespace has status "Ready":"False"
	I0819 17:51:04.862121  300803 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/volcano-deployment.yaml: (9.387762844s)
	I0819 17:51:04.862182  300803 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (9.368854331s)
	I0819 17:51:04.862387  300803 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (9.334659766s)
	I0819 17:51:04.862429  300803 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (9.2978136s)
	I0819 17:51:04.862522  300803 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (8.969452403s)
	I0819 17:51:04.862536  300803 addons.go:475] Verifying addon ingress=true in "addons-726932"
	I0819 17:51:04.862672  300803 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (8.682280151s)
	I0819 17:51:04.862690  300803 addons.go:475] Verifying addon registry=true in "addons-726932"
	I0819 17:51:04.862952  300803 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (8.581670505s)
	I0819 17:51:04.863290  300803 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (7.922389375s)
	I0819 17:51:04.864843  300803 addons.go:475] Verifying addon metrics-server=true in "addons-726932"
	I0819 17:51:04.863349  300803 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml: (7.581073195s)
	I0819 17:51:04.863435  300803 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (7.47032471s)
	W0819 17:51:04.864898  300803 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0819 17:51:04.864914  300803 retry.go:31] will retry after 175.360892ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
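
The failure above is an ordering race, not a broken manifest: the batch apply creates the VolumeSnapshot CRDs and a VolumeSnapshotClass in one shot, and kubectl cannot resolve the brand-new kind before the API server has registered the just-created CRDs, hence "no matches for kind ... ensure CRDs are installed first". The CRDs themselves were created (see the stdout), so the retried apply below succeeds once they are established. A sketch of the CRD-first alternative, driving kubectl from Go (the log instead simply retries the whole batch with --force, which also works once the CRDs are registered):

    // Avoid the "no matches for kind" race: create the CRDs, wait for them
    // to be Established, then apply the custom resources that use them.
    package main

    import (
        "fmt"
        "os/exec"
    )

    func run(args ...string) error {
        out, err := exec.Command("kubectl", args...).CombinedOutput()
        fmt.Print(string(out))
        return err
    }

    func main() {
        // 1. CRDs only.
        if err := run("apply", "-f", "/etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml"); err != nil {
            panic(err)
        }
        // 2. Block until the API server can serve the new kind.
        if err := run("wait", "--for=condition=Established", "--timeout=60s",
            "crd/volumesnapshotclasses.snapshot.storage.k8s.io"); err != nil {
            panic(err)
        }
        // 3. Now the VolumeSnapshotClass resolves to a known kind.
        if err := run("apply", "-f", "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml"); err != nil {
            panic(err)
        }
    }
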
	I0819 17:51:04.865610  300803 out.go:177] * Verifying ingress addon...
	I0819 17:51:04.865650  300803 out.go:177] * Verifying registry addon...
	I0819 17:51:04.867780  300803 out.go:177] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-726932 service yakd-dashboard -n yakd-dashboard
	
	I0819 17:51:04.870538  300803 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I0819 17:51:04.871481  300803 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I0819 17:51:04.904630  300803 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I0819 17:51:04.904710  300803 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 17:51:04.905876  300803 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I0819 17:51:04.905893  300803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
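
Each kapi.go "Found N Pods for label selector" line above comes from a label-selector list against the addon's namespace; the long run of "waiting for pod ... current state: Pending" lines that follows re-runs it until the pods leave Pending. A sketch of that query with client-go, assuming a standard kubeconfig:

    // List the ingress-nginx pods by label selector -- the same query the
    // kapi.go "Found N Pods for label selector ..." lines report on.
    package main

    import (
        "context"
        "fmt"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
        if err != nil {
            panic(err)
        }
        cs := kubernetes.NewForConfigOrDie(cfg)
        pods, err := cs.CoreV1().Pods("ingress-nginx").List(context.TODO(),
            metav1.ListOptions{LabelSelector: "app.kubernetes.io/name=ingress-nginx"})
        if err != nil {
            panic(err)
        }
        fmt.Printf("Found %d Pods for label selector\n", len(pods.Items))
        for _, p := range pods.Items {
            fmt.Println(p.Name, p.Status.Phase)
        }
    }
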
	I0819 17:51:05.041349  300803 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0819 17:51:05.407340  300803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 17:51:05.408728  300803 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 17:51:05.801226  300803 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (6.180614807s)
	I0819 17:51:05.801264  300803 addons.go:475] Verifying addon csi-hostpath-driver=true in "addons-726932"
	I0819 17:51:05.801558  300803 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (3.503755474s)
	I0819 17:51:05.803800  300803 out.go:177] * Verifying csi-hostpath-driver addon...
	I0819 17:51:05.803858  300803 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I0819 17:51:05.806835  300803 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I0819 17:51:05.809171  300803 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.2
	I0819 17:51:05.811932  300803 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0819 17:51:05.811953  300803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:51:05.812969  300803 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I0819 17:51:05.813022  300803 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I0819 17:51:05.875787  300803 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 17:51:05.879354  300803 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I0819 17:51:05.879418  300803 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I0819 17:51:05.880275  300803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 17:51:05.907328  300803 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0819 17:51:05.907404  300803 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I0819 17:51:05.951921  300803 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0819 17:51:06.312233  300803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:51:06.375132  300803 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 17:51:06.376569  300803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 17:51:06.517286  300803 pod_ready.go:103] pod "coredns-6f6b679f8f-mxkbc" in "kube-system" namespace has status "Ready":"False"
	I0819 17:51:06.677533  300803 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (1.636059479s)
	I0819 17:51:06.812119  300803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:51:06.874927  300803 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 17:51:06.877005  300803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 17:51:07.076926  300803 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.0/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml: (1.124826575s)
	I0819 17:51:07.084777  300803 addons.go:475] Verifying addon gcp-auth=true in "addons-726932"
	I0819 17:51:07.087278  300803 out.go:177] * Verifying gcp-auth addon...
	I0819 17:51:07.089901  300803 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I0819 17:51:07.094736  300803 kapi.go:86] Found 0 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0819 17:51:07.311860  300803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:51:07.377740  300803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 17:51:07.379103  300803 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 17:51:07.813101  300803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:51:07.877384  300803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 17:51:07.878618  300803 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 17:51:08.311988  300803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:51:08.412460  300803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 17:51:08.412907  300803 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 17:51:08.811948  300803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:51:08.881108  300803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 17:51:08.882663  300803 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 17:51:09.017855  300803 pod_ready.go:103] pod "coredns-6f6b679f8f-mxkbc" in "kube-system" namespace has status "Ready":"False"
	I0819 17:51:09.313149  300803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:51:09.416712  300803 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 17:51:09.417436  300803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 17:51:09.811217  300803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:51:09.875981  300803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 17:51:09.877418  300803 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 17:51:10.311947  300803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:51:10.375850  300803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 17:51:10.376841  300803 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 17:51:10.812614  300803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:51:10.876724  300803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 17:51:10.877173  300803 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 17:51:11.316423  300803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:51:11.379846  300803 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 17:51:11.380485  300803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 17:51:11.517860  300803 pod_ready.go:103] pod "coredns-6f6b679f8f-mxkbc" in "kube-system" namespace has status "Ready":"False"
	I0819 17:51:11.812387  300803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:51:11.912809  300803 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 17:51:11.913011  300803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 17:51:12.312176  300803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:51:12.376061  300803 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 17:51:12.376606  300803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 17:51:12.812624  300803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:51:12.874938  300803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 17:51:12.876719  300803 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 17:51:13.311730  300803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:51:13.376214  300803 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 17:51:13.376820  300803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 17:51:13.812088  300803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:51:13.875568  300803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 17:51:13.876732  300803 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 17:51:14.018730  300803 pod_ready.go:103] pod "coredns-6f6b679f8f-mxkbc" in "kube-system" namespace has status "Ready":"False"
	I0819 17:51:14.312960  300803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:51:14.378086  300803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 17:51:14.379481  300803 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 17:51:14.812287  300803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:51:14.877360  300803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 17:51:14.878894  300803 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 17:51:15.312372  300803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:51:15.376837  300803 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 17:51:15.377773  300803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 17:51:15.811677  300803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:51:15.876498  300803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 17:51:15.878926  300803 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 17:51:16.312315  300803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:51:16.375484  300803 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 17:51:16.377081  300803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 17:51:16.515913  300803 pod_ready.go:103] pod "coredns-6f6b679f8f-mxkbc" in "kube-system" namespace has status "Ready":"False"
	I0819 17:51:16.813610  300803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:51:16.877716  300803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 17:51:16.879407  300803 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 17:51:17.312456  300803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:51:17.376955  300803 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 17:51:17.377999  300803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 17:51:17.812114  300803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:51:17.875511  300803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 17:51:17.876659  300803 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 17:51:18.312169  300803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:51:18.413622  300803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 17:51:18.414111  300803 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 17:51:18.517066  300803 pod_ready.go:103] pod "coredns-6f6b679f8f-mxkbc" in "kube-system" namespace has status "Ready":"False"
	I0819 17:51:18.811686  300803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:51:18.878230  300803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 17:51:18.878808  300803 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 17:51:19.312055  300803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:51:19.412061  300803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 17:51:19.413934  300803 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 17:51:19.812239  300803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:51:19.875942  300803 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 17:51:19.876262  300803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 17:51:20.313019  300803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:51:20.375024  300803 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 17:51:20.376811  300803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 17:51:20.516618  300803 pod_ready.go:93] pod "coredns-6f6b679f8f-mxkbc" in "kube-system" namespace has status "Ready":"True"
	I0819 17:51:20.516645  300803 pod_ready.go:82] duration metric: took 23.50747361s for pod "coredns-6f6b679f8f-mxkbc" in "kube-system" namespace to be "Ready" ...
	I0819 17:51:20.516658  300803 pod_ready.go:79] waiting up to 6m0s for pod "coredns-6f6b679f8f-t5p2x" in "kube-system" namespace to be "Ready" ...
	I0819 17:51:20.519235  300803 pod_ready.go:98] error getting pod "coredns-6f6b679f8f-t5p2x" in "kube-system" namespace (skipping!): pods "coredns-6f6b679f8f-t5p2x" not found
	I0819 17:51:20.519264  300803 pod_ready.go:82] duration metric: took 2.598449ms for pod "coredns-6f6b679f8f-t5p2x" in "kube-system" namespace to be "Ready" ...
	E0819 17:51:20.519287  300803 pod_ready.go:67] WaitExtra: waitPodCondition: error getting pod "coredns-6f6b679f8f-t5p2x" in "kube-system" namespace (skipping!): pods "coredns-6f6b679f8f-t5p2x" not found
	I0819 17:51:20.519294  300803 pod_ready.go:79] waiting up to 6m0s for pod "etcd-addons-726932" in "kube-system" namespace to be "Ready" ...
	I0819 17:51:20.525457  300803 pod_ready.go:93] pod "etcd-addons-726932" in "kube-system" namespace has status "Ready":"True"
	I0819 17:51:20.525484  300803 pod_ready.go:82] duration metric: took 6.182716ms for pod "etcd-addons-726932" in "kube-system" namespace to be "Ready" ...
	I0819 17:51:20.525499  300803 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-addons-726932" in "kube-system" namespace to be "Ready" ...
	I0819 17:51:20.543605  300803 pod_ready.go:93] pod "kube-apiserver-addons-726932" in "kube-system" namespace has status "Ready":"True"
	I0819 17:51:20.543634  300803 pod_ready.go:82] duration metric: took 18.127208ms for pod "kube-apiserver-addons-726932" in "kube-system" namespace to be "Ready" ...
	I0819 17:51:20.543646  300803 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-addons-726932" in "kube-system" namespace to be "Ready" ...
	I0819 17:51:20.554909  300803 pod_ready.go:93] pod "kube-controller-manager-addons-726932" in "kube-system" namespace has status "Ready":"True"
	I0819 17:51:20.554935  300803 pod_ready.go:82] duration metric: took 11.281274ms for pod "kube-controller-manager-addons-726932" in "kube-system" namespace to be "Ready" ...
	I0819 17:51:20.554948  300803 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-sjmft" in "kube-system" namespace to be "Ready" ...
	I0819 17:51:20.713332  300803 pod_ready.go:93] pod "kube-proxy-sjmft" in "kube-system" namespace has status "Ready":"True"
	I0819 17:51:20.713358  300803 pod_ready.go:82] duration metric: took 158.402809ms for pod "kube-proxy-sjmft" in "kube-system" namespace to be "Ready" ...
	I0819 17:51:20.713370  300803 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-addons-726932" in "kube-system" namespace to be "Ready" ...
	I0819 17:51:20.812085  300803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:51:20.877076  300803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 17:51:20.878530  300803 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 17:51:21.113276  300803 pod_ready.go:93] pod "kube-scheduler-addons-726932" in "kube-system" namespace has status "Ready":"True"
	I0819 17:51:21.113302  300803 pod_ready.go:82] duration metric: took 399.924778ms for pod "kube-scheduler-addons-726932" in "kube-system" namespace to be "Ready" ...
	I0819 17:51:21.113314  300803 pod_ready.go:79] waiting up to 6m0s for pod "nvidia-device-plugin-daemonset-7lck8" in "kube-system" namespace to be "Ready" ...
	I0819 17:51:21.313122  300803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:51:21.376990  300803 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 17:51:21.377895  300803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 17:51:21.513727  300803 pod_ready.go:93] pod "nvidia-device-plugin-daemonset-7lck8" in "kube-system" namespace has status "Ready":"True"
	I0819 17:51:21.513804  300803 pod_ready.go:82] duration metric: took 400.48211ms for pod "nvidia-device-plugin-daemonset-7lck8" in "kube-system" namespace to be "Ready" ...
	I0819 17:51:21.513828  300803 pod_ready.go:39] duration metric: took 24.520299452s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0819 17:51:21.513871  300803 api_server.go:52] waiting for apiserver process to appear ...
	I0819 17:51:21.513955  300803 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 17:51:21.531888  300803 api_server.go:72] duration metric: took 27.500847782s to wait for apiserver process to appear ...
	I0819 17:51:21.531959  300803 api_server.go:88] waiting for apiserver healthz status ...
	I0819 17:51:21.531993  300803 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0819 17:51:21.540382  300803 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I0819 17:51:21.541451  300803 api_server.go:141] control plane version: v1.31.0
	I0819 17:51:21.541508  300803 api_server.go:131] duration metric: took 9.527959ms to wait for apiserver health ...
	I0819 17:51:21.541531  300803 system_pods.go:43] waiting for kube-system pods to appear ...
	I0819 17:51:21.721611  300803 system_pods.go:59] 18 kube-system pods found
	I0819 17:51:21.721721  300803 system_pods.go:61] "coredns-6f6b679f8f-mxkbc" [16bd54d0-d1d2-424a-9842-d87c9ccd2628] Running
	I0819 17:51:21.721747  300803 system_pods.go:61] "csi-hostpath-attacher-0" [ba71fa33-d71b-4fe7-a180-08a698a09cd1] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0819 17:51:21.721781  300803 system_pods.go:61] "csi-hostpath-resizer-0" [eb4bcffa-b700-4cd6-8de8-010cd5056a1c] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0819 17:51:21.721811  300803 system_pods.go:61] "csi-hostpathplugin-26zrk" [76f9e5f4-ef0f-44e0-9fc5-d25c88edfbfb] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0819 17:51:21.721832  300803 system_pods.go:61] "etcd-addons-726932" [6cca2dce-04f7-4840-af5b-e6f42813f2b2] Running
	I0819 17:51:21.721856  300803 system_pods.go:61] "kindnet-4ttxh" [606f2154-7ced-4ff8-9b78-44801567bfc7] Running
	I0819 17:51:21.721886  300803 system_pods.go:61] "kube-apiserver-addons-726932" [512a5f8f-e9b4-4da3-b473-fb3e49c05939] Running
	I0819 17:51:21.721909  300803 system_pods.go:61] "kube-controller-manager-addons-726932" [dbdc95ce-2d23-4906-8f5e-632068ade48b] Running
	I0819 17:51:21.721930  300803 system_pods.go:61] "kube-ingress-dns-minikube" [920c48df-ce13-476e-9e85-97a964e02356] Running
	I0819 17:51:21.721949  300803 system_pods.go:61] "kube-proxy-sjmft" [06c1a1b8-8733-4d7f-b080-2c98f247f0bd] Running
	I0819 17:51:21.721968  300803 system_pods.go:61] "kube-scheduler-addons-726932" [1da0f0e3-5004-43ad-881c-3c87bdb60ab2] Running
	I0819 17:51:21.721997  300803 system_pods.go:61] "metrics-server-8988944d9-s6ptq" [abb7007b-fc7a-424e-85c0-81b157a4003c] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0819 17:51:21.722026  300803 system_pods.go:61] "nvidia-device-plugin-daemonset-7lck8" [85679270-e5c5-4ce9-8cff-f979280cc490] Running
	I0819 17:51:21.722048  300803 system_pods.go:61] "registry-6fb4cdfc84-z929z" [76e69bc0-b08f-45e6-8d0f-01c97394c905] Running
	I0819 17:51:21.722069  300803 system_pods.go:61] "registry-proxy-8b8b8" [28afb619-68cb-472a-86d3-b990fe68326e] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0819 17:51:21.722102  300803 system_pods.go:61] "snapshot-controller-56fcc65765-dfzrj" [9f1cdc33-9adc-46f5-a1e4-cb47b6511ed5] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0819 17:51:21.722124  300803 system_pods.go:61] "snapshot-controller-56fcc65765-jt7s5" [3cb67e9c-19b8-4bc7-aee5-2ec08220a8de] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0819 17:51:21.722141  300803 system_pods.go:61] "storage-provisioner" [9e33ab0f-5aa0-4d6b-a1ca-98d57397ebe2] Running
	I0819 17:51:21.722165  300803 system_pods.go:74] duration metric: took 180.611382ms to wait for pod list to return data ...
	I0819 17:51:21.722184  300803 default_sa.go:34] waiting for default service account to be created ...
	I0819 17:51:21.812916  300803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:51:21.877038  300803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 17:51:21.878522  300803 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 17:51:21.915329  300803 default_sa.go:45] found service account: "default"
	I0819 17:51:21.915400  300803 default_sa.go:55] duration metric: took 193.185575ms for default service account to be created ...
	I0819 17:51:21.915424  300803 system_pods.go:116] waiting for k8s-apps to be running ...
	I0819 17:51:22.123626  300803 system_pods.go:86] 18 kube-system pods found
	I0819 17:51:22.123656  300803 system_pods.go:89] "coredns-6f6b679f8f-mxkbc" [16bd54d0-d1d2-424a-9842-d87c9ccd2628] Running
	I0819 17:51:22.123668  300803 system_pods.go:89] "csi-hostpath-attacher-0" [ba71fa33-d71b-4fe7-a180-08a698a09cd1] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0819 17:51:22.123675  300803 system_pods.go:89] "csi-hostpath-resizer-0" [eb4bcffa-b700-4cd6-8de8-010cd5056a1c] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0819 17:51:22.123683  300803 system_pods.go:89] "csi-hostpathplugin-26zrk" [76f9e5f4-ef0f-44e0-9fc5-d25c88edfbfb] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0819 17:51:22.123688  300803 system_pods.go:89] "etcd-addons-726932" [6cca2dce-04f7-4840-af5b-e6f42813f2b2] Running
	I0819 17:51:22.123693  300803 system_pods.go:89] "kindnet-4ttxh" [606f2154-7ced-4ff8-9b78-44801567bfc7] Running
	I0819 17:51:22.123703  300803 system_pods.go:89] "kube-apiserver-addons-726932" [512a5f8f-e9b4-4da3-b473-fb3e49c05939] Running
	I0819 17:51:22.123708  300803 system_pods.go:89] "kube-controller-manager-addons-726932" [dbdc95ce-2d23-4906-8f5e-632068ade48b] Running
	I0819 17:51:22.123715  300803 system_pods.go:89] "kube-ingress-dns-minikube" [920c48df-ce13-476e-9e85-97a964e02356] Running
	I0819 17:51:22.123720  300803 system_pods.go:89] "kube-proxy-sjmft" [06c1a1b8-8733-4d7f-b080-2c98f247f0bd] Running
	I0819 17:51:22.123727  300803 system_pods.go:89] "kube-scheduler-addons-726932" [1da0f0e3-5004-43ad-881c-3c87bdb60ab2] Running
	I0819 17:51:22.123733  300803 system_pods.go:89] "metrics-server-8988944d9-s6ptq" [abb7007b-fc7a-424e-85c0-81b157a4003c] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0819 17:51:22.123738  300803 system_pods.go:89] "nvidia-device-plugin-daemonset-7lck8" [85679270-e5c5-4ce9-8cff-f979280cc490] Running
	I0819 17:51:22.123745  300803 system_pods.go:89] "registry-6fb4cdfc84-z929z" [76e69bc0-b08f-45e6-8d0f-01c97394c905] Running
	I0819 17:51:22.123751  300803 system_pods.go:89] "registry-proxy-8b8b8" [28afb619-68cb-472a-86d3-b990fe68326e] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0819 17:51:22.123758  300803 system_pods.go:89] "snapshot-controller-56fcc65765-dfzrj" [9f1cdc33-9adc-46f5-a1e4-cb47b6511ed5] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0819 17:51:22.123769  300803 system_pods.go:89] "snapshot-controller-56fcc65765-jt7s5" [3cb67e9c-19b8-4bc7-aee5-2ec08220a8de] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0819 17:51:22.123775  300803 system_pods.go:89] "storage-provisioner" [9e33ab0f-5aa0-4d6b-a1ca-98d57397ebe2] Running
	I0819 17:51:22.123783  300803 system_pods.go:126] duration metric: took 208.339624ms to wait for k8s-apps to be running ...
	I0819 17:51:22.123796  300803 system_svc.go:44] waiting for kubelet service to be running ....
	I0819 17:51:22.123852  300803 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0819 17:51:22.137259  300803 system_svc.go:56] duration metric: took 13.454969ms WaitForService to wait for kubelet
	I0819 17:51:22.137290  300803 kubeadm.go:582] duration metric: took 28.106254947s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0819 17:51:22.137310  300803 node_conditions.go:102] verifying NodePressure condition ...
	I0819 17:51:22.313752  300803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:51:22.316196  300803 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I0819 17:51:22.316228  300803 node_conditions.go:123] node cpu capacity is 2
	I0819 17:51:22.316242  300803 node_conditions.go:105] duration metric: took 178.924496ms to run NodePressure ...
	I0819 17:51:22.316255  300803 start.go:241] waiting for startup goroutines ...
	I0819 17:51:22.411761  300803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 17:51:22.412927  300803 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 17:51:22.811639  300803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:51:22.874637  300803 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 17:51:22.876116  300803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0819 17:51:23.320295  300803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:51:23.375550  300803 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 17:51:23.383432  300803 kapi.go:107] duration metric: took 18.511948612s to wait for kubernetes.io/minikube-addons=registry ...
	I0819 17:51:23.812466  300803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:51:23.874963  300803 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	[... 128 similar kapi.go:96 poll lines elided: "kubernetes.io/minikube-addons=csi-hostpath-driver" and "app.kubernetes.io/name=ingress-nginx" are each polled roughly every 500ms and remain Pending from 17:51:24 through 17:51:55 ...]
	I0819 17:51:56.311776  300803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0819 17:51:56.382292  300803 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 17:51:56.812199  300803 kapi.go:107] duration metric: took 51.005358875s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I0819 17:51:56.875522  300803 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	[... 32 similar kapi.go:96 poll lines elided: "app.kubernetes.io/name=ingress-nginx" is polled roughly every 500ms and remains Pending from 17:51:57 through 17:52:13 ...]
	I0819 17:52:13.374707  300803 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0819 17:52:13.875635  300803 kapi.go:107] duration metric: took 1m9.005096563s to wait for app.kubernetes.io/name=ingress-nginx ...
	I0819 17:52:30.116823  300803 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0819 17:52:30.116848  300803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	[... 130 similar kapi.go:96 poll lines elided: "kubernetes.io/minikube-addons=gcp-auth" is polled roughly every 500ms and remains Pending from 17:52:30 through 17:53:35 ...]
	I0819 17:53:35.593993  300803 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0819 17:53:36.094331  300803 kapi.go:107] duration metric: took 2m29.004428786s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I0819 17:53:36.096200  300803 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-726932 cluster.
	I0819 17:53:36.099516  300803 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I0819 17:53:36.101301  300803 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I0819 17:53:36.103465  300803 out.go:177] * Enabled addons: nvidia-device-plugin, ingress-dns, storage-provisioner-rancher, volcano, storage-provisioner, cloud-spanner, metrics-server, inspektor-gadget, yakd, default-storageclass, volumesnapshots, registry, csi-hostpath-driver, ingress, gcp-auth
	I0819 17:53:36.105207  300803 addons.go:510] duration metric: took 2m42.073626008s for enable addons: enabled=[nvidia-device-plugin ingress-dns storage-provisioner-rancher volcano storage-provisioner cloud-spanner metrics-server inspektor-gadget yakd default-storageclass volumesnapshots registry csi-hostpath-driver ingress gcp-auth]
	I0819 17:53:36.105277  300803 start.go:246] waiting for cluster config update ...
	I0819 17:53:36.105314  300803 start.go:255] writing updated cluster config ...
	I0819 17:53:36.105651  300803 ssh_runner.go:195] Run: rm -f paused
	I0819 17:53:36.444803  300803 start.go:600] kubectl: 1.31.0, cluster: 1.31.0 (minor skew: 0)
	I0819 17:53:36.447079  300803 out.go:177] * Done! kubectl is now configured to use "addons-726932" cluster and "default" namespace by default
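
The gcp-auth messages above describe the addon's opt-out mechanism in prose. As a minimal sketch of what that looks like in practice (the pod name and image here are placeholders, and the `"true"` value follows minikube's gcp-auth documentation; the webhook keys off the `gcp-auth-skip-secret` label named in the log), a pod that should not receive mounted credentials would be declared like this:

    apiVersion: v1
    kind: Pod
    metadata:
      name: no-gcp-creds             # hypothetical example pod
      labels:
        gcp-auth-skip-secret: "true" # opt this pod out of credential mounting
    spec:
      containers:
      - name: app
        image: nginx                 # placeholder image
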
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                                     ATTEMPT             POD ID              POD
	4347d1afada62       e2d3313f65753       2 minutes ago       Exited              gadget                                   5                   2ea9406475f34       gadget-td8w8
	5adcfa079b521       6ef582f3ec844       3 minutes ago       Running             gcp-auth                                 0                   cf6fcb1bc2078       gcp-auth-89d5ffd79-z7hml
	6844913e4ed7c       8b46b1cd48760       4 minutes ago       Running             admission                                0                   4e2976aecfef2       volcano-admission-77d7d48b68-b87q7
	464f9835aa214       289a818c8d9c5       4 minutes ago       Running             controller                               0                   c154b9b7f212f       ingress-nginx-controller-bc57996ff-lr8cm
	ff302dc2d2ec6       ee6d597e62dc8       4 minutes ago       Running             csi-snapshotter                          0                   07158b157365e       csi-hostpathplugin-26zrk
	a96f7ce909075       642ded511e141       5 minutes ago       Running             csi-provisioner                          0                   07158b157365e       csi-hostpathplugin-26zrk
	cecc774fd48e7       922312104da8a       5 minutes ago       Running             liveness-probe                           0                   07158b157365e       csi-hostpathplugin-26zrk
	a613041b26ae0       08f6b2990811a       5 minutes ago       Running             hostpath                                 0                   07158b157365e       csi-hostpathplugin-26zrk
	69db43dac5c82       0107d56dbc0be       5 minutes ago       Running             node-driver-registrar                    0                   07158b157365e       csi-hostpathplugin-26zrk
	66557275ca4f1       1505f556b3a7b       5 minutes ago       Running             volcano-controllers                      0                   64cdf02ecbdad       volcano-controllers-56675bb4d5-z6zwg
	c76110675144c       1461903ec4fe9       5 minutes ago       Running             csi-external-health-monitor-controller   0                   07158b157365e       csi-hostpathplugin-26zrk
	c8bbb06727b5c       d9c7ad4c226bf       5 minutes ago       Running             volcano-scheduler                        0                   c89d1c5df2ce7       volcano-scheduler-576bc46687-n9f8l
	ad9176e5292f6       487fa743e1e22       5 minutes ago       Running             csi-resizer                              0                   eb94f9ffccef8       csi-hostpath-resizer-0
	f801f4ed33ac6       9a80d518f102c       5 minutes ago       Running             csi-attacher                             0                   243ad6e04423a       csi-hostpath-attacher-0
	2a7db39fc950a       420193b27261a       5 minutes ago       Exited              patch                                    1                   2f2b831a8134f       ingress-nginx-admission-patch-ptzb4
	6187c2b76323b       4d1e5c3e97420       5 minutes ago       Running             volume-snapshot-controller               0                   4869363707a6c       snapshot-controller-56fcc65765-dfzrj
	60caa8bddc0be       420193b27261a       5 minutes ago       Exited              create                                   0                   a993d42c32306       ingress-nginx-admission-create-4txpk
	d8591d51c7444       4d1e5c3e97420       5 minutes ago       Running             volume-snapshot-controller               0                   b005a9a7bf252       snapshot-controller-56fcc65765-jt7s5
	89fd461f771d4       77bdba588b953       5 minutes ago       Running             yakd                                     0                   007b1cef64da7       yakd-dashboard-67d98fc6b-fzvn5
	73a8707482f59       95dccb4df54ab       5 minutes ago       Running             metrics-server                           0                   60b3b633020a4       metrics-server-8988944d9-s6ptq
	5d4fafaaaade0       7ce2150c8929b       5 minutes ago       Running             local-path-provisioner                   0                   7f34383e336a4       local-path-provisioner-86d989889c-rhdfv
	85605982fb9c9       53af6e2c4c343       5 minutes ago       Running             cloud-spanner-emulator                   0                   0d35c85697364       cloud-spanner-emulator-c4bc9b5f8-j4gvk
	eab28b09a2aec       3410e1561990a       5 minutes ago       Running             registry-proxy                           0                   8f1d062d06cda       registry-proxy-8b8b8
	7f84a73c0614b       2437cf7621777       5 minutes ago       Running             coredns                                  0                   6b5b1661cbfbf       coredns-6f6b679f8f-mxkbc
	3d50ca82ffe49       a9bac31a5be8d       5 minutes ago       Running             nvidia-device-plugin-ctr                 0                   2d82461f1237f       nvidia-device-plugin-daemonset-7lck8
	1f714bbda1b7a       6fed88f43b276       5 minutes ago       Running             registry                                 0                   6d9ae0cdc91f7       registry-6fb4cdfc84-z929z
	284f714ef4798       35508c2f890c4       5 minutes ago       Running             minikube-ingress-dns                     0                   26ef5a1b9f1e7       kube-ingress-dns-minikube
	e9914672f9ed6       ba04bb24b9575       5 minutes ago       Running             storage-provisioner                      0                   a9ba77d01ccb8       storage-provisioner
	84989ea23884b       6a23fa8fd2b78       5 minutes ago       Running             kindnet-cni                              0                   a22d5116da806       kindnet-4ttxh
	0510d784f52a8       71d55d66fd4ee       6 minutes ago       Running             kube-proxy                               0                   6df14cd9092c2       kube-proxy-sjmft
	80c2dc12f3743       27e3830e14027       6 minutes ago       Running             etcd                                     0                   f750f3c820a52       etcd-addons-726932
	ee3dbd7d6c407       fcb0683e6bdbd       6 minutes ago       Running             kube-controller-manager                  0                   16f4228206077       kube-controller-manager-addons-726932
	a2fb98cd36bfc       fbbbd428abb4d       6 minutes ago       Running             kube-scheduler                           0                   2cdefa4107384       kube-scheduler-addons-726932
	cadfa0812a245       cd0f0ae0ec9e0       6 minutes ago       Running             kube-apiserver                           0                   860a759186678       kube-apiserver-addons-726932
	
	
	==> containerd <==
	Aug 19 17:53:48 addons-726932 containerd[817]: time="2024-08-19T17:53:48.818011429Z" level=info msg="RemovePodSandbox \"a87132acd5a6373cc6544dd0ebb7db304da1d9f4b57c80c230840095ff339b7d\" returns successfully"
	Aug 19 17:54:35 addons-726932 containerd[817]: time="2024-08-19T17:54:35.751786973Z" level=info msg="PullImage \"ghcr.io/inspektor-gadget/inspektor-gadget:v0.31.0@sha256:6b2f7ac9fe6f547cfa541d9217f03da0d0c4615b561d5455a23d0edbbd607ecc\""
	Aug 19 17:54:35 addons-726932 containerd[817]: time="2024-08-19T17:54:35.894015830Z" level=info msg="ImageUpdate event name:\"ghcr.io/inspektor-gadget/inspektor-gadget@sha256:6b2f7ac9fe6f547cfa541d9217f03da0d0c4615b561d5455a23d0edbbd607ecc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Aug 19 17:54:35 addons-726932 containerd[817]: time="2024-08-19T17:54:35.895815825Z" level=info msg="stop pulling image ghcr.io/inspektor-gadget/inspektor-gadget@sha256:6b2f7ac9fe6f547cfa541d9217f03da0d0c4615b561d5455a23d0edbbd607ecc: active requests=0, bytes read=89"
	Aug 19 17:54:35 addons-726932 containerd[817]: time="2024-08-19T17:54:35.899509015Z" level=info msg="Pulled image \"ghcr.io/inspektor-gadget/inspektor-gadget:v0.31.0@sha256:6b2f7ac9fe6f547cfa541d9217f03da0d0c4615b561d5455a23d0edbbd607ecc\" with image id \"sha256:e2d3313f65753f82428cf312f6e4b9237983de19680bde57ca1c0935cadbe630\", repo tag \"\", repo digest \"ghcr.io/inspektor-gadget/inspektor-gadget@sha256:6b2f7ac9fe6f547cfa541d9217f03da0d0c4615b561d5455a23d0edbbd607ecc\", size \"69907666\" in 147.667781ms"
	Aug 19 17:54:35 addons-726932 containerd[817]: time="2024-08-19T17:54:35.899559591Z" level=info msg="PullImage \"ghcr.io/inspektor-gadget/inspektor-gadget:v0.31.0@sha256:6b2f7ac9fe6f547cfa541d9217f03da0d0c4615b561d5455a23d0edbbd607ecc\" returns image reference \"sha256:e2d3313f65753f82428cf312f6e4b9237983de19680bde57ca1c0935cadbe630\""
	Aug 19 17:54:35 addons-726932 containerd[817]: time="2024-08-19T17:54:35.901862854Z" level=info msg="CreateContainer within sandbox \"2ea9406475f34d39e778e0a237041c83c07aa586cef8b8c33ff9817c702aee18\" for container &ContainerMetadata{Name:gadget,Attempt:5,}"
	Aug 19 17:54:35 addons-726932 containerd[817]: time="2024-08-19T17:54:35.924782686Z" level=info msg="CreateContainer within sandbox \"2ea9406475f34d39e778e0a237041c83c07aa586cef8b8c33ff9817c702aee18\" for &ContainerMetadata{Name:gadget,Attempt:5,} returns container id \"4347d1afada628f0d74b223af011567771bea8e538dec208e4c1d3b2630ae01d\""
	Aug 19 17:54:35 addons-726932 containerd[817]: time="2024-08-19T17:54:35.926084671Z" level=info msg="StartContainer for \"4347d1afada628f0d74b223af011567771bea8e538dec208e4c1d3b2630ae01d\""
	Aug 19 17:54:35 addons-726932 containerd[817]: time="2024-08-19T17:54:35.981430725Z" level=info msg="StartContainer for \"4347d1afada628f0d74b223af011567771bea8e538dec208e4c1d3b2630ae01d\" returns successfully"
	Aug 19 17:54:37 addons-726932 containerd[817]: time="2024-08-19T17:54:37.302702325Z" level=info msg="shim disconnected" id=4347d1afada628f0d74b223af011567771bea8e538dec208e4c1d3b2630ae01d namespace=k8s.io
	Aug 19 17:54:37 addons-726932 containerd[817]: time="2024-08-19T17:54:37.302770091Z" level=warning msg="cleaning up after shim disconnected" id=4347d1afada628f0d74b223af011567771bea8e538dec208e4c1d3b2630ae01d namespace=k8s.io
	Aug 19 17:54:37 addons-726932 containerd[817]: time="2024-08-19T17:54:37.302781077Z" level=info msg="cleaning up dead shim" namespace=k8s.io
	Aug 19 17:54:37 addons-726932 containerd[817]: time="2024-08-19T17:54:37.830311828Z" level=info msg="RemoveContainer for \"2892fa6e8aa74bed583895f9c708b155247223d6bee76253ecacf67ff5e079fa\""
	Aug 19 17:54:37 addons-726932 containerd[817]: time="2024-08-19T17:54:37.837644622Z" level=info msg="RemoveContainer for \"2892fa6e8aa74bed583895f9c708b155247223d6bee76253ecacf67ff5e079fa\" returns successfully"
	Aug 19 17:54:48 addons-726932 containerd[817]: time="2024-08-19T17:54:48.822237254Z" level=info msg="RemoveContainer for \"f764e92911f01d79c6c89da870a804d83db4e9578106faf76fa4d8ac276a739a\""
	Aug 19 17:54:48 addons-726932 containerd[817]: time="2024-08-19T17:54:48.828875224Z" level=info msg="RemoveContainer for \"f764e92911f01d79c6c89da870a804d83db4e9578106faf76fa4d8ac276a739a\" returns successfully"
	Aug 19 17:54:48 addons-726932 containerd[817]: time="2024-08-19T17:54:48.830998376Z" level=info msg="StopPodSandbox for \"4ad3bc25a4b3a32118d8c62e94af0e122b8873950c0dba9852f8343b1886bf93\""
	Aug 19 17:54:48 addons-726932 containerd[817]: time="2024-08-19T17:54:48.837846380Z" level=info msg="TearDown network for sandbox \"4ad3bc25a4b3a32118d8c62e94af0e122b8873950c0dba9852f8343b1886bf93\" successfully"
	Aug 19 17:54:48 addons-726932 containerd[817]: time="2024-08-19T17:54:48.837888209Z" level=info msg="StopPodSandbox for \"4ad3bc25a4b3a32118d8c62e94af0e122b8873950c0dba9852f8343b1886bf93\" returns successfully"
	Aug 19 17:54:48 addons-726932 containerd[817]: time="2024-08-19T17:54:48.838498160Z" level=info msg="RemovePodSandbox for \"4ad3bc25a4b3a32118d8c62e94af0e122b8873950c0dba9852f8343b1886bf93\""
	Aug 19 17:54:48 addons-726932 containerd[817]: time="2024-08-19T17:54:48.838547596Z" level=info msg="Forcibly stopping sandbox \"4ad3bc25a4b3a32118d8c62e94af0e122b8873950c0dba9852f8343b1886bf93\""
	Aug 19 17:54:48 addons-726932 containerd[817]: time="2024-08-19T17:54:48.845801186Z" level=info msg="TearDown network for sandbox \"4ad3bc25a4b3a32118d8c62e94af0e122b8873950c0dba9852f8343b1886bf93\" successfully"
	Aug 19 17:54:48 addons-726932 containerd[817]: time="2024-08-19T17:54:48.861940674Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"4ad3bc25a4b3a32118d8c62e94af0e122b8873950c0dba9852f8343b1886bf93\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
	Aug 19 17:54:48 addons-726932 containerd[817]: time="2024-08-19T17:54:48.862091664Z" level=info msg="RemovePodSandbox \"4ad3bc25a4b3a32118d8c62e94af0e122b8873950c0dba9852f8343b1886bf93\" returns successfully"
	
	
	==> coredns [7f84a73c0614b526220e45e7ae1dfd1db6d41cff808b288eed9e06f8e57e3004] <==
	[INFO] 10.244.0.4:45569 - 53430 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000236996s
	[INFO] 10.244.0.4:33973 - 62195 "A IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 83 false 512" NXDOMAIN qr,rd,ra 83 0.002080469s
	[INFO] 10.244.0.4:33973 - 2046 "AAAA IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 83 false 512" NXDOMAIN qr,rd,ra 83 0.001658899s
	[INFO] 10.244.0.4:34164 - 6947 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.00010441s
	[INFO] 10.244.0.4:34164 - 34848 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000074379s
	[INFO] 10.244.0.4:42103 - 2461 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000103573s
	[INFO] 10.244.0.4:42103 - 59289 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000051668s
	[INFO] 10.244.0.4:58035 - 62773 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000058666s
	[INFO] 10.244.0.4:58035 - 31290 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000035446s
	[INFO] 10.244.0.4:47413 - 38969 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000042109s
	[INFO] 10.244.0.4:47413 - 53303 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.00003205s
	[INFO] 10.244.0.4:42849 - 54987 "A IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 83 false 512" NXDOMAIN qr,rd,ra 83 0.001253756s
	[INFO] 10.244.0.4:42849 - 33237 "AAAA IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 83 false 512" NXDOMAIN qr,rd,ra 83 0.001027147s
	[INFO] 10.244.0.4:48108 - 45352 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000048993s
	[INFO] 10.244.0.4:48108 - 41511 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000034822s
	[INFO] 10.244.0.24:45807 - 52886 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.008648892s
	[INFO] 10.244.0.24:59835 - 55997 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.008603854s
	[INFO] 10.244.0.24:36565 - 650 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000187576s
	[INFO] 10.244.0.24:49507 - 47437 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000092939s
	[INFO] 10.244.0.24:49754 - 35978 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000207557s
	[INFO] 10.244.0.24:51176 - 57303 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000160114s
	[INFO] 10.244.0.24:46552 - 53288 "A IN storage.googleapis.com.us-east-2.compute.internal. udp 78 false 1232" NXDOMAIN qr,rd,ra 67 0.003631857s
	[INFO] 10.244.0.24:51042 - 5815 "AAAA IN storage.googleapis.com.us-east-2.compute.internal. udp 78 false 1232" NXDOMAIN qr,rd,ra 67 0.003825154s
	[INFO] 10.244.0.24:53738 - 24041 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.000885847s
	[INFO] 10.244.0.24:49384 - 14143 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 648 0.000874968s
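
The coredns log above records each external lookup (storage.googleapis.com) being expanded through the pod's full search path (gcp-auth.svc.cluster.local, svc.cluster.local, cluster.local, us-east-2.compute.internal) and returning NXDOMAIN before the bare name finally resolves. That is the expected effect of the default resolver option ndots:5 in cluster pods. Purely as an illustration, not something this test configures, a workload could skip those extra round-trips for dotted external names by lowering ndots in its dnsConfig:

    apiVersion: v1
    kind: Pod
    metadata:
      name: low-ndots                # hypothetical example pod
    spec:
      dnsConfig:
        options:
        - name: ndots
          value: "1"                 # names containing a dot are tried as-is first
      containers:
      - name: app
        image: nginx                 # placeholder image
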
	
	
	==> describe nodes <==
	Name:               addons-726932
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=addons-726932
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=3ced979f820d64d411dd5d7b1cb520be3c85a517
	                    minikube.k8s.io/name=addons-726932
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_08_19T17_50_49_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-726932
	Annotations:        csi.volume.kubernetes.io/nodeid: {"hostpath.csi.k8s.io":"addons-726932"}
	                    kubeadm.alpha.kubernetes.io/cri-socket: unix:///run/containerd/containerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 19 Aug 2024 17:50:46 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-726932
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 19 Aug 2024 17:56:45 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 19 Aug 2024 17:53:52 +0000   Mon, 19 Aug 2024 17:50:43 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 19 Aug 2024 17:53:52 +0000   Mon, 19 Aug 2024 17:50:43 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 19 Aug 2024 17:53:52 +0000   Mon, 19 Aug 2024 17:50:43 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 19 Aug 2024 17:53:52 +0000   Mon, 19 Aug 2024 17:50:46 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    addons-726932
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022368Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022368Ki
	  pods:               110
	System Info:
	  Machine ID:                 a1f5fe1cba8a42149e6b3489dfce3307
	  System UUID:                6ab0df02-3132-4932-92eb-71bf7bfe1373
	  Boot ID:                    a381e96e-18b6-48f3-b104-0bb488fddf0f
	  Kernel Version:             5.15.0-1067-aws
	  OS Image:                   Ubuntu 22.04.4 LTS
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  containerd://1.7.20
	  Kubelet Version:            v1.31.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (27 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  default                     cloud-spanner-emulator-c4bc9b5f8-j4gvk      0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m58s
	  gadget                      gadget-td8w8                                0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m55s
	  gcp-auth                    gcp-auth-89d5ffd79-z7hml                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m25s
	  ingress-nginx               ingress-nginx-controller-bc57996ff-lr8cm    100m (5%)     0 (0%)      90Mi (1%)        0 (0%)         5m53s
	  kube-system                 coredns-6f6b679f8f-mxkbc                    100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     6m1s
	  kube-system                 csi-hostpath-attacher-0                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m50s
	  kube-system                 csi-hostpath-resizer-0                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m50s
	  kube-system                 csi-hostpathplugin-26zrk                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m50s
	  kube-system                 etcd-addons-726932                          100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         6m6s
	  kube-system                 kindnet-4ttxh                               100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      6m2s
	  kube-system                 kube-apiserver-addons-726932                250m (12%)    0 (0%)      0 (0%)           0 (0%)         6m6s
	  kube-system                 kube-controller-manager-addons-726932       200m (10%)    0 (0%)      0 (0%)           0 (0%)         6m6s
	  kube-system                 kube-ingress-dns-minikube                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m57s
	  kube-system                 kube-proxy-sjmft                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m2s
	  kube-system                 kube-scheduler-addons-726932                100m (5%)     0 (0%)      0 (0%)           0 (0%)         6m6s
	  kube-system                 metrics-server-8988944d9-s6ptq              100m (5%)     0 (0%)      200Mi (2%)       0 (0%)         5m56s
	  kube-system                 nvidia-device-plugin-daemonset-7lck8        0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m59s
	  kube-system                 registry-6fb4cdfc84-z929z                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m56s
	  kube-system                 registry-proxy-8b8b8                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m56s
	  kube-system                 snapshot-controller-56fcc65765-dfzrj        0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m54s
	  kube-system                 snapshot-controller-56fcc65765-jt7s5        0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m54s
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m56s
	  local-path-storage          local-path-provisioner-86d989889c-rhdfv     0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m57s
	  volcano-system              volcano-admission-77d7d48b68-b87q7          0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m52s
	  volcano-system              volcano-controllers-56675bb4d5-z6zwg        0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m51s
	  volcano-system              volcano-scheduler-576bc46687-n9f8l          0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m51s
	  yakd-dashboard              yakd-dashboard-67d98fc6b-fzvn5              0 (0%)        0 (0%)      128Mi (1%)       256Mi (3%)     5m56s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                1050m (52%)  100m (5%)
	  memory             638Mi (8%)   476Mi (6%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-1Gi      0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	  hugepages-32Mi     0 (0%)       0 (0%)
	  hugepages-64Ki     0 (0%)       0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 5m59s                  kube-proxy       
	  Normal   NodeHasSufficientMemory  6m14s (x8 over 6m14s)  kubelet          Node addons-726932 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    6m14s (x7 over 6m14s)  kubelet          Node addons-726932 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     6m14s (x7 over 6m14s)  kubelet          Node addons-726932 status is now: NodeHasSufficientPID
	  Normal   NodeAllocatableEnforced  6m14s                  kubelet          Updated Node Allocatable limit across pods
	  Normal   Starting                 6m7s                   kubelet          Starting kubelet.
	  Warning  CgroupV1                 6m7s                   kubelet          Cgroup v1 support is in maintenance mode, please migrate to Cgroup v2.
	  Normal   NodeAllocatableEnforced  6m7s                   kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory  6m7s                   kubelet          Node addons-726932 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    6m7s                   kubelet          Node addons-726932 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     6m7s                   kubelet          Node addons-726932 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           6m2s                   node-controller  Node addons-726932 event: Registered Node addons-726932 in Controller
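
One figure worth reading off the node description above: the node has 2 allocatable CPUs and the non-terminated pods already request 1050m (52%), leaving roughly 950m of schedulable CPU on this single-node cluster, so any additional pod requesting a full CPU or more cannot be placed. The remaining headroom can be re-checked at any time with standard kubectl (nothing test-specific):

    kubectl --context addons-726932 describe node addons-726932 | grep -A 10 "Allocated resources"
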
	
	
	==> dmesg <==
	[Aug19 16:17] ACPI: SRAT not present
	[  +0.000000] ACPI: SRAT not present
	[  +0.000000] SPI driver altr_a10sr has no spi_device_id for altr,a10sr
	[  +0.013915] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.415948] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.046527] systemd[1]: /lib/systemd/system/cloud-init-local.service:15: Unknown key name 'ConditionEnvironment' in section 'Unit', ignoring.
	[  +0.002405] systemd[1]: /lib/systemd/system/cloud-init.service:19: Unknown key name 'ConditionEnvironment' in section 'Unit', ignoring.
	[  +0.014937] systemd[1]: /lib/systemd/system/cloud-init.target:15: Unknown key name 'ConditionEnvironment' in section 'Unit', ignoring.
	[  +0.003748] systemd[1]: /lib/systemd/system/cloud-final.service:9: Unknown key name 'ConditionEnvironment' in section 'Unit', ignoring.
	[  +0.003011] systemd[1]: /lib/systemd/system/cloud-config.service:8: Unknown key name 'ConditionEnvironment' in section 'Unit', ignoring.
	[  +0.593993] ena 0000:00:05.0: LLQ is not supported Fallback to host mode policy.
	[  +5.922293] kauditd_printk_skb: 36 callbacks suppressed
	[Aug19 16:54] hrtimer: interrupt took 20225256 ns
	[Aug19 17:18] kmem.limit_in_bytes is deprecated and will be removed. Please report your usecase to linux-mm@kvack.org if you depend on this functionality.
	
	
	==> etcd [80c2dc12f37435b3b14254416766437aaa6dadd291238766f7315dafbb04ab4f] <==
	{"level":"info","ts":"2024-08-19T17:50:42.519997Z","caller":"embed/etcd.go:728","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-08-19T17:50:42.520111Z","caller":"embed/etcd.go:599","msg":"serving peer traffic","address":"192.168.49.2:2380"}
	{"level":"info","ts":"2024-08-19T17:50:42.523066Z","caller":"embed/etcd.go:571","msg":"cmux::serve","address":"192.168.49.2:2380"}
	{"level":"info","ts":"2024-08-19T17:50:42.525399Z","caller":"embed/etcd.go:870","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-08-19T17:50:42.525364Z","caller":"embed/etcd.go:279","msg":"now serving peer/client/metrics","local-member-id":"aec36adc501070cc","initial-advertise-peer-urls":["https://192.168.49.2:2380"],"listen-peer-urls":["https://192.168.49.2:2380"],"advertise-client-urls":["https://192.168.49.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.49.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-08-19T17:50:42.889720Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc is starting a new election at term 1"}
	{"level":"info","ts":"2024-08-19T17:50:42.889930Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became pre-candidate at term 1"}
	{"level":"info","ts":"2024-08-19T17:50:42.890073Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc received MsgPreVoteResp from aec36adc501070cc at term 1"}
	{"level":"info","ts":"2024-08-19T17:50:42.890173Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became candidate at term 2"}
	{"level":"info","ts":"2024-08-19T17:50:42.890264Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc received MsgVoteResp from aec36adc501070cc at term 2"}
	{"level":"info","ts":"2024-08-19T17:50:42.890352Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became leader at term 2"}
	{"level":"info","ts":"2024-08-19T17:50:42.890443Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: aec36adc501070cc elected leader aec36adc501070cc at term 2"}
	{"level":"info","ts":"2024-08-19T17:50:42.895657Z","caller":"etcdserver/server.go:2629","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-08-19T17:50:42.897829Z","caller":"etcdserver/server.go:2118","msg":"published local member to cluster through raft","local-member-id":"aec36adc501070cc","local-member-attributes":"{Name:addons-726932 ClientURLs:[https://192.168.49.2:2379]}","request-path":"/0/members/aec36adc501070cc/attributes","cluster-id":"fa54960ea34d58be","publish-timeout":"7s"}
	{"level":"info","ts":"2024-08-19T17:50:42.898169Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-08-19T17:50:42.898300Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-08-19T17:50:42.898454Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"fa54960ea34d58be","local-member-id":"aec36adc501070cc","cluster-version":"3.5"}
	{"level":"info","ts":"2024-08-19T17:50:42.898625Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-08-19T17:50:42.898747Z","caller":"etcdserver/server.go:2653","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-08-19T17:50:42.898841Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-08-19T17:50:42.899800Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-08-19T17:50:42.901697Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-08-19T17:50:42.902525Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-08-19T17:50:42.903507Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-08-19T17:50:42.905936Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.49.2:2379"}
	
	
	==> gcp-auth [5adcfa079b521bc206aa8aeaca1d3db10bae84cc9a77fb0aa1ecb3cd5af1cc97] <==
	2024/08/19 17:53:35 GCP Auth Webhook started!
	2024/08/19 17:53:52 Ready to marshal response ...
	2024/08/19 17:53:52 Ready to write response ...
	2024/08/19 17:53:53 Ready to marshal response ...
	2024/08/19 17:53:53 Ready to write response ...
	
	
	==> kernel <==
	 17:56:55 up  1:39,  0 users,  load average: 0.17, 1.43, 2.18
	Linux addons-726932 5.15.0-1067-aws #73~20.04.1-Ubuntu SMP Wed Jul 24 17:31:05 UTC 2024 aarch64 aarch64 aarch64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.4 LTS"
	
	
	==> kindnet [84989ea23884bdadc8a02d98873da670d7b2c33b448d82292d22211c070c00d2] <==
	I0819 17:55:37.242379       1 main.go:299] handling current node
	I0819 17:55:47.243028       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0819 17:55:47.243062       1 main.go:299] handling current node
	W0819 17:55:49.085793       1 reflector.go:547] pkg/mod/k8s.io/client-go@v0.30.3/tools/cache/reflector.go:232: failed to list *v1.Pod: pods is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "pods" in API group "" at the cluster scope
	E0819 17:55:49.085834       1 reflector.go:150] pkg/mod/k8s.io/client-go@v0.30.3/tools/cache/reflector.go:232: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "pods" in API group "" at the cluster scope
	W0819 17:55:54.855718       1 reflector.go:547] pkg/mod/k8s.io/client-go@v0.30.3/tools/cache/reflector.go:232: failed to list *v1.Namespace: namespaces is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "namespaces" in API group "" at the cluster scope
	E0819 17:55:54.855755       1 reflector.go:150] pkg/mod/k8s.io/client-go@v0.30.3/tools/cache/reflector.go:232: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "namespaces" in API group "" at the cluster scope
	I0819 17:55:57.243005       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0819 17:55:57.243042       1 main.go:299] handling current node
	I0819 17:56:07.242959       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0819 17:56:07.242994       1 main.go:299] handling current node
	I0819 17:56:17.242402       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0819 17:56:17.242438       1 main.go:299] handling current node
	W0819 17:56:18.099436       1 reflector.go:547] pkg/mod/k8s.io/client-go@v0.30.3/tools/cache/reflector.go:232: failed to list *v1.NetworkPolicy: networkpolicies.networking.k8s.io is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "networkpolicies" in API group "networking.k8s.io" at the cluster scope
	E0819 17:56:18.099476       1 reflector.go:150] pkg/mod/k8s.io/client-go@v0.30.3/tools/cache/reflector.go:232: Failed to watch *v1.NetworkPolicy: failed to list *v1.NetworkPolicy: networkpolicies.networking.k8s.io is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "networkpolicies" in API group "networking.k8s.io" at the cluster scope
	I0819 17:56:27.242127       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0819 17:56:27.242165       1 main.go:299] handling current node
	I0819 17:56:37.242613       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0819 17:56:37.242651       1 main.go:299] handling current node
	W0819 17:56:38.433456       1 reflector.go:547] pkg/mod/k8s.io/client-go@v0.30.3/tools/cache/reflector.go:232: failed to list *v1.Pod: pods is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "pods" in API group "" at the cluster scope
	E0819 17:56:38.433494       1 reflector.go:150] pkg/mod/k8s.io/client-go@v0.30.3/tools/cache/reflector.go:232: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "pods" in API group "" at the cluster scope
	I0819 17:56:47.242499       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0819 17:56:47.242532       1 main.go:299] handling current node
	W0819 17:56:53.175087       1 reflector.go:547] pkg/mod/k8s.io/client-go@v0.30.3/tools/cache/reflector.go:232: failed to list *v1.Namespace: namespaces is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "namespaces" in API group "" at the cluster scope
	E0819 17:56:53.175250       1 reflector.go:150] pkg/mod/k8s.io/client-go@v0.30.3/tools/cache/reflector.go:232: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:serviceaccount:kube-system:kindnet" cannot list resource "namespaces" in API group "" at the cluster scope
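
The kindnet warnings above are RBAC denials: the system:serviceaccount:kube-system:kindnet account is refused cluster-scope list/watch on pods, namespaces, and networkpolicies. As an illustrative sketch only (minikube ships its own kindnet RBAC manifests, and this fragment is not taken from them), a ClusterRole granting exactly the denied verbs would read:

    apiVersion: rbac.authorization.k8s.io/v1
    kind: ClusterRole
    metadata:
      name: kindnet-extra            # hypothetical name
    rules:
    - apiGroups: [""]
      resources: ["pods", "namespaces"]
      verbs: ["list", "watch"]
    - apiGroups: ["networking.k8s.io"]
      resources: ["networkpolicies"]
      verbs: ["list", "watch"]
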
	
	
	==> kube-apiserver [cadfa0812a24561d045b87e261f11471bf396b5620b12b3b42ae8fab75e266c7] <==
	W0819 17:52:09.107975       1 dispatcher.go:225] Failed calling webhook, failing closed mutatequeue.volcano.sh: failed calling webhook "mutatequeue.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/queues/mutate?timeout=10s": dial tcp 10.110.200.149:443: connect: connection refused
	W0819 17:52:10.050789       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.109.246.156:443: connect: connection refused
	E0819 17:52:10.050832       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.109.246.156:443: connect: connection refused" logger="UnhandledError"
	W0819 17:52:10.052688       1 dispatcher.go:225] Failed calling webhook, failing closed mutatepod.volcano.sh: failed calling webhook "mutatepod.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/pods/mutate?timeout=10s": dial tcp 10.110.200.149:443: connect: connection refused
	W0819 17:52:10.134793       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.109.246.156:443: connect: connection refused
	E0819 17:52:10.134834       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.109.246.156:443: connect: connection refused" logger="UnhandledError"
	W0819 17:52:10.136422       1 dispatcher.go:225] Failed calling webhook, failing closed mutatepod.volcano.sh: failed calling webhook "mutatepod.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/pods/mutate?timeout=10s": dial tcp 10.110.200.149:443: connect: connection refused
	W0819 17:52:10.154892       1 dispatcher.go:225] Failed calling webhook, failing closed mutatequeue.volcano.sh: failed calling webhook "mutatequeue.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/queues/mutate?timeout=10s": dial tcp 10.110.200.149:443: connect: connection refused
	W0819 17:52:11.250915       1 dispatcher.go:225] Failed calling webhook, failing closed mutatequeue.volcano.sh: failed calling webhook "mutatequeue.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/queues/mutate?timeout=10s": dial tcp 10.110.200.149:443: connect: connection refused
	W0819 17:52:12.311772       1 dispatcher.go:225] Failed calling webhook, failing closed mutatequeue.volcano.sh: failed calling webhook "mutatequeue.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/queues/mutate?timeout=10s": dial tcp 10.110.200.149:443: connect: connection refused
	W0819 17:52:13.382003       1 dispatcher.go:225] Failed calling webhook, failing closed mutatequeue.volcano.sh: failed calling webhook "mutatequeue.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/queues/mutate?timeout=10s": dial tcp 10.110.200.149:443: connect: connection refused
	W0819 17:52:14.435084       1 dispatcher.go:225] Failed calling webhook, failing closed mutatequeue.volcano.sh: failed calling webhook "mutatequeue.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/queues/mutate?timeout=10s": dial tcp 10.110.200.149:443: connect: connection refused
	W0819 17:52:15.460862       1 dispatcher.go:225] Failed calling webhook, failing closed mutatequeue.volcano.sh: failed calling webhook "mutatequeue.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/queues/mutate?timeout=10s": dial tcp 10.110.200.149:443: connect: connection refused
	W0819 17:52:16.469415       1 dispatcher.go:225] Failed calling webhook, failing closed mutatequeue.volcano.sh: failed calling webhook "mutatequeue.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/queues/mutate?timeout=10s": dial tcp 10.110.200.149:443: connect: connection refused
	W0819 17:52:17.547923       1 dispatcher.go:225] Failed calling webhook, failing closed mutatequeue.volcano.sh: failed calling webhook "mutatequeue.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/queues/mutate?timeout=10s": dial tcp 10.110.200.149:443: connect: connection refused
	W0819 17:52:18.650799       1 dispatcher.go:225] Failed calling webhook, failing closed mutatequeue.volcano.sh: failed calling webhook "mutatequeue.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/queues/mutate?timeout=10s": dial tcp 10.110.200.149:443: connect: connection refused
	W0819 17:52:19.664615       1 dispatcher.go:225] Failed calling webhook, failing closed mutatequeue.volcano.sh: failed calling webhook "mutatequeue.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/queues/mutate?timeout=10s": dial tcp 10.110.200.149:443: connect: connection refused
	W0819 17:52:30.034299       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.109.246.156:443: connect: connection refused
	E0819 17:52:30.034406       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.109.246.156:443: connect: connection refused" logger="UnhandledError"
	W0819 17:53:10.061482       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.109.246.156:443: connect: connection refused
	E0819 17:53:10.061524       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.109.246.156:443: connect: connection refused" logger="UnhandledError"
	W0819 17:53:10.142613       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.109.246.156:443: connect: connection refused
	E0819 17:53:10.142653       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.109.246.156:443: connect: connection refused" logger="UnhandledError"
	I0819 17:53:52.951882       1 controller.go:615] quota admission added evaluator for: jobs.batch.volcano.sh
	I0819 17:53:53.001213       1 controller.go:615] quota admission added evaluator for: podgroups.scheduling.volcano.sh
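
Note how the apiserver log distinguishes webhooks "failing closed" (the volcano mutatequeue/mutatepod hooks, which block the request while their service is unreachable) from "failing open" (gcp-auth, which lets the request through). That wording maps to the webhook's failurePolicy field. As a generic illustration, not the manifests these addons actually install:

    apiVersion: admissionregistration.k8s.io/v1
    kind: MutatingWebhookConfiguration
    metadata:
      name: example-webhook          # hypothetical
    webhooks:
    - name: example.mutate.local
      failurePolicy: Fail            # "fail closed"; use Ignore to fail open
      clientConfig:
        service:
          name: example-svc          # hypothetical service
          namespace: default
          path: /mutate
      rules:
      - apiGroups: [""]
        apiVersions: ["v1"]
        operations: ["CREATE"]
        resources: ["pods"]
      sideEffects: None
      admissionReviewVersions: ["v1"]
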
	
	
	==> kube-controller-manager [ee3dbd7d6c407e95d6d4b20ef8f5586f6c961a4aab94260d08a595b1141098d5] <==
	I0819 17:53:10.083008       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-create" delay="1s"
	I0819 17:53:10.086081       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-create" delay="1s"
	I0819 17:53:10.106331       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-create" delay="1s"
	I0819 17:53:10.152537       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-patch" delay="1s"
	I0819 17:53:10.168942       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-patch" delay="1s"
	I0819 17:53:10.178605       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-patch" delay="1s"
	I0819 17:53:10.189814       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-patch" delay="1s"
	I0819 17:53:11.575587       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-create" delay="1s"
	I0819 17:53:11.590466       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-patch" delay="1s"
	I0819 17:53:12.693740       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-patch" delay="1s"
	I0819 17:53:12.731447       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-create" delay="1s"
	I0819 17:53:13.699371       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-patch" delay="1s"
	I0819 17:53:13.708566       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-patch" delay="1s"
	I0819 17:53:13.715568       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-patch" delay="1s"
	I0819 17:53:13.738803       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-create" delay="1s"
	I0819 17:53:13.748376       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-create" delay="1s"
	I0819 17:53:13.756682       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-create" delay="1s"
	I0819 17:53:35.687707       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="gcp-auth/gcp-auth-89d5ffd79" duration="11.858432ms"
	I0819 17:53:35.688153       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="gcp-auth/gcp-auth-89d5ffd79" duration="273µs"
	I0819 17:53:43.021969       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-create" delay="0s"
	I0819 17:53:43.024993       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-patch" delay="0s"
	I0819 17:53:43.070784       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-patch" delay="0s"
	I0819 17:53:43.071908       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-create" delay="0s"
	I0819 17:53:52.552974       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="addons-726932"
	I0819 17:53:52.673851       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="volcano-system/volcano-admission-init" delay="0s"
	
	
	==> kube-proxy [0510d784f52a8970a32308abc012fbb4aa0610b510fc68f46152413e8099ccd6] <==
	I0819 17:50:55.115187       1 server_linux.go:66] "Using iptables proxy"
	I0819 17:50:55.249361       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.49.2"]
	E0819 17:50:55.249428       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0819 17:50:55.274531       1 server.go:243] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0819 17:50:55.274605       1 server_linux.go:169] "Using iptables Proxier"
	I0819 17:50:55.276698       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0819 17:50:55.277096       1 server.go:483] "Version info" version="v1.31.0"
	I0819 17:50:55.277128       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0819 17:50:55.278484       1 config.go:197] "Starting service config controller"
	I0819 17:50:55.278519       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0819 17:50:55.278543       1 config.go:104] "Starting endpoint slice config controller"
	I0819 17:50:55.278548       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0819 17:50:55.283572       1 config.go:326] "Starting node config controller"
	I0819 17:50:55.283610       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0819 17:50:55.379074       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0819 17:50:55.379137       1 shared_informer.go:320] Caches are synced for service config
	I0819 17:50:55.384189       1 shared_informer.go:320] Caches are synced for node config
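
The kube-proxy startup warning above suggests `--nodeport-addresses primary`. For reference, the equivalent KubeProxyConfiguration fragment is shown below; this is upstream kube-proxy syntax (the "primary" keyword is accepted by recent releases, including the v1.31.0 build logged here), not something this test sets:

    apiVersion: kubeproxy.config.k8s.io/v1alpha1
    kind: KubeProxyConfiguration
    nodePortAddresses:
    - primary                        # accept NodePort traffic only on the node's primary IPs
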
	
	
	==> kube-scheduler [a2fb98cd36bfca83e97180b4db8782d8d0bbd1f004426870248c040a601b8344] <==
	W0819 17:50:46.960146       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0819 17:50:46.960191       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0819 17:50:46.960306       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0819 17:50:46.960353       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
	W0819 17:50:46.960477       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0819 17:50:46.961916       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0819 17:50:46.962185       1 reflector.go:561] runtime/asm_arm64.s:1222: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0819 17:50:46.962727       1 reflector.go:158] "Unhandled Error" err="runtime/asm_arm64.s:1222: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	W0819 17:50:46.962345       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0819 17:50:46.962877       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0819 17:50:46.962407       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0819 17:50:46.962980       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0819 17:50:46.962474       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0819 17:50:46.963090       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0819 17:50:46.962512       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0819 17:50:46.963219       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0819 17:50:46.962548       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0819 17:50:46.963315       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0819 17:50:46.962581       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0819 17:50:46.965612       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0819 17:50:46.962618       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0819 17:50:46.965795       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0819 17:50:46.962673       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0819 17:50:46.965913       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	I0819 17:50:47.957712       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Aug 19 17:54:51 addons-726932 kubelet[1486]: E0819 17:54:51.750950    1486 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"gadget\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=gadget pod=gadget-td8w8_gadget(793080cd-5131-4438-ae1e-aa457067350d)\"" pod="gadget/gadget-td8w8" podUID="793080cd-5131-4438-ae1e-aa457067350d"
	Aug 19 17:54:53 addons-726932 kubelet[1486]: I0819 17:54:53.750335    1486 kubelet_pods.go:1007] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-proxy-8b8b8" secret="" err="secret \"gcp-auth\" not found"
	Aug 19 17:55:05 addons-726932 kubelet[1486]: I0819 17:55:05.749838    1486 scope.go:117] "RemoveContainer" containerID="4347d1afada628f0d74b223af011567771bea8e538dec208e4c1d3b2630ae01d"
	Aug 19 17:55:05 addons-726932 kubelet[1486]: E0819 17:55:05.750504    1486 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"gadget\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=gadget pod=gadget-td8w8_gadget(793080cd-5131-4438-ae1e-aa457067350d)\"" pod="gadget/gadget-td8w8" podUID="793080cd-5131-4438-ae1e-aa457067350d"
	Aug 19 17:55:13 addons-726932 kubelet[1486]: I0819 17:55:13.750478    1486 kubelet_pods.go:1007] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-6fb4cdfc84-z929z" secret="" err="secret \"gcp-auth\" not found"
	Aug 19 17:55:18 addons-726932 kubelet[1486]: I0819 17:55:18.751080    1486 scope.go:117] "RemoveContainer" containerID="4347d1afada628f0d74b223af011567771bea8e538dec208e4c1d3b2630ae01d"
	Aug 19 17:55:18 addons-726932 kubelet[1486]: E0819 17:55:18.751274    1486 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"gadget\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=gadget pod=gadget-td8w8_gadget(793080cd-5131-4438-ae1e-aa457067350d)\"" pod="gadget/gadget-td8w8" podUID="793080cd-5131-4438-ae1e-aa457067350d"
	Aug 19 17:55:30 addons-726932 kubelet[1486]: I0819 17:55:30.750203    1486 scope.go:117] "RemoveContainer" containerID="4347d1afada628f0d74b223af011567771bea8e538dec208e4c1d3b2630ae01d"
	Aug 19 17:55:30 addons-726932 kubelet[1486]: E0819 17:55:30.750399    1486 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"gadget\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=gadget pod=gadget-td8w8_gadget(793080cd-5131-4438-ae1e-aa457067350d)\"" pod="gadget/gadget-td8w8" podUID="793080cd-5131-4438-ae1e-aa457067350d"
	Aug 19 17:55:45 addons-726932 kubelet[1486]: I0819 17:55:45.750159    1486 scope.go:117] "RemoveContainer" containerID="4347d1afada628f0d74b223af011567771bea8e538dec208e4c1d3b2630ae01d"
	Aug 19 17:55:45 addons-726932 kubelet[1486]: E0819 17:55:45.750362    1486 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"gadget\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=gadget pod=gadget-td8w8_gadget(793080cd-5131-4438-ae1e-aa457067350d)\"" pod="gadget/gadget-td8w8" podUID="793080cd-5131-4438-ae1e-aa457067350d"
	Aug 19 17:55:49 addons-726932 kubelet[1486]: I0819 17:55:49.750538    1486 kubelet_pods.go:1007] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/nvidia-device-plugin-daemonset-7lck8" secret="" err="secret \"gcp-auth\" not found"
	Aug 19 17:55:58 addons-726932 kubelet[1486]: I0819 17:55:58.753328    1486 scope.go:117] "RemoveContainer" containerID="4347d1afada628f0d74b223af011567771bea8e538dec208e4c1d3b2630ae01d"
	Aug 19 17:55:58 addons-726932 kubelet[1486]: E0819 17:55:58.753557    1486 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"gadget\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=gadget pod=gadget-td8w8_gadget(793080cd-5131-4438-ae1e-aa457067350d)\"" pod="gadget/gadget-td8w8" podUID="793080cd-5131-4438-ae1e-aa457067350d"
	Aug 19 17:56:05 addons-726932 kubelet[1486]: I0819 17:56:05.750403    1486 kubelet_pods.go:1007] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-proxy-8b8b8" secret="" err="secret \"gcp-auth\" not found"
	Aug 19 17:56:13 addons-726932 kubelet[1486]: I0819 17:56:13.750188    1486 scope.go:117] "RemoveContainer" containerID="4347d1afada628f0d74b223af011567771bea8e538dec208e4c1d3b2630ae01d"
	Aug 19 17:56:13 addons-726932 kubelet[1486]: E0819 17:56:13.750830    1486 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"gadget\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=gadget pod=gadget-td8w8_gadget(793080cd-5131-4438-ae1e-aa457067350d)\"" pod="gadget/gadget-td8w8" podUID="793080cd-5131-4438-ae1e-aa457067350d"
	Aug 19 17:56:24 addons-726932 kubelet[1486]: I0819 17:56:24.750010    1486 scope.go:117] "RemoveContainer" containerID="4347d1afada628f0d74b223af011567771bea8e538dec208e4c1d3b2630ae01d"
	Aug 19 17:56:24 addons-726932 kubelet[1486]: E0819 17:56:24.750654    1486 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"gadget\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=gadget pod=gadget-td8w8_gadget(793080cd-5131-4438-ae1e-aa457067350d)\"" pod="gadget/gadget-td8w8" podUID="793080cd-5131-4438-ae1e-aa457067350d"
	Aug 19 17:56:25 addons-726932 kubelet[1486]: I0819 17:56:25.750609    1486 kubelet_pods.go:1007] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-6fb4cdfc84-z929z" secret="" err="secret \"gcp-auth\" not found"
	Aug 19 17:56:35 addons-726932 kubelet[1486]: I0819 17:56:35.750260    1486 scope.go:117] "RemoveContainer" containerID="4347d1afada628f0d74b223af011567771bea8e538dec208e4c1d3b2630ae01d"
	Aug 19 17:56:35 addons-726932 kubelet[1486]: E0819 17:56:35.751008    1486 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"gadget\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=gadget pod=gadget-td8w8_gadget(793080cd-5131-4438-ae1e-aa457067350d)\"" pod="gadget/gadget-td8w8" podUID="793080cd-5131-4438-ae1e-aa457067350d"
	Aug 19 17:56:46 addons-726932 kubelet[1486]: I0819 17:56:46.750561    1486 scope.go:117] "RemoveContainer" containerID="4347d1afada628f0d74b223af011567771bea8e538dec208e4c1d3b2630ae01d"
	Aug 19 17:56:46 addons-726932 kubelet[1486]: E0819 17:56:46.750808    1486 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"gadget\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=gadget pod=gadget-td8w8_gadget(793080cd-5131-4438-ae1e-aa457067350d)\"" pod="gadget/gadget-td8w8" podUID="793080cd-5131-4438-ae1e-aa457067350d"
	Aug 19 17:56:54 addons-726932 kubelet[1486]: I0819 17:56:54.751436    1486 kubelet_pods.go:1007] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/nvidia-device-plugin-daemonset-7lck8" secret="" err="secret \"gcp-auth\" not found"
	
	
	==> storage-provisioner [e9914672f9ed6ac6511eece42bf63b0d16d2ca9507c9b2136108bee337805988] <==
	I0819 17:51:01.054805       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0819 17:51:01.068928       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0819 17:51:01.068997       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0819 17:51:01.084107       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0819 17:51:01.084307       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_addons-726932_cd57dea7-4cb3-4c36-87bd-eeb550338e7a!
	I0819 17:51:01.084386       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"7a342e36-d912-4b02-b4ce-b3d41b9e63e9", APIVersion:"v1", ResourceVersion:"645", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' addons-726932_cd57dea7-4cb3-4c36-87bd-eeb550338e7a became leader
	I0819 17:51:01.185290       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_addons-726932_cd57dea7-4cb3-4c36-87bd-eeb550338e7a!
	

-- /stdout --
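
Side note on the kubelet excerpt above: the gadget container is cycling through CrashLoopBackOff, and the "back-off 2m40s" figure sits on kubelet's restart back-off ladder, which starts at 10s, doubles after each failed restart, and is capped at 5m. A quick sketch (an editor's illustration, not kubelet code) that prints that ladder:

	package main

	import (
		"fmt"
		"time"
	)

	func main() {
		const maxBackoff = 5 * time.Minute // kubelet's cap
		delay := 10 * time.Second          // kubelet's initial back-off
		for i := 1; delay < maxBackoff; i++ {
			fmt.Printf("failed restart %d -> wait %v\n", i, delay)
			delay *= 2
		}
		fmt.Printf("later restarts -> wait %v (cap)\n", maxBackoff)
	}

The fifth failed restart lands on 2m40s, which matches the repeated kubelet messages; one more failure would push the pod to the 5m cap.
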
helpers_test.go:254: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p addons-726932 -n addons-726932
helpers_test.go:261: (dbg) Run:  kubectl --context addons-726932 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: ingress-nginx-admission-create-4txpk ingress-nginx-admission-patch-ptzb4 test-job-nginx-0
helpers_test.go:274: ======> post-mortem[TestAddons/serial/Volcano]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context addons-726932 describe pod ingress-nginx-admission-create-4txpk ingress-nginx-admission-patch-ptzb4 test-job-nginx-0
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context addons-726932 describe pod ingress-nginx-admission-create-4txpk ingress-nginx-admission-patch-ptzb4 test-job-nginx-0: exit status 1 (89.690933ms)

** stderr ** 
	Error from server (NotFound): pods "ingress-nginx-admission-create-4txpk" not found
	Error from server (NotFound): pods "ingress-nginx-admission-patch-ptzb4" not found
	Error from server (NotFound): pods "test-job-nginx-0" not found

** /stderr **
helpers_test.go:279: kubectl --context addons-726932 describe pod ingress-nginx-admission-create-4txpk ingress-nginx-admission-patch-ptzb4 test-job-nginx-0: exit status 1
--- FAIL: TestAddons/serial/Volcano (199.91s)
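
Side note on the post-mortem above: the harness finds stuck pods with a field selector (status.phase!=Running) and then describes each one; the NotFound errors in stderr most likely mean those pods were cleaned up in the window between the two commands. The same query written against client-go instead of kubectl, as a minimal sketch assuming a standard kubeconfig (the context name is the one used throughout this run):

	package main

	import (
		"context"
		"fmt"
		"os"

		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		// Load the default kubeconfig and pin the context from this report.
		rules := clientcmd.NewDefaultClientConfigLoadingRules()
		overrides := &clientcmd.ConfigOverrides{CurrentContext: "addons-726932"}
		cfg, err := clientcmd.NewNonInteractiveDeferredLoadingClientConfig(rules, overrides).ClientConfig()
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
		client := kubernetes.NewForConfigOrDie(cfg)

		// Equivalent of: kubectl get po -A --field-selector=status.phase!=Running
		pods, err := client.CoreV1().Pods(metav1.NamespaceAll).List(context.Background(),
			metav1.ListOptions{FieldSelector: "status.phase!=Running"})
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
		for _, p := range pods.Items {
			fmt.Printf("%s/%s  phase=%s\n", p.Namespace, p.Name, p.Status.Phase)
		}
	}
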


Test pass (299/328)

Order passed test Duration
3 TestDownloadOnly/v1.20.0/json-events 10.18
4 TestDownloadOnly/v1.20.0/preload-exists 0
8 TestDownloadOnly/v1.20.0/LogsDuration 0.07
9 TestDownloadOnly/v1.20.0/DeleteAll 0.21
10 TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds 0.13
12 TestDownloadOnly/v1.31.0/json-events 5.42
13 TestDownloadOnly/v1.31.0/preload-exists 0
17 TestDownloadOnly/v1.31.0/LogsDuration 0.21
18 TestDownloadOnly/v1.31.0/DeleteAll 0.34
19 TestDownloadOnly/v1.31.0/DeleteAlwaysSucceeds 0.14
21 TestBinaryMirror 0.63
25 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.06
26 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.08
27 TestAddons/Setup 213.98
31 TestAddons/serial/GCPAuth/Namespaces 0.17
33 TestAddons/parallel/Registry 16.12
34 TestAddons/parallel/Ingress 19.64
35 TestAddons/parallel/InspektorGadget 11.88
36 TestAddons/parallel/MetricsServer 6.8
39 TestAddons/parallel/CSI 37.66
40 TestAddons/parallel/Headlamp 15.73
41 TestAddons/parallel/CloudSpanner 5.6
42 TestAddons/parallel/LocalPath 51.86
43 TestAddons/parallel/NvidiaDevicePlugin 6.57
44 TestAddons/parallel/Yakd 11.84
45 TestAddons/StoppedEnableDisable 12.24
46 TestCertOptions 37
47 TestCertExpiration 229.46
49 TestForceSystemdFlag 45.5
50 TestForceSystemdEnv 38.33
51 TestDockerEnvContainerd 45.89
56 TestErrorSpam/setup 31.23
57 TestErrorSpam/start 0.8
58 TestErrorSpam/status 1.03
59 TestErrorSpam/pause 1.78
60 TestErrorSpam/unpause 1.85
61 TestErrorSpam/stop 1.46
64 TestFunctional/serial/CopySyncFile 0
65 TestFunctional/serial/StartWithProxy 59.43
66 TestFunctional/serial/AuditLog 0
67 TestFunctional/serial/SoftStart 6.01
68 TestFunctional/serial/KubeContext 0.06
69 TestFunctional/serial/KubectlGetPods 0.09
72 TestFunctional/serial/CacheCmd/cache/add_remote 4.16
73 TestFunctional/serial/CacheCmd/cache/add_local 1.26
74 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.05
75 TestFunctional/serial/CacheCmd/cache/list 0.05
76 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.31
77 TestFunctional/serial/CacheCmd/cache/cache_reload 2.06
78 TestFunctional/serial/CacheCmd/cache/delete 0.13
79 TestFunctional/serial/MinikubeKubectlCmd 0.14
80 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.13
81 TestFunctional/serial/ExtraConfig 43.88
82 TestFunctional/serial/ComponentHealth 0.1
83 TestFunctional/serial/LogsCmd 1.71
84 TestFunctional/serial/LogsFileCmd 1.78
85 TestFunctional/serial/InvalidService 4.91
87 TestFunctional/parallel/ConfigCmd 0.44
88 TestFunctional/parallel/DashboardCmd 11.21
89 TestFunctional/parallel/DryRun 0.41
90 TestFunctional/parallel/InternationalLanguage 0.21
91 TestFunctional/parallel/StatusCmd 1.1
95 TestFunctional/parallel/ServiceCmdConnect 11.66
96 TestFunctional/parallel/AddonsCmd 0.27
97 TestFunctional/parallel/PersistentVolumeClaim 24.16
99 TestFunctional/parallel/SSHCmd 0.68
100 TestFunctional/parallel/CpCmd 2.02
102 TestFunctional/parallel/FileSync 0.33
103 TestFunctional/parallel/CertSync 2.13
107 TestFunctional/parallel/NodeLabels 0.12
109 TestFunctional/parallel/NonActiveRuntimeDisabled 0.76
111 TestFunctional/parallel/License 0.31
112 TestFunctional/parallel/Version/short 0.08
113 TestFunctional/parallel/Version/components 1.19
114 TestFunctional/parallel/ImageCommands/ImageListShort 0.24
115 TestFunctional/parallel/ImageCommands/ImageListTable 0.32
116 TestFunctional/parallel/ImageCommands/ImageListJson 0.28
117 TestFunctional/parallel/ImageCommands/ImageListYaml 0.25
118 TestFunctional/parallel/ImageCommands/ImageBuild 3.26
119 TestFunctional/parallel/ImageCommands/Setup 0.73
120 TestFunctional/parallel/UpdateContextCmd/no_changes 0.13
121 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.14
122 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.14
123 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 1.47
124 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 1.43
125 TestFunctional/parallel/ProfileCmd/profile_not_create 0.49
126 TestFunctional/parallel/ProfileCmd/profile_list 0.46
127 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 1.69
128 TestFunctional/parallel/ProfileCmd/profile_json_output 0.47
130 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.6
131 TestFunctional/parallel/ImageCommands/ImageSaveToFile 0.52
132 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0
134 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 9.44
135 TestFunctional/parallel/ImageCommands/ImageRemove 0.69
136 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 0.73
137 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 0.5
138 TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP 0.09
139 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0
143 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.11
144 TestFunctional/parallel/ServiceCmd/DeployApp 6.22
145 TestFunctional/parallel/MountCmd/any-port 7.45
146 TestFunctional/parallel/ServiceCmd/List 0.6
147 TestFunctional/parallel/ServiceCmd/JSONOutput 0.6
148 TestFunctional/parallel/ServiceCmd/HTTPS 0.44
149 TestFunctional/parallel/ServiceCmd/Format 0.39
150 TestFunctional/parallel/ServiceCmd/URL 0.43
151 TestFunctional/parallel/MountCmd/specific-port 2.35
152 TestFunctional/parallel/MountCmd/VerifyCleanup 2.7
153 TestFunctional/delete_echo-server_images 0.04
154 TestFunctional/delete_my-image_image 0.02
155 TestFunctional/delete_minikube_cached_images 0.01
159 TestMultiControlPlane/serial/StartCluster 112.84
160 TestMultiControlPlane/serial/DeployApp 31.51
161 TestMultiControlPlane/serial/PingHostFromPods 1.69
162 TestMultiControlPlane/serial/AddWorkerNode 22.46
163 TestMultiControlPlane/serial/NodeLabels 0.12
164 TestMultiControlPlane/serial/HAppyAfterClusterStart 0.75
165 TestMultiControlPlane/serial/CopyFile 19.11
166 TestMultiControlPlane/serial/StopSecondaryNode 12.91
167 TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop 0.57
168 TestMultiControlPlane/serial/RestartSecondaryNode 18.75
169 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart 0.77
170 TestMultiControlPlane/serial/RestartClusterKeepsNodes 142.05
171 TestMultiControlPlane/serial/DeleteSecondaryNode 10.66
172 TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete 0.58
173 TestMultiControlPlane/serial/StopCluster 36.07
174 TestMultiControlPlane/serial/RestartCluster 77.18
175 TestMultiControlPlane/serial/DegradedAfterClusterRestart 0.64
176 TestMultiControlPlane/serial/AddSecondaryNode 48.16
177 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd 0.8
181 TestJSONOutput/start/Command 61.54
182 TestJSONOutput/start/Audit 0
184 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
185 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
187 TestJSONOutput/pause/Command 0.76
188 TestJSONOutput/pause/Audit 0
190 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
191 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
193 TestJSONOutput/unpause/Command 0.68
194 TestJSONOutput/unpause/Audit 0
196 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
197 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
199 TestJSONOutput/stop/Command 5.77
200 TestJSONOutput/stop/Audit 0
202 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
203 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
204 TestErrorJSONOutput 0.22
206 TestKicCustomNetwork/create_custom_network 39.52
207 TestKicCustomNetwork/use_default_bridge_network 33.5
208 TestKicExistingNetwork 34.27
209 TestKicCustomSubnet 33.67
210 TestKicStaticIP 35.63
211 TestMainNoArgs 0.05
212 TestMinikubeProfile 68.57
215 TestMountStart/serial/StartWithMountFirst 6.57
216 TestMountStart/serial/VerifyMountFirst 0.27
217 TestMountStart/serial/StartWithMountSecond 9.11
218 TestMountStart/serial/VerifyMountSecond 0.26
219 TestMountStart/serial/DeleteFirst 1.59
220 TestMountStart/serial/VerifyMountPostDelete 0.27
221 TestMountStart/serial/Stop 1.2
222 TestMountStart/serial/RestartStopped 7.59
223 TestMountStart/serial/VerifyMountPostStop 0.26
226 TestMultiNode/serial/FreshStart2Nodes 77.29
227 TestMultiNode/serial/DeployApp2Nodes 17.46
228 TestMultiNode/serial/PingHostFrom2Pods 0.97
229 TestMultiNode/serial/AddNode 15.68
230 TestMultiNode/serial/MultiNodeLabels 0.09
231 TestMultiNode/serial/ProfileList 0.32
232 TestMultiNode/serial/CopyFile 10.01
233 TestMultiNode/serial/StopNode 2.26
234 TestMultiNode/serial/StartAfterStop 9.58
235 TestMultiNode/serial/RestartKeepsNodes 89.71
236 TestMultiNode/serial/DeleteNode 5.56
237 TestMultiNode/serial/StopMultiNode 24.02
238 TestMultiNode/serial/RestartMultiNode 52.39
239 TestMultiNode/serial/ValidateNameConflict 35.69
244 TestPreload 119.95
246 TestScheduledStopUnix 104.07
249 TestInsufficientStorage 13
250 TestRunningBinaryUpgrade 86.48
252 TestKubernetesUpgrade 356.61
253 TestMissingContainerUpgrade 118.54
255 TestPause/serial/Start 61.65
257 TestNoKubernetes/serial/StartNoK8sWithVersion 0.11
258 TestNoKubernetes/serial/StartWithK8s 40.95
259 TestNoKubernetes/serial/StartWithStopK8s 17.95
260 TestNoKubernetes/serial/Start 6.76
261 TestPause/serial/SecondStartNoReconfiguration 6.95
262 TestNoKubernetes/serial/VerifyK8sNotRunning 0.35
263 TestNoKubernetes/serial/ProfileList 1.21
264 TestNoKubernetes/serial/Stop 1.26
265 TestNoKubernetes/serial/StartNoArgs 7.12
266 TestPause/serial/Pause 1.07
267 TestPause/serial/VerifyStatus 0.44
268 TestPause/serial/Unpause 0.7
269 TestPause/serial/PauseAgain 0.86
270 TestPause/serial/DeletePaused 2.7
271 TestPause/serial/VerifyDeletedResources 0.45
275 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.36
280 TestNetworkPlugins/group/false 5.86
284 TestStoppedBinaryUpgrade/Setup 1.89
285 TestStoppedBinaryUpgrade/Upgrade 147.8
286 TestStoppedBinaryUpgrade/MinikubeLogs 1.2
294 TestNetworkPlugins/group/auto/Start 56.61
295 TestNetworkPlugins/group/auto/KubeletFlags 0.3
296 TestNetworkPlugins/group/auto/NetCatPod 9.38
297 TestNetworkPlugins/group/auto/DNS 0.18
298 TestNetworkPlugins/group/auto/Localhost 0.14
299 TestNetworkPlugins/group/auto/HairPin 0.15
300 TestNetworkPlugins/group/kindnet/Start 54.66
301 TestNetworkPlugins/group/kindnet/ControllerPod 6.01
302 TestNetworkPlugins/group/kindnet/KubeletFlags 0.33
303 TestNetworkPlugins/group/kindnet/NetCatPod 11.33
304 TestNetworkPlugins/group/kindnet/DNS 0.24
305 TestNetworkPlugins/group/kindnet/Localhost 0.24
306 TestNetworkPlugins/group/kindnet/HairPin 0.39
307 TestNetworkPlugins/group/calico/Start 67.54
308 TestNetworkPlugins/group/custom-flannel/Start 60.17
309 TestNetworkPlugins/group/calico/ControllerPod 6.01
310 TestNetworkPlugins/group/calico/KubeletFlags 0.29
311 TestNetworkPlugins/group/calico/NetCatPod 10.29
312 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.35
313 TestNetworkPlugins/group/custom-flannel/NetCatPod 10.29
314 TestNetworkPlugins/group/calico/DNS 0.2
315 TestNetworkPlugins/group/calico/Localhost 0.16
316 TestNetworkPlugins/group/calico/HairPin 0.15
317 TestNetworkPlugins/group/custom-flannel/DNS 0.19
318 TestNetworkPlugins/group/custom-flannel/Localhost 0.18
319 TestNetworkPlugins/group/custom-flannel/HairPin 0.16
320 TestNetworkPlugins/group/enable-default-cni/Start 78.24
321 TestNetworkPlugins/group/flannel/Start 59.21
322 TestNetworkPlugins/group/flannel/ControllerPod 6.01
323 TestNetworkPlugins/group/flannel/KubeletFlags 0.26
324 TestNetworkPlugins/group/flannel/NetCatPod 10.26
325 TestNetworkPlugins/group/flannel/DNS 0.2
326 TestNetworkPlugins/group/flannel/Localhost 0.16
327 TestNetworkPlugins/group/flannel/HairPin 0.19
328 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.35
329 TestNetworkPlugins/group/enable-default-cni/NetCatPod 10.27
330 TestNetworkPlugins/group/enable-default-cni/DNS 0.29
331 TestNetworkPlugins/group/enable-default-cni/Localhost 0.19
332 TestNetworkPlugins/group/enable-default-cni/HairPin 0.24
333 TestNetworkPlugins/group/bridge/Start 81.62
335 TestStartStop/group/old-k8s-version/serial/FirstStart 172.02
336 TestNetworkPlugins/group/bridge/KubeletFlags 0.36
337 TestNetworkPlugins/group/bridge/NetCatPod 8.35
338 TestNetworkPlugins/group/bridge/DNS 0.18
339 TestNetworkPlugins/group/bridge/Localhost 0.18
340 TestNetworkPlugins/group/bridge/HairPin 0.17
342 TestStartStop/group/no-preload/serial/FirstStart 70.85
343 TestStartStop/group/no-preload/serial/DeployApp 9.42
344 TestStartStop/group/old-k8s-version/serial/DeployApp 8.79
345 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 1.17
346 TestStartStop/group/no-preload/serial/Stop 12.29
347 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 1.23
348 TestStartStop/group/old-k8s-version/serial/Stop 12.04
349 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.17
350 TestStartStop/group/no-preload/serial/SecondStart 302.73
351 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.17
352 TestStartStop/group/old-k8s-version/serial/SecondStart 304.58
353 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 6.01
354 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 6.01
355 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 5.1
356 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 5.11
357 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.26
358 TestStartStop/group/no-preload/serial/Pause 3.11
359 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.26
360 TestStartStop/group/old-k8s-version/serial/Pause 3.86
362 TestStartStop/group/embed-certs/serial/FirstStart 57.1
364 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 59.19
365 TestStartStop/group/embed-certs/serial/DeployApp 9.34
366 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 1.11
367 TestStartStop/group/embed-certs/serial/Stop 12.09
368 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 9.43
369 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 1.08
370 TestStartStop/group/default-k8s-diff-port/serial/Stop 12.33
371 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.33
372 TestStartStop/group/embed-certs/serial/SecondStart 266.77
373 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.3
374 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 272
375 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 6.01
376 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 5.13
377 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.28
378 TestStartStop/group/embed-certs/serial/Pause 3.59
379 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 6.01
381 TestStartStop/group/newest-cni/serial/FirstStart 36.25
382 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 5.12
383 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.29
384 TestStartStop/group/default-k8s-diff-port/serial/Pause 4.11
385 TestStartStop/group/newest-cni/serial/DeployApp 0
386 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 1.28
387 TestStartStop/group/newest-cni/serial/Stop 1.22
388 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.17
389 TestStartStop/group/newest-cni/serial/SecondStart 15.84
390 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
391 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
392 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.27
393 TestStartStop/group/newest-cni/serial/Pause 2.96

TestDownloadOnly/v1.20.0/json-events (10.18s)

=== RUN   TestDownloadOnly/v1.20.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-arm64 start -o=json --download-only -p download-only-731537 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=containerd --driver=docker  --container-runtime=containerd
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-arm64 start -o=json --download-only -p download-only-731537 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=containerd --driver=docker  --container-runtime=containerd: (10.18087192s)
--- PASS: TestDownloadOnly/v1.20.0/json-events (10.18s)

TestDownloadOnly/v1.20.0/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.20.0/preload-exists
--- PASS: TestDownloadOnly/v1.20.0/preload-exists (0.00s)

TestDownloadOnly/v1.20.0/LogsDuration (0.07s)

=== RUN   TestDownloadOnly/v1.20.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-arm64 logs -p download-only-731537
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-arm64 logs -p download-only-731537: exit status 85 (72.228147ms)

-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-731537 | jenkins | v1.33.1 | 19 Aug 24 17:49 UTC |          |
	|         | -p download-only-731537        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |          |
	|         | --container-runtime=containerd |                      |         |         |                     |          |
	|         | --driver=docker                |                      |         |         |                     |          |
	|         | --container-runtime=containerd |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	
	==> Last Start <==
	Log file created at: 2024/08/19 17:49:44
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.22.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0819 17:49:44.184086  300025 out.go:345] Setting OutFile to fd 1 ...
	I0819 17:49:44.184381  300025 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 17:49:44.184414  300025 out.go:358] Setting ErrFile to fd 2...
	I0819 17:49:44.184470  300025 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 17:49:44.184820  300025 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19478-294620/.minikube/bin
	W0819 17:49:44.185017  300025 root.go:314] Error reading config file at /home/jenkins/minikube-integration/19478-294620/.minikube/config/config.json: open /home/jenkins/minikube-integration/19478-294620/.minikube/config/config.json: no such file or directory
	I0819 17:49:44.185602  300025 out.go:352] Setting JSON to true
	I0819 17:49:44.186674  300025 start.go:129] hostinfo: {"hostname":"ip-172-31-31-251","uptime":5525,"bootTime":1724084260,"procs":168,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1067-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I0819 17:49:44.186800  300025 start.go:139] virtualization:  
	I0819 17:49:44.190618  300025 out.go:97] [download-only-731537] minikube v1.33.1 on Ubuntu 20.04 (arm64)
	I0819 17:49:44.190905  300025 notify.go:220] Checking for updates...
	W0819 17:49:44.190863  300025 preload.go:293] Failed to list preload files: open /home/jenkins/minikube-integration/19478-294620/.minikube/cache/preloaded-tarball: no such file or directory
	I0819 17:49:44.193234  300025 out.go:169] MINIKUBE_LOCATION=19478
	I0819 17:49:44.196172  300025 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0819 17:49:44.198436  300025 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/19478-294620/kubeconfig
	I0819 17:49:44.200570  300025 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/19478-294620/.minikube
	I0819 17:49:44.202506  300025 out.go:169] MINIKUBE_BIN=out/minikube-linux-arm64
	W0819 17:49:44.207023  300025 out.go:321] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0819 17:49:44.207305  300025 driver.go:392] Setting default libvirt URI to qemu:///system
	I0819 17:49:44.235298  300025 docker.go:123] docker version: linux-27.1.2:Docker Engine - Community
	I0819 17:49:44.235407  300025 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0819 17:49:44.301864  300025 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:29 OomKillDisable:true NGoroutines:52 SystemTime:2024-08-19 17:49:44.2922035 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1067-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214904832 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:27.1.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:8fc6bcff51318944179630522a095cc9dbf9f353 Expected:8fc6bcff51318944179630522a095cc9dbf9f353} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.1]] Warnings:<nil>}}
	I0819 17:49:44.301969  300025 docker.go:307] overlay module found
	I0819 17:49:44.304445  300025 out.go:97] Using the docker driver based on user configuration
	I0819 17:49:44.304483  300025 start.go:297] selected driver: docker
	I0819 17:49:44.304491  300025 start.go:901] validating driver "docker" against <nil>
	I0819 17:49:44.304607  300025 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0819 17:49:44.361903  300025 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:29 OomKillDisable:true NGoroutines:52 SystemTime:2024-08-19 17:49:44.35294815 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1067-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214904832 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:27.1.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:8fc6bcff51318944179630522a095cc9dbf9f353 Expected:8fc6bcff51318944179630522a095cc9dbf9f353} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.1]] Warnings:<nil>}}
	I0819 17:49:44.362069  300025 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0819 17:49:44.362347  300025 start_flags.go:393] Using suggested 2200MB memory alloc based on sys=7834MB, container=7834MB
	I0819 17:49:44.362511  300025 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0819 17:49:44.364727  300025 out.go:169] Using Docker driver with root privileges
	I0819 17:49:44.366699  300025 cni.go:84] Creating CNI manager for ""
	I0819 17:49:44.366717  300025 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I0819 17:49:44.366726  300025 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0819 17:49:44.366806  300025 start.go:340] cluster config:
	{Name:download-only-731537 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724062045-19478@sha256:18a6788f22059eb28b337d2ac1f60d157ba1f4188844194d9df40beae3c7e41b Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:download-only-731537 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0819 17:49:44.368958  300025 out.go:97] Starting "download-only-731537" primary control-plane node in "download-only-731537" cluster
	I0819 17:49:44.368979  300025 cache.go:121] Beginning downloading kic base image for docker with containerd
	I0819 17:49:44.371103  300025 out.go:97] Pulling base image v0.0.44-1724062045-19478 ...
	I0819 17:49:44.371128  300025 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime containerd
	I0819 17:49:44.371293  300025 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724062045-19478@sha256:18a6788f22059eb28b337d2ac1f60d157ba1f4188844194d9df40beae3c7e41b in local docker daemon
	I0819 17:49:44.386290  300025 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724062045-19478@sha256:18a6788f22059eb28b337d2ac1f60d157ba1f4188844194d9df40beae3c7e41b to local cache
	I0819 17:49:44.386940  300025 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724062045-19478@sha256:18a6788f22059eb28b337d2ac1f60d157ba1f4188844194d9df40beae3c7e41b in local cache directory
	I0819 17:49:44.387079  300025 image.go:148] Writing gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724062045-19478@sha256:18a6788f22059eb28b337d2ac1f60d157ba1f4188844194d9df40beae3c7e41b to local cache
	I0819 17:49:44.424766  300025 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-containerd-overlay2-arm64.tar.lz4
	I0819 17:49:44.424806  300025 cache.go:56] Caching tarball of preloaded images
	I0819 17:49:44.424977  300025 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime containerd
	I0819 17:49:44.427649  300025 out.go:97] Downloading Kubernetes v1.20.0 preload ...
	I0819 17:49:44.427678  300025 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.20.0-containerd-overlay2-arm64.tar.lz4 ...
	I0819 17:49:44.519444  300025 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-containerd-overlay2-arm64.tar.lz4?checksum=md5:7e3d48ccb9f143791669d02e14ce1643 -> /home/jenkins/minikube-integration/19478-294620/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-containerd-overlay2-arm64.tar.lz4
	I0819 17:49:48.000560  300025 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724062045-19478@sha256:18a6788f22059eb28b337d2ac1f60d157ba1f4188844194d9df40beae3c7e41b as a tarball
	I0819 17:49:49.117793  300025 preload.go:247] saving checksum for preloaded-images-k8s-v18-v1.20.0-containerd-overlay2-arm64.tar.lz4 ...
	I0819 17:49:49.117903  300025 preload.go:254] verifying checksum of /home/jenkins/minikube-integration/19478-294620/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-containerd-overlay2-arm64.tar.lz4 ...
	I0819 17:49:50.258549  300025 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on containerd
	I0819 17:49:50.258932  300025 profile.go:143] Saving config to /home/jenkins/minikube-integration/19478-294620/.minikube/profiles/download-only-731537/config.json ...
	I0819 17:49:50.258969  300025 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19478-294620/.minikube/profiles/download-only-731537/config.json: {Name:mka82c8fca10b36efdfc5db356d83d84ffb34190 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 17:49:50.259173  300025 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime containerd
	I0819 17:49:50.259363  300025 download.go:107] Downloading: https://dl.k8s.io/release/v1.20.0/bin/linux/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/linux/arm64/kubectl.sha256 -> /home/jenkins/minikube-integration/19478-294620/.minikube/cache/linux/arm64/v1.20.0/kubectl
	
	
	* The control-plane node download-only-731537 host does not exist
	  To start a cluster, run: "minikube start -p download-only-731537"

-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.20.0/LogsDuration (0.07s)
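
Side note on the "Last Start" log above: the preload tarball is fetched with an md5 digest appended to the URL (?checksum=md5:7e3d48ccb9f143791669d02e14ce1643) and verified before the cached copy is trusted, which is what the "getting/saving/verifying checksum" lines trace. A standalone sketch of that verification step (not minikube's implementation; the file name and digest are the ones shown in the log):

	package main

	import (
		"crypto/md5"
		"encoding/hex"
		"fmt"
		"io"
		"os"
	)

	// verifyMD5 streams the file through an md5 hash and compares the result
	// to a hex-encoded digest.
	func verifyMD5(path, wantHex string) error {
		f, err := os.Open(path)
		if err != nil {
			return err
		}
		defer f.Close()
		h := md5.New()
		if _, err := io.Copy(h, f); err != nil {
			return err
		}
		if got := hex.EncodeToString(h.Sum(nil)); got != wantHex {
			return fmt.Errorf("checksum mismatch: got %s, want %s", got, wantHex)
		}
		return nil
	}

	func main() {
		if err := verifyMD5("preloaded-images-k8s-v18-v1.20.0-containerd-overlay2-arm64.tar.lz4",
			"7e3d48ccb9f143791669d02e14ce1643"); err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
		fmt.Println("preload checksum OK")
	}

A failed comparison here is the kind of condition that would force a fresh download instead of trusting a corrupt cache.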

TestDownloadOnly/v1.20.0/DeleteAll (0.21s)

=== RUN   TestDownloadOnly/v1.20.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-arm64 delete --all
--- PASS: TestDownloadOnly/v1.20.0/DeleteAll (0.21s)

TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.13s)

=== RUN   TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-arm64 delete -p download-only-731537
--- PASS: TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.13s)

TestDownloadOnly/v1.31.0/json-events (5.42s)

=== RUN   TestDownloadOnly/v1.31.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-arm64 start -o=json --download-only -p download-only-221522 --force --alsologtostderr --kubernetes-version=v1.31.0 --container-runtime=containerd --driver=docker  --container-runtime=containerd
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-arm64 start -o=json --download-only -p download-only-221522 --force --alsologtostderr --kubernetes-version=v1.31.0 --container-runtime=containerd --driver=docker  --container-runtime=containerd: (5.422262335s)
--- PASS: TestDownloadOnly/v1.31.0/json-events (5.42s)

TestDownloadOnly/v1.31.0/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.31.0/preload-exists
--- PASS: TestDownloadOnly/v1.31.0/preload-exists (0.00s)

TestDownloadOnly/v1.31.0/LogsDuration (0.21s)

=== RUN   TestDownloadOnly/v1.31.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-arm64 logs -p download-only-221522
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-arm64 logs -p download-only-221522: exit status 85 (207.349888ms)

-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only        | download-only-731537 | jenkins | v1.33.1 | 19 Aug 24 17:49 UTC |                     |
	|         | -p download-only-731537        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |                     |
	|         | --container-runtime=containerd |                      |         |         |                     |                     |
	|         | --driver=docker                |                      |         |         |                     |                     |
	|         | --container-runtime=containerd |                      |         |         |                     |                     |
	| delete  | --all                          | minikube             | jenkins | v1.33.1 | 19 Aug 24 17:49 UTC | 19 Aug 24 17:49 UTC |
	| delete  | -p download-only-731537        | download-only-731537 | jenkins | v1.33.1 | 19 Aug 24 17:49 UTC | 19 Aug 24 17:49 UTC |
	| start   | -o=json --download-only        | download-only-221522 | jenkins | v1.33.1 | 19 Aug 24 17:49 UTC |                     |
	|         | -p download-only-221522        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0   |                      |         |         |                     |                     |
	|         | --container-runtime=containerd |                      |         |         |                     |                     |
	|         | --driver=docker                |                      |         |         |                     |                     |
	|         | --container-runtime=containerd |                      |         |         |                     |                     |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/08/19 17:49:54
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.22.5 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0819 17:49:54.775343  300233 out.go:345] Setting OutFile to fd 1 ...
	I0819 17:49:54.775481  300233 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 17:49:54.775493  300233 out.go:358] Setting ErrFile to fd 2...
	I0819 17:49:54.775498  300233 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 17:49:54.775728  300233 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19478-294620/.minikube/bin
	I0819 17:49:54.776140  300233 out.go:352] Setting JSON to true
	I0819 17:49:54.777014  300233 start.go:129] hostinfo: {"hostname":"ip-172-31-31-251","uptime":5535,"bootTime":1724084260,"procs":168,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1067-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I0819 17:49:54.777085  300233 start.go:139] virtualization:  
	I0819 17:49:54.779550  300233 out.go:97] [download-only-221522] minikube v1.33.1 on Ubuntu 20.04 (arm64)
	I0819 17:49:54.779846  300233 notify.go:220] Checking for updates...
	I0819 17:49:54.782055  300233 out.go:169] MINIKUBE_LOCATION=19478
	I0819 17:49:54.783949  300233 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0819 17:49:54.785983  300233 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/19478-294620/kubeconfig
	I0819 17:49:54.788978  300233 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/19478-294620/.minikube
	I0819 17:49:54.791458  300233 out.go:169] MINIKUBE_BIN=out/minikube-linux-arm64
	W0819 17:49:54.796257  300233 out.go:321] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0819 17:49:54.796547  300233 driver.go:392] Setting default libvirt URI to qemu:///system
	I0819 17:49:54.822423  300233 docker.go:123] docker version: linux-27.1.2:Docker Engine - Community
	I0819 17:49:54.822527  300233 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0819 17:49:54.878421  300233 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:29 OomKillDisable:true NGoroutines:44 SystemTime:2024-08-19 17:49:54.869228182 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1067-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214904832 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:27.1.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:8fc6bcff51318944179630522a095cc9dbf9f353 Expected:8fc6bcff51318944179630522a095cc9dbf9f353} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.1]] Warnings:<nil>}}
	I0819 17:49:54.878532  300233 docker.go:307] overlay module found
	I0819 17:49:54.880529  300233 out.go:97] Using the docker driver based on user configuration
	I0819 17:49:54.880555  300233 start.go:297] selected driver: docker
	I0819 17:49:54.880562  300233 start.go:901] validating driver "docker" against <nil>
	I0819 17:49:54.880675  300233 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0819 17:49:54.934535  300233 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:29 OomKillDisable:true NGoroutines:44 SystemTime:2024-08-19 17:49:54.925788233 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1067-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214904832 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:27.1.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:8fc6bcff51318944179630522a095cc9dbf9f353 Expected:8fc6bcff51318944179630522a095cc9dbf9f353} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.1]] Warnings:<nil>}}
	I0819 17:49:54.934698  300233 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0819 17:49:54.935034  300233 start_flags.go:393] Using suggested 2200MB memory alloc based on sys=7834MB, container=7834MB
	I0819 17:49:54.935197  300233 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0819 17:49:54.937813  300233 out.go:169] Using Docker driver with root privileges
	I0819 17:49:54.940027  300233 cni.go:84] Creating CNI manager for ""
	I0819 17:49:54.940053  300233 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I0819 17:49:54.940079  300233 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0819 17:49:54.940163  300233 start.go:340] cluster config:
	{Name:download-only-221522 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724062045-19478@sha256:18a6788f22059eb28b337d2ac1f60d157ba1f4188844194d9df40beae3c7e41b Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:download-only-221522 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0819 17:49:54.942569  300233 out.go:97] Starting "download-only-221522" primary control-plane node in "download-only-221522" cluster
	I0819 17:49:54.942608  300233 cache.go:121] Beginning downloading kic base image for docker with containerd
	I0819 17:49:54.944621  300233 out.go:97] Pulling base image v0.0.44-1724062045-19478 ...
	I0819 17:49:54.944652  300233 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime containerd
	I0819 17:49:54.944706  300233 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724062045-19478@sha256:18a6788f22059eb28b337d2ac1f60d157ba1f4188844194d9df40beae3c7e41b in local docker daemon
	I0819 17:49:54.959506  300233 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724062045-19478@sha256:18a6788f22059eb28b337d2ac1f60d157ba1f4188844194d9df40beae3c7e41b to local cache
	I0819 17:49:54.959628  300233 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724062045-19478@sha256:18a6788f22059eb28b337d2ac1f60d157ba1f4188844194d9df40beae3c7e41b in local cache directory
	I0819 17:49:54.959657  300233 image.go:66] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724062045-19478@sha256:18a6788f22059eb28b337d2ac1f60d157ba1f4188844194d9df40beae3c7e41b in local cache directory, skipping pull
	I0819 17:49:54.959662  300233 image.go:135] gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724062045-19478@sha256:18a6788f22059eb28b337d2ac1f60d157ba1f4188844194d9df40beae3c7e41b exists in cache, skipping pull
	I0819 17:49:54.959670  300233 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724062045-19478@sha256:18a6788f22059eb28b337d2ac1f60d157ba1f4188844194d9df40beae3c7e41b as a tarball
	I0819 17:49:55.001113  300233 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.31.0/preloaded-images-k8s-v18-v1.31.0-containerd-overlay2-arm64.tar.lz4
	I0819 17:49:55.001152  300233 cache.go:56] Caching tarball of preloaded images
	I0819 17:49:55.001323  300233 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime containerd
	I0819 17:49:55.003933  300233 out.go:97] Downloading Kubernetes v1.31.0 preload ...
	I0819 17:49:55.003959  300233 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.31.0-containerd-overlay2-arm64.tar.lz4 ...
	I0819 17:49:55.077293  300233 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.31.0/preloaded-images-k8s-v18-v1.31.0-containerd-overlay2-arm64.tar.lz4?checksum=md5:ea65ad5fd42227e06b9323ff45647208 -> /home/jenkins/minikube-integration/19478-294620/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-containerd-overlay2-arm64.tar.lz4
	I0819 17:49:58.448953  300233 preload.go:247] saving checksum for preloaded-images-k8s-v18-v1.31.0-containerd-overlay2-arm64.tar.lz4 ...
	I0819 17:49:58.449059  300233 preload.go:254] verifying checksum of /home/jenkins/minikube-integration/19478-294620/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-containerd-overlay2-arm64.tar.lz4 ...
	I0819 17:49:59.303449  300233 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0 on containerd
	I0819 17:49:59.303819  300233 profile.go:143] Saving config to /home/jenkins/minikube-integration/19478-294620/.minikube/profiles/download-only-221522/config.json ...
	I0819 17:49:59.303854  300233 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19478-294620/.minikube/profiles/download-only-221522/config.json: {Name:mk2024d72c051314a3ae86afc56995f0e2929793 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0819 17:49:59.304523  300233 preload.go:131] Checking if preload exists for k8s version v1.31.0 and runtime containerd
	I0819 17:49:59.304684  300233 download.go:107] Downloading: https://dl.k8s.io/release/v1.31.0/bin/linux/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.31.0/bin/linux/arm64/kubectl.sha256 -> /home/jenkins/minikube-integration/19478-294620/.minikube/cache/linux/arm64/v1.31.0/kubectl
	
	
	* The control-plane node download-only-221522 host does not exist
	  To start a cluster, run: "minikube start -p download-only-221522"

-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.31.0/LogsDuration (0.21s)
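
The download-only path above can be replayed by hand. A minimal sketch, assuming the docker driver; the profile name download-demo is illustrative, and per aaa_download_only_test.go:185 above, "minikube logs" exits 85 here because no host is ever created in this mode:

	out/minikube-linux-arm64 start --download-only -p download-demo --driver=docker --container-runtime=containerd
	out/minikube-linux-arm64 logs -p download-demo    # expected to fail: the host does not exist
	out/minikube-linux-arm64 delete -p download-demo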

TestDownloadOnly/v1.31.0/DeleteAll (0.34s)
=== RUN   TestDownloadOnly/v1.31.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-arm64 delete --all
--- PASS: TestDownloadOnly/v1.31.0/DeleteAll (0.34s)

TestDownloadOnly/v1.31.0/DeleteAlwaysSucceeds (0.14s)
=== RUN   TestDownloadOnly/v1.31.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-arm64 delete -p download-only-221522
--- PASS: TestDownloadOnly/v1.31.0/DeleteAlwaysSucceeds (0.14s)

TestBinaryMirror (0.63s)
=== RUN   TestBinaryMirror
aaa_download_only_test.go:314: (dbg) Run:  out/minikube-linux-arm64 start --download-only -p binary-mirror-936896 --alsologtostderr --binary-mirror http://127.0.0.1:38531 --driver=docker  --container-runtime=containerd
helpers_test.go:175: Cleaning up "binary-mirror-936896" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p binary-mirror-936896
--- PASS: TestBinaryMirror (0.63s)
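
A sketch of the same check outside the harness, assuming something is already serving the kubectl/kubelet/kubeadm binaries on 127.0.0.1:38531 (the port this run happened to pick); the profile name mirror-demo is illustrative:

	out/minikube-linux-arm64 start --download-only -p mirror-demo \
	  --binary-mirror http://127.0.0.1:38531 --driver=docker --container-runtime=containerd
	out/minikube-linux-arm64 delete -p mirror-demo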

TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.06s)
=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:1037: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p addons-726932
addons_test.go:1037: (dbg) Non-zero exit: out/minikube-linux-arm64 addons enable dashboard -p addons-726932: exit status 85 (59.00509ms)

-- stdout --
	* Profile "addons-726932" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-726932"

-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.06s)

TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.08s)
=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:1048: (dbg) Run:  out/minikube-linux-arm64 addons disable dashboard -p addons-726932
addons_test.go:1048: (dbg) Non-zero exit: out/minikube-linux-arm64 addons disable dashboard -p addons-726932: exit status 85 (80.278253ms)

-- stdout --
	* Profile "addons-726932" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-726932"

-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.08s)

TestAddons/Setup (213.98s)
=== RUN   TestAddons/Setup
addons_test.go:110: (dbg) Run:  out/minikube-linux-arm64 start -p addons-726932 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --addons=volcano --driver=docker  --container-runtime=containerd --addons=ingress --addons=ingress-dns
addons_test.go:110: (dbg) Done: out/minikube-linux-arm64 start -p addons-726932 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --addons=volcano --driver=docker  --container-runtime=containerd --addons=ingress --addons=ingress-dns: (3m33.981333886s)
--- PASS: TestAddons/Setup (213.98s)
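
The add-ons stacked onto this start command can also be toggled after the cluster is up; a sketch against the profile created above (addon names as in the flags):

	out/minikube-linux-arm64 addons list -p addons-726932              # enabled/disabled state per addon
	out/minikube-linux-arm64 addons enable metrics-server -p addons-726932
	out/minikube-linux-arm64 addons disable cloud-spanner -p addons-726932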

TestAddons/serial/GCPAuth/Namespaces (0.17s)
=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:656: (dbg) Run:  kubectl --context addons-726932 create ns new-namespace
addons_test.go:670: (dbg) Run:  kubectl --context addons-726932 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.17s)

TestAddons/parallel/Registry (16.12s)
=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

=== CONT  TestAddons/parallel/Registry
addons_test.go:332: registry stabilized in 2.473396ms
addons_test.go:334: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-6fb4cdfc84-z929z" [76e69bc0-b08f-45e6-8d0f-01c97394c905] Running
addons_test.go:334: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 6.003645018s
addons_test.go:337: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-proxy-8b8b8" [28afb619-68cb-472a-86d3-b990fe68326e] Running
addons_test.go:337: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 6.00801954s
addons_test.go:342: (dbg) Run:  kubectl --context addons-726932 delete po -l run=registry-test --now
addons_test.go:347: (dbg) Run:  kubectl --context addons-726932 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:347: (dbg) Done: kubectl --context addons-726932 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (3.016055223s)
addons_test.go:361: (dbg) Run:  out/minikube-linux-arm64 -p addons-726932 ip
2024/08/19 17:57:30 [DEBUG] GET http://192.168.49.2:5000
addons_test.go:390: (dbg) Run:  out/minikube-linux-arm64 -p addons-726932 addons disable registry --alsologtostderr -v=1
--- PASS: TestAddons/parallel/Registry (16.12s)
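
The registry check reduces to two reachability probes, one from inside the cluster and one from the host; a sketch using the same commands the test issues:

	kubectl --context addons-726932 run registry-test --rm --restart=Never -it \
	  --image=gcr.io/k8s-minikube/busybox -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
	curl -s "http://$(out/minikube-linux-arm64 -p addons-726932 ip):5000"    # registry-proxy via the node IP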

TestAddons/parallel/Ingress (19.64s)
=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

=== CONT  TestAddons/parallel/Ingress
addons_test.go:209: (dbg) Run:  kubectl --context addons-726932 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:234: (dbg) Run:  kubectl --context addons-726932 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:247: (dbg) Run:  kubectl --context addons-726932 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [6a3c00a1-0266-4d9b-9fba-107bc9346ed7] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [6a3c00a1-0266-4d9b-9fba-107bc9346ed7] Running
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 9.004164046s
addons_test.go:264: (dbg) Run:  out/minikube-linux-arm64 -p addons-726932 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:288: (dbg) Run:  kubectl --context addons-726932 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:293: (dbg) Run:  out/minikube-linux-arm64 -p addons-726932 ip
addons_test.go:299: (dbg) Run:  nslookup hello-john.test 192.168.49.2
addons_test.go:308: (dbg) Run:  out/minikube-linux-arm64 -p addons-726932 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:313: (dbg) Run:  out/minikube-linux-arm64 -p addons-726932 addons disable ingress --alsologtostderr -v=1
addons_test.go:313: (dbg) Done: out/minikube-linux-arm64 -p addons-726932 addons disable ingress --alsologtostderr -v=1: (7.92116388s)
--- PASS: TestAddons/parallel/Ingress (19.64s)
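
The two ingress probes by hand (the testdata manifests create an nginx pod, its Service, and an Ingress for nginx.example.com, plus an ingress-dns example zone):

	out/minikube-linux-arm64 -p addons-726932 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
	nslookup hello-john.test "$(out/minikube-linux-arm64 -p addons-726932 ip)"    # served by ingress-dns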

TestAddons/parallel/InspektorGadget (11.88s)
=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:848: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:344: "gadget-td8w8" [793080cd-5131-4438-ae1e-aa457067350d] Running / Ready:ContainersNotReady (containers with unready status: [gadget]) / ContainersReady:ContainersNotReady (containers with unready status: [gadget])
addons_test.go:848: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 6.003847498s
addons_test.go:851: (dbg) Run:  out/minikube-linux-arm64 addons disable inspektor-gadget -p addons-726932
addons_test.go:851: (dbg) Done: out/minikube-linux-arm64 addons disable inspektor-gadget -p addons-726932: (5.879689757s)
--- PASS: TestAddons/parallel/InspektorGadget (11.88s)

TestAddons/parallel/MetricsServer (6.8s)
=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:409: metrics-server stabilized in 2.832172ms
addons_test.go:411: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:344: "metrics-server-8988944d9-s6ptq" [abb7007b-fc7a-424e-85c0-81b157a4003c] Running
addons_test.go:411: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 6.00399851s
addons_test.go:417: (dbg) Run:  kubectl --context addons-726932 top pods -n kube-system
addons_test.go:434: (dbg) Run:  out/minikube-linux-arm64 -p addons-726932 addons disable metrics-server --alsologtostderr -v=1
--- PASS: TestAddons/parallel/MetricsServer (6.80s)
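
Once metrics-server reports healthy, resource usage is queryable through kubectl; figures may take a scrape interval or two to appear. The "top nodes" line is an extra check here, not part of the test:

	kubectl --context addons-726932 top pods -n kube-system
	kubectl --context addons-726932 top nodes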

TestAddons/parallel/CSI (37.66s)
=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

=== CONT  TestAddons/parallel/CSI
addons_test.go:567: csi-hostpath-driver pods stabilized in 6.042805ms
addons_test.go:570: (dbg) Run:  kubectl --context addons-726932 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:575: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-726932 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-726932 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-726932 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-726932 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-726932 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-726932 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-726932 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:580: (dbg) Run:  kubectl --context addons-726932 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:585: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:344: "task-pv-pod" [1fbd6db9-0064-43f8-b097-626e74cb7354] Pending
helpers_test.go:344: "task-pv-pod" [1fbd6db9-0064-43f8-b097-626e74cb7354] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod" [1fbd6db9-0064-43f8-b097-626e74cb7354] Running
addons_test.go:585: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 9.003848396s
addons_test.go:590: (dbg) Run:  kubectl --context addons-726932 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:595: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:419: (dbg) Run:  kubectl --context addons-726932 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Run:  kubectl --context addons-726932 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:600: (dbg) Run:  kubectl --context addons-726932 delete pod task-pv-pod
addons_test.go:606: (dbg) Run:  kubectl --context addons-726932 delete pvc hpvc
addons_test.go:612: (dbg) Run:  kubectl --context addons-726932 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:617: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-726932 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-726932 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-726932 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-726932 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-726932 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:622: (dbg) Run:  kubectl --context addons-726932 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:627: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:344: "task-pv-pod-restore" [48d6eef3-8992-4690-b8b7-52b1e3abcd30] Pending
helpers_test.go:344: "task-pv-pod-restore" [48d6eef3-8992-4690-b8b7-52b1e3abcd30] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod-restore" [48d6eef3-8992-4690-b8b7-52b1e3abcd30] Running
addons_test.go:627: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 7.004361185s
addons_test.go:632: (dbg) Run:  kubectl --context addons-726932 delete pod task-pv-pod-restore
addons_test.go:636: (dbg) Run:  kubectl --context addons-726932 delete pvc hpvc-restore
addons_test.go:640: (dbg) Run:  kubectl --context addons-726932 delete volumesnapshot new-snapshot-demo
addons_test.go:644: (dbg) Run:  out/minikube-linux-arm64 -p addons-726932 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:644: (dbg) Done: out/minikube-linux-arm64 -p addons-726932 addons disable csi-hostpath-driver --alsologtostderr -v=1: (6.830155002s)
addons_test.go:648: (dbg) Run:  out/minikube-linux-arm64 -p addons-726932 addons disable volumesnapshots --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CSI (37.66s)
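
The snapshot/restore round trip above, condensed; the manifests are the same testdata files, and the sketch assumes addons-726932 is the current kubectl context:

	kubectl create -f testdata/csi-hostpath-driver/pvc.yaml              # PVC "hpvc"
	kubectl create -f testdata/csi-hostpath-driver/pv-pod.yaml           # pod that writes into it
	kubectl create -f testdata/csi-hostpath-driver/snapshot.yaml         # VolumeSnapshot "new-snapshot-demo"
	kubectl create -f testdata/csi-hostpath-driver/pvc-restore.yaml      # new PVC cloned from the snapshot
	kubectl create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml   # pod over the restored PVC
	kubectl get volumesnapshot new-snapshot-demo -o jsonpath='{.status.readyToUse}'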

TestAddons/parallel/Headlamp (15.73s)
=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

=== CONT  TestAddons/parallel/Headlamp
addons_test.go:830: (dbg) Run:  out/minikube-linux-arm64 addons enable headlamp -p addons-726932 --alsologtostderr -v=1
addons_test.go:835: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:344: "headlamp-57fb76fcdb-nj7lt" [0388f9fa-7520-4005-8003-b807755c68c1] Pending
helpers_test.go:344: "headlamp-57fb76fcdb-nj7lt" [0388f9fa-7520-4005-8003-b807755c68c1] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:344: "headlamp-57fb76fcdb-nj7lt" [0388f9fa-7520-4005-8003-b807755c68c1] Running
addons_test.go:835: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 9.003761338s
addons_test.go:839: (dbg) Run:  out/minikube-linux-arm64 -p addons-726932 addons disable headlamp --alsologtostderr -v=1
addons_test.go:839: (dbg) Done: out/minikube-linux-arm64 -p addons-726932 addons disable headlamp --alsologtostderr -v=1: (5.766165602s)
--- PASS: TestAddons/parallel/Headlamp (15.73s)

TestAddons/parallel/CloudSpanner (5.6s)
=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:867: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:344: "cloud-spanner-emulator-c4bc9b5f8-j4gvk" [437417de-03ae-45a7-9ca4-a28bc3357134] Running
addons_test.go:867: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 5.004498902s
addons_test.go:870: (dbg) Run:  out/minikube-linux-arm64 addons disable cloud-spanner -p addons-726932
--- PASS: TestAddons/parallel/CloudSpanner (5.60s)

TestAddons/parallel/LocalPath (51.86s)
=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath

=== CONT  TestAddons/parallel/LocalPath
addons_test.go:982: (dbg) Run:  kubectl --context addons-726932 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:988: (dbg) Run:  kubectl --context addons-726932 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:992: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-726932 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-726932 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-726932 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-726932 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-726932 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:995: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:344: "test-local-path" [80a2f4d9-8b37-4c07-917b-708a3cd3f0dd] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "test-local-path" [80a2f4d9-8b37-4c07-917b-708a3cd3f0dd] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "test-local-path" [80a2f4d9-8b37-4c07-917b-708a3cd3f0dd] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:995: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 3.003729502s
addons_test.go:1000: (dbg) Run:  kubectl --context addons-726932 get pvc test-pvc -o=json
addons_test.go:1009: (dbg) Run:  out/minikube-linux-arm64 -p addons-726932 ssh "cat /opt/local-path-provisioner/pvc-e1cbe39d-5255-48cb-b63b-0acb1b8a8a32_default_test-pvc/file1"
addons_test.go:1021: (dbg) Run:  kubectl --context addons-726932 delete pod test-local-path
addons_test.go:1025: (dbg) Run:  kubectl --context addons-726932 delete pvc test-pvc
addons_test.go:1029: (dbg) Run:  out/minikube-linux-arm64 -p addons-726932 addons disable storage-provisioner-rancher --alsologtostderr -v=1
addons_test.go:1029: (dbg) Done: out/minikube-linux-arm64 -p addons-726932 addons disable storage-provisioner-rancher --alsologtostderr -v=1: (43.634728416s)
--- PASS: TestAddons/parallel/LocalPath (51.86s)
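
The local-path check by hand; the provisioner writes each volume under /opt/local-path-provisioner, and the directory embeds the generated PV name, so <pv-name> below is a placeholder to fill in from "kubectl get pvc test-pvc -o=json":

	kubectl --context addons-726932 apply -f testdata/storage-provisioner-rancher/pvc.yaml
	kubectl --context addons-726932 apply -f testdata/storage-provisioner-rancher/pod.yaml
	out/minikube-linux-arm64 -p addons-726932 ssh "cat /opt/local-path-provisioner/<pv-name>_default_test-pvc/file1"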

TestAddons/parallel/NvidiaDevicePlugin (6.57s)
=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin

=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:1061: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:344: "nvidia-device-plugin-daemonset-7lck8" [85679270-e5c5-4ce9-8cff-f979280cc490] Running
addons_test.go:1061: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 6.003584915s
addons_test.go:1064: (dbg) Run:  out/minikube-linux-arm64 addons disable nvidia-device-plugin -p addons-726932
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (6.57s)

TestAddons/parallel/Yakd (11.84s)
=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd

=== CONT  TestAddons/parallel/Yakd
addons_test.go:1072: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:344: "yakd-dashboard-67d98fc6b-fzvn5" [cc87fa52-e8c2-484c-b143-535bc80cfe42] Running
addons_test.go:1072: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 6.004583238s
addons_test.go:1076: (dbg) Run:  out/minikube-linux-arm64 -p addons-726932 addons disable yakd --alsologtostderr -v=1
addons_test.go:1076: (dbg) Done: out/minikube-linux-arm64 -p addons-726932 addons disable yakd --alsologtostderr -v=1: (5.830274711s)
--- PASS: TestAddons/parallel/Yakd (11.84s)

TestAddons/StoppedEnableDisable (12.24s)
=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:174: (dbg) Run:  out/minikube-linux-arm64 stop -p addons-726932
addons_test.go:174: (dbg) Done: out/minikube-linux-arm64 stop -p addons-726932: (11.992319416s)
addons_test.go:178: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p addons-726932
addons_test.go:182: (dbg) Run:  out/minikube-linux-arm64 addons disable dashboard -p addons-726932
addons_test.go:187: (dbg) Run:  out/minikube-linux-arm64 addons disable gvisor -p addons-726932
--- PASS: TestAddons/StoppedEnableDisable (12.24s)
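
What this test asserts is that addon toggles are accepted while the cluster is stopped; the equivalent by hand:

	out/minikube-linux-arm64 stop -p addons-726932
	out/minikube-linux-arm64 addons enable dashboard -p addons-726932     # accepted on a stopped cluster
	out/minikube-linux-arm64 addons disable dashboard -p addons-726932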

TestCertOptions (37s)
=== RUN   TestCertOptions
=== PAUSE TestCertOptions

=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-arm64 start -p cert-options-240396 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=containerd
cert_options_test.go:49: (dbg) Done: out/minikube-linux-arm64 start -p cert-options-240396 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=containerd: (34.393139166s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-arm64 -p cert-options-240396 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-240396 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-arm64 ssh -p cert-options-240396 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:175: Cleaning up "cert-options-240396" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p cert-options-240396
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p cert-options-240396: (1.944008555s)
--- PASS: TestCertOptions (37.00s)
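
To eyeball the extra SANs and port the test verifies, the same openssl call can be piped through grep on the host; the grep filter is an addition here, not part of the test:

	out/minikube-linux-arm64 -p cert-options-240396 ssh \
	  "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt" \
	  | grep -A1 'Subject Alternative Name'    # expect 192.168.15.15 and www.google.com among the entries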

TestCertExpiration (229.46s)
=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-arm64 start -p cert-expiration-413328 --memory=2048 --cert-expiration=3m --driver=docker  --container-runtime=containerd
cert_options_test.go:123: (dbg) Done: out/minikube-linux-arm64 start -p cert-expiration-413328 --memory=2048 --cert-expiration=3m --driver=docker  --container-runtime=containerd: (40.274328441s)
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-arm64 start -p cert-expiration-413328 --memory=2048 --cert-expiration=8760h --driver=docker  --container-runtime=containerd
cert_options_test.go:131: (dbg) Done: out/minikube-linux-arm64 start -p cert-expiration-413328 --memory=2048 --cert-expiration=8760h --driver=docker  --container-runtime=containerd: (6.956331058s)
helpers_test.go:175: Cleaning up "cert-expiration-413328" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p cert-expiration-413328
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p cert-expiration-413328: (2.227449239s)
--- PASS: TestCertExpiration (229.46s)
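
The test's two-phase shape in shell form: issue three-minute certs, wait them out, then restart with a long expiry, which forces the expired certs to be regenerated:

	out/minikube-linux-arm64 start -p cert-expiration-413328 --cert-expiration=3m --driver=docker --container-runtime=containerd
	sleep 180    # let the short-lived certs lapse
	out/minikube-linux-arm64 start -p cert-expiration-413328 --cert-expiration=8760h --driver=docker --container-runtime=containerd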

TestForceSystemdFlag (45.5s)
=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-arm64 start -p force-systemd-flag-048931 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd
docker_test.go:91: (dbg) Done: out/minikube-linux-arm64 start -p force-systemd-flag-048931 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd: (42.634916932s)
docker_test.go:121: (dbg) Run:  out/minikube-linux-arm64 -p force-systemd-flag-048931 ssh "cat /etc/containerd/config.toml"
helpers_test.go:175: Cleaning up "force-systemd-flag-048931" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p force-systemd-flag-048931
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p force-systemd-flag-048931: (2.375386187s)
--- PASS: TestForceSystemdFlag (45.50s)
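
The config check is looking for containerd's runc options to carry SystemdCgroup = true when --force-systemd is set; a sketch, with the grep added here for convenience:

	out/minikube-linux-arm64 start -p force-systemd-flag-048931 --force-systemd --driver=docker --container-runtime=containerd
	out/minikube-linux-arm64 -p force-systemd-flag-048931 ssh "cat /etc/containerd/config.toml" | grep SystemdCgroup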

TestForceSystemdEnv (38.33s)
=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-arm64 start -p force-systemd-env-759289 --memory=2048 --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd
docker_test.go:155: (dbg) Done: out/minikube-linux-arm64 start -p force-systemd-env-759289 --memory=2048 --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd: (35.715144328s)
docker_test.go:121: (dbg) Run:  out/minikube-linux-arm64 -p force-systemd-env-759289 ssh "cat /etc/containerd/config.toml"
helpers_test.go:175: Cleaning up "force-systemd-env-759289" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p force-systemd-env-759289
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p force-systemd-env-759289: (2.232065505s)
--- PASS: TestForceSystemdEnv (38.33s)

TestDockerEnvContainerd (45.89s)
=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with containerd true linux arm64
docker_test.go:181: (dbg) Run:  out/minikube-linux-arm64 start -p dockerenv-411479 --driver=docker  --container-runtime=containerd
docker_test.go:181: (dbg) Done: out/minikube-linux-arm64 start -p dockerenv-411479 --driver=docker  --container-runtime=containerd: (30.446671827s)
docker_test.go:189: (dbg) Run:  /bin/bash -c "out/minikube-linux-arm64 docker-env --ssh-host --ssh-add -p dockerenv-411479"
docker_test.go:220: (dbg) Run:  /bin/bash -c "SSH_AUTH_SOCK="/tmp/ssh-tEvnnU6zSXcG/agent.319289" SSH_AGENT_PID="319290" DOCKER_HOST=ssh://docker@127.0.0.1:33146 docker version"
docker_test.go:243: (dbg) Run:  /bin/bash -c "SSH_AUTH_SOCK="/tmp/ssh-tEvnnU6zSXcG/agent.319289" SSH_AGENT_PID="319290" DOCKER_HOST=ssh://docker@127.0.0.1:33146 DOCKER_BUILDKIT=0 docker build -t local/minikube-dockerenv-containerd-test:latest testdata/docker-env"
docker_test.go:243: (dbg) Done: /bin/bash -c "SSH_AUTH_SOCK="/tmp/ssh-tEvnnU6zSXcG/agent.319289" SSH_AGENT_PID="319290" DOCKER_HOST=ssh://docker@127.0.0.1:33146 DOCKER_BUILDKIT=0 docker build -t local/minikube-dockerenv-containerd-test:latest testdata/docker-env": (1.06179051s)
docker_test.go:250: (dbg) Run:  /bin/bash -c "SSH_AUTH_SOCK="/tmp/ssh-tEvnnU6zSXcG/agent.319289" SSH_AGENT_PID="319290" DOCKER_HOST=ssh://docker@127.0.0.1:33146 docker image ls"
helpers_test.go:175: Cleaning up "dockerenv-411479" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p dockerenv-411479
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p dockerenv-411479: (1.950653766s)
--- PASS: TestDockerEnvContainerd (45.89s)
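
The docker-env contract, replayed interactively; eval applies the DOCKER_HOST and SSH agent exports that the test captures by hand above, so subsequent docker commands talk to the daemon inside the node over SSH:

	eval "$(out/minikube-linux-arm64 docker-env --ssh-host --ssh-add -p dockerenv-411479)"
	docker version
	DOCKER_BUILDKIT=0 docker build -t local/minikube-dockerenv-containerd-test:latest testdata/docker-env
	docker image ls    # the freshly built tag should be listed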

TestErrorSpam/setup (31.23s)
=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-arm64 start -p nospam-663652 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-663652 --driver=docker  --container-runtime=containerd
error_spam_test.go:81: (dbg) Done: out/minikube-linux-arm64 start -p nospam-663652 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-663652 --driver=docker  --container-runtime=containerd: (31.225880111s)
--- PASS: TestErrorSpam/setup (31.23s)

TestErrorSpam/start (0.8s)
=== RUN   TestErrorSpam/start
error_spam_test.go:216: Cleaning up 1 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-663652 --log_dir /tmp/nospam-663652 start --dry-run
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-663652 --log_dir /tmp/nospam-663652 start --dry-run
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-663652 --log_dir /tmp/nospam-663652 start --dry-run
--- PASS: TestErrorSpam/start (0.80s)

TestErrorSpam/status (1.03s)
=== RUN   TestErrorSpam/status
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-663652 --log_dir /tmp/nospam-663652 status
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-663652 --log_dir /tmp/nospam-663652 status
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-663652 --log_dir /tmp/nospam-663652 status
--- PASS: TestErrorSpam/status (1.03s)

TestErrorSpam/pause (1.78s)
=== RUN   TestErrorSpam/pause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-663652 --log_dir /tmp/nospam-663652 pause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-663652 --log_dir /tmp/nospam-663652 pause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-663652 --log_dir /tmp/nospam-663652 pause
--- PASS: TestErrorSpam/pause (1.78s)

TestErrorSpam/unpause (1.85s)
=== RUN   TestErrorSpam/unpause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-663652 --log_dir /tmp/nospam-663652 unpause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-663652 --log_dir /tmp/nospam-663652 unpause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-663652 --log_dir /tmp/nospam-663652 unpause
--- PASS: TestErrorSpam/unpause (1.85s)

TestErrorSpam/stop (1.46s)
=== RUN   TestErrorSpam/stop
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-663652 --log_dir /tmp/nospam-663652 stop
error_spam_test.go:159: (dbg) Done: out/minikube-linux-arm64 -p nospam-663652 --log_dir /tmp/nospam-663652 stop: (1.275882158s)
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-663652 --log_dir /tmp/nospam-663652 stop
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-663652 --log_dir /tmp/nospam-663652 stop
--- PASS: TestErrorSpam/stop (1.46s)

TestFunctional/serial/CopySyncFile (0s)
=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1855: local sync path: /home/jenkins/minikube-integration/19478-294620/.minikube/files/etc/test/nested/copy/300020/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

TestFunctional/serial/StartWithProxy (59.43s)
=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2234: (dbg) Run:  out/minikube-linux-arm64 start -p functional-557654 --memory=4000 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=containerd
functional_test.go:2234: (dbg) Done: out/minikube-linux-arm64 start -p functional-557654 --memory=4000 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=containerd: (59.430829029s)
--- PASS: TestFunctional/serial/StartWithProxy (59.43s)

TestFunctional/serial/AuditLog (0s)
=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

TestFunctional/serial/SoftStart (6.01s)
=== RUN   TestFunctional/serial/SoftStart
functional_test.go:659: (dbg) Run:  out/minikube-linux-arm64 start -p functional-557654 --alsologtostderr -v=8
functional_test.go:659: (dbg) Done: out/minikube-linux-arm64 start -p functional-557654 --alsologtostderr -v=8: (6.005685873s)
functional_test.go:663: soft start took 6.006987143s for "functional-557654" cluster.
--- PASS: TestFunctional/serial/SoftStart (6.01s)

TestFunctional/serial/KubeContext (0.06s)
=== RUN   TestFunctional/serial/KubeContext
functional_test.go:681: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.06s)

TestFunctional/serial/KubectlGetPods (0.09s)
=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:696: (dbg) Run:  kubectl --context functional-557654 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.09s)

TestFunctional/serial/CacheCmd/cache/add_remote (4.16s)
=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1049: (dbg) Run:  out/minikube-linux-arm64 -p functional-557654 cache add registry.k8s.io/pause:3.1
functional_test.go:1049: (dbg) Done: out/minikube-linux-arm64 -p functional-557654 cache add registry.k8s.io/pause:3.1: (1.523124605s)
functional_test.go:1049: (dbg) Run:  out/minikube-linux-arm64 -p functional-557654 cache add registry.k8s.io/pause:3.3
functional_test.go:1049: (dbg) Done: out/minikube-linux-arm64 -p functional-557654 cache add registry.k8s.io/pause:3.3: (1.413460901s)
functional_test.go:1049: (dbg) Run:  out/minikube-linux-arm64 -p functional-557654 cache add registry.k8s.io/pause:latest
functional_test.go:1049: (dbg) Done: out/minikube-linux-arm64 -p functional-557654 cache add registry.k8s.io/pause:latest: (1.221484469s)
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (4.16s)
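
"cache add" pulls an image into the host-side cache and loads it into the node's container runtime; a sketch with the same images, the grep being an addition:

	out/minikube-linux-arm64 -p functional-557654 cache add registry.k8s.io/pause:3.1
	out/minikube-linux-arm64 cache list
	out/minikube-linux-arm64 -p functional-557654 ssh sudo crictl images | grep pause    # confirm it landed in the node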

TestFunctional/serial/CacheCmd/cache/add_local (1.26s)
=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1077: (dbg) Run:  docker build -t minikube-local-cache-test:functional-557654 /tmp/TestFunctionalserialCacheCmdcacheadd_local2314206633/001
functional_test.go:1089: (dbg) Run:  out/minikube-linux-arm64 -p functional-557654 cache add minikube-local-cache-test:functional-557654
functional_test.go:1094: (dbg) Run:  out/minikube-linux-arm64 -p functional-557654 cache delete minikube-local-cache-test:functional-557654
functional_test.go:1083: (dbg) Run:  docker rmi minikube-local-cache-test:functional-557654
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (1.26s)

TestFunctional/serial/CacheCmd/cache/CacheDelete (0.05s)
=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1102: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.05s)

TestFunctional/serial/CacheCmd/cache/list (0.05s)
=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1110: (dbg) Run:  out/minikube-linux-arm64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.05s)

TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.31s)
=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1124: (dbg) Run:  out/minikube-linux-arm64 -p functional-557654 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.31s)

TestFunctional/serial/CacheCmd/cache/cache_reload (2.06s)
=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1147: (dbg) Run:  out/minikube-linux-arm64 -p functional-557654 ssh sudo crictl rmi registry.k8s.io/pause:latest
functional_test.go:1153: (dbg) Run:  out/minikube-linux-arm64 -p functional-557654 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1153: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-557654 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (294.504597ms)

-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 
-- /stdout --
** stderr ** 
	ssh: Process exited with status 1
** /stderr **
functional_test.go:1158: (dbg) Run:  out/minikube-linux-arm64 -p functional-557654 cache reload
functional_test.go:1158: (dbg) Done: out/minikube-linux-arm64 -p functional-557654 cache reload: (1.16069329s)
functional_test.go:1163: (dbg) Run:  out/minikube-linux-arm64 -p functional-557654 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (2.06s)
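Note: the sequence above is the whole cache-reload contract: rmi the image inside the node, watch crictl inspecti fail, run cache reload, and watch inspecti succeed. A minimal Go sketch of the same round trip, reusing the binary path and profile name from this run (adjust both for your environment):

package main

import (
	"fmt"
	"os/exec"
)

// ok reports whether the command exited with status zero.
func ok(name string, args ...string) bool {
	return exec.Command(name, args...).Run() == nil
}

func main() {
	const mk, profile = "out/minikube-linux-arm64", "functional-557654" // taken from this run
	const img = "registry.k8s.io/pause:latest"

	// Drop the image inside the node, then verify it is really gone.
	ok(mk, "-p", profile, "ssh", "sudo crictl rmi "+img)
	if ok(mk, "-p", profile, "ssh", "sudo crictl inspecti "+img) {
		fmt.Println("unexpected: image still present after rmi")
		return
	}
	// Reload from minikube's local cache and verify the image is back.
	ok(mk, "-p", profile, "cache", "reload")
	fmt.Println("restored after reload:", ok(mk, "-p", profile, "ssh", "sudo crictl inspecti "+img))
}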
=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1172: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1172: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.13s)
=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:716: (dbg) Run:  out/minikube-linux-arm64 -p functional-557654 kubectl -- --context functional-557654 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.14s)
=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:741: (dbg) Run:  out/kubectl --context functional-557654 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.13s)
=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:757: (dbg) Run:  out/minikube-linux-arm64 start -p functional-557654 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
functional_test.go:757: (dbg) Done: out/minikube-linux-arm64 start -p functional-557654 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (43.881466221s)
functional_test.go:761: restart took 43.881637543s for "functional-557654" cluster.
--- PASS: TestFunctional/serial/ExtraConfig (43.88s)
=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:810: (dbg) Run:  kubectl --context functional-557654 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:825: etcd phase: Running
functional_test.go:835: etcd status: Ready
functional_test.go:825: kube-apiserver phase: Running
functional_test.go:835: kube-apiserver status: Ready
functional_test.go:825: kube-controller-manager phase: Running
functional_test.go:835: kube-controller-manager status: Ready
functional_test.go:825: kube-scheduler phase: Running
functional_test.go:835: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.10s)
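Note: the phase/status pairs above come from the pod list JSON fetched one line earlier. A sketch of the same health read, decoding only the fields it needs (pod name, status.phase, and the Ready condition):

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// podList keeps just the fields the health check reads.
type podList struct {
	Items []struct {
		Metadata struct {
			Name string `json:"name"`
		} `json:"metadata"`
		Status struct {
			Phase      string `json:"phase"`
			Conditions []struct {
				Type   string `json:"type"`
				Status string `json:"status"`
			} `json:"conditions"`
		} `json:"status"`
	} `json:"items"`
}

func main() {
	out, err := exec.Command("kubectl", "--context", "functional-557654",
		"get", "po", "-l", "tier=control-plane", "-n", "kube-system", "-o", "json").Output()
	if err != nil {
		panic(err)
	}
	var pods podList
	if err := json.Unmarshal(out, &pods); err != nil {
		panic(err)
	}
	for _, p := range pods.Items {
		ready := "Unknown"
		for _, c := range p.Status.Conditions {
			if c.Type == "Ready" {
				ready = c.Status // "True" corresponds to the "Ready" lines above
			}
		}
		fmt.Printf("%s: phase=%s ready=%s\n", p.Metadata.Name, p.Status.Phase, ready)
	}
}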
=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1236: (dbg) Run:  out/minikube-linux-arm64 -p functional-557654 logs
functional_test.go:1236: (dbg) Done: out/minikube-linux-arm64 -p functional-557654 logs: (1.709627018s)
--- PASS: TestFunctional/serial/LogsCmd (1.71s)
=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1250: (dbg) Run:  out/minikube-linux-arm64 -p functional-557654 logs --file /tmp/TestFunctionalserialLogsFileCmd3626717978/001/logs.txt
functional_test.go:1250: (dbg) Done: out/minikube-linux-arm64 -p functional-557654 logs --file /tmp/TestFunctionalserialLogsFileCmd3626717978/001/logs.txt: (1.777096626s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.78s)
=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2321: (dbg) Run:  kubectl --context functional-557654 apply -f testdata/invalidsvc.yaml
functional_test.go:2335: (dbg) Run:  out/minikube-linux-arm64 service invalid-svc -p functional-557654
functional_test.go:2335: (dbg) Non-zero exit: out/minikube-linux-arm64 service invalid-svc -p functional-557654: exit status 115 (678.302713ms)
-- stdout --
	|-----------|-------------|-------------|---------------------------|
	| NAMESPACE |    NAME     | TARGET PORT |            URL            |
	|-----------|-------------|-------------|---------------------------|
	| default   | invalid-svc |          80 | http://192.168.49.2:30506 |
	|-----------|-------------|-------------|---------------------------|
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
** /stderr **
functional_test.go:2327: (dbg) Run:  kubectl --context functional-557654 delete -f testdata/invalidsvc.yaml
--- PASS: TestFunctional/serial/InvalidService (4.91s)
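Note: the assertion here is on the exit code, not the table: minikube deliberately exits 115 (SVC_UNREACHABLE) when the service exists but has no running pod, and the harness reads that code rather than parsing the banner. A sketch of reading it in Go with the command from this run:

package main

import (
	"errors"
	"fmt"
	"os/exec"
)

func main() {
	cmd := exec.Command("out/minikube-linux-arm64", "service", "invalid-svc", "-p", "functional-557654")
	err := cmd.Run()
	var exitErr *exec.ExitError
	switch {
	case errors.As(err, &exitErr):
		fmt.Println("exit status", exitErr.ExitCode()) // 115 for SVC_UNREACHABLE in this run
	case err != nil:
		fmt.Println("could not start:", err) // binary missing, not a service failure
	default:
		fmt.Println("service resolved (exit 0)")
	}
}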
=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1199: (dbg) Run:  out/minikube-linux-arm64 -p functional-557654 config unset cpus
functional_test.go:1199: (dbg) Run:  out/minikube-linux-arm64 -p functional-557654 config get cpus
functional_test.go:1199: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-557654 config get cpus: exit status 14 (82.312929ms)
** stderr ** 
	Error: specified key could not be found in config
** /stderr **
functional_test.go:1199: (dbg) Run:  out/minikube-linux-arm64 -p functional-557654 config set cpus 2
functional_test.go:1199: (dbg) Run:  out/minikube-linux-arm64 -p functional-557654 config get cpus
functional_test.go:1199: (dbg) Run:  out/minikube-linux-arm64 -p functional-557654 config unset cpus
functional_test.go:1199: (dbg) Run:  out/minikube-linux-arm64 -p functional-557654 config get cpus
functional_test.go:1199: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-557654 config get cpus: exit status 14 (62.761985ms)
** stderr ** 
	Error: specified key could not be found in config
** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.44s)
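Note: the two exit-14 results bracket the round trip: config get exits 14 while the key is absent and 0 once it is set. A sketch of the same set/get/unset cycle:

package main

import (
	"errors"
	"fmt"
	"os/exec"
)

// configExit runs `minikube config <args>` and returns its exit code (-1 if it never started).
func configExit(args ...string) int {
	cmd := exec.Command("out/minikube-linux-arm64",
		append([]string{"-p", "functional-557654", "config"}, args...)...)
	if err := cmd.Run(); err != nil {
		var exitErr *exec.ExitError
		if errors.As(err, &exitErr) {
			return exitErr.ExitCode()
		}
		return -1
	}
	return 0
}

func main() {
	fmt.Println("get unset key:", configExit("get", "cpus")) // 14 in this run
	configExit("set", "cpus", "2")
	fmt.Println("get after set:", configExit("get", "cpus")) // 0
	configExit("unset", "cpus")
	fmt.Println("get after unset:", configExit("get", "cpus")) // 14 again
}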
=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd
=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:905: (dbg) daemon: [out/minikube-linux-arm64 dashboard --url --port 36195 -p functional-557654 --alsologtostderr -v=1]
functional_test.go:910: (dbg) stopping [out/minikube-linux-arm64 dashboard --url --port 36195 -p functional-557654 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to kill pid 335364: os: process already finished
E0819 18:03:37.776305  300020 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19478-294620/.minikube/profiles/addons-726932/client.crt: no such file or directory" logger="UnhandledError"
--- PASS: TestFunctional/parallel/DashboardCmd (11.21s)
=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun
=== CONT  TestFunctional/parallel/DryRun
functional_test.go:974: (dbg) Run:  out/minikube-linux-arm64 start -p functional-557654 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=containerd
functional_test.go:974: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p functional-557654 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=containerd: exit status 23 (192.098485ms)
-- stdout --
	* [functional-557654] minikube v1.33.1 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=19478
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19478-294620/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19478-294620/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
-- /stdout --
** stderr ** 
	I0819 18:03:21.248205  333914 out.go:345] Setting OutFile to fd 1 ...
	I0819 18:03:21.248388  333914 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 18:03:21.248416  333914 out.go:358] Setting ErrFile to fd 2...
	I0819 18:03:21.248438  333914 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 18:03:21.248685  333914 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19478-294620/.minikube/bin
	I0819 18:03:21.249082  333914 out.go:352] Setting JSON to false
	I0819 18:03:21.250122  333914 start.go:129] hostinfo: {"hostname":"ip-172-31-31-251","uptime":6342,"bootTime":1724084260,"procs":206,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1067-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I0819 18:03:21.250230  333914 start.go:139] virtualization:  
	I0819 18:03:21.252999  333914 out.go:177] * [functional-557654] minikube v1.33.1 on Ubuntu 20.04 (arm64)
	I0819 18:03:21.255520  333914 out.go:177]   - MINIKUBE_LOCATION=19478
	I0819 18:03:21.255644  333914 notify.go:220] Checking for updates...
	I0819 18:03:21.261038  333914 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0819 18:03:21.268261  333914 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19478-294620/kubeconfig
	I0819 18:03:21.270322  333914 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19478-294620/.minikube
	I0819 18:03:21.272451  333914 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0819 18:03:21.274479  333914 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0819 18:03:21.277166  333914 config.go:182] Loaded profile config "functional-557654": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.0
	I0819 18:03:21.277738  333914 driver.go:392] Setting default libvirt URI to qemu:///system
	I0819 18:03:21.303172  333914 docker.go:123] docker version: linux-27.1.2:Docker Engine - Community
	I0819 18:03:21.303299  333914 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0819 18:03:21.370197  333914 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:32 OomKillDisable:true NGoroutines:52 SystemTime:2024-08-19 18:03:21.360451544 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1067-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214904832 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:27.1.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:8fc6bcff51318944179630522a095cc9dbf9f353 Expected:8fc6bcff51318944179630522a095cc9dbf9f353} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.1]] Warnings:<nil>}}
	I0819 18:03:21.370317  333914 docker.go:307] overlay module found
	I0819 18:03:21.373595  333914 out.go:177] * Using the docker driver based on existing profile
	I0819 18:03:21.375288  333914 start.go:297] selected driver: docker
	I0819 18:03:21.375307  333914 start.go:901] validating driver "docker" against &{Name:functional-557654 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724062045-19478@sha256:18a6788f22059eb28b337d2ac1f60d157ba1f4188844194d9df40beae3c7e41b Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:functional-557654 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.31.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0819 18:03:21.375433  333914 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0819 18:03:21.378118  333914 out.go:201] 
	W0819 18:03:21.380144  333914 out.go:270] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I0819 18:03:21.382040  333914 out.go:201] 
** /stderr **
functional_test.go:991: (dbg) Run:  out/minikube-linux-arm64 start -p functional-557654 --dry-run --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
--- PASS: TestFunctional/parallel/DryRun (0.41s)
=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage
=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1020: (dbg) Run:  out/minikube-linux-arm64 start -p functional-557654 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=containerd
functional_test.go:1020: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p functional-557654 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=containerd: exit status 23 (209.857113ms)
-- stdout --
	* [functional-557654] minikube v1.33.1 sur Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=19478
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19478-294620/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19478-294620/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote docker basé sur le profil existant
-- /stdout --
** stderr ** 
	I0819 18:03:26.470868  335016 out.go:345] Setting OutFile to fd 1 ...
	I0819 18:03:26.471074  335016 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 18:03:26.471084  335016 out.go:358] Setting ErrFile to fd 2...
	I0819 18:03:26.471090  335016 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 18:03:26.471973  335016 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19478-294620/.minikube/bin
	I0819 18:03:26.472393  335016 out.go:352] Setting JSON to false
	I0819 18:03:26.473406  335016 start.go:129] hostinfo: {"hostname":"ip-172-31-31-251","uptime":6347,"bootTime":1724084260,"procs":211,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1067-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I0819 18:03:26.473486  335016 start.go:139] virtualization:  
	I0819 18:03:26.477498  335016 out.go:177] * [functional-557654] minikube v1.33.1 sur Ubuntu 20.04 (arm64)
	I0819 18:03:26.480007  335016 out.go:177]   - MINIKUBE_LOCATION=19478
	I0819 18:03:26.480046  335016 notify.go:220] Checking for updates...
	I0819 18:03:26.484911  335016 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0819 18:03:26.487147  335016 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19478-294620/kubeconfig
	I0819 18:03:26.489264  335016 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19478-294620/.minikube
	I0819 18:03:26.491212  335016 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0819 18:03:26.493375  335016 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0819 18:03:26.495810  335016 config.go:182] Loaded profile config "functional-557654": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.0
	I0819 18:03:26.496377  335016 driver.go:392] Setting default libvirt URI to qemu:///system
	I0819 18:03:26.518920  335016 docker.go:123] docker version: linux-27.1.2:Docker Engine - Community
	I0819 18:03:26.519044  335016 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0819 18:03:26.602477  335016 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:32 OomKillDisable:true NGoroutines:52 SystemTime:2024-08-19 18:03:26.579735867 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1067-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214904832 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:27.1.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:8fc6bcff51318944179630522a095cc9dbf9f353 Expected:8fc6bcff51318944179630522a095cc9dbf9f353} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.1]] Warnings:<nil>}}
	I0819 18:03:26.602619  335016 docker.go:307] overlay module found
	I0819 18:03:26.605426  335016 out.go:177] * Utilisation du pilote docker basé sur le profil existant
	I0819 18:03:26.607651  335016 start.go:297] selected driver: docker
	I0819 18:03:26.607673  335016 start.go:901] validating driver "docker" against &{Name:functional-557654 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1724062045-19478@sha256:18a6788f22059eb28b337d2ac1f60d157ba1f4188844194d9df40beae3c7e41b Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0 ClusterName:functional-557654 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.31.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0819 18:03:26.607785  335016 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0819 18:03:26.611255  335016 out.go:201] 
	W0819 18:03:26.613719  335016 out.go:270] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I0819 18:03:26.615834  335016 out.go:201] 
** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.21s)
=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd
=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:854: (dbg) Run:  out/minikube-linux-arm64 -p functional-557654 status
functional_test.go:860: (dbg) Run:  out/minikube-linux-arm64 -p functional-557654 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:872: (dbg) Run:  out/minikube-linux-arm64 -p functional-557654 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (1.10s)
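Note: the -f argument above is a Go text/template evaluated against minikube's status struct, so the misspelled kublet: is just literal template text and renders as-is. A stand-in sketch of the mechanism (the Status type here is hypothetical, trimmed to the fields the template references):

package main

import (
	"os"
	"text/template"
)

// Status is a stand-in for minikube's status type; only the templated fields appear.
type Status struct {
	Host, Kubelet, APIServer, Kubeconfig string
}

func main() {
	tmpl := template.Must(template.New("status").Parse(
		"host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}\n"))
	// Renders e.g. "host:Running,kublet:Running,apiserver:Running,kubeconfig:Configured".
	tmpl.Execute(os.Stdout, Status{"Running", "Running", "Running", "Configured"})
}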
=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect
=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1627: (dbg) Run:  kubectl --context functional-557654 create deployment hello-node-connect --image=registry.k8s.io/echoserver-arm:1.8
functional_test.go:1635: (dbg) Run:  kubectl --context functional-557654 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1640: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:344: "hello-node-connect-65d86f57f4-vtktb" [74a56e1c-265a-40fa-969d-7acf0e4ffded] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver-arm]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver-arm])
helpers_test.go:344: "hello-node-connect-65d86f57f4-vtktb" [74a56e1c-265a-40fa-969d-7acf0e4ffded] Running
functional_test.go:1640: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 11.003831624s
functional_test.go:1649: (dbg) Run:  out/minikube-linux-arm64 -p functional-557654 service hello-node-connect --url
functional_test.go:1655: found endpoint for hello-node-connect: http://192.168.49.2:30718
functional_test.go:1675: http://192.168.49.2:30718: success! body:

Hostname: hello-node-connect-65d86f57f4-vtktb

Pod Information:
	-no pod information available-

Server values:
	server_version=nginx: 1.13.3 - lua: 10008

Request Information:
	client_address=10.244.0.1
	method=GET
	real path=/
	query=
	request_version=1.1
	request_uri=http://192.168.49.2:8080/

Request Headers:
	accept-encoding=gzip
	host=192.168.49.2:30718
	user-agent=Go-http-client/1.1

Request Body:
	-no body in request-
--- PASS: TestFunctional/parallel/ServiceCmdConnect (11.66s)
=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd
=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1690: (dbg) Run:  out/minikube-linux-arm64 -p functional-557654 addons list
functional_test.go:1702: (dbg) Run:  out/minikube-linux-arm64 -p functional-557654 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.27s)
=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:344: "storage-provisioner" [673d5620-dee1-4dc3-9cb2-a8c7a4d01531] Running
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 6.005200303s
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context functional-557654 get storageclass -o=json
functional_test_pvc_test.go:69: (dbg) Run:  kubectl --context functional-557654 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-557654 get pvc myclaim -o=json
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-557654 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [c7a5cee0-223f-4713-8eab-bbf3700bca43] Pending
helpers_test.go:344: "sp-pod" [c7a5cee0-223f-4713-8eab-bbf3700bca43] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [c7a5cee0-223f-4713-8eab-bbf3700bca43] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 10.004106886s
functional_test_pvc_test.go:100: (dbg) Run:  kubectl --context functional-557654 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-557654 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:106: (dbg) Done: kubectl --context functional-557654 delete -f testdata/storage-provisioner/pod.yaml: (1.128454556s)
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-557654 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [bc9122a9-c93e-48d0-8527-d94bad2a1290] Pending
helpers_test.go:344: "sp-pod" [bc9122a9-c93e-48d0-8527-d94bad2a1290] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 6.003945251s
functional_test_pvc_test.go:114: (dbg) Run:  kubectl --context functional-557654 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (24.16s)
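Note: the pass hinges on ordering: write a file through the first sp-pod, delete the pod, recreate it against the same claim, and list the mount again. A sketch of that sequence via kubectl (the readiness wait between apply and the final exec is elided):

package main

import (
	"fmt"
	"os/exec"
)

// kubectl runs a kubectl subcommand against this profile's context.
func kubectl(args ...string) error {
	return exec.Command("kubectl",
		append([]string{"--context", "functional-557654"}, args...)...).Run()
}

func main() {
	kubectl("exec", "sp-pod", "--", "touch", "/tmp/mount/foo")
	kubectl("delete", "-f", "testdata/storage-provisioner/pod.yaml")
	kubectl("apply", "-f", "testdata/storage-provisioner/pod.yaml")
	// In the real test a wait for the new pod to be Running sits here.
	if err := kubectl("exec", "sp-pod", "--", "ls", "/tmp/mount"); err == nil {
		fmt.Println("file written before deletion is still on the claim")
	}
}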
=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd
=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1725: (dbg) Run:  out/minikube-linux-arm64 -p functional-557654 ssh "echo hello"
functional_test.go:1742: (dbg) Run:  out/minikube-linux-arm64 -p functional-557654 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.68s)
=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd
=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p functional-557654 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p functional-557654 ssh -n functional-557654 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p functional-557654 cp functional-557654:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd2763585151/001/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p functional-557654 ssh -n functional-557654 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p functional-557654 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p functional-557654 ssh -n functional-557654 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (2.02s)
=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync
=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1929: Checking for existence of /etc/test/nested/copy/300020/hosts within VM
functional_test.go:1931: (dbg) Run:  out/minikube-linux-arm64 -p functional-557654 ssh "sudo cat /etc/test/nested/copy/300020/hosts"
functional_test.go:1936: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.33s)
=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1972: Checking for existence of /etc/ssl/certs/300020.pem within VM
functional_test.go:1973: (dbg) Run:  out/minikube-linux-arm64 -p functional-557654 ssh "sudo cat /etc/ssl/certs/300020.pem"
functional_test.go:1972: Checking for existence of /usr/share/ca-certificates/300020.pem within VM
functional_test.go:1973: (dbg) Run:  out/minikube-linux-arm64 -p functional-557654 ssh "sudo cat /usr/share/ca-certificates/300020.pem"
functional_test.go:1972: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1973: (dbg) Run:  out/minikube-linux-arm64 -p functional-557654 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:1999: Checking for existence of /etc/ssl/certs/3000202.pem within VM
functional_test.go:2000: (dbg) Run:  out/minikube-linux-arm64 -p functional-557654 ssh "sudo cat /etc/ssl/certs/3000202.pem"
functional_test.go:1999: Checking for existence of /usr/share/ca-certificates/3000202.pem within VM
functional_test.go:2000: (dbg) Run:  out/minikube-linux-arm64 -p functional-557654 ssh "sudo cat /usr/share/ca-certificates/3000202.pem"
functional_test.go:1999: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:2000: (dbg) Run:  out/minikube-linux-arm64 -p functional-557654 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (2.13s)
=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels
=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:219: (dbg) Run:  kubectl --context functional-557654 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.12s)
=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled
=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2027: (dbg) Run:  out/minikube-linux-arm64 -p functional-557654 ssh "sudo systemctl is-active docker"
functional_test.go:2027: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-557654 ssh "sudo systemctl is-active docker": exit status 1 (375.399502ms)
-- stdout --
	inactive
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3
** /stderr **
functional_test.go:2027: (dbg) Run:  out/minikube-linux-arm64 -p functional-557654 ssh "sudo systemctl is-active crio"
functional_test.go:2027: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-557654 ssh "sudo systemctl is-active crio": exit status 1 (385.210247ms)
-- stdout --
	inactive
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3
** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.76s)
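Note: the telling detail in both stderr blocks is ssh: Process exited with status 3, which is systemctl is-active reporting the unit inactive (0 would mean active); minikube itself then exits 1 locally. A sketch probing both alternate runtimes the same way:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	for _, svc := range []string{"docker", "crio"} {
		cmd := exec.Command("out/minikube-linux-arm64", "-p", "functional-557654",
			"ssh", "sudo systemctl is-active "+svc)
		out, _ := cmd.Output() // stdout is captured even on a non-zero exit
		if cmd.ProcessState == nil {
			continue // the minikube binary itself failed to start
		}
		fmt.Printf("%s: %s (local exit %d)\n",
			svc, strings.TrimSpace(string(out)), cmd.ProcessState.ExitCode())
	}
}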
=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License
=== CONT  TestFunctional/parallel/License
functional_test.go:2288: (dbg) Run:  out/minikube-linux-arm64 license
--- PASS: TestFunctional/parallel/License (0.31s)
=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short
=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2256: (dbg) Run:  out/minikube-linux-arm64 -p functional-557654 version --short
--- PASS: TestFunctional/parallel/Version/short (0.08s)
=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components
=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2270: (dbg) Run:  out/minikube-linux-arm64 -p functional-557654 version -o=json --components
functional_test.go:2270: (dbg) Done: out/minikube-linux-arm64 -p functional-557654 version -o=json --components: (1.189409992s)
--- PASS: TestFunctional/parallel/Version/components (1.19s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort
=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:261: (dbg) Run:  out/minikube-linux-arm64 -p functional-557654 image ls --format short --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-arm64 -p functional-557654 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.10
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.31.0
registry.k8s.io/kube-proxy:v1.31.0
registry.k8s.io/kube-controller-manager:v1.31.0
registry.k8s.io/kube-apiserver:v1.31.0
registry.k8s.io/etcd:3.5.15-0
registry.k8s.io/echoserver-arm:1.8
registry.k8s.io/coredns/coredns:v1.11.1
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
docker.io/library/nginx:latest
docker.io/library/nginx:alpine
docker.io/library/minikube-local-cache-test:functional-557654
docker.io/kindest/kindnetd:v20240813-c6f155d6
docker.io/kindest/kindnetd:v20240730-75a5af0c
docker.io/kicbase/echo-server:functional-557654
functional_test.go:269: (dbg) Stderr: out/minikube-linux-arm64 -p functional-557654 image ls --format short --alsologtostderr:
I0819 18:03:35.843751  336665 out.go:345] Setting OutFile to fd 1 ...
I0819 18:03:35.843921  336665 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0819 18:03:35.843931  336665 out.go:358] Setting ErrFile to fd 2...
I0819 18:03:35.843936  336665 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0819 18:03:35.844160  336665 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19478-294620/.minikube/bin
I0819 18:03:35.844808  336665 config.go:182] Loaded profile config "functional-557654": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.0
I0819 18:03:35.844935  336665 config.go:182] Loaded profile config "functional-557654": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.0
I0819 18:03:35.845409  336665 cli_runner.go:164] Run: docker container inspect functional-557654 --format={{.State.Status}}
I0819 18:03:35.865568  336665 ssh_runner.go:195] Run: systemctl --version
I0819 18:03:35.865628  336665 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-557654
I0819 18:03:35.882965  336665 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33156 SSHKeyPath:/home/jenkins/minikube-integration/19478-294620/.minikube/machines/functional-557654/id_rsa Username:docker}
I0819 18:03:35.978160  336665 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.24s)
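Note: per the stderr above, image ls is backed by sudo crictl images --output json inside the node, and the short format is just every repo tag in that JSON. A decoding sketch, assuming the CRI-style field names (images, repoTags) that crictl emits:

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// criImages mirrors only the repo-tag slice of `crictl images --output json`.
type criImages struct {
	Images []struct {
		RepoTags []string `json:"repoTags"`
	} `json:"images"`
}

func main() {
	out, err := exec.Command("out/minikube-linux-arm64", "-p", "functional-557654",
		"ssh", "sudo crictl images --output json").Output()
	if err != nil {
		panic(err)
	}
	var list criImages
	if err := json.Unmarshal(out, &list); err != nil {
		panic(err)
	}
	for _, img := range list.Images {
		for _, tag := range img.RepoTags {
			fmt.Println(tag) // same lines as the `image ls --format short` stdout above
		}
	}
}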
=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable
=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:261: (dbg) Run:  out/minikube-linux-arm64 -p functional-557654 image ls --format table --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-arm64 -p functional-557654 image ls --format table --alsologtostderr:
|---------------------------------------------|--------------------|---------------|--------|
|                    Image                    |        Tag         |   Image ID    |  Size  |
|---------------------------------------------|--------------------|---------------|--------|
| docker.io/kicbase/echo-server               | functional-557654  | sha256:ce2d2c | 2.17MB |
| gcr.io/k8s-minikube/busybox                 | 1.28.4-glibc       | sha256:1611cd | 1.94MB |
| registry.k8s.io/kube-proxy                  | v1.31.0            | sha256:71d55d | 26.8MB |
| registry.k8s.io/pause                       | latest             | sha256:8cb209 | 71.3kB |
| registry.k8s.io/pause                       | 3.3                | sha256:3d1873 | 249kB  |
| docker.io/library/minikube-local-cache-test | functional-557654  | sha256:b659ef | 989B   |
| docker.io/library/nginx                     | latest             | sha256:a9dfdb | 67.7MB |
| registry.k8s.io/coredns/coredns             | v1.11.1            | sha256:2437cf | 16.5MB |
| registry.k8s.io/kube-scheduler              | v1.31.0            | sha256:fbbbd4 | 18.5MB |
| docker.io/kindest/kindnetd                  | v20240813-c6f155d6 | sha256:6a23fa | 33.3MB |
| docker.io/library/nginx                     | alpine             | sha256:70594c | 19.6MB |
| registry.k8s.io/etcd                        | 3.5.15-0           | sha256:27e383 | 66.5MB |
| registry.k8s.io/kube-controller-manager     | v1.31.0            | sha256:fcb068 | 23.9MB |
| registry.k8s.io/pause                       | 3.1                | sha256:8057e0 | 262kB  |
| registry.k8s.io/pause                       | 3.10               | sha256:afb617 | 268kB  |
| docker.io/kindest/kindnetd                  | v20240730-75a5af0c | sha256:d5e283 | 33.3MB |
| gcr.io/k8s-minikube/storage-provisioner     | v5                 | sha256:ba04bb | 8.03MB |
| registry.k8s.io/echoserver-arm              | 1.8                | sha256:72565b | 45.3MB |
| registry.k8s.io/kube-apiserver              | v1.31.0            | sha256:cd0f0a | 25.7MB |
|---------------------------------------------|--------------------|---------------|--------|
functional_test.go:269: (dbg) Stderr: out/minikube-linux-arm64 -p functional-557654 image ls --format table --alsologtostderr:
I0819 18:03:38.183578  336888 out.go:345] Setting OutFile to fd 1 ...
I0819 18:03:38.186557  336888 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0819 18:03:38.186607  336888 out.go:358] Setting ErrFile to fd 2...
I0819 18:03:38.186632  336888 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0819 18:03:38.186938  336888 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19478-294620/.minikube/bin
I0819 18:03:38.187651  336888 config.go:182] Loaded profile config "functional-557654": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.0
I0819 18:03:38.187842  336888 config.go:182] Loaded profile config "functional-557654": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.0
I0819 18:03:38.188382  336888 cli_runner.go:164] Run: docker container inspect functional-557654 --format={{.State.Status}}
I0819 18:03:38.219417  336888 ssh_runner.go:195] Run: systemctl --version
I0819 18:03:38.219482  336888 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-557654
I0819 18:03:38.245379  336888 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33156 SSHKeyPath:/home/jenkins/minikube-integration/19478-294620/.minikube/machines/functional-557654/id_rsa Username:docker}
I0819 18:03:38.342465  336888 ssh_runner.go:195] Run: sudo crictl images --output json
E0819 18:03:39.057652  300020 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19478-294620/.minikube/profiles/addons-726932/client.crt: no such file or directory" logger="UnhandledError"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.32s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:261: (dbg) Run:  out/minikube-linux-arm64 -p functional-557654 image ls --format json --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-arm64 -p functional-557654 image ls --format json --alsologtostderr:
[{"id":"sha256:70594c812316a9bc20dd5d679982c6322dc7cf0128687ae9f849d0207783e753","repoDigests":["docker.io/library/nginx@sha256:c04c18adc2a407740a397c8407c011fc6c90026a9b65cceddef7ae5484360158"],"repoTags":["docker.io/library/nginx:alpine"],"size":"19627164"},{"id":"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8","repoDigests":["registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a"],"repoTags":["registry.k8s.io/pause:3.10"],"size":"267933"},{"id":"sha256:fcb0683e6bdbd083710cf2d6fd7eb699c77fe4994c38a5c82d059e2e3cb4c2fd","repoDigests":["registry.k8s.io/kube-controller-manager@sha256:f6f3c33dda209e8434b83dacf5244c03b59b0018d93325ff21296a142b68497d"],"repoTags":["registry.k8s.io/kube-controller-manager:v1.31.0"],"size":"23947353"},{"id":"sha256:71d55d66fd4eec8986225089a135fadd96bc6624d987096808772ce1e1924d89","repoDigests":["registry.k8s.io/kube-proxy@sha256:c727efb1c6f15a68060bf7f207f5c7a765355b7e3340c513e582ec819c5cd2fe"],"repoTags":["registry.k8s.io/kube-proxy:v1.31.0"],"size":"26752334"},{"id":"sha256:a422e0e982356f6c1cf0e5bb7b733363caae3992a07c99951fbcc73e58ed656a","repoDigests":["docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c"],"repoTags":[],"size":"18306114"},{"id":"sha256:b659eff742f44a23c75068b46d24e9697d62dfa36708c97d05ae49ea5744dd21","repoDigests":[],"repoTags":["docker.io/library/minikube-local-cache-test:functional-557654"],"size":"989"},{"id":"sha256:27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da","repoDigests":["registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a"],"repoTags":["registry.k8s.io/etcd:3.5.15-0"],"size":"66535646"},{"id":"sha256:cd0f0ae0ec9e0cdc092079156c122bf034ba3f24d31c1b1dd1b52a42ecf9b388","repoDigests":["registry.k8s.io/kube-apiserver@sha256:470179274deb9dc3a81df55cfc24823ce153147d4ebf2ed649a4f271f51eaddf"],"repoTags":["registry.k8s.io/kube-apiserver:v1.31.0"],"size":"25688321"},{"id":"sha256:fbbbd428abb4dae52ab3018797d00d5840a739f0cc5697b662791831a60b0adb","repoDigests":["registry.k8s.io/kube-scheduler@sha256:96ddae9c9b2e79342e0551e2d2ec422c0c02629a74d928924aaa069706619808"],"repoTags":["registry.k8s.io/kube-scheduler:v1.31.0"],"size":"18505843"},{"id":"sha256:8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a","repoDigests":[],"repoTags":["registry.k8s.io/pause:latest"],"size":"71300"},{"id":"sha256:20b332c9a70d8516d849d1ac23eff5800cbb2f263d379f0ec11ee908db6b25a8","repoDigests":["docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93"],"repoTags":[],"size":"74084559"},{"id":"sha256:a9dfdba8b719078c5705fdecd6f8315765cc79e473111aa9451551ddc340b2bc","repoDigests":["docker.io/library/nginx@sha256:447a8665cc1dab95b1ca778e162215839ccbb9189104c79d7ec3a81e14577add"],"repoTags":["docker.io/library/nginx:latest"],"size":"67690150"},{"id":"sha256:6a23fa8fd2b78ab58e42ba273808edc936a9c53d8ac4a919f6337be094843a51","repoDigests":["docker.io/kindest/kindnetd@sha256:e59a687ca28ae274a2fc92f1e2f5f1c739f353178a43a23aafc71adb802ed166"],"repoTags":["docker.io/kindest/kindnetd:v20240813-c6f155d6"],"size":"33309097"},{"id":"sha256:1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e"],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"1935750"},{"id":"sha256:ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6","repoDigests":["gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944"],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"8034419"},{"id":"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93","repoDigests":["registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1"],"repoTags":["registry.k8s.io/coredns/coredns:v1.11.1"],"size":"16482581"},{"id":"sha256:72565bf5bbedfb62e9d21afa2b1221b2c7a5e05b746dae33430bc550d3f87beb","repoDigests":["registry.k8s.io/echoserver-arm@sha256:b33d4cdf6ed097f4e9b77b135d83a596ab73c6268b0342648818eb85f5edfdb5"],"repoTags":["registry.k8s.io/echoserver-arm:1.8"],"size":"45324675"},{"id":"sha256:8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.1"],"size":"262191"},{"id":"sha256:ce2d2cda2d858fdaea84129deb86d18e5dbf1c548f230b79fdca74cc91729d17","repoDigests":[],"repoTags":["docker.io/kicbase/echo-server:functional-557654"],"size":"2173567"},{"id":"sha256:d5e283bc63d431d0446af8b48a1618696def3b777347a97b8b3553d2c989c806","repoDigests":["docker.io/kindest/kindnetd@sha256:4067b91686869e19bac601aec305ba55d2e74cdcb91347869bfb4fd3a26cd3c3"],"repoTags":["docker.io/kindest/kindnetd:v20240730-75a5af0c"],"size":"33305789"},{"id":"sha256:3d18732f8686cc3c878055d99a05fa80289502fa496b36b6a0fe0f77206a7300","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.3"],"size":"249461"}]
functional_test.go:269: (dbg) Stderr: out/minikube-linux-arm64 -p functional-557654 image ls --format json --alsologtostderr:
I0819 18:03:37.895643  336854 out.go:345] Setting OutFile to fd 1 ...
I0819 18:03:37.895792  336854 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0819 18:03:37.895798  336854 out.go:358] Setting ErrFile to fd 2...
I0819 18:03:37.895803  336854 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0819 18:03:37.896064  336854 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19478-294620/.minikube/bin
I0819 18:03:37.896776  336854 config.go:182] Loaded profile config "functional-557654": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.0
I0819 18:03:37.896904  336854 config.go:182] Loaded profile config "functional-557654": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.0
I0819 18:03:37.897398  336854 cli_runner.go:164] Run: docker container inspect functional-557654 --format={{.State.Status}}
I0819 18:03:37.917075  336854 ssh_runner.go:195] Run: systemctl --version
I0819 18:03:37.917129  336854 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-557654
I0819 18:03:37.938650  336854 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33156 SSHKeyPath:/home/jenkins/minikube-integration/19478-294620/.minikube/machines/functional-557654/id_rsa Username:docker}
I0819 18:03:38.040018  336854 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.28s)

TestFunctional/parallel/ImageCommands/ImageListYaml (0.25s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:261: (dbg) Run:  out/minikube-linux-arm64 -p functional-557654 image ls --format yaml --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-arm64 -p functional-557654 image ls --format yaml --alsologtostderr:
- id: sha256:fcb0683e6bdbd083710cf2d6fd7eb699c77fe4994c38a5c82d059e2e3cb4c2fd
repoDigests:
- registry.k8s.io/kube-controller-manager@sha256:f6f3c33dda209e8434b83dacf5244c03b59b0018d93325ff21296a142b68497d
repoTags:
- registry.k8s.io/kube-controller-manager:v1.31.0
size: "23947353"
- id: sha256:3d18732f8686cc3c878055d99a05fa80289502fa496b36b6a0fe0f77206a7300
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.3
size: "249461"
- id: sha256:ce2d2cda2d858fdaea84129deb86d18e5dbf1c548f230b79fdca74cc91729d17
repoDigests: []
repoTags:
- docker.io/kicbase/echo-server:functional-557654
size: "2173567"
- id: sha256:a9dfdba8b719078c5705fdecd6f8315765cc79e473111aa9451551ddc340b2bc
repoDigests:
- docker.io/library/nginx@sha256:447a8665cc1dab95b1ca778e162215839ccbb9189104c79d7ec3a81e14577add
repoTags:
- docker.io/library/nginx:latest
size: "67690150"
- id: sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93
repoDigests:
- registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1
repoTags:
- registry.k8s.io/coredns/coredns:v1.11.1
size: "16482581"
- id: sha256:27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da
repoDigests:
- registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a
repoTags:
- registry.k8s.io/etcd:3.5.15-0
size: "66535646"
- id: sha256:fbbbd428abb4dae52ab3018797d00d5840a739f0cc5697b662791831a60b0adb
repoDigests:
- registry.k8s.io/kube-scheduler@sha256:96ddae9c9b2e79342e0551e2d2ec422c0c02629a74d928924aaa069706619808
repoTags:
- registry.k8s.io/kube-scheduler:v1.31.0
size: "18505843"
- id: sha256:8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a
repoDigests: []
repoTags:
- registry.k8s.io/pause:latest
size: "71300"
- id: sha256:b659eff742f44a23c75068b46d24e9697d62dfa36708c97d05ae49ea5744dd21
repoDigests: []
repoTags:
- docker.io/library/minikube-local-cache-test:functional-557654
size: "989"
- id: sha256:a422e0e982356f6c1cf0e5bb7b733363caae3992a07c99951fbcc73e58ed656a
repoDigests:
- docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c
repoTags: []
size: "18306114"
- id: sha256:d5e283bc63d431d0446af8b48a1618696def3b777347a97b8b3553d2c989c806
repoDigests:
- docker.io/kindest/kindnetd@sha256:4067b91686869e19bac601aec305ba55d2e74cdcb91347869bfb4fd3a26cd3c3
repoTags:
- docker.io/kindest/kindnetd:v20240730-75a5af0c
size: "33305789"
- id: sha256:1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c
repoDigests:
- gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "1935750"
- id: sha256:ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6
repoDigests:
- gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "8034419"
- id: sha256:72565bf5bbedfb62e9d21afa2b1221b2c7a5e05b746dae33430bc550d3f87beb
repoDigests:
- registry.k8s.io/echoserver-arm@sha256:b33d4cdf6ed097f4e9b77b135d83a596ab73c6268b0342648818eb85f5edfdb5
repoTags:
- registry.k8s.io/echoserver-arm:1.8
size: "45324675"
- id: sha256:8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.1
size: "262191"
- id: sha256:6a23fa8fd2b78ab58e42ba273808edc936a9c53d8ac4a919f6337be094843a51
repoDigests:
- docker.io/kindest/kindnetd@sha256:e59a687ca28ae274a2fc92f1e2f5f1c739f353178a43a23aafc71adb802ed166
repoTags:
- docker.io/kindest/kindnetd:v20240813-c6f155d6
size: "33309097"
- id: sha256:70594c812316a9bc20dd5d679982c6322dc7cf0128687ae9f849d0207783e753
repoDigests:
- docker.io/library/nginx@sha256:c04c18adc2a407740a397c8407c011fc6c90026a9b65cceddef7ae5484360158
repoTags:
- docker.io/library/nginx:alpine
size: "19627164"
- id: sha256:cd0f0ae0ec9e0cdc092079156c122bf034ba3f24d31c1b1dd1b52a42ecf9b388
repoDigests:
- registry.k8s.io/kube-apiserver@sha256:470179274deb9dc3a81df55cfc24823ce153147d4ebf2ed649a4f271f51eaddf
repoTags:
- registry.k8s.io/kube-apiserver:v1.31.0
size: "25688321"
- id: sha256:71d55d66fd4eec8986225089a135fadd96bc6624d987096808772ce1e1924d89
repoDigests:
- registry.k8s.io/kube-proxy@sha256:c727efb1c6f15a68060bf7f207f5c7a765355b7e3340c513e582ec819c5cd2fe
repoTags:
- registry.k8s.io/kube-proxy:v1.31.0
size: "26752334"
- id: sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8
repoDigests:
- registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a
repoTags:
- registry.k8s.io/pause:3.10
size: "267933"
- id: sha256:20b332c9a70d8516d849d1ac23eff5800cbb2f263d379f0ec11ee908db6b25a8
repoDigests:
- docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93
repoTags: []
size: "74084559"

functional_test.go:269: (dbg) Stderr: out/minikube-linux-arm64 -p functional-557654 image ls --format yaml --alsologtostderr:
I0819 18:03:36.087904  336695 out.go:345] Setting OutFile to fd 1 ...
I0819 18:03:36.088263  336695 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0819 18:03:36.088276  336695 out.go:358] Setting ErrFile to fd 2...
I0819 18:03:36.088282  336695 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0819 18:03:36.088550  336695 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19478-294620/.minikube/bin
I0819 18:03:36.089264  336695 config.go:182] Loaded profile config "functional-557654": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.0
I0819 18:03:36.089393  336695 config.go:182] Loaded profile config "functional-557654": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.0
I0819 18:03:36.089950  336695 cli_runner.go:164] Run: docker container inspect functional-557654 --format={{.State.Status}}
I0819 18:03:36.113701  336695 ssh_runner.go:195] Run: systemctl --version
I0819 18:03:36.113760  336695 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-557654
I0819 18:03:36.131268  336695 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33156 SSHKeyPath:/home/jenkins/minikube-integration/19478-294620/.minikube/machines/functional-557654/id_rsa Username:docker}
I0819 18:03:36.222193  336695 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.25s)

TestFunctional/parallel/ImageCommands/ImageBuild (3.26s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:308: (dbg) Run:  out/minikube-linux-arm64 -p functional-557654 ssh pgrep buildkitd
E0819 18:03:36.487250  300020 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19478-294620/.minikube/profiles/addons-726932/client.crt: no such file or directory" logger="UnhandledError"
E0819 18:03:36.494066  300020 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19478-294620/.minikube/profiles/addons-726932/client.crt: no such file or directory" logger="UnhandledError"
E0819 18:03:36.505574  300020 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19478-294620/.minikube/profiles/addons-726932/client.crt: no such file or directory" logger="UnhandledError"
E0819 18:03:36.526978  300020 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19478-294620/.minikube/profiles/addons-726932/client.crt: no such file or directory" logger="UnhandledError"
functional_test.go:308: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-557654 ssh pgrep buildkitd: exit status 1 (256.926975ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:315: (dbg) Run:  out/minikube-linux-arm64 -p functional-557654 image build -t localhost/my-image:functional-557654 testdata/build --alsologtostderr
E0819 18:03:36.568851  300020 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19478-294620/.minikube/profiles/addons-726932/client.crt: no such file or directory" logger="UnhandledError"
E0819 18:03:36.651057  300020 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19478-294620/.minikube/profiles/addons-726932/client.crt: no such file or directory" logger="UnhandledError"
E0819 18:03:36.813169  300020 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19478-294620/.minikube/profiles/addons-726932/client.crt: no such file or directory" logger="UnhandledError"
E0819 18:03:37.134888  300020 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19478-294620/.minikube/profiles/addons-726932/client.crt: no such file or directory" logger="UnhandledError"
2024/08/19 18:03:37 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
functional_test.go:315: (dbg) Done: out/minikube-linux-arm64 -p functional-557654 image build -t localhost/my-image:functional-557654 testdata/build --alsologtostderr: (2.784672559s)
functional_test.go:323: (dbg) Stderr: out/minikube-linux-arm64 -p functional-557654 image build -t localhost/my-image:functional-557654 testdata/build --alsologtostderr:
I0819 18:03:36.588085  336785 out.go:345] Setting OutFile to fd 1 ...
I0819 18:03:36.588655  336785 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0819 18:03:36.588670  336785 out.go:358] Setting ErrFile to fd 2...
I0819 18:03:36.588676  336785 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0819 18:03:36.588921  336785 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19478-294620/.minikube/bin
I0819 18:03:36.589581  336785 config.go:182] Loaded profile config "functional-557654": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.0
I0819 18:03:36.590755  336785 config.go:182] Loaded profile config "functional-557654": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.0
I0819 18:03:36.591415  336785 cli_runner.go:164] Run: docker container inspect functional-557654 --format={{.State.Status}}
I0819 18:03:36.610946  336785 ssh_runner.go:195] Run: systemctl --version
I0819 18:03:36.611006  336785 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-557654
I0819 18:03:36.627769  336785 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33156 SSHKeyPath:/home/jenkins/minikube-integration/19478-294620/.minikube/machines/functional-557654/id_rsa Username:docker}
I0819 18:03:36.718340  336785 build_images.go:161] Building image from path: /tmp/build.631563492.tar
I0819 18:03:36.718440  336785 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I0819 18:03:36.727868  336785 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.631563492.tar
I0819 18:03:36.731193  336785 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.631563492.tar: stat -c "%s %y" /var/lib/minikube/build/build.631563492.tar: Process exited with status 1
stdout:

stderr:
stat: cannot statx '/var/lib/minikube/build/build.631563492.tar': No such file or directory
I0819 18:03:36.731222  336785 ssh_runner.go:362] scp /tmp/build.631563492.tar --> /var/lib/minikube/build/build.631563492.tar (3072 bytes)
I0819 18:03:36.759020  336785 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.631563492
I0819 18:03:36.767933  336785 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.631563492 -xf /var/lib/minikube/build/build.631563492.tar
I0819 18:03:36.776873  336785 containerd.go:394] Building image: /var/lib/minikube/build/build.631563492
I0819 18:03:36.776945  336785 ssh_runner.go:195] Run: sudo buildctl build --frontend dockerfile.v0 --local context=/var/lib/minikube/build/build.631563492 --local dockerfile=/var/lib/minikube/build/build.631563492 --output type=image,name=localhost/my-image:functional-557654
#1 [internal] load build definition from Dockerfile
#1 transferring dockerfile: 97B done
#1 DONE 0.0s

#2 [internal] load metadata for gcr.io/k8s-minikube/busybox:latest
#2 DONE 0.8s

#3 [internal] load .dockerignore
#3 transferring context: 2B done
#3 DONE 0.0s

#4 [internal] load build context
#4 transferring context: 62B done
#4 DONE 0.0s

#5 [1/3] FROM gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b
#5 resolve gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b 0.0s done
#5 sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34 0B / 828.50kB 0.2s
#5 sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34 828.50kB / 828.50kB 0.2s done
#5 extracting sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34
#5 extracting sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34 0.1s done
#5 DONE 0.4s

#6 [2/3] RUN true
#6 DONE 0.8s

#7 [3/3] ADD content.txt /
#7 DONE 0.0s

#8 exporting to image
#8 exporting layers 0.1s done
#8 exporting manifest sha256:8ef8ae7b2a41ea799e0176621aa80af534f73bc8cd8d4b637dd3640f396456d7 0.0s done
#8 exporting config sha256:78e722ed529f98834ae280af6c424cbf0151350ebc2e64fd3dfa678dbe3ef4c5 0.0s done
#8 naming to localhost/my-image:functional-557654 done
#8 DONE 0.1s
I0819 18:03:39.294980  336785 ssh_runner.go:235] Completed: sudo buildctl build --frontend dockerfile.v0 --local context=/var/lib/minikube/build/build.631563492 --local dockerfile=/var/lib/minikube/build/build.631563492 --output type=image,name=localhost/my-image:functional-557654: (2.518003677s)
I0819 18:03:39.295047  336785 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.631563492
I0819 18:03:39.304925  336785 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.631563492.tar
I0819 18:03:39.314653  336785 build_images.go:217] Built localhost/my-image:functional-557654 from /tmp/build.631563492.tar
I0819 18:03:39.314685  336785 build_images.go:133] succeeded building to: functional-557654
I0819 18:03:39.314691  336785 build_images.go:134] failed building to: 
functional_test.go:451: (dbg) Run:  out/minikube-linux-arm64 -p functional-557654 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (3.26s)
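
For reference, the buildkit stages logged above (#5 FROM gcr.io/k8s-minikube/busybox:latest, #6 RUN true, #7 ADD content.txt /) fully determine the image that gets built. A minimal sketch of reproducing the build by hand, using a Dockerfile reconstructed from those stages and a placeholder context file; the actual testdata/build contents may differ:

# Sketch only: Dockerfile inferred from stages #5-#7 above, not the real testdata/build.
mkdir -p /tmp/build-sketch && cd /tmp/build-sketch
cat > Dockerfile <<'EOF'
FROM gcr.io/k8s-minikube/busybox:latest
RUN true
ADD content.txt /
EOF
echo placeholder > content.txt
out/minikube-linux-arm64 -p functional-557654 image build -t localhost/my-image:functional-557654 . --alsologtostderr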

TestFunctional/parallel/ImageCommands/Setup (0.73s)

=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:342: (dbg) Run:  docker pull kicbase/echo-server:1.0
functional_test.go:347: (dbg) Run:  docker tag kicbase/echo-server:1.0 kicbase/echo-server:functional-557654
--- PASS: TestFunctional/parallel/ImageCommands/Setup (0.73s)

TestFunctional/parallel/UpdateContextCmd/no_changes (0.13s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2119: (dbg) Run:  out/minikube-linux-arm64 -p functional-557654 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.13s)

TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.14s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2119: (dbg) Run:  out/minikube-linux-arm64 -p functional-557654 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.14s)

TestFunctional/parallel/UpdateContextCmd/no_clusters (0.14s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2119: (dbg) Run:  out/minikube-linux-arm64 -p functional-557654 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.14s)

TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.47s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:355: (dbg) Run:  out/minikube-linux-arm64 -p functional-557654 image load --daemon kicbase/echo-server:functional-557654 --alsologtostderr
functional_test.go:355: (dbg) Done: out/minikube-linux-arm64 -p functional-557654 image load --daemon kicbase/echo-server:functional-557654 --alsologtostderr: (1.21556424s)
functional_test.go:451: (dbg) Run:  out/minikube-linux-arm64 -p functional-557654 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.47s)

TestFunctional/parallel/ImageCommands/ImageReloadDaemon (1.43s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:365: (dbg) Run:  out/minikube-linux-arm64 -p functional-557654 image load --daemon kicbase/echo-server:functional-557654 --alsologtostderr
functional_test.go:365: (dbg) Done: out/minikube-linux-arm64 -p functional-557654 image load --daemon kicbase/echo-server:functional-557654 --alsologtostderr: (1.108875789s)
functional_test.go:451: (dbg) Run:  out/minikube-linux-arm64 -p functional-557654 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (1.43s)

TestFunctional/parallel/ProfileCmd/profile_not_create (0.49s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1270: (dbg) Run:  out/minikube-linux-arm64 profile lis
functional_test.go:1275: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.49s)

TestFunctional/parallel/ProfileCmd/profile_list (0.46s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1310: (dbg) Run:  out/minikube-linux-arm64 profile list
functional_test.go:1315: Took "372.749745ms" to run "out/minikube-linux-arm64 profile list"
functional_test.go:1324: (dbg) Run:  out/minikube-linux-arm64 profile list -l
functional_test.go:1329: Took "88.100513ms" to run "out/minikube-linux-arm64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.46s)

TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.69s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:235: (dbg) Run:  docker pull kicbase/echo-server:latest
functional_test.go:240: (dbg) Run:  docker tag kicbase/echo-server:latest kicbase/echo-server:functional-557654
functional_test.go:245: (dbg) Run:  out/minikube-linux-arm64 -p functional-557654 image load --daemon kicbase/echo-server:functional-557654 --alsologtostderr
functional_test.go:245: (dbg) Done: out/minikube-linux-arm64 -p functional-557654 image load --daemon kicbase/echo-server:functional-557654 --alsologtostderr: (1.080573816s)
functional_test.go:451: (dbg) Run:  out/minikube-linux-arm64 -p functional-557654 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.69s)

TestFunctional/parallel/ProfileCmd/profile_json_output (0.47s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1361: (dbg) Run:  out/minikube-linux-arm64 profile list -o json
functional_test.go:1366: Took "395.987227ms" to run "out/minikube-linux-arm64 profile list -o json"
functional_test.go:1374: (dbg) Run:  out/minikube-linux-arm64 profile list -o json --light
functional_test.go:1379: Took "76.052804ms" to run "out/minikube-linux-arm64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.47s)

TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.6s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-arm64 -p functional-557654 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-arm64 -p functional-557654 tunnel --alsologtostderr]
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-arm64 -p functional-557654 tunnel --alsologtostderr] ...
helpers_test.go:508: unable to kill pid 332420: os: process already finished
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-arm64 -p functional-557654 tunnel --alsologtostderr] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.60s)

TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.52s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:380: (dbg) Run:  out/minikube-linux-arm64 -p functional-557654 image save kicbase/echo-server:functional-557654 /home/jenkins/workspace/Docker_Linux_containerd_arm64/echo-server-save.tar --alsologtostderr
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.52s)

TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-linux-arm64 -p functional-557654 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.00s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (9.44s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:212: (dbg) Run:  kubectl --context functional-557654 apply -f testdata/testsvc.yaml
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: waiting 4m0s for pods matching "run=nginx-svc" in namespace "default" ...
helpers_test.go:344: "nginx-svc" [180e1643-1656-42d9-a250-26ad871130b1] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx-svc" [180e1643-1656-42d9-a250-26ad871130b1] Running
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: run=nginx-svc healthy within 9.004290039s
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (9.44s)

TestFunctional/parallel/ImageCommands/ImageRemove (0.69s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:392: (dbg) Run:  out/minikube-linux-arm64 -p functional-557654 image rm kicbase/echo-server:functional-557654 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-linux-arm64 -p functional-557654 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.69s)

TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.73s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:409: (dbg) Run:  out/minikube-linux-arm64 -p functional-557654 image load /home/jenkins/workspace/Docker_Linux_containerd_arm64/echo-server-save.tar --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-linux-arm64 -p functional-557654 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.73s)

TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.5s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:419: (dbg) Run:  docker rmi kicbase/echo-server:functional-557654
functional_test.go:424: (dbg) Run:  out/minikube-linux-arm64 -p functional-557654 image save --daemon kicbase/echo-server:functional-557654 --alsologtostderr
functional_test.go:432: (dbg) Run:  docker image inspect kicbase/echo-server:functional-557654
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.50s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.09s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:234: (dbg) Run:  kubectl --context functional-557654 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.09s)

TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:299: tunnel at http://10.105.198.197 is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.00s)
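
A minimal sketch of checking the same endpoint by hand, assuming the tunnel started in StartTunnel above is still running; curl is an assumption here, not something the test itself invokes:

# 10.105.198.197 is the LoadBalancer ingress IP reported by the WaitService/IngressIP step.
curl -sSf http://10.105.198.197 >/dev/null && echo "tunnel reachable"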

TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-linux-arm64 -p functional-557654 tunnel --alsologtostderr] ...
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

TestFunctional/parallel/ServiceCmd/DeployApp (6.22s)

=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1437: (dbg) Run:  kubectl --context functional-557654 create deployment hello-node --image=registry.k8s.io/echoserver-arm:1.8
functional_test.go:1445: (dbg) Run:  kubectl --context functional-557654 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1450: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:344: "hello-node-64b4f8f9ff-5dhgc" [bf69023e-ab52-48ce-b2e9-53934bf44061] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver-arm]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver-arm])
helpers_test.go:344: "hello-node-64b4f8f9ff-5dhgc" [bf69023e-ab52-48ce-b2e9-53934bf44061] Running
functional_test.go:1450: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 6.006700734s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (6.22s)

TestFunctional/parallel/MountCmd/any-port (7.45s)

=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-557654 /tmp/TestFunctionalparallelMountCmdany-port3214538643/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1724090601610406544" to /tmp/TestFunctionalparallelMountCmdany-port3214538643/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1724090601610406544" to /tmp/TestFunctionalparallelMountCmdany-port3214538643/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1724090601610406544" to /tmp/TestFunctionalparallelMountCmdany-port3214538643/001/test-1724090601610406544
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-arm64 -p functional-557654 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-557654 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (354.50359ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-arm64 -p functional-557654 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-arm64 -p functional-557654 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Aug 19 18:03 created-by-test
-rw-r--r-- 1 docker docker 24 Aug 19 18:03 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Aug 19 18:03 test-1724090601610406544
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-arm64 -p functional-557654 ssh cat /mount-9p/test-1724090601610406544
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-557654 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:344: "busybox-mount" [f9e62aa2-8562-4bd2-9847-9a59918a89be] Pending
helpers_test.go:344: "busybox-mount" [f9e62aa2-8562-4bd2-9847-9a59918a89be] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:344: "busybox-mount" [f9e62aa2-8562-4bd2-9847-9a59918a89be] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "busybox-mount" [f9e62aa2-8562-4bd2-9847-9a59918a89be] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 4.003655492s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-557654 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p functional-557654 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p functional-557654 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-arm64 -p functional-557654 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-557654 /tmp/TestFunctionalparallelMountCmdany-port3214538643/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (7.45s)
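
The 9p-mount checks this test performs can be rerun by hand; a sketch assuming a "minikube mount <dir>:/mount-9p" daemon is still serving the functional-557654 profile (the commands are the ones logged above):

# Confirm the 9p mount is visible in the node, list its contents, then force-unmount it.
out/minikube-linux-arm64 -p functional-557654 ssh "findmnt -T /mount-9p | grep 9p"
out/minikube-linux-arm64 -p functional-557654 ssh -- ls -la /mount-9p
out/minikube-linux-arm64 -p functional-557654 ssh "sudo umount -f /mount-9p"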

TestFunctional/parallel/ServiceCmd/List (0.6s)

=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1459: (dbg) Run:  out/minikube-linux-arm64 -p functional-557654 service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.60s)

TestFunctional/parallel/ServiceCmd/JSONOutput (0.6s)

=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1489: (dbg) Run:  out/minikube-linux-arm64 -p functional-557654 service list -o json
functional_test.go:1494: Took "598.339985ms" to run "out/minikube-linux-arm64 -p functional-557654 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.60s)

TestFunctional/parallel/ServiceCmd/HTTPS (0.44s)

=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1509: (dbg) Run:  out/minikube-linux-arm64 -p functional-557654 service --namespace=default --https --url hello-node
functional_test.go:1522: found endpoint: https://192.168.49.2:31838
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.44s)

TestFunctional/parallel/ServiceCmd/Format (0.39s)

=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1540: (dbg) Run:  out/minikube-linux-arm64 -p functional-557654 service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.39s)

TestFunctional/parallel/ServiceCmd/URL (0.43s)

=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1559: (dbg) Run:  out/minikube-linux-arm64 -p functional-557654 service hello-node --url
functional_test.go:1565: found endpoint for hello-node: http://192.168.49.2:31838
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.43s)

TestFunctional/parallel/MountCmd/specific-port (2.35s)

=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-557654 /tmp/TestFunctionalparallelMountCmdspecific-port4267712342/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-arm64 -p functional-557654 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-557654 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (388.022594ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-arm64 -p functional-557654 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-arm64 -p functional-557654 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-557654 /tmp/TestFunctionalparallelMountCmdspecific-port4267712342/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-arm64 -p functional-557654 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-557654 ssh "sudo umount -f /mount-9p": exit status 1 (354.77348ms)

-- stdout --
	umount: /mount-9p: not mounted.

-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-arm64 -p functional-557654 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-557654 /tmp/TestFunctionalparallelMountCmdspecific-port4267712342/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (2.35s)

TestFunctional/parallel/MountCmd/VerifyCleanup (2.7s)

=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-557654 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2937337981/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-557654 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2937337981/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-557654 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2937337981/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-557654 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-557654 ssh "findmnt -T" /mount1: exit status 1 (861.271099ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-557654 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-557654 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-557654 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-arm64 mount -p functional-557654 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-557654 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2937337981/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-557654 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2937337981/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-557654 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2937337981/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (2.70s)

TestFunctional/delete_echo-server_images (0.04s)

=== RUN   TestFunctional/delete_echo-server_images
functional_test.go:190: (dbg) Run:  docker rmi -f kicbase/echo-server:1.0
functional_test.go:190: (dbg) Run:  docker rmi -f kicbase/echo-server:functional-557654
--- PASS: TestFunctional/delete_echo-server_images (0.04s)

TestFunctional/delete_my-image_image (0.02s)

=== RUN   TestFunctional/delete_my-image_image
functional_test.go:198: (dbg) Run:  docker rmi -f localhost/my-image:functional-557654
--- PASS: TestFunctional/delete_my-image_image (0.02s)

TestFunctional/delete_minikube_cached_images (0.01s)

=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:206: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-557654
--- PASS: TestFunctional/delete_minikube_cached_images (0.01s)

TestMultiControlPlane/serial/StartCluster (112.84s)

=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-linux-arm64 start -p ha-049090 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=docker  --container-runtime=containerd
E0819 18:03:46.741206  300020 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19478-294620/.minikube/profiles/addons-726932/client.crt: no such file or directory" logger="UnhandledError"
E0819 18:03:56.982856  300020 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19478-294620/.minikube/profiles/addons-726932/client.crt: no such file or directory" logger="UnhandledError"
E0819 18:04:17.464580  300020 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19478-294620/.minikube/profiles/addons-726932/client.crt: no such file or directory" logger="UnhandledError"
E0819 18:04:58.426496  300020 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19478-294620/.minikube/profiles/addons-726932/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:101: (dbg) Done: out/minikube-linux-arm64 start -p ha-049090 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=docker  --container-runtime=containerd: (1m51.987133361s)
ha_test.go:107: (dbg) Run:  out/minikube-linux-arm64 -p ha-049090 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/StartCluster (112.84s)

TestMultiControlPlane/serial/DeployApp (31.51s)

=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-049090 -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:133: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-049090 -- rollout status deployment/busybox
ha_test.go:133: (dbg) Done: out/minikube-linux-arm64 kubectl -p ha-049090 -- rollout status deployment/busybox: (28.473817744s)
ha_test.go:140: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-049090 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:163: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-049090 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:171: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-049090 -- exec busybox-7dff88458-9grk2 -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-049090 -- exec busybox-7dff88458-gxvjf -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-049090 -- exec busybox-7dff88458-p458s -- nslookup kubernetes.io
ha_test.go:181: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-049090 -- exec busybox-7dff88458-9grk2 -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-049090 -- exec busybox-7dff88458-gxvjf -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-049090 -- exec busybox-7dff88458-p458s -- nslookup kubernetes.default
ha_test.go:189: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-049090 -- exec busybox-7dff88458-9grk2 -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-049090 -- exec busybox-7dff88458-gxvjf -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-049090 -- exec busybox-7dff88458-p458s -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiControlPlane/serial/DeployApp (31.51s)
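[Note] A minimal sketch of the DNS check this test performs, using commands taken from the log above; the busybox pod names are generated per run, and the manifest path is relative to the test's working directory.

    # deploy the test workload and wait for all replicas to roll out
    out/minikube-linux-arm64 kubectl -p ha-049090 -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
    out/minikube-linux-arm64 kubectl -p ha-049090 -- rollout status deployment/busybox
    # resolve an external and an in-cluster name from inside one replica
    out/minikube-linux-arm64 kubectl -p ha-049090 -- exec busybox-7dff88458-9grk2 -- nslookup kubernetes.io
    out/minikube-linux-arm64 kubectl -p ha-049090 -- exec busybox-7dff88458-9grk2 -- nslookup kubernetes.default.svc.cluster.local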

                                                
                                    
TestMultiControlPlane/serial/PingHostFromPods (1.69s)

=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-049090 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:207: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-049090 -- exec busybox-7dff88458-9grk2 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-049090 -- exec busybox-7dff88458-9grk2 -- sh -c "ping -c 1 192.168.49.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-049090 -- exec busybox-7dff88458-gxvjf -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-049090 -- exec busybox-7dff88458-gxvjf -- sh -c "ping -c 1 192.168.49.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-049090 -- exec busybox-7dff88458-p458s -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-049090 -- exec busybox-7dff88458-p458s -- sh -c "ping -c 1 192.168.49.1"
--- PASS: TestMultiControlPlane/serial/PingHostFromPods (1.69s)
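[Note] The pipeline above is worth unpacking: the test slices the fifth line of busybox's nslookup output to extract the resolved host address, then pings it. A sketch with the commands from the log (192.168.49.1 is the docker network gateway resolved in this particular run):

    # resolve host.minikube.internal from inside a pod and extract the address
    out/minikube-linux-arm64 kubectl -p ha-049090 -- exec busybox-7dff88458-9grk2 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
    # the resolved address should answer a single ping
    out/minikube-linux-arm64 kubectl -p ha-049090 -- exec busybox-7dff88458-9grk2 -- sh -c "ping -c 1 192.168.49.1"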

                                                
                                    
TestMultiControlPlane/serial/AddWorkerNode (22.46s)

=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-linux-arm64 node add -p ha-049090 -v=7 --alsologtostderr
E0819 18:06:20.350853  300020 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19478-294620/.minikube/profiles/addons-726932/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:228: (dbg) Done: out/minikube-linux-arm64 node add -p ha-049090 -v=7 --alsologtostderr: (21.404335792s)
ha_test.go:234: (dbg) Run:  out/minikube-linux-arm64 -p ha-049090 status -v=7 --alsologtostderr
ha_test.go:234: (dbg) Done: out/minikube-linux-arm64 -p ha-049090 status -v=7 --alsologtostderr: (1.056340628s)
--- PASS: TestMultiControlPlane/serial/AddWorkerNode (22.46s)
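[Note] Worker-node addition sketch, straight from the commands logged above:

    # add a worker node to the running ha-049090 profile, then re-check overall status
    out/minikube-linux-arm64 node add -p ha-049090 -v=7 --alsologtostderr
    out/minikube-linux-arm64 -p ha-049090 status -v=7 --alsologtostderr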

                                                
                                    
TestMultiControlPlane/serial/NodeLabels (0.12s)

=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-049090 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiControlPlane/serial/NodeLabels (0.12s)
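[Note] The single command above dumps every node's label map via a JSONPath template; it is handy on its own for spot-checking labels:

    # print the labels of all nodes in the ha-049090 context
    kubectl --context ha-049090 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"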

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterClusterStart (0.75s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterClusterStart (0.75s)

                                                
                                    
TestMultiControlPlane/serial/CopyFile (19.11s)

=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:326: (dbg) Run:  out/minikube-linux-arm64 -p ha-049090 status --output json -v=7 --alsologtostderr
ha_test.go:326: (dbg) Done: out/minikube-linux-arm64 -p ha-049090 status --output json -v=7 --alsologtostderr: (1.007822845s)
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-049090 cp testdata/cp-test.txt ha-049090:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-049090 ssh -n ha-049090 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-049090 cp ha-049090:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile2256227614/001/cp-test_ha-049090.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-049090 ssh -n ha-049090 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-049090 cp ha-049090:/home/docker/cp-test.txt ha-049090-m02:/home/docker/cp-test_ha-049090_ha-049090-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-049090 ssh -n ha-049090 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-049090 ssh -n ha-049090-m02 "sudo cat /home/docker/cp-test_ha-049090_ha-049090-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-049090 cp ha-049090:/home/docker/cp-test.txt ha-049090-m03:/home/docker/cp-test_ha-049090_ha-049090-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-049090 ssh -n ha-049090 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-049090 ssh -n ha-049090-m03 "sudo cat /home/docker/cp-test_ha-049090_ha-049090-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-049090 cp ha-049090:/home/docker/cp-test.txt ha-049090-m04:/home/docker/cp-test_ha-049090_ha-049090-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-049090 ssh -n ha-049090 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-049090 ssh -n ha-049090-m04 "sudo cat /home/docker/cp-test_ha-049090_ha-049090-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-049090 cp testdata/cp-test.txt ha-049090-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-049090 ssh -n ha-049090-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-049090 cp ha-049090-m02:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile2256227614/001/cp-test_ha-049090-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-049090 ssh -n ha-049090-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-049090 cp ha-049090-m02:/home/docker/cp-test.txt ha-049090:/home/docker/cp-test_ha-049090-m02_ha-049090.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-049090 ssh -n ha-049090-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-049090 ssh -n ha-049090 "sudo cat /home/docker/cp-test_ha-049090-m02_ha-049090.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-049090 cp ha-049090-m02:/home/docker/cp-test.txt ha-049090-m03:/home/docker/cp-test_ha-049090-m02_ha-049090-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-049090 ssh -n ha-049090-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-049090 ssh -n ha-049090-m03 "sudo cat /home/docker/cp-test_ha-049090-m02_ha-049090-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-049090 cp ha-049090-m02:/home/docker/cp-test.txt ha-049090-m04:/home/docker/cp-test_ha-049090-m02_ha-049090-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-049090 ssh -n ha-049090-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-049090 ssh -n ha-049090-m04 "sudo cat /home/docker/cp-test_ha-049090-m02_ha-049090-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-049090 cp testdata/cp-test.txt ha-049090-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-049090 ssh -n ha-049090-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-049090 cp ha-049090-m03:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile2256227614/001/cp-test_ha-049090-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-049090 ssh -n ha-049090-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-049090 cp ha-049090-m03:/home/docker/cp-test.txt ha-049090:/home/docker/cp-test_ha-049090-m03_ha-049090.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-049090 ssh -n ha-049090-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-049090 ssh -n ha-049090 "sudo cat /home/docker/cp-test_ha-049090-m03_ha-049090.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-049090 cp ha-049090-m03:/home/docker/cp-test.txt ha-049090-m02:/home/docker/cp-test_ha-049090-m03_ha-049090-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-049090 ssh -n ha-049090-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-049090 ssh -n ha-049090-m02 "sudo cat /home/docker/cp-test_ha-049090-m03_ha-049090-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-049090 cp ha-049090-m03:/home/docker/cp-test.txt ha-049090-m04:/home/docker/cp-test_ha-049090-m03_ha-049090-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-049090 ssh -n ha-049090-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-049090 ssh -n ha-049090-m04 "sudo cat /home/docker/cp-test_ha-049090-m03_ha-049090-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-049090 cp testdata/cp-test.txt ha-049090-m04:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-049090 ssh -n ha-049090-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-049090 cp ha-049090-m04:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile2256227614/001/cp-test_ha-049090-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-049090 ssh -n ha-049090-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-049090 cp ha-049090-m04:/home/docker/cp-test.txt ha-049090:/home/docker/cp-test_ha-049090-m04_ha-049090.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-049090 ssh -n ha-049090-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-049090 ssh -n ha-049090 "sudo cat /home/docker/cp-test_ha-049090-m04_ha-049090.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-049090 cp ha-049090-m04:/home/docker/cp-test.txt ha-049090-m02:/home/docker/cp-test_ha-049090-m04_ha-049090-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-049090 ssh -n ha-049090-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-049090 ssh -n ha-049090-m02 "sudo cat /home/docker/cp-test_ha-049090-m04_ha-049090-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-049090 cp ha-049090-m04:/home/docker/cp-test.txt ha-049090-m03:/home/docker/cp-test_ha-049090-m04_ha-049090-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-049090 ssh -n ha-049090-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-049090 ssh -n ha-049090-m03 "sudo cat /home/docker/cp-test_ha-049090-m04_ha-049090-m03.txt"
--- PASS: TestMultiControlPlane/serial/CopyFile (19.11s)
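[Note] The copy matrix above boils down to one pattern, shown here with commands from the log: "minikube cp" moves a file host-to-node or node-to-node, and "minikube ssh" reads it back to confirm the contents arrived.

    # host -> node, then verify over ssh
    out/minikube-linux-arm64 -p ha-049090 cp testdata/cp-test.txt ha-049090-m02:/home/docker/cp-test.txt
    out/minikube-linux-arm64 -p ha-049090 ssh -n ha-049090-m02 "sudo cat /home/docker/cp-test.txt"
    # node -> node copies name both endpoints explicitly
    out/minikube-linux-arm64 -p ha-049090 cp ha-049090-m02:/home/docker/cp-test.txt ha-049090:/home/docker/cp-test_ha-049090-m02_ha-049090.txt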

                                                
                                    
TestMultiControlPlane/serial/StopSecondaryNode (12.91s)

=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:363: (dbg) Run:  out/minikube-linux-arm64 -p ha-049090 node stop m02 -v=7 --alsologtostderr
ha_test.go:363: (dbg) Done: out/minikube-linux-arm64 -p ha-049090 node stop m02 -v=7 --alsologtostderr: (12.162685028s)
ha_test.go:369: (dbg) Run:  out/minikube-linux-arm64 -p ha-049090 status -v=7 --alsologtostderr
ha_test.go:369: (dbg) Non-zero exit: out/minikube-linux-arm64 -p ha-049090 status -v=7 --alsologtostderr: exit status 7 (747.313209ms)

-- stdout --
	ha-049090
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-049090-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-049090-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-049090-m04
	type: Worker
	host: Running
	kubelet: Running

-- /stdout --
** stderr ** 
	I0819 18:07:03.083517  353059 out.go:345] Setting OutFile to fd 1 ...
	I0819 18:07:03.083718  353059 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 18:07:03.083748  353059 out.go:358] Setting ErrFile to fd 2...
	I0819 18:07:03.083761  353059 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 18:07:03.084121  353059 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19478-294620/.minikube/bin
	I0819 18:07:03.084370  353059 out.go:352] Setting JSON to false
	I0819 18:07:03.084411  353059 mustload.go:65] Loading cluster: ha-049090
	I0819 18:07:03.084553  353059 notify.go:220] Checking for updates...
	I0819 18:07:03.084888  353059 config.go:182] Loaded profile config "ha-049090": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.0
	I0819 18:07:03.084936  353059 status.go:255] checking status of ha-049090 ...
	I0819 18:07:03.086354  353059 cli_runner.go:164] Run: docker container inspect ha-049090 --format={{.State.Status}}
	I0819 18:07:03.106132  353059 status.go:330] ha-049090 host status = "Running" (err=<nil>)
	I0819 18:07:03.106164  353059 host.go:66] Checking if "ha-049090" exists ...
	I0819 18:07:03.106532  353059 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-049090
	I0819 18:07:03.134312  353059 host.go:66] Checking if "ha-049090" exists ...
	I0819 18:07:03.134680  353059 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0819 18:07:03.134734  353059 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-049090
	I0819 18:07:03.155885  353059 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33161 SSHKeyPath:/home/jenkins/minikube-integration/19478-294620/.minikube/machines/ha-049090/id_rsa Username:docker}
	I0819 18:07:03.256394  353059 ssh_runner.go:195] Run: systemctl --version
	I0819 18:07:03.261207  353059 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0819 18:07:03.275754  353059 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0819 18:07:03.334250  353059 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:55 OomKillDisable:true NGoroutines:71 SystemTime:2024-08-19 18:07:03.323490773 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1067-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214904832 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:27.1.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:8fc6bcff51318944179630522a095cc9dbf9f353 Expected:8fc6bcff51318944179630522a095cc9dbf9f353} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.1]] Warnings:<nil>}}
	I0819 18:07:03.334998  353059 kubeconfig.go:125] found "ha-049090" server: "https://192.168.49.254:8443"
	I0819 18:07:03.335037  353059 api_server.go:166] Checking apiserver status ...
	I0819 18:07:03.335086  353059 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 18:07:03.348244  353059 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1470/cgroup
	I0819 18:07:03.358893  353059 api_server.go:182] apiserver freezer: "5:freezer:/docker/a80dcd7174b8ad3d58b73bd15e92f94d7a4b8b3f120fcc3e5fab7a518d336790/kubepods/burstable/podce8c941a3c87032e395f5f3309d5b58c/a420122fe027f4d362cbfc9a1fb1a2df8a1d9dbb764470c465b4f10451583210"
	I0819 18:07:03.358986  353059 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/a80dcd7174b8ad3d58b73bd15e92f94d7a4b8b3f120fcc3e5fab7a518d336790/kubepods/burstable/podce8c941a3c87032e395f5f3309d5b58c/a420122fe027f4d362cbfc9a1fb1a2df8a1d9dbb764470c465b4f10451583210/freezer.state
	I0819 18:07:03.368482  353059 api_server.go:204] freezer state: "THAWED"
	I0819 18:07:03.368513  353059 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I0819 18:07:03.378183  353059 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I0819 18:07:03.378213  353059 status.go:422] ha-049090 apiserver status = Running (err=<nil>)
	I0819 18:07:03.378225  353059 status.go:257] ha-049090 status: &{Name:ha-049090 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0819 18:07:03.378242  353059 status.go:255] checking status of ha-049090-m02 ...
	I0819 18:07:03.378590  353059 cli_runner.go:164] Run: docker container inspect ha-049090-m02 --format={{.State.Status}}
	I0819 18:07:03.395977  353059 status.go:330] ha-049090-m02 host status = "Stopped" (err=<nil>)
	I0819 18:07:03.396007  353059 status.go:343] host is not running, skipping remaining checks
	I0819 18:07:03.396016  353059 status.go:257] ha-049090-m02 status: &{Name:ha-049090-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0819 18:07:03.396039  353059 status.go:255] checking status of ha-049090-m03 ...
	I0819 18:07:03.396354  353059 cli_runner.go:164] Run: docker container inspect ha-049090-m03 --format={{.State.Status}}
	I0819 18:07:03.413555  353059 status.go:330] ha-049090-m03 host status = "Running" (err=<nil>)
	I0819 18:07:03.413585  353059 host.go:66] Checking if "ha-049090-m03" exists ...
	I0819 18:07:03.413992  353059 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-049090-m03
	I0819 18:07:03.431683  353059 host.go:66] Checking if "ha-049090-m03" exists ...
	I0819 18:07:03.432013  353059 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0819 18:07:03.432060  353059 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-049090-m03
	I0819 18:07:03.451585  353059 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33171 SSHKeyPath:/home/jenkins/minikube-integration/19478-294620/.minikube/machines/ha-049090-m03/id_rsa Username:docker}
	I0819 18:07:03.543801  353059 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0819 18:07:03.556253  353059 kubeconfig.go:125] found "ha-049090" server: "https://192.168.49.254:8443"
	I0819 18:07:03.556285  353059 api_server.go:166] Checking apiserver status ...
	I0819 18:07:03.556326  353059 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 18:07:03.567835  353059 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1342/cgroup
	I0819 18:07:03.577393  353059 api_server.go:182] apiserver freezer: "5:freezer:/docker/51cb1e009b98c43f14617e43644b652313105d9305e66c11ecf4814dbe27483f/kubepods/burstable/pod76adfc821226a90e27c7538b1e39707a/4ee84fa575e9987c2a6c95f130ae06fd1cee4f945cb7e75f41514ec739b4cd6f"
	I0819 18:07:03.577490  353059 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/51cb1e009b98c43f14617e43644b652313105d9305e66c11ecf4814dbe27483f/kubepods/burstable/pod76adfc821226a90e27c7538b1e39707a/4ee84fa575e9987c2a6c95f130ae06fd1cee4f945cb7e75f41514ec739b4cd6f/freezer.state
	I0819 18:07:03.586468  353059 api_server.go:204] freezer state: "THAWED"
	I0819 18:07:03.586499  353059 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I0819 18:07:03.594302  353059 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I0819 18:07:03.594330  353059 status.go:422] ha-049090-m03 apiserver status = Running (err=<nil>)
	I0819 18:07:03.594340  353059 status.go:257] ha-049090-m03 status: &{Name:ha-049090-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0819 18:07:03.594369  353059 status.go:255] checking status of ha-049090-m04 ...
	I0819 18:07:03.594751  353059 cli_runner.go:164] Run: docker container inspect ha-049090-m04 --format={{.State.Status}}
	I0819 18:07:03.612166  353059 status.go:330] ha-049090-m04 host status = "Running" (err=<nil>)
	I0819 18:07:03.612212  353059 host.go:66] Checking if "ha-049090-m04" exists ...
	I0819 18:07:03.612525  353059 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-049090-m04
	I0819 18:07:03.636699  353059 host.go:66] Checking if "ha-049090-m04" exists ...
	I0819 18:07:03.637015  353059 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0819 18:07:03.637061  353059 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-049090-m04
	I0819 18:07:03.654357  353059 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33176 SSHKeyPath:/home/jenkins/minikube-integration/19478-294620/.minikube/machines/ha-049090-m04/id_rsa Username:docker}
	I0819 18:07:03.750703  353059 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0819 18:07:03.764781  353059 status.go:257] ha-049090-m04 status: &{Name:ha-049090-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiControlPlane/serial/StopSecondaryNode (12.91s)
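[Note] The non-zero exit above is expected behavior, not a failure: in both runs logged in this section, "minikube status" exits 7 when any node in the profile is down, which is exactly what the test provokes. Sketch:

    # stop one secondary control-plane node, then observe the degraded status and exit code
    out/minikube-linux-arm64 -p ha-049090 node stop m02 -v=7 --alsologtostderr
    out/minikube-linux-arm64 -p ha-049090 status -v=7 --alsologtostderr || echo "status exit code: $?"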

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.57s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:390: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.57s)

                                                
                                    
TestMultiControlPlane/serial/RestartSecondaryNode (18.75s)

=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:420: (dbg) Run:  out/minikube-linux-arm64 -p ha-049090 node start m02 -v=7 --alsologtostderr
ha_test.go:420: (dbg) Done: out/minikube-linux-arm64 -p ha-049090 node start m02 -v=7 --alsologtostderr: (17.640884658s)
ha_test.go:428: (dbg) Run:  out/minikube-linux-arm64 -p ha-049090 status -v=7 --alsologtostderr
ha_test.go:448: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiControlPlane/serial/RestartSecondaryNode (18.75s)

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.77s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.77s)

                                                
                                    
TestMultiControlPlane/serial/RestartClusterKeepsNodes (142.05s)

=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:456: (dbg) Run:  out/minikube-linux-arm64 node list -p ha-049090 -v=7 --alsologtostderr
ha_test.go:462: (dbg) Run:  out/minikube-linux-arm64 stop -p ha-049090 -v=7 --alsologtostderr
E0819 18:07:55.070924  300020 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19478-294620/.minikube/profiles/functional-557654/client.crt: no such file or directory" logger="UnhandledError"
E0819 18:07:55.077453  300020 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19478-294620/.minikube/profiles/functional-557654/client.crt: no such file or directory" logger="UnhandledError"
E0819 18:07:55.088893  300020 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19478-294620/.minikube/profiles/functional-557654/client.crt: no such file or directory" logger="UnhandledError"
E0819 18:07:55.110474  300020 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19478-294620/.minikube/profiles/functional-557654/client.crt: no such file or directory" logger="UnhandledError"
E0819 18:07:55.152016  300020 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19478-294620/.minikube/profiles/functional-557654/client.crt: no such file or directory" logger="UnhandledError"
E0819 18:07:55.233600  300020 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19478-294620/.minikube/profiles/functional-557654/client.crt: no such file or directory" logger="UnhandledError"
E0819 18:07:55.395242  300020 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19478-294620/.minikube/profiles/functional-557654/client.crt: no such file or directory" logger="UnhandledError"
E0819 18:07:55.716950  300020 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19478-294620/.minikube/profiles/functional-557654/client.crt: no such file or directory" logger="UnhandledError"
E0819 18:07:56.359117  300020 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19478-294620/.minikube/profiles/functional-557654/client.crt: no such file or directory" logger="UnhandledError"
E0819 18:07:57.640520  300020 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19478-294620/.minikube/profiles/functional-557654/client.crt: no such file or directory" logger="UnhandledError"
E0819 18:08:00.204302  300020 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19478-294620/.minikube/profiles/functional-557654/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:462: (dbg) Done: out/minikube-linux-arm64 stop -p ha-049090 -v=7 --alsologtostderr: (37.317849353s)
ha_test.go:467: (dbg) Run:  out/minikube-linux-arm64 start -p ha-049090 --wait=true -v=7 --alsologtostderr
E0819 18:08:05.329382  300020 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19478-294620/.minikube/profiles/functional-557654/client.crt: no such file or directory" logger="UnhandledError"
E0819 18:08:15.571621  300020 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19478-294620/.minikube/profiles/functional-557654/client.crt: no such file or directory" logger="UnhandledError"
E0819 18:08:36.053150  300020 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19478-294620/.minikube/profiles/functional-557654/client.crt: no such file or directory" logger="UnhandledError"
E0819 18:08:36.487088  300020 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19478-294620/.minikube/profiles/addons-726932/client.crt: no such file or directory" logger="UnhandledError"
E0819 18:09:04.192892  300020 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19478-294620/.minikube/profiles/addons-726932/client.crt: no such file or directory" logger="UnhandledError"
E0819 18:09:17.015418  300020 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19478-294620/.minikube/profiles/functional-557654/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:467: (dbg) Done: out/minikube-linux-arm64 start -p ha-049090 --wait=true -v=7 --alsologtostderr: (1m44.558112179s)
ha_test.go:472: (dbg) Run:  out/minikube-linux-arm64 node list -p ha-049090
--- PASS: TestMultiControlPlane/serial/RestartClusterKeepsNodes (142.05s)
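[Note] Sketch of the restart round-trip this test performs, with commands from the log; the point is that the node list (three control planes plus one worker) survives a full stop/start cycle:

    out/minikube-linux-arm64 node list -p ha-049090 -v=7 --alsologtostderr
    out/minikube-linux-arm64 stop -p ha-049090 -v=7 --alsologtostderr
    out/minikube-linux-arm64 start -p ha-049090 --wait=true -v=7 --alsologtostderr
    # the list printed here should match the one captured before the stop
    out/minikube-linux-arm64 node list -p ha-049090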

                                                
                                    
TestMultiControlPlane/serial/DeleteSecondaryNode (10.66s)

=== RUN   TestMultiControlPlane/serial/DeleteSecondaryNode
ha_test.go:487: (dbg) Run:  out/minikube-linux-arm64 -p ha-049090 node delete m03 -v=7 --alsologtostderr
ha_test.go:487: (dbg) Done: out/minikube-linux-arm64 -p ha-049090 node delete m03 -v=7 --alsologtostderr: (9.683980818s)
ha_test.go:493: (dbg) Run:  out/minikube-linux-arm64 -p ha-049090 status -v=7 --alsologtostderr
ha_test.go:511: (dbg) Run:  kubectl get nodes
ha_test.go:519: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/DeleteSecondaryNode (10.66s)
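[Note] Node removal sketch from the commands above; the go-template in the last step (reproduced as logged) prints one Ready-condition status per remaining node:

    out/minikube-linux-arm64 -p ha-049090 node delete m03 -v=7 --alsologtostderr
    kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"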

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.58s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:390: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.58s)

                                                
                                    
TestMultiControlPlane/serial/StopCluster (36.07s)

=== RUN   TestMultiControlPlane/serial/StopCluster
ha_test.go:531: (dbg) Run:  out/minikube-linux-arm64 -p ha-049090 stop -v=7 --alsologtostderr
ha_test.go:531: (dbg) Done: out/minikube-linux-arm64 -p ha-049090 stop -v=7 --alsologtostderr: (35.961080898s)
ha_test.go:537: (dbg) Run:  out/minikube-linux-arm64 -p ha-049090 status -v=7 --alsologtostderr
ha_test.go:537: (dbg) Non-zero exit: out/minikube-linux-arm64 -p ha-049090 status -v=7 --alsologtostderr: exit status 7 (108.514797ms)

-- stdout --
	ha-049090
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-049090-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-049090-m04
	type: Worker
	host: Stopped
	kubelet: Stopped

-- /stdout --
** stderr ** 
	I0819 18:10:33.150777  367353 out.go:345] Setting OutFile to fd 1 ...
	I0819 18:10:33.151271  367353 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 18:10:33.151319  367353 out.go:358] Setting ErrFile to fd 2...
	I0819 18:10:33.151341  367353 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 18:10:33.151645  367353 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19478-294620/.minikube/bin
	I0819 18:10:33.151898  367353 out.go:352] Setting JSON to false
	I0819 18:10:33.151972  367353 mustload.go:65] Loading cluster: ha-049090
	I0819 18:10:33.152059  367353 notify.go:220] Checking for updates...
	I0819 18:10:33.152478  367353 config.go:182] Loaded profile config "ha-049090": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.0
	I0819 18:10:33.152808  367353 status.go:255] checking status of ha-049090 ...
	I0819 18:10:33.153539  367353 cli_runner.go:164] Run: docker container inspect ha-049090 --format={{.State.Status}}
	I0819 18:10:33.170483  367353 status.go:330] ha-049090 host status = "Stopped" (err=<nil>)
	I0819 18:10:33.170507  367353 status.go:343] host is not running, skipping remaining checks
	I0819 18:10:33.170514  367353 status.go:257] ha-049090 status: &{Name:ha-049090 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0819 18:10:33.170549  367353 status.go:255] checking status of ha-049090-m02 ...
	I0819 18:10:33.170887  367353 cli_runner.go:164] Run: docker container inspect ha-049090-m02 --format={{.State.Status}}
	I0819 18:10:33.194841  367353 status.go:330] ha-049090-m02 host status = "Stopped" (err=<nil>)
	I0819 18:10:33.194866  367353 status.go:343] host is not running, skipping remaining checks
	I0819 18:10:33.194874  367353 status.go:257] ha-049090-m02 status: &{Name:ha-049090-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0819 18:10:33.194896  367353 status.go:255] checking status of ha-049090-m04 ...
	I0819 18:10:33.195204  367353 cli_runner.go:164] Run: docker container inspect ha-049090-m04 --format={{.State.Status}}
	I0819 18:10:33.213979  367353 status.go:330] ha-049090-m04 host status = "Stopped" (err=<nil>)
	I0819 18:10:33.214003  367353 status.go:343] host is not running, skipping remaining checks
	I0819 18:10:33.214011  367353 status.go:257] ha-049090-m04 status: &{Name:ha-049090-m04 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiControlPlane/serial/StopCluster (36.07s)

                                                
                                    
TestMultiControlPlane/serial/RestartCluster (77.18s)

=== RUN   TestMultiControlPlane/serial/RestartCluster
ha_test.go:560: (dbg) Run:  out/minikube-linux-arm64 start -p ha-049090 --wait=true -v=7 --alsologtostderr --driver=docker  --container-runtime=containerd
E0819 18:10:38.937412  300020 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19478-294620/.minikube/profiles/functional-557654/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:560: (dbg) Done: out/minikube-linux-arm64 start -p ha-049090 --wait=true -v=7 --alsologtostderr --driver=docker  --container-runtime=containerd: (1m16.1933123s)
ha_test.go:566: (dbg) Run:  out/minikube-linux-arm64 -p ha-049090 status -v=7 --alsologtostderr
ha_test.go:584: (dbg) Run:  kubectl get nodes
ha_test.go:592: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/RestartCluster (77.18s)

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.64s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterClusterRestart
ha_test.go:390: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.64s)

                                                
                                    
TestMultiControlPlane/serial/AddSecondaryNode (48.16s)

=== RUN   TestMultiControlPlane/serial/AddSecondaryNode
ha_test.go:605: (dbg) Run:  out/minikube-linux-arm64 node add -p ha-049090 --control-plane -v=7 --alsologtostderr
ha_test.go:605: (dbg) Done: out/minikube-linux-arm64 node add -p ha-049090 --control-plane -v=7 --alsologtostderr: (47.192592605s)
ha_test.go:611: (dbg) Run:  out/minikube-linux-arm64 -p ha-049090 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/AddSecondaryNode (48.16s)
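[Note] The only difference from the earlier worker add is the --control-plane flag, which joins the new node as another control-plane member. Sketch from the log:

    out/minikube-linux-arm64 node add -p ha-049090 --control-plane -v=7 --alsologtostderr
    out/minikube-linux-arm64 -p ha-049090 status -v=7 --alsologtostderr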

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.8s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd
ha_test.go:281: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.80s)

                                                
                                    
TestJSONOutput/start/Command (61.54s)

=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 start -p json-output-919841 --output=json --user=testUser --memory=2200 --wait=true --driver=docker  --container-runtime=containerd
E0819 18:12:55.070978  300020 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19478-294620/.minikube/profiles/functional-557654/client.crt: no such file or directory" logger="UnhandledError"
E0819 18:13:22.778767  300020 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19478-294620/.minikube/profiles/functional-557654/client.crt: no such file or directory" logger="UnhandledError"
E0819 18:13:36.487358  300020 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19478-294620/.minikube/profiles/addons-726932/client.crt: no such file or directory" logger="UnhandledError"
json_output_test.go:63: (dbg) Done: out/minikube-linux-arm64 start -p json-output-919841 --output=json --user=testUser --memory=2200 --wait=true --driver=docker  --container-runtime=containerd: (1m1.524658054s)
--- PASS: TestJSONOutput/start/Command (61.54s)
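[Note] With --output=json, minikube emits one CloudEvents-style JSON object per line (the TestErrorJSONOutput stdout later in this section shows the shape). A sketch for consuming that stream; jq is an assumption here and not part of the test:

    # print only the human-readable step messages from a JSON-mode start
    out/minikube-linux-arm64 start -p json-output-919841 --output=json --user=testUser --memory=2200 --wait=true --driver=docker --container-runtime=containerd \
      | jq -r 'select(.type == "io.k8s.sigs.minikube.step") | .data.message'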

                                                
                                    
TestJSONOutput/start/Audit (0s)

=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

                                                
                                    
TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/pause/Command (0.76s)

=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 pause -p json-output-919841 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.76s)

                                                
                                    
TestJSONOutput/pause/Audit (0s)

=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

                                                
                                    
TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/unpause/Command (0.68s)

=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 unpause -p json-output-919841 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.68s)

                                                
                                    
TestJSONOutput/unpause/Audit (0s)

=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

                                                
                                    
TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/stop/Command (5.77s)

=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 stop -p json-output-919841 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-linux-arm64 stop -p json-output-919841 --output=json --user=testUser: (5.772238479s)
--- PASS: TestJSONOutput/stop/Command (5.77s)

                                                
                                    
TestJSONOutput/stop/Audit (0s)

=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

                                                
                                    
TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestErrorJSONOutput (0.22s)

=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-arm64 start -p json-output-error-231320 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p json-output-error-231320 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (85.178192ms)

-- stdout --
	{"specversion":"1.0","id":"a055256c-4c7c-4225-85e2-fec2a1d726f2","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-231320] minikube v1.33.1 on Ubuntu 20.04 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"68dc7d0e-ca69-416b-8a95-2b2f3bcd220a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=19478"}}
	{"specversion":"1.0","id":"b1adb0d1-8b7d-4248-a240-4b112e765ab7","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"e202e740-1ef1-404e-b7f7-b6f83b3314d8","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/19478-294620/kubeconfig"}}
	{"specversion":"1.0","id":"2142ccf7-1ab3-44f0-b0f9-3c0a7d5fcbe4","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/19478-294620/.minikube"}}
	{"specversion":"1.0","id":"7591c148-6755-492e-84cb-16078b438262","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-arm64"}}
	{"specversion":"1.0","id":"ab61f406-3b76-4a54-a4e9-edf1bf27fed9","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"400a4cb0-7256-4b78-8140-dceb0996f27b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/arm64","name":"DRV_UNSUPPORTED_OS","url":""}}

                                                
                                                
-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-231320" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p json-output-error-231320
--- PASS: TestErrorJSONOutput (0.22s)
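[Note] The stdout above shows what a failing start looks like in JSON mode: the final event has type io.k8s.sigs.minikube.error with a machine-readable name and exitcode. A filtering sketch, again assuming jq is available:

    # surface just the error event from a deliberately failing start
    out/minikube-linux-arm64 start -p json-output-error-231320 --memory=2200 --output=json --wait=true --driver=fail \
      | jq -r 'select(.type == "io.k8s.sigs.minikube.error") | .data.name + ": " + .data.message'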

                                                
                                    
TestKicCustomNetwork/create_custom_network (39.52s)

=== RUN   TestKicCustomNetwork/create_custom_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-arm64 start -p docker-network-057773 --network=
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-arm64 start -p docker-network-057773 --network=: (37.318017402s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-057773" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p docker-network-057773
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p docker-network-057773: (2.171583449s)
--- PASS: TestKicCustomNetwork/create_custom_network (39.52s)
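[Note] Sketch of the network check from the commands above: start with the --network flag, then list docker networks to confirm the one minikube created is present.

    out/minikube-linux-arm64 start -p docker-network-057773 --network=
    docker network ls --format {{.Name}}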

                                                
                                    
TestKicCustomNetwork/use_default_bridge_network (33.5s)

=== RUN   TestKicCustomNetwork/use_default_bridge_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-arm64 start -p docker-network-888413 --network=bridge
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-arm64 start -p docker-network-888413 --network=bridge: (31.555002049s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-888413" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p docker-network-888413
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p docker-network-888413: (1.917236229s)
--- PASS: TestKicCustomNetwork/use_default_bridge_network (33.50s)

                                                
                                    
TestKicExistingNetwork (34.27s)

=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
kic_custom_network_test.go:93: (dbg) Run:  out/minikube-linux-arm64 start -p existing-network-334417 --network=existing-network
kic_custom_network_test.go:93: (dbg) Done: out/minikube-linux-arm64 start -p existing-network-334417 --network=existing-network: (32.102936411s)
helpers_test.go:175: Cleaning up "existing-network-334417" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p existing-network-334417
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p existing-network-334417: (1.97445849s)
--- PASS: TestKicExistingNetwork (34.27s)

                                                
                                    
TestKicCustomSubnet (33.67s)

=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p custom-subnet-816644 --subnet=192.168.60.0/24
kic_custom_network_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p custom-subnet-816644 --subnet=192.168.60.0/24: (31.535434912s)
kic_custom_network_test.go:161: (dbg) Run:  docker network inspect custom-subnet-816644 --format "{{(index .IPAM.Config 0).Subnet}}"
helpers_test.go:175: Cleaning up "custom-subnet-816644" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p custom-subnet-816644
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p custom-subnet-816644: (2.112505428s)
--- PASS: TestKicCustomSubnet (33.67s)
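
The subnet check above can be reproduced directly: start with an explicit --subnet and read it back from docker's IPAM config (with the docker driver the network is named after the profile). Sketch, with an illustrative profile name:

$ minikube start -p subnetdemo --subnet=192.168.60.0/24 --driver=docker
$ docker network inspect subnetdemo --format '{{(index .IPAM.Config 0).Subnet}}'
# 192.168.60.0/24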

                                                
                                    
TestKicStaticIP (35.63s)

=== RUN   TestKicStaticIP
kic_custom_network_test.go:132: (dbg) Run:  out/minikube-linux-arm64 start -p static-ip-728041 --static-ip=192.168.200.200
kic_custom_network_test.go:132: (dbg) Done: out/minikube-linux-arm64 start -p static-ip-728041 --static-ip=192.168.200.200: (33.416849058s)
kic_custom_network_test.go:138: (dbg) Run:  out/minikube-linux-arm64 -p static-ip-728041 ip
helpers_test.go:175: Cleaning up "static-ip-728041" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p static-ip-728041
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p static-ip-728041: (2.049673098s)
--- PASS: TestKicStaticIP (35.63s)
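
--static-ip pins the node container's address on its docker network; the test simply compares the requested address against what `minikube ip` reports. Sketch (profile name illustrative):

$ minikube start -p ipdemo --static-ip=192.168.200.200 --driver=docker
$ minikube -p ipdemo ip
# 192.168.200.200
$ minikube delete -p ipdemo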

                                                
                                    
TestMainNoArgs (0.05s)

=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-linux-arm64
--- PASS: TestMainNoArgs (0.05s)

                                                
                                    
TestMinikubeProfile (68.57s)

=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-arm64 start -p first-682775 --driver=docker  --container-runtime=containerd
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-arm64 start -p first-682775 --driver=docker  --container-runtime=containerd: (30.880322189s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-arm64 start -p second-685795 --driver=docker  --container-runtime=containerd
E0819 18:17:55.070902  300020 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19478-294620/.minikube/profiles/functional-557654/client.crt: no such file or directory" logger="UnhandledError"
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-arm64 start -p second-685795 --driver=docker  --container-runtime=containerd: (32.090656287s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-arm64 profile first-682775
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-arm64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-arm64 profile second-685795
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-arm64 profile list -ojson
helpers_test.go:175: Cleaning up "second-685795" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p second-685795
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p second-685795: (2.143202255s)
helpers_test.go:175: Cleaning up "first-682775" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p first-682775
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p first-682775: (2.208770454s)
--- PASS: TestMinikubeProfile (68.57s)
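
`profile list -ojson` is what the assertions above parse; combined with jq it gives a quick scripted view of the known profiles. A sketch, assuming the list schema exposes a "valid" array of profiles with a "Name" field:

$ minikube profile list -ojson | jq -r '.valid[].Name'
$ minikube profile first-682775   # switch the active profile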

                                                
                                    
TestMountStart/serial/StartWithMountFirst (6.57s)

=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-1-005452 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=containerd
mount_start_test.go:98: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-1-005452 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=containerd: (5.566216428s)
--- PASS: TestMountStart/serial/StartWithMountFirst (6.57s)
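
The mount-start profiles exercise minikube's host mount set up at start time; the --mount-* flags tune ownership (uid/gid), the transfer size (msize), and the port of the mount server. A hand-run sketch with the same flags (profile name illustrative):

$ minikube start -p mountdemo --memory=2048 --mount \
    --mount-gid 0 --mount-uid 0 --mount-msize 6543 --mount-port 46464 \
    --no-kubernetes --driver=docker --container-runtime=containerd
$ minikube -p mountdemo ssh -- ls /minikube-host   # host filesystem visible in the guest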

                                                
                                    
TestMountStart/serial/VerifyMountFirst (0.27s)

=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-1-005452 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountFirst (0.27s)

                                                
                                    
TestMountStart/serial/StartWithMountSecond (9.11s)

=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-2-018728 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=containerd
mount_start_test.go:98: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-2-018728 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=containerd: (8.107333386s)
--- PASS: TestMountStart/serial/StartWithMountSecond (9.11s)

                                                
                                    
TestMountStart/serial/VerifyMountSecond (0.26s)

=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-018728 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountSecond (0.26s)

                                                
                                    
TestMountStart/serial/DeleteFirst (1.59s)

=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-arm64 delete -p mount-start-1-005452 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-arm64 delete -p mount-start-1-005452 --alsologtostderr -v=5: (1.591621965s)
--- PASS: TestMountStart/serial/DeleteFirst (1.59s)

                                                
                                    
TestMountStart/serial/VerifyMountPostDelete (0.27s)

=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-018728 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.27s)

                                                
                                    
TestMountStart/serial/Stop (1.2s)

=== RUN   TestMountStart/serial/Stop
mount_start_test.go:155: (dbg) Run:  out/minikube-linux-arm64 stop -p mount-start-2-018728
mount_start_test.go:155: (dbg) Done: out/minikube-linux-arm64 stop -p mount-start-2-018728: (1.20375147s)
--- PASS: TestMountStart/serial/Stop (1.20s)

                                                
                                    
TestMountStart/serial/RestartStopped (7.59s)

=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:166: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-2-018728
mount_start_test.go:166: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-2-018728: (6.587966337s)
--- PASS: TestMountStart/serial/RestartStopped (7.59s)

                                                
                                    
TestMountStart/serial/VerifyMountPostStop (0.26s)

=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-018728 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.26s)

                                                
                                    
TestMultiNode/serial/FreshStart2Nodes (77.29s)

=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-060148 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker  --container-runtime=containerd
E0819 18:18:36.486909  300020 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19478-294620/.minikube/profiles/addons-726932/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:96: (dbg) Done: out/minikube-linux-arm64 start -p multinode-060148 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker  --container-runtime=containerd: (1m16.714586119s)
multinode_test.go:102: (dbg) Run:  out/minikube-linux-arm64 -p multinode-060148 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (77.29s)
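
--nodes=2 brings up a control plane plus one worker in a single start, and `status` then reports each machine separately. Condensed to its essentials (profile name illustrative):

$ minikube start -p mndemo --nodes=2 --memory=2200 --wait=true \
    --driver=docker --container-runtime=containerd
$ minikube -p mndemo status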

                                                
                                    
TestMultiNode/serial/DeployApp2Nodes (17.46s)

=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-060148 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:498: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-060148 -- rollout status deployment/busybox
E0819 18:19:59.555040  300020 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19478-294620/.minikube/profiles/addons-726932/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:498: (dbg) Done: out/minikube-linux-arm64 kubectl -p multinode-060148 -- rollout status deployment/busybox: (15.410336283s)
multinode_test.go:505: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-060148 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:528: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-060148 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:536: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-060148 -- exec busybox-7dff88458-dxbzm -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-060148 -- exec busybox-7dff88458-kz69s -- nslookup kubernetes.io
multinode_test.go:546: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-060148 -- exec busybox-7dff88458-dxbzm -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-060148 -- exec busybox-7dff88458-kz69s -- nslookup kubernetes.default
multinode_test.go:554: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-060148 -- exec busybox-7dff88458-dxbzm -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-060148 -- exec busybox-7dff88458-kz69s -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (17.46s)
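
The deployment step is plain kubectl through minikube's wrapper: apply a two-replica manifest, wait for the rollout, then exec DNS lookups in each pod to confirm cluster DNS works from both nodes. A sketch (pod name is a placeholder for one returned by `get pods`):

$ minikube kubectl -p mndemo -- apply -f multinode-pod-dns-test.yaml
$ minikube kubectl -p mndemo -- rollout status deployment/busybox
$ minikube kubectl -p mndemo -- exec <pod> -- nslookup kubernetes.default.svc.cluster.local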

                                                
                                    
TestMultiNode/serial/PingHostFrom2Pods (0.97s)

=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-060148 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:572: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-060148 -- exec busybox-7dff88458-dxbzm -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-060148 -- exec busybox-7dff88458-dxbzm -- sh -c "ping -c 1 192.168.67.1"
multinode_test.go:572: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-060148 -- exec busybox-7dff88458-kz69s -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-060148 -- exec busybox-7dff88458-kz69s -- sh -c "ping -c 1 192.168.67.1"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (0.97s)
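
host.minikube.internal is injected into guest DNS so pods can reach the host; the test resolves it inside each pod and pings the result. The awk/cut pipeline just plucks the resolved address out of nslookup's output. One leg of it, by hand:

$ minikube kubectl -p mndemo -- exec <pod> -- sh -c \
    "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
# 192.168.67.1 (the address varies with the cluster network)
$ minikube kubectl -p mndemo -- exec <pod> -- sh -c "ping -c 1 192.168.67.1"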

                                                
                                    
TestMultiNode/serial/AddNode (15.68s)

=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-linux-arm64 node add -p multinode-060148 -v 3 --alsologtostderr
multinode_test.go:121: (dbg) Done: out/minikube-linux-arm64 node add -p multinode-060148 -v 3 --alsologtostderr: (15.002965298s)
multinode_test.go:127: (dbg) Run:  out/minikube-linux-arm64 -p multinode-060148 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (15.68s)
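
`node add` grows an existing cluster by one worker; a follow-up `status` should list the new machine as <profile>-m0N, as seen in the m02/m03 names above. Sketch:

$ minikube node add -p mndemo
$ minikube -p mndemo status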

                                                
                                    
TestMultiNode/serial/MultiNodeLabels (0.09s)

=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-060148 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.09s)

                                                
                                    
TestMultiNode/serial/ProfileList (0.32s)

=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.32s)

                                                
                                    
TestMultiNode/serial/CopyFile (10.01s)

=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-linux-arm64 -p multinode-060148 status --output json --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-060148 cp testdata/cp-test.txt multinode-060148:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-060148 ssh -n multinode-060148 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-060148 cp multinode-060148:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile3995763638/001/cp-test_multinode-060148.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-060148 ssh -n multinode-060148 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-060148 cp multinode-060148:/home/docker/cp-test.txt multinode-060148-m02:/home/docker/cp-test_multinode-060148_multinode-060148-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-060148 ssh -n multinode-060148 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-060148 ssh -n multinode-060148-m02 "sudo cat /home/docker/cp-test_multinode-060148_multinode-060148-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-060148 cp multinode-060148:/home/docker/cp-test.txt multinode-060148-m03:/home/docker/cp-test_multinode-060148_multinode-060148-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-060148 ssh -n multinode-060148 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-060148 ssh -n multinode-060148-m03 "sudo cat /home/docker/cp-test_multinode-060148_multinode-060148-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-060148 cp testdata/cp-test.txt multinode-060148-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-060148 ssh -n multinode-060148-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-060148 cp multinode-060148-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile3995763638/001/cp-test_multinode-060148-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-060148 ssh -n multinode-060148-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-060148 cp multinode-060148-m02:/home/docker/cp-test.txt multinode-060148:/home/docker/cp-test_multinode-060148-m02_multinode-060148.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-060148 ssh -n multinode-060148-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-060148 ssh -n multinode-060148 "sudo cat /home/docker/cp-test_multinode-060148-m02_multinode-060148.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-060148 cp multinode-060148-m02:/home/docker/cp-test.txt multinode-060148-m03:/home/docker/cp-test_multinode-060148-m02_multinode-060148-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-060148 ssh -n multinode-060148-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-060148 ssh -n multinode-060148-m03 "sudo cat /home/docker/cp-test_multinode-060148-m02_multinode-060148-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-060148 cp testdata/cp-test.txt multinode-060148-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-060148 ssh -n multinode-060148-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-060148 cp multinode-060148-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile3995763638/001/cp-test_multinode-060148-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-060148 ssh -n multinode-060148-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-060148 cp multinode-060148-m03:/home/docker/cp-test.txt multinode-060148:/home/docker/cp-test_multinode-060148-m03_multinode-060148.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-060148 ssh -n multinode-060148-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-060148 ssh -n multinode-060148 "sudo cat /home/docker/cp-test_multinode-060148-m03_multinode-060148.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-060148 cp multinode-060148-m03:/home/docker/cp-test.txt multinode-060148-m02:/home/docker/cp-test_multinode-060148-m03_multinode-060148-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-060148 ssh -n multinode-060148-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-060148 ssh -n multinode-060148-m02 "sudo cat /home/docker/cp-test_multinode-060148-m03_multinode-060148-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (10.01s)
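
The copy matrix above is `minikube cp` in every direction: host to node, node to host, and node to node, with `ssh -n` used to verify each payload landed. One leg of it, by hand (profile and node names illustrative):

$ minikube -p mndemo cp testdata/cp-test.txt mndemo-m02:/home/docker/cp-test.txt
$ minikube -p mndemo ssh -n mndemo-m02 "sudo cat /home/docker/cp-test.txt"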

                                                
                                    
TestMultiNode/serial/StopNode (2.26s)

=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-linux-arm64 -p multinode-060148 node stop m03
multinode_test.go:248: (dbg) Done: out/minikube-linux-arm64 -p multinode-060148 node stop m03: (1.228262847s)
multinode_test.go:254: (dbg) Run:  out/minikube-linux-arm64 -p multinode-060148 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-060148 status: exit status 7 (506.849462ms)

                                                
                                                
-- stdout --
	multinode-060148
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-060148-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-060148-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-linux-arm64 -p multinode-060148 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-060148 status --alsologtostderr: exit status 7 (523.207638ms)

                                                
                                                
-- stdout --
	multinode-060148
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-060148-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-060148-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0819 18:20:39.663009  420889 out.go:345] Setting OutFile to fd 1 ...
	I0819 18:20:39.663204  420889 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 18:20:39.663232  420889 out.go:358] Setting ErrFile to fd 2...
	I0819 18:20:39.663252  420889 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 18:20:39.663526  420889 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19478-294620/.minikube/bin
	I0819 18:20:39.663762  420889 out.go:352] Setting JSON to false
	I0819 18:20:39.663837  420889 mustload.go:65] Loading cluster: multinode-060148
	I0819 18:20:39.663940  420889 notify.go:220] Checking for updates...
	I0819 18:20:39.664323  420889 config.go:182] Loaded profile config "multinode-060148": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.0
	I0819 18:20:39.664361  420889 status.go:255] checking status of multinode-060148 ...
	I0819 18:20:39.664922  420889 cli_runner.go:164] Run: docker container inspect multinode-060148 --format={{.State.Status}}
	I0819 18:20:39.688181  420889 status.go:330] multinode-060148 host status = "Running" (err=<nil>)
	I0819 18:20:39.688206  420889 host.go:66] Checking if "multinode-060148" exists ...
	I0819 18:20:39.688534  420889 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-060148
	I0819 18:20:39.716130  420889 host.go:66] Checking if "multinode-060148" exists ...
	I0819 18:20:39.716496  420889 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0819 18:20:39.716557  420889 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-060148
	I0819 18:20:39.732913  420889 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33281 SSHKeyPath:/home/jenkins/minikube-integration/19478-294620/.minikube/machines/multinode-060148/id_rsa Username:docker}
	I0819 18:20:39.830931  420889 ssh_runner.go:195] Run: systemctl --version
	I0819 18:20:39.835265  420889 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0819 18:20:39.846852  420889 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0819 18:20:39.904974  420889 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:41 OomKillDisable:true NGoroutines:61 SystemTime:2024-08-19 18:20:39.895207136 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1067-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214904832 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:27.1.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:8fc6bcff51318944179630522a095cc9dbf9f353 Expected:8fc6bcff51318944179630522a095cc9dbf9f353} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.1]] Warnings:<nil>}}
	I0819 18:20:39.905536  420889 kubeconfig.go:125] found "multinode-060148" server: "https://192.168.67.2:8443"
	I0819 18:20:39.905570  420889 api_server.go:166] Checking apiserver status ...
	I0819 18:20:39.905613  420889 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0819 18:20:39.916921  420889 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1348/cgroup
	I0819 18:20:39.927461  420889 api_server.go:182] apiserver freezer: "5:freezer:/docker/1461f8ffbcfbd3a050b5a2bb9a05fe440221a8f71647a6691bbaa80449f24a4a/kubepods/burstable/podc7563c74c5e11806e914239f79005f64/c401f5256619f8a8a2163b8b314f7dfaa20677eff0a7d8199b4f40b5a7f0194b"
	I0819 18:20:39.927543  420889 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/1461f8ffbcfbd3a050b5a2bb9a05fe440221a8f71647a6691bbaa80449f24a4a/kubepods/burstable/podc7563c74c5e11806e914239f79005f64/c401f5256619f8a8a2163b8b314f7dfaa20677eff0a7d8199b4f40b5a7f0194b/freezer.state
	I0819 18:20:39.936296  420889 api_server.go:204] freezer state: "THAWED"
	I0819 18:20:39.936327  420889 api_server.go:253] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
	I0819 18:20:39.944194  420889 api_server.go:279] https://192.168.67.2:8443/healthz returned 200:
	ok
	I0819 18:20:39.944225  420889 status.go:422] multinode-060148 apiserver status = Running (err=<nil>)
	I0819 18:20:39.944237  420889 status.go:257] multinode-060148 status: &{Name:multinode-060148 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0819 18:20:39.944254  420889 status.go:255] checking status of multinode-060148-m02 ...
	I0819 18:20:39.944559  420889 cli_runner.go:164] Run: docker container inspect multinode-060148-m02 --format={{.State.Status}}
	I0819 18:20:39.962082  420889 status.go:330] multinode-060148-m02 host status = "Running" (err=<nil>)
	I0819 18:20:39.962109  420889 host.go:66] Checking if "multinode-060148-m02" exists ...
	I0819 18:20:39.962440  420889 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-060148-m02
	I0819 18:20:39.978840  420889 host.go:66] Checking if "multinode-060148-m02" exists ...
	I0819 18:20:39.979150  420889 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0819 18:20:39.979244  420889 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-060148-m02
	I0819 18:20:39.997206  420889 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33286 SSHKeyPath:/home/jenkins/minikube-integration/19478-294620/.minikube/machines/multinode-060148-m02/id_rsa Username:docker}
	I0819 18:20:40.099393  420889 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0819 18:20:40.112759  420889 status.go:257] multinode-060148-m02 status: &{Name:multinode-060148-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I0819 18:20:40.112812  420889 status.go:255] checking status of multinode-060148-m03 ...
	I0819 18:20:40.113287  420889 cli_runner.go:164] Run: docker container inspect multinode-060148-m03 --format={{.State.Status}}
	I0819 18:20:40.131367  420889 status.go:330] multinode-060148-m03 host status = "Stopped" (err=<nil>)
	I0819 18:20:40.131393  420889 status.go:343] host is not running, skipping remaining checks
	I0819 18:20:40.131402  420889 status.go:257] multinode-060148-m03 status: &{Name:multinode-060148-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.26s)
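
Note the exit code: with one machine down, `status` exits 7 rather than 0, so scripts should treat 7 as "degraded but reportable" instead of a hard failure. A sketch:

$ minikube -p mndemo node stop m03
$ minikube -p mndemo status; echo "exit=$?"
# exit=7   (one node reports host: Stopped)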

                                                
                                    
TestMultiNode/serial/StartAfterStop (9.58s)

=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-linux-arm64 -p multinode-060148 node start m03 -v=7 --alsologtostderr
multinode_test.go:282: (dbg) Done: out/minikube-linux-arm64 -p multinode-060148 node start m03 -v=7 --alsologtostderr: (8.796701982s)
multinode_test.go:290: (dbg) Run:  out/minikube-linux-arm64 -p multinode-060148 status -v=7 --alsologtostderr
multinode_test.go:306: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (9.58s)

                                                
                                    
TestMultiNode/serial/RestartKeepsNodes (89.71s)

=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-060148
multinode_test.go:321: (dbg) Run:  out/minikube-linux-arm64 stop -p multinode-060148
multinode_test.go:321: (dbg) Done: out/minikube-linux-arm64 stop -p multinode-060148: (24.963427169s)
multinode_test.go:326: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-060148 --wait=true -v=8 --alsologtostderr
multinode_test.go:326: (dbg) Done: out/minikube-linux-arm64 start -p multinode-060148 --wait=true -v=8 --alsologtostderr: (1m4.633161461s)
multinode_test.go:331: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-060148
--- PASS: TestMultiNode/serial/RestartKeepsNodes (89.71s)
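
The point of this test is that a full stop/start cycle preserves the node inventory; `node list` before and after should match. A sketch of the same check:

$ minikube node list -p mndemo > before.txt
$ minikube stop -p mndemo && minikube start -p mndemo --wait=true
$ minikube node list -p mndemo | diff before.txt -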

                                                
                                    
TestMultiNode/serial/DeleteNode (5.56s)

=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-linux-arm64 -p multinode-060148 node delete m03
multinode_test.go:416: (dbg) Done: out/minikube-linux-arm64 -p multinode-060148 node delete m03: (4.87219883s)
multinode_test.go:422: (dbg) Run:  out/minikube-linux-arm64 -p multinode-060148 status --alsologtostderr
multinode_test.go:436: (dbg) Run:  kubectl get nodes
multinode_test.go:444: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (5.56s)

                                                
                                    
TestMultiNode/serial/StopMultiNode (24.02s)

=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-linux-arm64 -p multinode-060148 stop
multinode_test.go:345: (dbg) Done: out/minikube-linux-arm64 -p multinode-060148 stop: (23.84963342s)
multinode_test.go:351: (dbg) Run:  out/minikube-linux-arm64 -p multinode-060148 status
multinode_test.go:351: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-060148 status: exit status 7 (91.221267ms)

                                                
                                                
-- stdout --
	multinode-060148
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-060148-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:358: (dbg) Run:  out/minikube-linux-arm64 -p multinode-060148 status --alsologtostderr
multinode_test.go:358: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-060148 status --alsologtostderr: exit status 7 (83.187354ms)

                                                
                                                
-- stdout --
	multinode-060148
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-060148-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0819 18:22:48.970062  429341 out.go:345] Setting OutFile to fd 1 ...
	I0819 18:22:48.970286  429341 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 18:22:48.970319  429341 out.go:358] Setting ErrFile to fd 2...
	I0819 18:22:48.970340  429341 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 18:22:48.970616  429341 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19478-294620/.minikube/bin
	I0819 18:22:48.970834  429341 out.go:352] Setting JSON to false
	I0819 18:22:48.970902  429341 mustload.go:65] Loading cluster: multinode-060148
	I0819 18:22:48.970969  429341 notify.go:220] Checking for updates...
	I0819 18:22:48.971356  429341 config.go:182] Loaded profile config "multinode-060148": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.0
	I0819 18:22:48.971389  429341 status.go:255] checking status of multinode-060148 ...
	I0819 18:22:48.971886  429341 cli_runner.go:164] Run: docker container inspect multinode-060148 --format={{.State.Status}}
	I0819 18:22:48.990927  429341 status.go:330] multinode-060148 host status = "Stopped" (err=<nil>)
	I0819 18:22:48.990949  429341 status.go:343] host is not running, skipping remaining checks
	I0819 18:22:48.990956  429341 status.go:257] multinode-060148 status: &{Name:multinode-060148 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0819 18:22:48.990987  429341 status.go:255] checking status of multinode-060148-m02 ...
	I0819 18:22:48.991298  429341 cli_runner.go:164] Run: docker container inspect multinode-060148-m02 --format={{.State.Status}}
	I0819 18:22:49.011517  429341 status.go:330] multinode-060148-m02 host status = "Stopped" (err=<nil>)
	I0819 18:22:49.011541  429341 status.go:343] host is not running, skipping remaining checks
	I0819 18:22:49.011549  429341 status.go:257] multinode-060148-m02 status: &{Name:multinode-060148-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (24.02s)

                                                
                                    
TestMultiNode/serial/RestartMultiNode (52.39s)

=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-060148 --wait=true -v=8 --alsologtostderr --driver=docker  --container-runtime=containerd
E0819 18:22:55.070140  300020 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19478-294620/.minikube/profiles/functional-557654/client.crt: no such file or directory" logger="UnhandledError"
E0819 18:23:36.487501  300020 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19478-294620/.minikube/profiles/addons-726932/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:376: (dbg) Done: out/minikube-linux-arm64 start -p multinode-060148 --wait=true -v=8 --alsologtostderr --driver=docker  --container-runtime=containerd: (51.695625825s)
multinode_test.go:382: (dbg) Run:  out/minikube-linux-arm64 -p multinode-060148 status --alsologtostderr
multinode_test.go:396: (dbg) Run:  kubectl get nodes
multinode_test.go:404: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (52.39s)

                                                
                                    
TestMultiNode/serial/ValidateNameConflict (35.69s)

=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-060148
multinode_test.go:464: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-060148-m02 --driver=docker  --container-runtime=containerd
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p multinode-060148-m02 --driver=docker  --container-runtime=containerd: exit status 14 (79.281152ms)

                                                
                                                
-- stdout --
	* [multinode-060148-m02] minikube v1.33.1 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=19478
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19478-294620/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19478-294620/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! Profile name 'multinode-060148-m02' is duplicated with machine name 'multinode-060148-m02' in profile 'multinode-060148'
	X Exiting due to MK_USAGE: Profile name should be unique

                                                
                                                
** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-060148-m03 --driver=docker  --container-runtime=containerd
multinode_test.go:472: (dbg) Done: out/minikube-linux-arm64 start -p multinode-060148-m03 --driver=docker  --container-runtime=containerd: (33.17316079s)
multinode_test.go:479: (dbg) Run:  out/minikube-linux-arm64 node add -p multinode-060148
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-linux-arm64 node add -p multinode-060148: exit status 80 (377.050207ms)

                                                
                                                
-- stdout --
	* Adding node m03 to cluster multinode-060148 as [worker]
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-060148-m03 already exists in multinode-060148-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:484: (dbg) Run:  out/minikube-linux-arm64 delete -p multinode-060148-m03
multinode_test.go:484: (dbg) Done: out/minikube-linux-arm64 delete -p multinode-060148-m03: (2.01010532s)
--- PASS: TestMultiNode/serial/ValidateNameConflict (35.69s)

                                                
                                    
TestPreload (119.95s)

=== RUN   TestPreload
preload_test.go:44: (dbg) Run:  out/minikube-linux-arm64 start -p test-preload-938698 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.24.4
preload_test.go:44: (dbg) Done: out/minikube-linux-arm64 start -p test-preload-938698 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.24.4: (1m21.395594537s)
preload_test.go:52: (dbg) Run:  out/minikube-linux-arm64 -p test-preload-938698 image pull gcr.io/k8s-minikube/busybox
preload_test.go:52: (dbg) Done: out/minikube-linux-arm64 -p test-preload-938698 image pull gcr.io/k8s-minikube/busybox: (1.202312425s)
preload_test.go:58: (dbg) Run:  out/minikube-linux-arm64 stop -p test-preload-938698
preload_test.go:58: (dbg) Done: out/minikube-linux-arm64 stop -p test-preload-938698: (12.235057153s)
preload_test.go:66: (dbg) Run:  out/minikube-linux-arm64 start -p test-preload-938698 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=containerd
preload_test.go:66: (dbg) Done: out/minikube-linux-arm64 start -p test-preload-938698 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=containerd: (22.351480185s)
preload_test.go:71: (dbg) Run:  out/minikube-linux-arm64 -p test-preload-938698 image list
helpers_test.go:175: Cleaning up "test-preload-938698" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p test-preload-938698
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p test-preload-938698: (2.430287994s)
--- PASS: TestPreload (119.95s)
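
The preload test starts with the preloaded images tarball disabled, pulls an extra image, stops, restarts (now with preload available), and asserts the manually pulled image survived. Condensed (profile name illustrative):

$ minikube start -p predemo --preload=false --kubernetes-version=v1.24.4 \
    --driver=docker --container-runtime=containerd
$ minikube -p predemo image pull gcr.io/k8s-minikube/busybox
$ minikube stop -p predemo && minikube start -p predemo
$ minikube -p predemo image list | grep busybox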

                                                
                                    
TestScheduledStopUnix (104.07s)

=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-arm64 start -p scheduled-stop-537049 --memory=2048 --driver=docker  --container-runtime=containerd
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-arm64 start -p scheduled-stop-537049 --memory=2048 --driver=docker  --container-runtime=containerd: (28.070701754s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-537049 --schedule 5m
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-linux-arm64 status --format={{.TimeToStop}} -p scheduled-stop-537049 -n scheduled-stop-537049
scheduled_stop_test.go:169: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-537049 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-537049 --cancel-scheduled
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-537049 -n scheduled-stop-537049
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-arm64 status -p scheduled-stop-537049
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-537049 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
E0819 18:27:55.070943  300020 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19478-294620/.minikube/profiles/functional-557654/client.crt: no such file or directory" logger="UnhandledError"
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-arm64 status -p scheduled-stop-537049
scheduled_stop_test.go:205: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p scheduled-stop-537049: exit status 7 (71.103297ms)

                                                
                                                
-- stdout --
	scheduled-stop-537049
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-537049 -n scheduled-stop-537049
scheduled_stop_test.go:176: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-537049 -n scheduled-stop-537049: exit status 7 (66.734665ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-537049" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p scheduled-stop-537049
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p scheduled-stop-537049: (4.37529972s)
--- PASS: TestScheduledStopUnix (104.07s)
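
Scheduled stop is fire-and-forget: `stop --schedule` returns immediately and a background process performs the stop later, while `--cancel-scheduled` aborts it; `status --format={{.TimeToStop}}` exposes the pending countdown. Sketch:

$ minikube stop -p stopdemo --schedule 5m
$ minikube status -p stopdemo --format='{{.TimeToStop}}'
$ minikube stop -p stopdemo --cancel-scheduled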

                                                
                                    
TestInsufficientStorage (13s)

=== RUN   TestInsufficientStorage
status_test.go:50: (dbg) Run:  out/minikube-linux-arm64 start -p insufficient-storage-335785 --memory=2048 --output=json --wait=true --driver=docker  --container-runtime=containerd
status_test.go:50: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p insufficient-storage-335785 --memory=2048 --output=json --wait=true --driver=docker  --container-runtime=containerd: exit status 26 (10.517895348s)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"d89b1fe6-500a-459f-9810-23ae09fc045a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[insufficient-storage-335785] minikube v1.33.1 on Ubuntu 20.04 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"643fd53a-c57f-4158-88e8-f0257af95271","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=19478"}}
	{"specversion":"1.0","id":"1ea04d96-af59-4da6-8cb4-d494cb7dbabe","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"c306fc2c-a16b-46ff-b354-aa849c1aa311","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/19478-294620/kubeconfig"}}
	{"specversion":"1.0","id":"a0b4c780-582e-459d-90a1-01c1bc16a682","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/19478-294620/.minikube"}}
	{"specversion":"1.0","id":"d00f6fdd-7c24-4e86-be9b-4808a6981d87","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-arm64"}}
	{"specversion":"1.0","id":"304eb8ca-2a02-406a-845b-fc4bdcd24c91","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"846d6fbf-bb5b-425f-868e-127d77def29b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_STORAGE_CAPACITY=100"}}
	{"specversion":"1.0","id":"5b46f79a-a1d2-474d-bae4-5a9c6d717cf1","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_AVAILABLE_STORAGE=19"}}
	{"specversion":"1.0","id":"84755e42-f3ac-43bc-8bcb-70dd54ecc0bf","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the docker driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"0d57ebb3-d7c9-4d08-b44a-db1f9d298e93","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Using Docker driver with root privileges"}}
	{"specversion":"1.0","id":"8aebd5b1-9cde-484e-bf3a-26db16cb6ca7","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting \"insufficient-storage-335785\" primary control-plane node in \"insufficient-storage-335785\" cluster","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"fd891bd7-2bfa-4fe5-a90a-c6415afe8ce7","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"5","message":"Pulling base image v0.0.44-1724062045-19478 ...","name":"Pulling Base Image","totalsteps":"19"}}
	{"specversion":"1.0","id":"0b976c74-13c4-4e4a-a0c5-742f26efc052","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"8","message":"Creating docker container (CPUs=2, Memory=2048MB) ...","name":"Creating Container","totalsteps":"19"}}
	{"specversion":"1.0","id":"cc8f9a36-bf59-4053-a359-d90d03427113","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"Try one or more of the following to free up space on the device:\n\t\n\t\t\t1. Run \"docker system prune\" to remove unused Docker data (optionally with \"-a\")\n\t\t\t2. Increase the storage allocated to Docker for Desktop by clicking on:\n\t\t\t\tDocker icon \u003e Preferences \u003e Resources \u003e Disk Image Size\n\t\t\t3. Run \"minikube ssh -- docker system prune\" if using the Docker container runtime","exitcode":"26","issues":"https://github.com/kubernetes/minikube/issues/9024","message":"Docker is out of disk space! (/var is at 100% of capacity). You can pass '--force' to skip this check.","name":"RSRC_DOCKER_STORAGE","url":""}}

                                                
                                                
-- /stdout --
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p insufficient-storage-335785 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p insufficient-storage-335785 --output=json --layout=cluster: exit status 7 (289.534344ms)

                                                
                                                
-- stdout --
	{"Name":"insufficient-storage-335785","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","Step":"Creating Container","StepDetail":"Creating docker container (CPUs=2, Memory=2048MB) ...","BinaryVersion":"v1.33.1","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-335785","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
** stderr ** 
	E0819 18:28:15.914561  447935 status.go:417] kubeconfig endpoint: get endpoint: "insufficient-storage-335785" does not appear in /home/jenkins/minikube-integration/19478-294620/kubeconfig

                                                
                                                
** /stderr **
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p insufficient-storage-335785 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p insufficient-storage-335785 --output=json --layout=cluster: exit status 7 (290.1877ms)

                                                
                                                
-- stdout --
	{"Name":"insufficient-storage-335785","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","BinaryVersion":"v1.33.1","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-335785","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
** stderr ** 
	E0819 18:28:16.203487  447997 status.go:417] kubeconfig endpoint: get endpoint: "insufficient-storage-335785" does not appear in /home/jenkins/minikube-integration/19478-294620/kubeconfig
	E0819 18:28:16.214000  447997 status.go:560] unable to read event log: stat: stat /home/jenkins/minikube-integration/19478-294620/.minikube/profiles/insufficient-storage-335785/events.json: no such file or directory

                                                
                                                
** /stderr **
helpers_test.go:175: Cleaning up "insufficient-storage-335785" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p insufficient-storage-335785
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p insufficient-storage-335785: (1.897937998s)
--- PASS: TestInsufficientStorage (13.00s)
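Note: the RSRC_DOCKER_STORAGE advice above boils down to the following commands (a minimal sketch; both prune commands are quoted from the error's own advice, and -a additionally removes unused images):

$ docker system prune -a                 # reclaim unused Docker data on the CI host
$ minikube ssh -- docker system prune    # prune inside the node (Docker runtime only, per the advice)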

TestRunningBinaryUpgrade (86.48s)

=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /tmp/minikube-v1.26.0.79357871 start -p running-upgrade-920325 --memory=2200 --vm-driver=docker  --container-runtime=containerd
version_upgrade_test.go:120: (dbg) Done: /tmp/minikube-v1.26.0.79357871 start -p running-upgrade-920325 --memory=2200 --vm-driver=docker  --container-runtime=containerd: (43.557423481s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-linux-arm64 start -p running-upgrade-920325 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
E0819 18:36:39.557226  300020 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19478-294620/.minikube/profiles/addons-726932/client.crt: no such file or directory" logger="UnhandledError"
version_upgrade_test.go:130: (dbg) Done: out/minikube-linux-arm64 start -p running-upgrade-920325 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (39.216419992s)
helpers_test.go:175: Cleaning up "running-upgrade-920325" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p running-upgrade-920325
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p running-upgrade-920325: (2.780218437s)
--- PASS: TestRunningBinaryUpgrade (86.48s)

TestKubernetesUpgrade (356.61s)

=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-787009 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:222: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-787009 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (1m6.166260855s)
version_upgrade_test.go:227: (dbg) Run:  out/minikube-linux-arm64 stop -p kubernetes-upgrade-787009
version_upgrade_test.go:227: (dbg) Done: out/minikube-linux-arm64 stop -p kubernetes-upgrade-787009: (1.276840556s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-linux-arm64 -p kubernetes-upgrade-787009 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-linux-arm64 -p kubernetes-upgrade-787009 status --format={{.Host}}: exit status 7 (96.143072ms)

-- stdout --
	Stopped

-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
version_upgrade_test.go:243: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-787009 --memory=2200 --kubernetes-version=v1.31.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:243: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-787009 --memory=2200 --kubernetes-version=v1.31.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (4m39.328492231s)
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-787009 version --output=json
version_upgrade_test.go:267: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:269: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-787009 --memory=2200 --kubernetes-version=v1.20.0 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:269: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p kubernetes-upgrade-787009 --memory=2200 --kubernetes-version=v1.20.0 --driver=docker  --container-runtime=containerd: exit status 106 (75.333441ms)

-- stdout --
	* [kubernetes-upgrade-787009] minikube v1.33.1 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=19478
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19478-294620/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19478-294620/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.31.0 cluster to v1.20.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.20.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-787009
	    minikube start -p kubernetes-upgrade-787009 --kubernetes-version=v1.20.0
	    
	    2) Create a second cluster with Kubernetes 1.20.0, by running:
	    
	    minikube start -p kubernetes-upgrade-7870092 --kubernetes-version=v1.20.0
	    
	    3) Use the existing cluster at version Kubernetes 1.31.0, by running:
	    
	    minikube start -p kubernetes-upgrade-787009 --kubernetes-version=v1.31.0
	    

** /stderr **
version_upgrade_test.go:273: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:275: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-787009 --memory=2200 --kubernetes-version=v1.31.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:275: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-787009 --memory=2200 --kubernetes-version=v1.31.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (6.889690554s)
helpers_test.go:175: Cleaning up "kubernetes-upgrade-787009" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p kubernetes-upgrade-787009
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p kubernetes-upgrade-787009: (2.684289956s)
--- PASS: TestKubernetesUpgrade (356.61s)
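Note: condensed, the upgrade path this test drives is the sequence below (flags trimmed to the essentials; all values taken from the log above). The final downgrade attempt is expected to be refused with exit status 106 (K8S_DOWNGRADE_UNSUPPORTED):

$ minikube start -p kubernetes-upgrade-787009 --kubernetes-version=v1.20.0 --driver=docker --container-runtime=containerd
$ minikube stop -p kubernetes-upgrade-787009
$ minikube start -p kubernetes-upgrade-787009 --kubernetes-version=v1.31.0 --driver=docker --container-runtime=containerd
$ minikube start -p kubernetes-upgrade-787009 --kubernetes-version=v1.20.0 --driver=docker --container-runtime=containerd   # refused: exit 106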

TestMissingContainerUpgrade (118.54s)

=== RUN   TestMissingContainerUpgrade
=== PAUSE TestMissingContainerUpgrade

=== CONT  TestMissingContainerUpgrade
version_upgrade_test.go:309: (dbg) Run:  /tmp/minikube-v1.26.0.1076653926 start -p missing-upgrade-253898 --memory=2200 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:309: (dbg) Done: /tmp/minikube-v1.26.0.1076653926 start -p missing-upgrade-253898 --memory=2200 --driver=docker  --container-runtime=containerd: (49.012543993s)
version_upgrade_test.go:318: (dbg) Run:  docker stop missing-upgrade-253898
version_upgrade_test.go:323: (dbg) Run:  docker rm missing-upgrade-253898
version_upgrade_test.go:329: (dbg) Run:  out/minikube-linux-arm64 start -p missing-upgrade-253898 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:329: (dbg) Done: out/minikube-linux-arm64 start -p missing-upgrade-253898 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (1m5.28257581s)
helpers_test.go:175: Cleaning up "missing-upgrade-253898" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p missing-upgrade-253898
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p missing-upgrade-253898: (2.25257533s)
--- PASS: TestMissingContainerUpgrade (118.54s)

TestPause/serial/Start (61.65s)

=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-arm64 start -p pause-824054 --memory=2048 --install-addons=false --wait=all --driver=docker  --container-runtime=containerd
pause_test.go:80: (dbg) Done: out/minikube-linux-arm64 start -p pause-824054 --memory=2048 --install-addons=false --wait=all --driver=docker  --container-runtime=containerd: (1m1.648629955s)
--- PASS: TestPause/serial/Start (61.65s)

TestNoKubernetes/serial/StartNoK8sWithVersion (0.11s)

=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-228064 --no-kubernetes --kubernetes-version=1.20 --driver=docker  --container-runtime=containerd
no_kubernetes_test.go:83: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p NoKubernetes-228064 --no-kubernetes --kubernetes-version=1.20 --driver=docker  --container-runtime=containerd: exit status 14 (111.361403ms)

-- stdout --
	* [NoKubernetes-228064] minikube v1.33.1 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=19478
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19478-294620/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19478-294620/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.11s)
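Note: exit status 14 (MK_USAGE) is the expected outcome here; --no-kubernetes and --kubernetes-version are mutually exclusive. The remedy is the one the CLI itself prints (sketch, reusing this run's profile):

$ minikube config unset kubernetes-version
$ minikube start -p NoKubernetes-228064 --no-kubernetes --driver=docker --container-runtime=containerd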

TestNoKubernetes/serial/StartWithK8s (40.95s)

=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-228064 --driver=docker  --container-runtime=containerd
E0819 18:28:36.487585  300020 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19478-294620/.minikube/profiles/addons-726932/client.crt: no such file or directory" logger="UnhandledError"
no_kubernetes_test.go:95: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-228064 --driver=docker  --container-runtime=containerd: (40.53709454s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-arm64 -p NoKubernetes-228064 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (40.95s)

TestNoKubernetes/serial/StartWithStopK8s (17.95s)

=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-228064 --no-kubernetes --driver=docker  --container-runtime=containerd
no_kubernetes_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-228064 --no-kubernetes --driver=docker  --container-runtime=containerd: (15.646578831s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-arm64 -p NoKubernetes-228064 status -o json
no_kubernetes_test.go:200: (dbg) Non-zero exit: out/minikube-linux-arm64 -p NoKubernetes-228064 status -o json: exit status 2 (324.687118ms)

-- stdout --
	{"Name":"NoKubernetes-228064","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

-- /stdout --
no_kubernetes_test.go:124: (dbg) Run:  out/minikube-linux-arm64 delete -p NoKubernetes-228064
no_kubernetes_test.go:124: (dbg) Done: out/minikube-linux-arm64 delete -p NoKubernetes-228064: (1.975687971s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (17.95s)
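Note: restarting an existing profile with --no-kubernetes leaves the node container running but the control plane down, which is why "minikube status" exits 2 above. A sketch of the same check by hand:

$ minikube -p NoKubernetes-228064 status -o json; echo "exit=$?"   # expect exit=2: Host Running, Kubelet/APIServer Stopped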

TestNoKubernetes/serial/Start (6.76s)

=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:136: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-228064 --no-kubernetes --driver=docker  --container-runtime=containerd
no_kubernetes_test.go:136: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-228064 --no-kubernetes --driver=docker  --container-runtime=containerd: (6.756461683s)
--- PASS: TestNoKubernetes/serial/Start (6.76s)

TestPause/serial/SecondStartNoReconfiguration (6.95s)

=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-arm64 start -p pause-824054 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
pause_test.go:92: (dbg) Done: out/minikube-linux-arm64 start -p pause-824054 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (6.926939098s)
--- PASS: TestPause/serial/SecondStartNoReconfiguration (6.95s)

TestNoKubernetes/serial/VerifyK8sNotRunning (0.35s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-arm64 ssh -p NoKubernetes-228064 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-arm64 ssh -p NoKubernetes-228064 "sudo systemctl is-active --quiet service kubelet": exit status 1 (349.177112ms)

** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.35s)
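Note: the non-zero exit is the pass condition: "systemctl is-active --quiet" exits 0 only for an active unit, so status 3 over ssh means the kubelet unit is genuinely not running. Sketch of the assertion:

$ minikube ssh -p NoKubernetes-228064 "sudo systemctl is-active --quiet service kubelet" || echo "kubelet not running, as intended"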

TestNoKubernetes/serial/ProfileList (1.21s)

=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:169: (dbg) Run:  out/minikube-linux-arm64 profile list
no_kubernetes_test.go:179: (dbg) Run:  out/minikube-linux-arm64 profile list --output=json
--- PASS: TestNoKubernetes/serial/ProfileList (1.21s)

TestNoKubernetes/serial/Stop (1.26s)

=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:158: (dbg) Run:  out/minikube-linux-arm64 stop -p NoKubernetes-228064
no_kubernetes_test.go:158: (dbg) Done: out/minikube-linux-arm64 stop -p NoKubernetes-228064: (1.255832315s)
--- PASS: TestNoKubernetes/serial/Stop (1.26s)

TestNoKubernetes/serial/StartNoArgs (7.12s)

=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:191: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-228064 --driver=docker  --container-runtime=containerd
no_kubernetes_test.go:191: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-228064 --driver=docker  --container-runtime=containerd: (7.115959181s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (7.12s)

TestPause/serial/Pause (1.07s)

=== RUN   TestPause/serial/Pause
pause_test.go:110: (dbg) Run:  out/minikube-linux-arm64 pause -p pause-824054 --alsologtostderr -v=5
pause_test.go:110: (dbg) Done: out/minikube-linux-arm64 pause -p pause-824054 --alsologtostderr -v=5: (1.065379872s)
--- PASS: TestPause/serial/Pause (1.07s)

TestPause/serial/VerifyStatus (0.44s)

=== RUN   TestPause/serial/VerifyStatus
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p pause-824054 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p pause-824054 --output=json --layout=cluster: exit status 2 (435.543791ms)

-- stdout --
	{"Name":"pause-824054","StatusCode":418,"StatusName":"Paused","Step":"Done","StepDetail":"* Paused 7 containers in: kube-system, kubernetes-dashboard, storage-gluster, istio-operator","BinaryVersion":"v1.33.1","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":200,"StatusName":"OK"}},"Nodes":[{"Name":"pause-824054","StatusCode":200,"StatusName":"OK","Components":{"apiserver":{"Name":"apiserver","StatusCode":418,"StatusName":"Paused"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

-- /stdout --
--- PASS: TestPause/serial/VerifyStatus (0.44s)
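Note: StatusCode 418 is minikube's "Paused" marker in the cluster layout, alongside the 200 (OK), 405 (Stopped) and 507 (InsufficientStorage) codes seen elsewhere in this report. To pull just the per-node component states out of the JSON (a sketch only; assumes jq is on the PATH):

$ minikube status -p pause-824054 --output=json --layout=cluster | jq '.Nodes[].Components'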

TestPause/serial/Unpause (0.7s)

=== RUN   TestPause/serial/Unpause
pause_test.go:121: (dbg) Run:  out/minikube-linux-arm64 unpause -p pause-824054 --alsologtostderr -v=5
--- PASS: TestPause/serial/Unpause (0.70s)

TestPause/serial/PauseAgain (0.86s)

=== RUN   TestPause/serial/PauseAgain
pause_test.go:110: (dbg) Run:  out/minikube-linux-arm64 pause -p pause-824054 --alsologtostderr -v=5
--- PASS: TestPause/serial/PauseAgain (0.86s)

TestPause/serial/DeletePaused (2.7s)

=== RUN   TestPause/serial/DeletePaused
pause_test.go:132: (dbg) Run:  out/minikube-linux-arm64 delete -p pause-824054 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-arm64 delete -p pause-824054 --alsologtostderr -v=5: (2.696760441s)
--- PASS: TestPause/serial/DeletePaused (2.70s)

TestPause/serial/VerifyDeletedResources (0.45s)

=== RUN   TestPause/serial/VerifyDeletedResources
pause_test.go:142: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
pause_test.go:168: (dbg) Run:  docker ps -a
pause_test.go:173: (dbg) Run:  docker volume inspect pause-824054
pause_test.go:173: (dbg) Non-zero exit: docker volume inspect pause-824054: exit status 1 (18.396739ms)

-- stdout --
	[]

-- /stdout --
** stderr ** 
	Error response from daemon: get pause-824054: no such volume

** /stderr **
pause_test.go:178: (dbg) Run:  docker network ls
--- PASS: TestPause/serial/VerifyDeletedResources (0.45s)
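Note: the exit-1 from "docker volume inspect" is what proves the cleanup: after "minikube delete", neither the profile's volume nor its network should survive. Manual recheck (sketch):

$ docker volume inspect pause-824054 || echo "volume gone, as expected"
$ docker network ls | grep pause-824054 || echo "network gone, as expected"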

TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.36s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-arm64 ssh -p NoKubernetes-228064 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-arm64 ssh -p NoKubernetes-228064 "sudo systemctl is-active --quiet service kubelet": exit status 1 (358.235222ms)

** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.36s)

TestNetworkPlugins/group/false (5.86s)

=== RUN   TestNetworkPlugins/group/false
net_test.go:246: (dbg) Run:  out/minikube-linux-arm64 start -p false-003670 --memory=2048 --alsologtostderr --cni=false --driver=docker  --container-runtime=containerd
net_test.go:246: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p false-003670 --memory=2048 --alsologtostderr --cni=false --driver=docker  --container-runtime=containerd: exit status 14 (248.905316ms)

-- stdout --
	* [false-003670] minikube v1.33.1 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=19478
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19478-294620/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19478-294620/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on user configuration
	
	

-- /stdout --
** stderr ** 
	I0819 18:29:37.483882  460314 out.go:345] Setting OutFile to fd 1 ...
	I0819 18:29:37.484152  460314 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 18:29:37.484186  460314 out.go:358] Setting ErrFile to fd 2...
	I0819 18:29:37.484208  460314 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0819 18:29:37.484493  460314 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19478-294620/.minikube/bin
	I0819 18:29:37.484943  460314 out.go:352] Setting JSON to false
	I0819 18:29:37.485991  460314 start.go:129] hostinfo: {"hostname":"ip-172-31-31-251","uptime":7918,"bootTime":1724084260,"procs":171,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1067-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I0819 18:29:37.486100  460314 start.go:139] virtualization:  
	I0819 18:29:37.490427  460314 out.go:177] * [false-003670] minikube v1.33.1 on Ubuntu 20.04 (arm64)
	I0819 18:29:37.493988  460314 notify.go:220] Checking for updates...
	I0819 18:29:37.494578  460314 out.go:177]   - MINIKUBE_LOCATION=19478
	I0819 18:29:37.497302  460314 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0819 18:29:37.499771  460314 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19478-294620/kubeconfig
	I0819 18:29:37.502043  460314 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19478-294620/.minikube
	I0819 18:29:37.504374  460314 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0819 18:29:37.506503  460314 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0819 18:29:37.509643  460314 config.go:182] Loaded profile config "force-systemd-env-759289": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.0
	I0819 18:29:37.509836  460314 driver.go:392] Setting default libvirt URI to qemu:///system
	I0819 18:29:37.554958  460314 docker.go:123] docker version: linux-27.1.2:Docker Engine - Community
	I0819 18:29:37.555130  460314 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0819 18:29:37.635527  460314 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:0 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:34 OomKillDisable:true NGoroutines:63 SystemTime:2024-08-19 18:29:37.616809661 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1067-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aar
ch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214904832 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:27.1.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:8fc6bcff51318944179630522a095cc9dbf9f353 Expected:8fc6bcff51318944179630522a095cc9dbf9f353} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErro
rs:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.1]] Warnings:<nil>}}
	I0819 18:29:37.635645  460314 docker.go:307] overlay module found
	I0819 18:29:37.638448  460314 out.go:177] * Using the docker driver based on user configuration
	I0819 18:29:37.645088  460314 start.go:297] selected driver: docker
	I0819 18:29:37.645110  460314 start.go:901] validating driver "docker" against <nil>
	I0819 18:29:37.645124  460314 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0819 18:29:37.650251  460314 out.go:201] 
	W0819 18:29:37.652956  460314 out.go:270] X Exiting due to MK_USAGE: The "containerd" container runtime requires CNI
	X Exiting due to MK_USAGE: The "containerd" container runtime requires CNI
	I0819 18:29:37.655108  460314 out.go:201] 

** /stderr **
net_test.go:88: 
----------------------- debugLogs start: false-003670 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: false-003670

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: false-003670

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: false-003670

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: false-003670

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: false-003670

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: false-003670

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: false-003670

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: false-003670

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: false-003670

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: false-003670

>>> host: /etc/nsswitch.conf:
* Profile "false-003670" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-003670"

>>> host: /etc/hosts:
* Profile "false-003670" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-003670"

>>> host: /etc/resolv.conf:
* Profile "false-003670" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-003670"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: false-003670

>>> host: crictl pods:
* Profile "false-003670" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-003670"

>>> host: crictl containers:
* Profile "false-003670" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-003670"

>>> k8s: describe netcat deployment:
error: context "false-003670" does not exist

>>> k8s: describe netcat pod(s):
error: context "false-003670" does not exist

>>> k8s: netcat logs:
error: context "false-003670" does not exist

>>> k8s: describe coredns deployment:
error: context "false-003670" does not exist

>>> k8s: describe coredns pods:
error: context "false-003670" does not exist

>>> k8s: coredns logs:
error: context "false-003670" does not exist

>>> k8s: describe api server pod(s):
error: context "false-003670" does not exist

>>> k8s: api server logs:
error: context "false-003670" does not exist

>>> host: /etc/cni:
* Profile "false-003670" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-003670"

>>> host: ip a s:
* Profile "false-003670" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-003670"

>>> host: ip r s:
* Profile "false-003670" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-003670"

>>> host: iptables-save:
* Profile "false-003670" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-003670"

>>> host: iptables table nat:
* Profile "false-003670" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-003670"

>>> k8s: describe kube-proxy daemon set:
error: context "false-003670" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "false-003670" does not exist

>>> k8s: kube-proxy logs:
error: context "false-003670" does not exist

>>> host: kubelet daemon status:
* Profile "false-003670" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-003670"

>>> host: kubelet daemon config:
* Profile "false-003670" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-003670"

>>> k8s: kubelet logs:
* Profile "false-003670" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-003670"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "false-003670" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-003670"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "false-003670" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-003670"

>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

>>> k8s: cms:
Error in configuration: context was not found for specified context: false-003670

>>> host: docker daemon status:
* Profile "false-003670" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-003670"

>>> host: docker daemon config:
* Profile "false-003670" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-003670"

>>> host: /etc/docker/daemon.json:
* Profile "false-003670" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-003670"

>>> host: docker system info:
* Profile "false-003670" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-003670"

>>> host: cri-docker daemon status:
* Profile "false-003670" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-003670"

>>> host: cri-docker daemon config:
* Profile "false-003670" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-003670"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "false-003670" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-003670"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "false-003670" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-003670"

>>> host: cri-dockerd version:
* Profile "false-003670" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-003670"

>>> host: containerd daemon status:
* Profile "false-003670" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-003670"

>>> host: containerd daemon config:
* Profile "false-003670" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-003670"

>>> host: /lib/systemd/system/containerd.service:
* Profile "false-003670" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-003670"

>>> host: /etc/containerd/config.toml:
* Profile "false-003670" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-003670"

>>> host: containerd config dump:
* Profile "false-003670" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-003670"

>>> host: crio daemon status:
* Profile "false-003670" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-003670"

>>> host: crio daemon config:
* Profile "false-003670" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-003670"

>>> host: /etc/crio:
* Profile "false-003670" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-003670"

>>> host: crio config:
* Profile "false-003670" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-003670"

----------------------- debugLogs end: false-003670 [took: 5.15949895s] --------------------------------
helpers_test.go:175: Cleaning up "false-003670" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p false-003670
--- PASS: TestNetworkPlugins/group/false (5.86s)
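Note: this is a negative test: with the containerd runtime a CNI is mandatory, so --cni=false must fail fast with MK_USAGE (exit 14) before any node is created. Any concrete CNI value passes, e.g. the kindnet variant exercised later in this report:

$ minikube start -p kindnet-003670 --cni=kindnet --driver=docker --container-runtime=containerd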

TestStoppedBinaryUpgrade/Setup (1.89s)

=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (1.89s)

TestStoppedBinaryUpgrade/Upgrade (147.8s)

=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /tmp/minikube-v1.26.0.1079944323 start -p stopped-upgrade-960031 --memory=2200 --vm-driver=docker  --container-runtime=containerd
version_upgrade_test.go:183: (dbg) Done: /tmp/minikube-v1.26.0.1079944323 start -p stopped-upgrade-960031 --memory=2200 --vm-driver=docker  --container-runtime=containerd: (1m17.374057305s)
version_upgrade_test.go:192: (dbg) Run:  /tmp/minikube-v1.26.0.1079944323 -p stopped-upgrade-960031 stop
version_upgrade_test.go:192: (dbg) Done: /tmp/minikube-v1.26.0.1079944323 -p stopped-upgrade-960031 stop: (19.948416913s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-linux-arm64 start -p stopped-upgrade-960031 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
E0819 18:32:55.070970  300020 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19478-294620/.minikube/profiles/functional-557654/client.crt: no such file or directory" logger="UnhandledError"
E0819 18:33:36.486947  300020 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19478-294620/.minikube/profiles/addons-726932/client.crt: no such file or directory" logger="UnhandledError"
version_upgrade_test.go:198: (dbg) Done: out/minikube-linux-arm64 start -p stopped-upgrade-960031 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (50.481324762s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (147.80s)
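Note: condensed, the stopped-binary upgrade flow is: create and stop a cluster with the previous release, then start it again with the binary under test (paths are this run's temp copy of the v1.26.0 release):

$ /tmp/minikube-v1.26.0.1079944323 start -p stopped-upgrade-960031 --memory=2200 --vm-driver=docker --container-runtime=containerd
$ /tmp/minikube-v1.26.0.1079944323 -p stopped-upgrade-960031 stop
$ out/minikube-linux-arm64 start -p stopped-upgrade-960031 --memory=2200 --driver=docker --container-runtime=containerd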

TestStoppedBinaryUpgrade/MinikubeLogs (1.2s)

=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-linux-arm64 logs -p stopped-upgrade-960031
version_upgrade_test.go:206: (dbg) Done: out/minikube-linux-arm64 logs -p stopped-upgrade-960031: (1.204198944s)
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (1.20s)

TestNetworkPlugins/group/auto/Start (56.61s)

=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p auto-003670 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=containerd
E0819 18:37:55.070574  300020 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19478-294620/.minikube/profiles/functional-557654/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p auto-003670 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=containerd: (56.608374111s)
--- PASS: TestNetworkPlugins/group/auto/Start (56.61s)

TestNetworkPlugins/group/auto/KubeletFlags (0.3s)

=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p auto-003670 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.30s)

TestNetworkPlugins/group/auto/NetCatPod (9.38s)

=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-003670 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-2zr6h" [124723d1-4b6a-448f-9e2a-815ea4985d5b] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-2zr6h" [124723d1-4b6a-448f-9e2a-815ea4985d5b] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 9.00438463s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (9.38s)

TestNetworkPlugins/group/auto/DNS (0.18s)

=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-003670 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.18s)

TestNetworkPlugins/group/auto/Localhost (0.14s)

=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-003670 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.14s)

TestNetworkPlugins/group/auto/HairPin (0.15s)

=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-003670 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.15s)
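Note: DNS, Localhost and HairPin are three probes run inside the netcat deployment; the hairpin case is the pod reaching its own service name, which exercises the CNI's hairpin-NAT path. The commands, verbatim from the log:

$ kubectl --context auto-003670 exec deployment/netcat -- nslookup kubernetes.default
$ kubectl --context auto-003670 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
$ kubectl --context auto-003670 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"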

TestNetworkPlugins/group/kindnet/Start (54.66s)

=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p kindnet-003670 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p kindnet-003670 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=containerd: (54.659587758s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (54.66s)

TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:344: "kindnet-bnkgq" [c0938c33-baad-472c-8b7f-931771630756] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 6.004360032s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)
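The ControllerPod wait has a direct kubectl equivalent; a sketch using the daemonset pod label from the log (app=kindnet in kube-system), with the same 10m ceiling the test uses:

    kubectl --context kindnet-003670 -n kube-system wait \
      --for=condition=Ready pod -l app=kindnet --timeout=10m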

                                                
                                    
TestNetworkPlugins/group/kindnet/KubeletFlags (0.33s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p kindnet-003670 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.33s)

                                                
                                    
TestNetworkPlugins/group/kindnet/NetCatPod (11.33s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-003670 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-6l7nq" [75f0c48d-3cb0-4c9f-936b-fc645bae467d] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-6l7nq" [75f0c48d-3cb0-4c9f-936b-fc645bae467d] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 11.004664527s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (11.33s)

                                                
                                    
TestNetworkPlugins/group/kindnet/DNS (0.24s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kindnet-003670 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.24s)
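The DNS check relies on the pod's resolv.conf search domains expanding the short name; assuming the default cluster.local domain, the same lookup can be made fully qualified:

    # Short name, expanded via the pod's search path:
    kubectl --context kindnet-003670 exec deployment/netcat -- nslookup kubernetes.default
    # Fully qualified equivalent (assumes the default cluster domain):
    kubectl --context kindnet-003670 exec deployment/netcat -- nslookup kubernetes.default.svc.cluster.local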

                                                
                                    
TestNetworkPlugins/group/kindnet/Localhost (0.24s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kindnet-003670 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.24s)

                                                
                                    
TestNetworkPlugins/group/kindnet/HairPin (0.39s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kindnet-003670 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.39s)

                                                
                                    
TestNetworkPlugins/group/calico/Start (67.54s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p calico-003670 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p calico-003670 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=containerd: (1m7.535994644s)
--- PASS: TestNetworkPlugins/group/calico/Start (67.54s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/Start (60.17s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p custom-flannel-003670 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=containerd
E0819 18:40:58.142594  300020 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19478-294620/.minikube/profiles/functional-557654/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p custom-flannel-003670 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=containerd: (1m0.170433581s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (60.17s)
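Unlike the built-in CNI names used elsewhere in this group, this run passes a manifest path: --cni accepts either a known plugin name (bridge, calico, flannel, kindnet, ...) or a path to a custom CNI manifest, which is what testdata/kube-flannel.yaml is here:

    out/minikube-linux-arm64 start -p custom-flannel-003670 \
      --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m \
      --cni=testdata/kube-flannel.yaml \
      --driver=docker --container-runtime=containerd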

                                                
                                    
TestNetworkPlugins/group/calico/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:344: "calico-node-vwjs2" [d77125fb-26aa-4e7d-82e0-caef3ee83725] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 6.004492841s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (6.01s)

                                                
                                    
TestNetworkPlugins/group/calico/KubeletFlags (0.29s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p calico-003670 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.29s)

                                                
                                    
TestNetworkPlugins/group/calico/NetCatPod (10.29s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context calico-003670 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-h4m8v" [f681ff47-9ab1-4f14-8b5b-5d6f06cd35c7] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-h4m8v" [f681ff47-9ab1-4f14-8b5b-5d6f06cd35c7] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 10.004184959s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (10.29s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.35s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p custom-flannel-003670 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.35s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/NetCatPod (10.29s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-003670 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-p6xgp" [bebbd9e1-8a2f-4629-a252-3afdd5f3eab7] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-p6xgp" [bebbd9e1-8a2f-4629-a252-3afdd5f3eab7] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 10.004476535s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (10.29s)

                                                
                                    
TestNetworkPlugins/group/calico/DNS (0.20s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:175: (dbg) Run:  kubectl --context calico-003670 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.20s)

                                                
                                    
TestNetworkPlugins/group/calico/Localhost (0.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:194: (dbg) Run:  kubectl --context calico-003670 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.16s)

                                                
                                    
TestNetworkPlugins/group/calico/HairPin (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:264: (dbg) Run:  kubectl --context calico-003670 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.15s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/DNS (0.19s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-003670 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.19s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/Localhost (0.18s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-003670 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.18s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/HairPin (0.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-003670 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.16s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/Start (78.24s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p enable-default-cni-003670 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p enable-default-cni-003670 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=containerd: (1m18.235904239s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (78.24s)

                                                
                                    
TestNetworkPlugins/group/flannel/Start (59.21s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p flannel-003670 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p flannel-003670 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=containerd: (59.206111406s)
--- PASS: TestNetworkPlugins/group/flannel/Start (59.21s)

                                                
                                    
TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:344: "kube-flannel-ds-hhsgk" [8d6116fb-e1ed-4740-90bf-14eddb1d6f74] Running
E0819 18:42:55.070450  300020 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19478-294620/.minikube/profiles/functional-557654/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 6.004139464s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

                                                
                                    
TestNetworkPlugins/group/flannel/KubeletFlags (0.26s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p flannel-003670 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.26s)

                                                
                                    
TestNetworkPlugins/group/flannel/NetCatPod (10.26s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-003670 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-xqkgw" [88d821e4-c32d-4270-bff0-cb95ebee1def] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-xqkgw" [88d821e4-c32d-4270-bff0-cb95ebee1def] Running
E0819 18:43:06.202659  300020 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19478-294620/.minikube/profiles/auto-003670/client.crt: no such file or directory" logger="UnhandledError"
E0819 18:43:06.209132  300020 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19478-294620/.minikube/profiles/auto-003670/client.crt: no such file or directory" logger="UnhandledError"
E0819 18:43:06.220948  300020 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19478-294620/.minikube/profiles/auto-003670/client.crt: no such file or directory" logger="UnhandledError"
E0819 18:43:06.242794  300020 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19478-294620/.minikube/profiles/auto-003670/client.crt: no such file or directory" logger="UnhandledError"
E0819 18:43:06.284207  300020 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19478-294620/.minikube/profiles/auto-003670/client.crt: no such file or directory" logger="UnhandledError"
E0819 18:43:06.365719  300020 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19478-294620/.minikube/profiles/auto-003670/client.crt: no such file or directory" logger="UnhandledError"
E0819 18:43:06.527321  300020 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19478-294620/.minikube/profiles/auto-003670/client.crt: no such file or directory" logger="UnhandledError"
E0819 18:43:06.848616  300020 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19478-294620/.minikube/profiles/auto-003670/client.crt: no such file or directory" logger="UnhandledError"
E0819 18:43:07.490835  300020 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19478-294620/.minikube/profiles/auto-003670/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 10.003874097s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (10.26s)

                                                
                                    
TestNetworkPlugins/group/flannel/DNS (0.20s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context flannel-003670 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.20s)

                                                
                                    
TestNetworkPlugins/group/flannel/Localhost (0.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context flannel-003670 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
E0819 18:43:08.773027  300020 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19478-294620/.minikube/profiles/auto-003670/client.crt: no such file or directory" logger="UnhandledError"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.16s)

                                                
                                    
TestNetworkPlugins/group/flannel/HairPin (0.19s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context flannel-003670 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.19s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.35s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p enable-default-cni-003670 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.35s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/NetCatPod (10.27s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-003670 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-g8ftk" [2998db27-318b-437b-bd0d-b0dee3ecb845] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E0819 18:43:11.335085  300020 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19478-294620/.minikube/profiles/auto-003670/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:344: "netcat-6fc964789b-g8ftk" [2998db27-318b-437b-bd0d-b0dee3ecb845] Running
E0819 18:43:16.456378  300020 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19478-294620/.minikube/profiles/auto-003670/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 10.003789149s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (10.27s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/DNS (0.29s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-003670 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.29s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/Localhost (0.19s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:194: (dbg) Run:  kubectl --context enable-default-cni-003670 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.19s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/HairPin (0.24s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:264: (dbg) Run:  kubectl --context enable-default-cni-003670 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.24s)

                                                
                                    
TestNetworkPlugins/group/bridge/Start (81.62s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p bridge-003670 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=containerd
E0819 18:43:36.487411  300020 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19478-294620/.minikube/profiles/addons-726932/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p bridge-003670 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=containerd: (1m21.618589752s)
--- PASS: TestNetworkPlugins/group/bridge/Start (81.62s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/FirstStart (172.02s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p old-k8s-version-224334 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.20.0
E0819 18:43:47.179941  300020 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19478-294620/.minikube/profiles/auto-003670/client.crt: no such file or directory" logger="UnhandledError"
E0819 18:44:28.142440  300020 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19478-294620/.minikube/profiles/auto-003670/client.crt: no such file or directory" logger="UnhandledError"
E0819 18:44:31.310030  300020 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19478-294620/.minikube/profiles/kindnet-003670/client.crt: no such file or directory" logger="UnhandledError"
E0819 18:44:31.316378  300020 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19478-294620/.minikube/profiles/kindnet-003670/client.crt: no such file or directory" logger="UnhandledError"
E0819 18:44:31.327752  300020 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19478-294620/.minikube/profiles/kindnet-003670/client.crt: no such file or directory" logger="UnhandledError"
E0819 18:44:31.349097  300020 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19478-294620/.minikube/profiles/kindnet-003670/client.crt: no such file or directory" logger="UnhandledError"
E0819 18:44:31.390348  300020 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19478-294620/.minikube/profiles/kindnet-003670/client.crt: no such file or directory" logger="UnhandledError"
E0819 18:44:31.471723  300020 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19478-294620/.minikube/profiles/kindnet-003670/client.crt: no such file or directory" logger="UnhandledError"
E0819 18:44:31.633120  300020 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19478-294620/.minikube/profiles/kindnet-003670/client.crt: no such file or directory" logger="UnhandledError"
E0819 18:44:31.954851  300020 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19478-294620/.minikube/profiles/kindnet-003670/client.crt: no such file or directory" logger="UnhandledError"
E0819 18:44:32.596583  300020 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19478-294620/.minikube/profiles/kindnet-003670/client.crt: no such file or directory" logger="UnhandledError"
E0819 18:44:33.878355  300020 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19478-294620/.minikube/profiles/kindnet-003670/client.crt: no such file or directory" logger="UnhandledError"
E0819 18:44:36.440535  300020 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19478-294620/.minikube/profiles/kindnet-003670/client.crt: no such file or directory" logger="UnhandledError"
E0819 18:44:41.562162  300020 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19478-294620/.minikube/profiles/kindnet-003670/client.crt: no such file or directory" logger="UnhandledError"
E0819 18:44:51.804475  300020 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19478-294620/.minikube/profiles/kindnet-003670/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p old-k8s-version-224334 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.20.0: (2m52.021069686s)
--- PASS: TestStartStop/group/old-k8s-version/serial/FirstStart (172.02s)
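The old-k8s-version invocation carries KVM-specific flags (--kvm-network, --kvm-qemu-uri) even though this run uses --driver=docker; those flags only take effect with the kvm2 driver, so they should be inert here. The same command, broken across lines for readability:

    out/minikube-linux-arm64 start -p old-k8s-version-224334 \
      --memory=2200 --alsologtostderr --wait=true \
      --kvm-network=default --kvm-qemu-uri=qemu:///system \
      --disable-driver-mounts --keep-context=false \
      --driver=docker --container-runtime=containerd \
      --kubernetes-version=v1.20.0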

                                                
                                    
TestNetworkPlugins/group/bridge/KubeletFlags (0.36s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p bridge-003670 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.36s)

                                                
                                    
TestNetworkPlugins/group/bridge/NetCatPod (8.35s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context bridge-003670 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-f7glb" [dac14b83-fd1a-4ce3-a0f2-3aaf79e87f56] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-f7glb" [dac14b83-fd1a-4ce3-a0f2-3aaf79e87f56] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 8.004526564s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (8.35s)

                                                
                                    
TestNetworkPlugins/group/bridge/DNS (0.18s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:175: (dbg) Run:  kubectl --context bridge-003670 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.18s)

                                                
                                    
TestNetworkPlugins/group/bridge/Localhost (0.18s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:194: (dbg) Run:  kubectl --context bridge-003670 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.18s)

                                                
                                    
TestNetworkPlugins/group/bridge/HairPin (0.17s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:264: (dbg) Run:  kubectl --context bridge-003670 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.17s)

                                                
                                    
TestStartStop/group/no-preload/serial/FirstStart (70.85s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p no-preload-319392 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.31.0
E0819 18:45:50.064113  300020 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19478-294620/.minikube/profiles/auto-003670/client.crt: no such file or directory" logger="UnhandledError"
E0819 18:45:53.248893  300020 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19478-294620/.minikube/profiles/kindnet-003670/client.crt: no such file or directory" logger="UnhandledError"
E0819 18:46:07.873850  300020 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19478-294620/.minikube/profiles/calico-003670/client.crt: no such file or directory" logger="UnhandledError"
E0819 18:46:07.880239  300020 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19478-294620/.minikube/profiles/calico-003670/client.crt: no such file or directory" logger="UnhandledError"
E0819 18:46:07.891587  300020 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19478-294620/.minikube/profiles/calico-003670/client.crt: no such file or directory" logger="UnhandledError"
E0819 18:46:07.912984  300020 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19478-294620/.minikube/profiles/calico-003670/client.crt: no such file or directory" logger="UnhandledError"
E0819 18:46:07.954461  300020 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19478-294620/.minikube/profiles/calico-003670/client.crt: no such file or directory" logger="UnhandledError"
E0819 18:46:08.035878  300020 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19478-294620/.minikube/profiles/calico-003670/client.crt: no such file or directory" logger="UnhandledError"
E0819 18:46:08.197305  300020 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19478-294620/.minikube/profiles/calico-003670/client.crt: no such file or directory" logger="UnhandledError"
E0819 18:46:08.518521  300020 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19478-294620/.minikube/profiles/calico-003670/client.crt: no such file or directory" logger="UnhandledError"
E0819 18:46:09.159784  300020 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19478-294620/.minikube/profiles/calico-003670/client.crt: no such file or directory" logger="UnhandledError"
E0819 18:46:10.441819  300020 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19478-294620/.minikube/profiles/calico-003670/client.crt: no such file or directory" logger="UnhandledError"
E0819 18:46:13.004516  300020 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19478-294620/.minikube/profiles/calico-003670/client.crt: no such file or directory" logger="UnhandledError"
E0819 18:46:15.651509  300020 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19478-294620/.minikube/profiles/custom-flannel-003670/client.crt: no such file or directory" logger="UnhandledError"
E0819 18:46:15.657850  300020 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19478-294620/.minikube/profiles/custom-flannel-003670/client.crt: no such file or directory" logger="UnhandledError"
E0819 18:46:15.669166  300020 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19478-294620/.minikube/profiles/custom-flannel-003670/client.crt: no such file or directory" logger="UnhandledError"
E0819 18:46:15.690463  300020 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19478-294620/.minikube/profiles/custom-flannel-003670/client.crt: no such file or directory" logger="UnhandledError"
E0819 18:46:15.731797  300020 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19478-294620/.minikube/profiles/custom-flannel-003670/client.crt: no such file or directory" logger="UnhandledError"
E0819 18:46:15.813126  300020 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19478-294620/.minikube/profiles/custom-flannel-003670/client.crt: no such file or directory" logger="UnhandledError"
E0819 18:46:15.974640  300020 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19478-294620/.minikube/profiles/custom-flannel-003670/client.crt: no such file or directory" logger="UnhandledError"
E0819 18:46:16.296276  300020 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19478-294620/.minikube/profiles/custom-flannel-003670/client.crt: no such file or directory" logger="UnhandledError"
E0819 18:46:16.937540  300020 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19478-294620/.minikube/profiles/custom-flannel-003670/client.crt: no such file or directory" logger="UnhandledError"
E0819 18:46:18.125983  300020 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19478-294620/.minikube/profiles/calico-003670/client.crt: no such file or directory" logger="UnhandledError"
E0819 18:46:18.219498  300020 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19478-294620/.minikube/profiles/custom-flannel-003670/client.crt: no such file or directory" logger="UnhandledError"
E0819 18:46:20.781663  300020 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19478-294620/.minikube/profiles/custom-flannel-003670/client.crt: no such file or directory" logger="UnhandledError"
E0819 18:46:25.903646  300020 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19478-294620/.minikube/profiles/custom-flannel-003670/client.crt: no such file or directory" logger="UnhandledError"
E0819 18:46:28.367549  300020 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19478-294620/.minikube/profiles/calico-003670/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p no-preload-319392 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.31.0: (1m10.849041011s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (70.85s)

                                                
                                    
TestStartStop/group/no-preload/serial/DeployApp (9.42s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-319392 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [fc954c4c-8cfb-4fb7-887f-45b0dd28e188] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
E0819 18:46:36.145167  300020 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19478-294620/.minikube/profiles/custom-flannel-003670/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:344: "busybox" [fc954c4c-8cfb-4fb7-887f-45b0dd28e188] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 9.004860115s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-319392 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (9.42s)
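DeployApp can also be replayed by hand; a minimal sketch using the names visible in this run (a busybox pod labeled integration-test=busybox, created from the integration tests' testdata/busybox.yaml). The wait command is an assumed stand-in for the test's 8m readiness poll:

    kubectl --context no-preload-319392 create -f testdata/busybox.yaml
    kubectl --context no-preload-319392 wait --for=condition=Ready pod -l integration-test=busybox --timeout=8m
    # The test then reads the container's open-file limit:
    kubectl --context no-preload-319392 exec busybox -- /bin/sh -c "ulimit -n"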

                                                
                                    
TestStartStop/group/old-k8s-version/serial/DeployApp (8.79s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-224334 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [c3b539a8-960c-4249-9e04-14a5853a9e66] Pending
helpers_test.go:344: "busybox" [c3b539a8-960c-4249-9e04-14a5853a9e66] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [c3b539a8-960c-4249-9e04-14a5853a9e66] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: integration-test=busybox healthy within 8.005230861s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-224334 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/old-k8s-version/serial/DeployApp (8.79s)

                                                
                                    
TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.17s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p no-preload-319392 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p no-preload-319392 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.050259274s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context no-preload-319392 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.17s)

                                                
                                    
TestStartStop/group/no-preload/serial/Stop (12.29s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p no-preload-319392 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p no-preload-319392 --alsologtostderr -v=3: (12.285007368s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (12.29s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (1.23s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p old-k8s-version-224334 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p old-k8s-version-224334 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.075586901s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context old-k8s-version-224334 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (1.23s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/Stop (12.04s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p old-k8s-version-224334 --alsologtostderr -v=3
E0819 18:46:48.850496  300020 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19478-294620/.minikube/profiles/calico-003670/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p old-k8s-version-224334 --alsologtostderr -v=3: (12.040728856s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (12.04s)

                                                
                                    
TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.17s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-319392 -n no-preload-319392
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-319392 -n no-preload-319392: exit status 7 (66.628697ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p no-preload-319392 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.17s)
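The non-zero exit above is expected: with the profile stopped, status prints Stopped and exits 7, which the test explicitly tolerates ("may be ok"). By hand:

    out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-319392 -n no-preload-319392
    echo $?   # 7 while the host is stopped, matching the run above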

                                                
                                    
TestStartStop/group/no-preload/serial/SecondStart (302.73s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p no-preload-319392 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.31.0
E0819 18:46:56.627287  300020 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19478-294620/.minikube/profiles/custom-flannel-003670/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-arm64 start -p no-preload-319392 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.31.0: (5m2.358449035s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-319392 -n no-preload-319392
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (302.73s)
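SecondStart restarts the stopped profile with the same flags as FirstStart; --preload=false disables the preloaded image tarball, so images are pulled rather than extracted, which plausibly contributes to the ~5 minute runtime here:

    out/minikube-linux-arm64 start -p no-preload-319392 \
      --memory=2200 --alsologtostderr --wait=true --preload=false \
      --driver=docker --container-runtime=containerd \
      --kubernetes-version=v1.31.0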

                                                
                                    
TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.17s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-224334 -n old-k8s-version-224334
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-224334 -n old-k8s-version-224334: exit status 7 (66.878148ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p old-k8s-version-224334 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.17s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/SecondStart (304.58s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p old-k8s-version-224334 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.20.0
E0819 18:47:15.170195  300020 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19478-294620/.minikube/profiles/kindnet-003670/client.crt: no such file or directory" logger="UnhandledError"
E0819 18:47:29.811917  300020 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19478-294620/.minikube/profiles/calico-003670/client.crt: no such file or directory" logger="UnhandledError"
E0819 18:47:37.588620  300020 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19478-294620/.minikube/profiles/custom-flannel-003670/client.crt: no such file or directory" logger="UnhandledError"
E0819 18:47:52.034444  300020 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19478-294620/.minikube/profiles/flannel-003670/client.crt: no such file or directory" logger="UnhandledError"
E0819 18:47:52.040794  300020 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19478-294620/.minikube/profiles/flannel-003670/client.crt: no such file or directory" logger="UnhandledError"
E0819 18:47:52.052163  300020 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19478-294620/.minikube/profiles/flannel-003670/client.crt: no such file or directory" logger="UnhandledError"
E0819 18:47:52.073478  300020 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19478-294620/.minikube/profiles/flannel-003670/client.crt: no such file or directory" logger="UnhandledError"
E0819 18:47:52.114817  300020 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19478-294620/.minikube/profiles/flannel-003670/client.crt: no such file or directory" logger="UnhandledError"
E0819 18:47:52.196836  300020 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19478-294620/.minikube/profiles/flannel-003670/client.crt: no such file or directory" logger="UnhandledError"
E0819 18:47:52.358191  300020 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19478-294620/.minikube/profiles/flannel-003670/client.crt: no such file or directory" logger="UnhandledError"
E0819 18:47:52.679599  300020 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19478-294620/.minikube/profiles/flannel-003670/client.crt: no such file or directory" logger="UnhandledError"
E0819 18:47:53.321157  300020 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19478-294620/.minikube/profiles/flannel-003670/client.crt: no such file or directory" logger="UnhandledError"
E0819 18:47:54.603083  300020 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19478-294620/.minikube/profiles/flannel-003670/client.crt: no such file or directory" logger="UnhandledError"
E0819 18:47:55.071186  300020 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19478-294620/.minikube/profiles/functional-557654/client.crt: no such file or directory" logger="UnhandledError"
E0819 18:47:57.164703  300020 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19478-294620/.minikube/profiles/flannel-003670/client.crt: no such file or directory" logger="UnhandledError"
E0819 18:48:02.287137  300020 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19478-294620/.minikube/profiles/flannel-003670/client.crt: no such file or directory" logger="UnhandledError"
E0819 18:48:06.203271  300020 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19478-294620/.minikube/profiles/auto-003670/client.crt: no such file or directory" logger="UnhandledError"
E0819 18:48:09.901374  300020 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19478-294620/.minikube/profiles/enable-default-cni-003670/client.crt: no such file or directory" logger="UnhandledError"
E0819 18:48:09.907812  300020 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19478-294620/.minikube/profiles/enable-default-cni-003670/client.crt: no such file or directory" logger="UnhandledError"
E0819 18:48:09.919903  300020 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19478-294620/.minikube/profiles/enable-default-cni-003670/client.crt: no such file or directory" logger="UnhandledError"
E0819 18:48:09.941837  300020 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19478-294620/.minikube/profiles/enable-default-cni-003670/client.crt: no such file or directory" logger="UnhandledError"
E0819 18:48:09.983302  300020 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19478-294620/.minikube/profiles/enable-default-cni-003670/client.crt: no such file or directory" logger="UnhandledError"
E0819 18:48:10.064788  300020 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19478-294620/.minikube/profiles/enable-default-cni-003670/client.crt: no such file or directory" logger="UnhandledError"
E0819 18:48:10.226263  300020 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19478-294620/.minikube/profiles/enable-default-cni-003670/client.crt: no such file or directory" logger="UnhandledError"
E0819 18:48:10.548013  300020 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19478-294620/.minikube/profiles/enable-default-cni-003670/client.crt: no such file or directory" logger="UnhandledError"
E0819 18:48:11.189835  300020 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19478-294620/.minikube/profiles/enable-default-cni-003670/client.crt: no such file or directory" logger="UnhandledError"
E0819 18:48:12.471832  300020 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19478-294620/.minikube/profiles/enable-default-cni-003670/client.crt: no such file or directory" logger="UnhandledError"
E0819 18:48:12.529284  300020 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19478-294620/.minikube/profiles/flannel-003670/client.crt: no such file or directory" logger="UnhandledError"
E0819 18:48:15.033427  300020 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19478-294620/.minikube/profiles/enable-default-cni-003670/client.crt: no such file or directory" logger="UnhandledError"
E0819 18:48:20.155648  300020 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19478-294620/.minikube/profiles/enable-default-cni-003670/client.crt: no such file or directory" logger="UnhandledError"
E0819 18:48:30.397865  300020 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19478-294620/.minikube/profiles/enable-default-cni-003670/client.crt: no such file or directory" logger="UnhandledError"
E0819 18:48:33.010744  300020 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19478-294620/.minikube/profiles/flannel-003670/client.crt: no such file or directory" logger="UnhandledError"
E0819 18:48:33.905415  300020 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19478-294620/.minikube/profiles/auto-003670/client.crt: no such file or directory" logger="UnhandledError"
E0819 18:48:36.487422  300020 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19478-294620/.minikube/profiles/addons-726932/client.crt: no such file or directory" logger="UnhandledError"
E0819 18:48:50.879372  300020 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19478-294620/.minikube/profiles/enable-default-cni-003670/client.crt: no such file or directory" logger="UnhandledError"
E0819 18:48:51.733513  300020 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19478-294620/.minikube/profiles/calico-003670/client.crt: no such file or directory" logger="UnhandledError"
E0819 18:48:59.509898  300020 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19478-294620/.minikube/profiles/custom-flannel-003670/client.crt: no such file or directory" logger="UnhandledError"
E0819 18:49:13.973102  300020 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19478-294620/.minikube/profiles/flannel-003670/client.crt: no such file or directory" logger="UnhandledError"
E0819 18:49:31.309840  300020 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19478-294620/.minikube/profiles/kindnet-003670/client.crt: no such file or directory" logger="UnhandledError"
E0819 18:49:31.840941  300020 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19478-294620/.minikube/profiles/enable-default-cni-003670/client.crt: no such file or directory" logger="UnhandledError"
E0819 18:49:53.552443  300020 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19478-294620/.minikube/profiles/bridge-003670/client.crt: no such file or directory" logger="UnhandledError"
E0819 18:49:53.558827  300020 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19478-294620/.minikube/profiles/bridge-003670/client.crt: no such file or directory" logger="UnhandledError"
E0819 18:49:53.570249  300020 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19478-294620/.minikube/profiles/bridge-003670/client.crt: no such file or directory" logger="UnhandledError"
E0819 18:49:53.591748  300020 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19478-294620/.minikube/profiles/bridge-003670/client.crt: no such file or directory" logger="UnhandledError"
E0819 18:49:53.633381  300020 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19478-294620/.minikube/profiles/bridge-003670/client.crt: no such file or directory" logger="UnhandledError"
E0819 18:49:53.714953  300020 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19478-294620/.minikube/profiles/bridge-003670/client.crt: no such file or directory" logger="UnhandledError"
E0819 18:49:53.876512  300020 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19478-294620/.minikube/profiles/bridge-003670/client.crt: no such file or directory" logger="UnhandledError"
E0819 18:49:54.198120  300020 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19478-294620/.minikube/profiles/bridge-003670/client.crt: no such file or directory" logger="UnhandledError"
E0819 18:49:54.840261  300020 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19478-294620/.minikube/profiles/bridge-003670/client.crt: no such file or directory" logger="UnhandledError"
E0819 18:49:56.122459  300020 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19478-294620/.minikube/profiles/bridge-003670/client.crt: no such file or directory" logger="UnhandledError"
E0819 18:49:58.683715  300020 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19478-294620/.minikube/profiles/bridge-003670/client.crt: no such file or directory" logger="UnhandledError"
E0819 18:49:59.012533  300020 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19478-294620/.minikube/profiles/kindnet-003670/client.crt: no such file or directory" logger="UnhandledError"
E0819 18:50:03.805801  300020 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19478-294620/.minikube/profiles/bridge-003670/client.crt: no such file or directory" logger="UnhandledError"
E0819 18:50:14.047841  300020 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19478-294620/.minikube/profiles/bridge-003670/client.crt: no such file or directory" logger="UnhandledError"
E0819 18:50:34.529834  300020 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19478-294620/.minikube/profiles/bridge-003670/client.crt: no such file or directory" logger="UnhandledError"
E0819 18:50:35.894877  300020 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19478-294620/.minikube/profiles/flannel-003670/client.crt: no such file or directory" logger="UnhandledError"
E0819 18:50:53.763105  300020 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19478-294620/.minikube/profiles/enable-default-cni-003670/client.crt: no such file or directory" logger="UnhandledError"
E0819 18:51:07.873453  300020 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19478-294620/.minikube/profiles/calico-003670/client.crt: no such file or directory" logger="UnhandledError"
E0819 18:51:15.492140  300020 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19478-294620/.minikube/profiles/bridge-003670/client.crt: no such file or directory" logger="UnhandledError"
E0819 18:51:15.651639  300020 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19478-294620/.minikube/profiles/custom-flannel-003670/client.crt: no such file or directory" logger="UnhandledError"
E0819 18:51:35.574855  300020 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19478-294620/.minikube/profiles/calico-003670/client.crt: no such file or directory" logger="UnhandledError"
E0819 18:51:43.351340  300020 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19478-294620/.minikube/profiles/custom-flannel-003670/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-arm64 start -p old-k8s-version-224334 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.20.0: (5m4.220277806s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-224334 -n old-k8s-version-224334
--- PASS: TestStartStop/group/old-k8s-version/serial/SecondStart (304.58s)
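Note: the long run of cert_rotation.go:171 errors above is emitted by the test binary itself (PID 300020), not by the cluster under test: client-go's certificate-rotation watcher still tracks client.crt paths for profiles such as flannel-003670 and bridge-003670 that earlier tests already deleted, so every refresh fails with "no such file or directory". The messages are harmless noise. A quick way to see which profiles still exist on disk (paths taken from the log; a sketch, not part of the test):

    $ ls /home/jenkins/minikube-integration/19478-294620/.minikube/profiles/
    $ out/minikube-linux-arm64 profile list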

                                                
                                    
TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6.01s)
=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-rrpl2" [34d15fc2-ee04-4680-82d0-d024f43b63cd] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.004301451s
--- PASS: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6.01s)
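Note: helpers_test.go:344 implements this check by polling pods matching the label selector until they are Running and healthy. A roughly equivalent manual check with plain kubectl (context, namespace, selector, and the 9m budget all taken from the log):

    $ kubectl --context no-preload-319392 -n kubernetes-dashboard \
        wait --for=condition=Ready pod -l k8s-app=kubernetes-dashboard --timeout=9m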

                                                
                                    
TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6.01s)
=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-cd95d586-8wv9g" [48bba890-f8e2-4fe7-842c-748d610e5b20] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.006048545s
--- PASS: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6.01s)

                                                
                                    
TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.1s)
=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-rrpl2" [34d15fc2-ee04-4680-82d0-d024f43b63cd] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.004378376s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context no-preload-319392 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.10s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.11s)
=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-cd95d586-8wv9g" [48bba890-f8e2-4fe7-842c-748d610e5b20] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.006708676s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context old-k8s-version-224334 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.11s)

                                                
                                    
TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.26s)
=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 -p no-preload-319392 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20240813-c6f155d6
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.26s)
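Note: VerifyKubernetesImages diffs `image list --format=json` against the image set expected for this Kubernetes version; extra images like kindnet and busybox are reported but tolerated. To eyeball the same list by hand (assuming the JSON is an array of objects with a repoTags field, as current minikube releases emit; the jq usage is a sketch):

    $ out/minikube-linux-arm64 -p no-preload-319392 image list --format=json \
        | jq -r '.[].repoTags[]?' | sort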

                                                
                                    
TestStartStop/group/no-preload/serial/Pause (3.11s)
=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p no-preload-319392 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-319392 -n no-preload-319392
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-319392 -n no-preload-319392: exit status 2 (330.426451ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p no-preload-319392 -n no-preload-319392
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p no-preload-319392 -n no-preload-319392: exit status 2 (324.495128ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p no-preload-319392 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-319392 -n no-preload-319392
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p no-preload-319392 -n no-preload-319392
--- PASS: TestStartStop/group/no-preload/serial/Pause (3.11s)
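Note: the Pause test runs a pause/verify/unpause/verify cycle. The "Non-zero exit" lines are expected: `minikube status` deliberately exits with status 2 when a component is paused or stopped, and the Go templates pick out a single status field. The same cycle by hand:

    $ out/minikube-linux-arm64 pause -p no-preload-319392 --alsologtostderr -v=1
    $ out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-319392   # prints "Paused", exit 2
    $ out/minikube-linux-arm64 status --format={{.Kubelet}} -p no-preload-319392     # prints "Stopped", exit 2
    $ out/minikube-linux-arm64 unpause -p no-preload-319392 --alsologtostderr -v=1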

                                                
                                    
TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.26s)
=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 -p old-k8s-version-224334 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20240202-8f1494ea
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20240813-c6f155d6
--- PASS: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.26s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/Pause (3.86s)
=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p old-k8s-version-224334 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Done: out/minikube-linux-arm64 pause -p old-k8s-version-224334 --alsologtostderr -v=1: (1.178329809s)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-224334 -n old-k8s-version-224334
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-224334 -n old-k8s-version-224334: exit status 2 (387.65835ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p old-k8s-version-224334 -n old-k8s-version-224334
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p old-k8s-version-224334 -n old-k8s-version-224334: exit status 2 (375.596583ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p old-k8s-version-224334 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Done: out/minikube-linux-arm64 unpause -p old-k8s-version-224334 --alsologtostderr -v=1: (1.043630944s)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-224334 -n old-k8s-version-224334
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p old-k8s-version-224334 -n old-k8s-version-224334
--- PASS: TestStartStop/group/old-k8s-version/serial/Pause (3.86s)

                                                
                                    
TestStartStop/group/embed-certs/serial/FirstStart (57.1s)
=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p embed-certs-020803 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.31.0
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p embed-certs-020803 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.31.0: (57.096573187s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (57.10s)
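Note: --embed-certs makes minikube inline the client certificate and key as base64 data in the generated kubeconfig entry instead of pointing at client.crt/client.key files under the profile directory (the very files the cert_rotation noise above complains about once a profile is deleted). A sketch of how to confirm the embedding; the grep window is illustrative:

    $ kubectl config view --raw -o yaml | grep -A3 'name: embed-certs-020803'
    # expect client-certificate-data/client-key-data fields, not file paths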

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/FirstStart (59.19s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p default-k8s-diff-port-521273 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.31.0
E0819 18:52:37.414113  300020 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19478-294620/.minikube/profiles/bridge-003670/client.crt: no such file or directory" logger="UnhandledError"
E0819 18:52:52.033798  300020 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19478-294620/.minikube/profiles/flannel-003670/client.crt: no such file or directory" logger="UnhandledError"
E0819 18:52:55.070716  300020 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19478-294620/.minikube/profiles/functional-557654/client.crt: no such file or directory" logger="UnhandledError"
E0819 18:53:06.202427  300020 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19478-294620/.minikube/profiles/auto-003670/client.crt: no such file or directory" logger="UnhandledError"
E0819 18:53:09.901658  300020 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19478-294620/.minikube/profiles/enable-default-cni-003670/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p default-k8s-diff-port-521273 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.31.0: (59.193147925s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (59.19s)
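Note: this profile differs from the others only in --apiserver-port=8444, which moves the API server off the default 8443; the rest of the serial flow is identical. The kubeconfig entry minikube writes should therefore target port 8444 (the jsonpath filter is a sketch):

    $ kubectl config view -o jsonpath='{.clusters[?(@.name=="default-k8s-diff-port-521273")].cluster.server}'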

                                                
                                    
TestStartStop/group/embed-certs/serial/DeployApp (9.34s)
=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-020803 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [abf83c51-0bb8-4125-9049-3d9bb1636e5f] Pending
helpers_test.go:344: "busybox" [abf83c51-0bb8-4125-9049-3d9bb1636e5f] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [abf83c51-0bb8-4125-9049-3d9bb1636e5f] Running
E0819 18:53:19.558890  300020 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19478-294620/.minikube/profiles/addons-726932/client.crt: no such file or directory" logger="UnhandledError"
E0819 18:53:19.737230  300020 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19478-294620/.minikube/profiles/flannel-003670/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 9.003623875s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-020803 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (9.34s)
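Note: DeployApp creates the busybox pod from testdata/busybox.yaml, waits up to 8m for it to reach Running, then execs `ulimit -n` in the container; any numeric reply proves the exec plumbing works and that the container received a usable open-file limit under containerd. The final step, reproduced verbatim from the log:

    $ kubectl --context embed-certs-020803 exec busybox -- /bin/sh -c "ulimit -n"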

                                                
                                    
TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.11s)
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p embed-certs-020803 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context embed-certs-020803 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.11s)
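Note: --images and --registries let `addons enable` substitute where an addon pulls from; here metrics-server is pointed at registry.k8s.io/echoserver:1.4 behind the deliberately unresolvable fake.domain registry, and the follow-up describe only asserts that the override landed in the Deployment spec, not that the image actually pulls. The enable-and-inspect pair (the grep is added as a sketch):

    $ out/minikube-linux-arm64 addons enable metrics-server -p embed-certs-020803 \
        --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
    $ kubectl --context embed-certs-020803 -n kube-system describe deploy/metrics-server | grep -i image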

                                                
                                    
TestStartStop/group/embed-certs/serial/Stop (12.09s)
=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p embed-certs-020803 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p embed-certs-020803 --alsologtostderr -v=3: (12.088600977s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (12.09s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/DeployApp (9.43s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-521273 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [0ac277e6-fbb6-429b-9e1d-cb80a860be57] Pending
helpers_test.go:344: "busybox" [0ac277e6-fbb6-429b-9e1d-cb80a860be57] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [0ac277e6-fbb6-429b-9e1d-cb80a860be57] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 9.004253635s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-521273 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (9.43s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.08s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p default-k8s-diff-port-521273 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context default-k8s-diff-port-521273 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.08s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/Stop (12.33s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p default-k8s-diff-port-521273 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p default-k8s-diff-port-521273 --alsologtostderr -v=3: (12.326774159s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (12.33s)

                                                
                                    
TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.33s)
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-020803 -n embed-certs-020803
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-020803 -n embed-certs-020803: exit status 7 (150.189809ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p embed-certs-020803 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.33s)
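Note: `status --format={{.Host}}` exits with status 7 while the host is stopped, which the test accepts ("may be ok"). The point of EnableAddonAfterStop is that enabling an addon on a stopped profile only updates the profile's addon configuration; the dashboard then deploys on the next start. By hand:

    $ out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-020803   # "Stopped", exit 7
    $ out/minikube-linux-arm64 addons enable dashboard -p embed-certs-020803 \
        --images=MetricsScraper=registry.k8s.io/echoserver:1.4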

                                                
                                    
TestStartStop/group/embed-certs/serial/SecondStart (266.77s)
=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p embed-certs-020803 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.31.0
E0819 18:53:36.491517  300020 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19478-294620/.minikube/profiles/addons-726932/client.crt: no such file or directory" logger="UnhandledError"
E0819 18:53:37.605071  300020 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19478-294620/.minikube/profiles/enable-default-cni-003670/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-arm64 start -p embed-certs-020803 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.31.0: (4m26.382047186s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-020803 -n embed-certs-020803
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (266.77s)
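Note: SecondStart reuses the FirstStart flags against the stopped profile; with --wait=true the 4m26s wall time is dominated by waiting for every component to report healthy again. Since profile state persists across stop/start, the busybox pod deployed before the stop should still be present afterwards (a sketch of that spot check):

    $ kubectl --context embed-certs-020803 get pod busybox -o wide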

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.3s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-521273 -n default-k8s-diff-port-521273
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-521273 -n default-k8s-diff-port-521273: exit status 7 (117.975906ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p default-k8s-diff-port-521273 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.30s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/SecondStart (272s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p default-k8s-diff-port-521273 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.31.0
E0819 18:54:31.309742  300020 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19478-294620/.minikube/profiles/kindnet-003670/client.crt: no such file or directory" logger="UnhandledError"
E0819 18:54:53.551992  300020 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19478-294620/.minikube/profiles/bridge-003670/client.crt: no such file or directory" logger="UnhandledError"
E0819 18:55:21.255428  300020 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19478-294620/.minikube/profiles/bridge-003670/client.crt: no such file or directory" logger="UnhandledError"
E0819 18:56:07.874418  300020 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19478-294620/.minikube/profiles/calico-003670/client.crt: no such file or directory" logger="UnhandledError"
E0819 18:56:15.651946  300020 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19478-294620/.minikube/profiles/custom-flannel-003670/client.crt: no such file or directory" logger="UnhandledError"
E0819 18:56:33.205332  300020 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19478-294620/.minikube/profiles/no-preload-319392/client.crt: no such file or directory" logger="UnhandledError"
E0819 18:56:33.211743  300020 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19478-294620/.minikube/profiles/no-preload-319392/client.crt: no such file or directory" logger="UnhandledError"
E0819 18:56:33.223137  300020 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19478-294620/.minikube/profiles/no-preload-319392/client.crt: no such file or directory" logger="UnhandledError"
E0819 18:56:33.244762  300020 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19478-294620/.minikube/profiles/no-preload-319392/client.crt: no such file or directory" logger="UnhandledError"
E0819 18:56:33.286082  300020 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19478-294620/.minikube/profiles/no-preload-319392/client.crt: no such file or directory" logger="UnhandledError"
E0819 18:56:33.367519  300020 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19478-294620/.minikube/profiles/no-preload-319392/client.crt: no such file or directory" logger="UnhandledError"
E0819 18:56:33.529157  300020 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19478-294620/.minikube/profiles/no-preload-319392/client.crt: no such file or directory" logger="UnhandledError"
E0819 18:56:33.850866  300020 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19478-294620/.minikube/profiles/no-preload-319392/client.crt: no such file or directory" logger="UnhandledError"
E0819 18:56:34.492719  300020 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19478-294620/.minikube/profiles/no-preload-319392/client.crt: no such file or directory" logger="UnhandledError"
E0819 18:56:35.774543  300020 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19478-294620/.minikube/profiles/no-preload-319392/client.crt: no such file or directory" logger="UnhandledError"
E0819 18:56:36.915879  300020 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19478-294620/.minikube/profiles/old-k8s-version-224334/client.crt: no such file or directory" logger="UnhandledError"
E0819 18:56:36.922280  300020 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19478-294620/.minikube/profiles/old-k8s-version-224334/client.crt: no such file or directory" logger="UnhandledError"
E0819 18:56:36.933764  300020 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19478-294620/.minikube/profiles/old-k8s-version-224334/client.crt: no such file or directory" logger="UnhandledError"
E0819 18:56:36.955276  300020 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19478-294620/.minikube/profiles/old-k8s-version-224334/client.crt: no such file or directory" logger="UnhandledError"
E0819 18:56:36.996844  300020 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19478-294620/.minikube/profiles/old-k8s-version-224334/client.crt: no such file or directory" logger="UnhandledError"
E0819 18:56:37.078310  300020 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19478-294620/.minikube/profiles/old-k8s-version-224334/client.crt: no such file or directory" logger="UnhandledError"
E0819 18:56:37.239843  300020 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19478-294620/.minikube/profiles/old-k8s-version-224334/client.crt: no such file or directory" logger="UnhandledError"
E0819 18:56:37.561119  300020 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19478-294620/.minikube/profiles/old-k8s-version-224334/client.crt: no such file or directory" logger="UnhandledError"
E0819 18:56:38.203317  300020 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19478-294620/.minikube/profiles/old-k8s-version-224334/client.crt: no such file or directory" logger="UnhandledError"
E0819 18:56:38.335811  300020 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19478-294620/.minikube/profiles/no-preload-319392/client.crt: no such file or directory" logger="UnhandledError"
E0819 18:56:39.485545  300020 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19478-294620/.minikube/profiles/old-k8s-version-224334/client.crt: no such file or directory" logger="UnhandledError"
E0819 18:56:42.047405  300020 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19478-294620/.minikube/profiles/old-k8s-version-224334/client.crt: no such file or directory" logger="UnhandledError"
E0819 18:56:43.458178  300020 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19478-294620/.minikube/profiles/no-preload-319392/client.crt: no such file or directory" logger="UnhandledError"
E0819 18:56:47.169056  300020 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19478-294620/.minikube/profiles/old-k8s-version-224334/client.crt: no such file or directory" logger="UnhandledError"
E0819 18:56:53.700211  300020 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19478-294620/.minikube/profiles/no-preload-319392/client.crt: no such file or directory" logger="UnhandledError"
E0819 18:56:57.410427  300020 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19478-294620/.minikube/profiles/old-k8s-version-224334/client.crt: no such file or directory" logger="UnhandledError"
E0819 18:57:14.182345  300020 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19478-294620/.minikube/profiles/no-preload-319392/client.crt: no such file or directory" logger="UnhandledError"
E0819 18:57:17.892375  300020 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19478-294620/.minikube/profiles/old-k8s-version-224334/client.crt: no such file or directory" logger="UnhandledError"
E0819 18:57:38.144279  300020 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19478-294620/.minikube/profiles/functional-557654/client.crt: no such file or directory" logger="UnhandledError"
E0819 18:57:52.034543  300020 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19478-294620/.minikube/profiles/flannel-003670/client.crt: no such file or directory" logger="UnhandledError"
E0819 18:57:55.070367  300020 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19478-294620/.minikube/profiles/functional-557654/client.crt: no such file or directory" logger="UnhandledError"
E0819 18:57:55.144205  300020 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19478-294620/.minikube/profiles/no-preload-319392/client.crt: no such file or directory" logger="UnhandledError"
E0819 18:57:58.854059  300020 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19478-294620/.minikube/profiles/old-k8s-version-224334/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-arm64 start -p default-k8s-diff-port-521273 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.31.0: (4m31.612619272s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-521273 -n default-k8s-diff-port-521273
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (272.00s)

                                                
                                    
TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6.01s)
=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-lxk4t" [334646aa-7036-463d-b816-676e5f922b6f] Running
E0819 18:58:06.202492  300020 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19478-294620/.minikube/profiles/auto-003670/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003546433s
--- PASS: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6.01s)

                                                
                                    
TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.13s)
=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-lxk4t" [334646aa-7036-463d-b816-676e5f922b6f] Running
E0819 18:58:09.901974  300020 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19478-294620/.minikube/profiles/enable-default-cni-003670/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.00560026s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context embed-certs-020803 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.13s)

                                                
                                    
TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.28s)
=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 -p embed-certs-020803 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20240730-75a5af0c
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20240813-c6f155d6
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.28s)

                                                
                                    
TestStartStop/group/embed-certs/serial/Pause (3.59s)
=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p embed-certs-020803 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Done: out/minikube-linux-arm64 pause -p embed-certs-020803 --alsologtostderr -v=1: (1.170615028s)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-020803 -n embed-certs-020803
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-020803 -n embed-certs-020803: exit status 2 (456.013059ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p embed-certs-020803 -n embed-certs-020803
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p embed-certs-020803 -n embed-certs-020803: exit status 2 (371.28377ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p embed-certs-020803 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-020803 -n embed-certs-020803
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p embed-certs-020803 -n embed-certs-020803
--- PASS: TestStartStop/group/embed-certs/serial/Pause (3.59s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6.01s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-w5pmk" [27ae7f6a-c1bc-4770-8ac8-295fbdcc9d93] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.004336034s
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6.01s)

                                                
                                    
TestStartStop/group/newest-cni/serial/FirstStart (36.25s)
=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p newest-cni-673417 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.31.0
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p newest-cni-673417 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.31.0: (36.246441017s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (36.25s)
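Note: the newest-cni profile exercises a bare CNI configuration: --network-plugin=cni plus --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 hands kubeadm a custom pod CIDR, and --wait=apiserver,system_pods,default_sa narrows the readiness gate because no CNI plugin is deployed, so ordinary pods cannot schedule (hence the "cni mode requires additional setup" warnings in the later subtests). To confirm the CIDR took effect (a sketch):

    $ kubectl --context newest-cni-673417 get nodes -o jsonpath='{.items[0].spec.podCIDR}'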

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.12s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-w5pmk" [27ae7f6a-c1bc-4770-8ac8-295fbdcc9d93] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.004867234s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context default-k8s-diff-port-521273 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.12s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.29s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 -p default-k8s-diff-port-521273 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20240730-75a5af0c
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20240813-c6f155d6
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.29s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/Pause (4.11s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p default-k8s-diff-port-521273 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Done: out/minikube-linux-arm64 pause -p default-k8s-diff-port-521273 --alsologtostderr -v=1: (1.008884952s)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-521273 -n default-k8s-diff-port-521273
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-521273 -n default-k8s-diff-port-521273: exit status 2 (386.726201ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p default-k8s-diff-port-521273 -n default-k8s-diff-port-521273
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p default-k8s-diff-port-521273 -n default-k8s-diff-port-521273: exit status 2 (406.32511ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p default-k8s-diff-port-521273 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-521273 -n default-k8s-diff-port-521273
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p default-k8s-diff-port-521273 -n default-k8s-diff-port-521273
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Pause (4.11s)

                                                
                                    
TestStartStop/group/newest-cni/serial/DeployApp (0s)
=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

                                                
                                    
TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.28s)
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p newest-cni-673417 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p newest-cni-673417 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.279037223s)
start_stop_delete_test.go:211: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.28s)

                                                
                                    
TestStartStop/group/newest-cni/serial/Stop (1.22s)
=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p newest-cni-673417 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p newest-cni-673417 --alsologtostderr -v=3: (1.220342042s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (1.22s)

                                                
                                    
TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.17s)
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-673417 -n newest-cni-673417
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-673417 -n newest-cni-673417: exit status 7 (68.162926ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p newest-cni-673417 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.17s)

TestStartStop/group/newest-cni/serial/SecondStart (15.84s)
=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p newest-cni-673417 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.31.0
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-arm64 start -p newest-cni-673417 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.31.0: (15.506140242s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-673417 -n newest-cni-673417
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (15.84s)

TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)
=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:273: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)
=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:284: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.27s)
=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 -p newest-cni-673417 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20240813-c6f155d6
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20240730-75a5af0c
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.27s)
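The image audit above lists what is present in the node's container runtime and flags anything outside the expected minikube image set. A minimal sketch of the same check by hand (the exact JSON schema is not shown in this log, so parsing is left to the reader):

  minikube -p newest-cni-673417 image list --format=json   # machine-readable listing, as the test uses
  minikube -p newest-cni-673417 image list                 # plain one-image-per-line listing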
TestStartStop/group/newest-cni/serial/Pause (2.96s)
=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p newest-cni-673417 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-673417 -n newest-cni-673417
E0819 18:59:17.065813  300020 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19478-294620/.minikube/profiles/no-preload-319392/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-673417 -n newest-cni-673417: exit status 2 (316.793437ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p newest-cni-673417 -n newest-cni-673417
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p newest-cni-673417 -n newest-cni-673417: exit status 2 (314.249843ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p newest-cni-673417 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-673417 -n newest-cni-673417
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p newest-cni-673417 -n newest-cni-673417
--- PASS: TestStartStop/group/newest-cni/serial/Pause (2.96s)
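The pause/unpause round-trip above can be reproduced by hand with the same subcommands; note that exit status 2 from "status" is expected while components are paused. A sketch:

  minikube pause -p newest-cni-673417 --alsologtostderr -v=1
  minikube status --format={{.APIServer}} -p newest-cni-673417   # prints "Paused", exits 2
  minikube status --format={{.Kubelet}} -p newest-cni-673417     # prints "Stopped", exits 2
  minikube unpause -p newest-cni-673417 --alsologtostderr -v=1
  minikube status -p newest-cni-673417                           # components report Running again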
Test skip (28/328)

TestDownloadOnly/v1.20.0/cached-images (0s)
=== RUN   TestDownloadOnly/v1.20.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.20.0/cached-images (0.00s)

TestDownloadOnly/v1.20.0/binaries (0s)
=== RUN   TestDownloadOnly/v1.20.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.20.0/binaries (0.00s)

TestDownloadOnly/v1.20.0/kubectl (0s)
=== RUN   TestDownloadOnly/v1.20.0/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.20.0/kubectl (0.00s)

TestDownloadOnly/v1.31.0/cached-images (0s)
=== RUN   TestDownloadOnly/v1.31.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.31.0/cached-images (0.00s)

TestDownloadOnly/v1.31.0/binaries (0s)
=== RUN   TestDownloadOnly/v1.31.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.31.0/binaries (0.00s)

TestDownloadOnly/v1.31.0/kubectl (0s)
=== RUN   TestDownloadOnly/v1.31.0/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.31.0/kubectl (0.00s)

TestDownloadOnlyKic (0.58s)
=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:232: (dbg) Run:  out/minikube-linux-arm64 start --download-only -p download-docker-696457 --alsologtostderr --driver=docker  --container-runtime=containerd
aaa_download_only_test.go:244: Skip for arm64 platform. See https://github.com/kubernetes/minikube/issues/10144
helpers_test.go:175: Cleaning up "download-docker-696457" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p download-docker-696457
--- SKIP: TestDownloadOnlyKic (0.58s)

TestOffline (0s)
=== RUN   TestOffline
=== PAUSE TestOffline
=== CONT  TestOffline
aab_offline_test.go:35: skipping TestOffline - only docker runtime supported on arm64. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestOffline (0.00s)

TestAddons/parallel/HelmTiller (0s)
=== RUN   TestAddons/parallel/HelmTiller
=== PAUSE TestAddons/parallel/HelmTiller
=== CONT  TestAddons/parallel/HelmTiller
addons_test.go:446: skip Helm test on arm64
--- SKIP: TestAddons/parallel/HelmTiller (0.00s)

TestAddons/parallel/Olm (0s)
=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm
=== CONT  TestAddons/parallel/Olm
addons_test.go:500: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

TestDockerFlags (0s)
=== RUN   TestDockerFlags
docker_test.go:41: skipping: only runs with docker container runtime, currently testing containerd
--- SKIP: TestDockerFlags (0.00s)

TestKVMDriverInstallOrUpdate (0s)
=== RUN   TestKVMDriverInstallOrUpdate
driver_install_or_update_test.go:45: Skip if arm64. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestKVMDriverInstallOrUpdate (0.00s)

TestHyperKitDriverInstallOrUpdate (0s)
=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:105: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

TestHyperkitDriverSkipUpgrade (0s)
=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:169: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

TestFunctional/parallel/MySQL (0s)
=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1787: arm64 is not supported by mysql. Skip the test. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestFunctional/parallel/MySQL (0.00s)

TestFunctional/parallel/DockerEnv (0s)
=== RUN   TestFunctional/parallel/DockerEnv
=== PAUSE TestFunctional/parallel/DockerEnv
=== CONT  TestFunctional/parallel/DockerEnv
functional_test.go:463: only validate docker env with docker container runtime, currently testing containerd
--- SKIP: TestFunctional/parallel/DockerEnv (0.00s)

TestFunctional/parallel/PodmanEnv (0s)
=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv
=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:550: only validate podman env with docker container runtime, currently testing containerd
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.00s)

TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.00s)

TestGvisorAddon (0s)
=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

TestImageBuild (0s)
=== RUN   TestImageBuild
image_test.go:33: 
--- SKIP: TestImageBuild (0.00s)

TestChangeNoneUser (0s)
=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

TestScheduledStopWindows (0s)
=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

TestSkaffold (0s)
=== RUN   TestSkaffold
skaffold_test.go:45: skaffold requires docker-env, currently testing containerd container runtime
--- SKIP: TestSkaffold (0.00s)

TestNetworkPlugins/group/kubenet (4.3s)
=== RUN   TestNetworkPlugins/group/kubenet
net_test.go:93: Skipping the test as containerd container runtimes requires CNI
panic.go:626: 
----------------------- debugLogs start: kubenet-003670 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-003670

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: kubenet-003670

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-003670

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: kubenet-003670

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: kubenet-003670

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: kubenet-003670

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: kubenet-003670

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: kubenet-003670

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: kubenet-003670

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: kubenet-003670

>>> host: /etc/nsswitch.conf:
* Profile "kubenet-003670" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-003670"

>>> host: /etc/hosts:
* Profile "kubenet-003670" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-003670"

>>> host: /etc/resolv.conf:
* Profile "kubenet-003670" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-003670"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: kubenet-003670

>>> host: crictl pods:
* Profile "kubenet-003670" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-003670"

>>> host: crictl containers:
* Profile "kubenet-003670" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-003670"

>>> k8s: describe netcat deployment:
error: context "kubenet-003670" does not exist

>>> k8s: describe netcat pod(s):
error: context "kubenet-003670" does not exist

>>> k8s: netcat logs:
error: context "kubenet-003670" does not exist

>>> k8s: describe coredns deployment:
error: context "kubenet-003670" does not exist

>>> k8s: describe coredns pods:
error: context "kubenet-003670" does not exist

>>> k8s: coredns logs:
error: context "kubenet-003670" does not exist

>>> k8s: describe api server pod(s):
error: context "kubenet-003670" does not exist

>>> k8s: api server logs:
error: context "kubenet-003670" does not exist

>>> host: /etc/cni:
* Profile "kubenet-003670" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-003670"

>>> host: ip a s:
* Profile "kubenet-003670" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-003670"

>>> host: ip r s:
* Profile "kubenet-003670" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-003670"

>>> host: iptables-save:
* Profile "kubenet-003670" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-003670"

>>> host: iptables table nat:
* Profile "kubenet-003670" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-003670"

>>> k8s: describe kube-proxy daemon set:
error: context "kubenet-003670" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "kubenet-003670" does not exist

>>> k8s: kube-proxy logs:
error: context "kubenet-003670" does not exist

>>> host: kubelet daemon status:
* Profile "kubenet-003670" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-003670"

>>> host: kubelet daemon config:
* Profile "kubenet-003670" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-003670"

>>> k8s: kubelet logs:
* Profile "kubenet-003670" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-003670"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "kubenet-003670" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-003670"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "kubenet-003670" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-003670"

>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null
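The empty kubeconfig above is the root cause of every error in this debugLogs dump: the test is skipped before the kubenet-003670 profile is ever created, so kubectl has no matching context and minikube has no such profile. A hypothetical confirmation session:

  kubectl config get-contexts                   # no contexts listed
  kubectl --context kubenet-003670 get nodes    # error: context "kubenet-003670" does not exist
  minikube profile list                         # kubenet-003670 absent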
>>> k8s: cms:
Error in configuration: context was not found for specified context: kubenet-003670

>>> host: docker daemon status:
* Profile "kubenet-003670" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-003670"

>>> host: docker daemon config:
* Profile "kubenet-003670" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-003670"

>>> host: /etc/docker/daemon.json:
* Profile "kubenet-003670" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-003670"

>>> host: docker system info:
* Profile "kubenet-003670" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-003670"

>>> host: cri-docker daemon status:
* Profile "kubenet-003670" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-003670"

>>> host: cri-docker daemon config:
* Profile "kubenet-003670" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-003670"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "kubenet-003670" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-003670"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "kubenet-003670" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-003670"

>>> host: cri-dockerd version:
* Profile "kubenet-003670" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-003670"

>>> host: containerd daemon status:
* Profile "kubenet-003670" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-003670"

>>> host: containerd daemon config:
* Profile "kubenet-003670" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-003670"

>>> host: /lib/systemd/system/containerd.service:
* Profile "kubenet-003670" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-003670"

>>> host: /etc/containerd/config.toml:
* Profile "kubenet-003670" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-003670"

>>> host: containerd config dump:
* Profile "kubenet-003670" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-003670"

>>> host: crio daemon status:
* Profile "kubenet-003670" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-003670"

>>> host: crio daemon config:
* Profile "kubenet-003670" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-003670"

>>> host: /etc/crio:
* Profile "kubenet-003670" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-003670"

>>> host: crio config:
* Profile "kubenet-003670" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-003670"

----------------------- debugLogs end: kubenet-003670 [took: 4.097748183s] --------------------------------
helpers_test.go:175: Cleaning up "kubenet-003670" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p kubenet-003670
--- SKIP: TestNetworkPlugins/group/kubenet (4.30s)

TestNetworkPlugins/group/cilium (5.18s)
=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:626: 
----------------------- debugLogs start: cilium-003670 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-003670

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-003670

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-003670

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-003670

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-003670

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-003670

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-003670

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-003670

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-003670

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-003670

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "cilium-003670" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-003670"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "cilium-003670" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-003670"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "cilium-003670" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-003670"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-003670

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "cilium-003670" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-003670"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "cilium-003670" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-003670"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "cilium-003670" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "cilium-003670" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "cilium-003670" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "cilium-003670" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "cilium-003670" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "cilium-003670" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "cilium-003670" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "cilium-003670" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "cilium-003670" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-003670"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "cilium-003670" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-003670"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "cilium-003670" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-003670"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "cilium-003670" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-003670"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "cilium-003670" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-003670"

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-003670

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-003670

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-003670" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-003670" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-003670

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-003670

                                                
                                                

                                                
                                                
>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-003670" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-003670" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "cilium-003670" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "cilium-003670" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "cilium-003670" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "cilium-003670" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-003670"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "cilium-003670" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-003670"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "cilium-003670" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-003670"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-003670" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-003670"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-003670" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-003670"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-003670

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "cilium-003670" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-003670"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "cilium-003670" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-003670"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "cilium-003670" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-003670"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "cilium-003670" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-003670"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "cilium-003670" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-003670"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "cilium-003670" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-003670"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-003670" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-003670"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-003670" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-003670"

>>> host: cri-dockerd version:
* Profile "cilium-003670" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-003670"

>>> host: containerd daemon status:
* Profile "cilium-003670" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-003670"

>>> host: containerd daemon config:
* Profile "cilium-003670" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-003670"

>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-003670" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-003670"

>>> host: /etc/containerd/config.toml:
* Profile "cilium-003670" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-003670"

>>> host: containerd config dump:
* Profile "cilium-003670" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-003670"

>>> host: crio daemon status:
* Profile "cilium-003670" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-003670"

>>> host: crio daemon config:
* Profile "cilium-003670" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-003670"

>>> host: /etc/crio:
* Profile "cilium-003670" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-003670"

>>> host: crio config:
* Profile "cilium-003670" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-003670"

----------------------- debugLogs end: cilium-003670 [took: 4.985378684s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-003670" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p cilium-003670
--- SKIP: TestNetworkPlugins/group/cilium (5.18s)

TestStartStop/group/disable-driver-mounts (0.22s)

=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:103: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-809973" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p disable-driver-mounts-809973
--- SKIP: TestStartStop/group/disable-driver-mounts (0.22s)
