Test Report: Docker_Linux_containerd_arm64 19644

c0eea096ace35e11d6c690a668e6718dc1bec60e:2024-09-15:36219

Failed tests (1/328)

Order  Failed test                Duration (s)
29     TestAddons/serial/Volcano  199.93
TestAddons/serial/Volcano (199.93s)

=== RUN   TestAddons/serial/Volcano
addons_test.go:913: volcano-controller stabilized in 62.729939ms
addons_test.go:905: volcano-admission stabilized in 62.818397ms
addons_test.go:897: volcano-scheduler stabilized in 62.869801ms
addons_test.go:919: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-scheduler" in namespace "volcano-system" ...
helpers_test.go:344: "volcano-scheduler-576bc46687-knjtl" [ec47b71a-f178-4369-9d3d-0477736c6c43] Running
addons_test.go:919: (dbg) TestAddons/serial/Volcano: app=volcano-scheduler healthy within 5.013270644s
addons_test.go:923: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-admission" in namespace "volcano-system" ...
helpers_test.go:344: "volcano-admission-77d7d48b68-44hlz" [2d3134cb-2fa1-4461-9fdf-0780930f0c06] Running
addons_test.go:923: (dbg) TestAddons/serial/Volcano: app=volcano-admission healthy within 6.003586155s
addons_test.go:927: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-controller" in namespace "volcano-system" ...
helpers_test.go:344: "volcano-controllers-56675bb4d5-jx4lp" [f9c0ec21-5c53-48c2-a1e7-a8ff761837bf] Running
addons_test.go:927: (dbg) TestAddons/serial/Volcano: app=volcano-controller healthy within 5.003633388s
addons_test.go:932: (dbg) Run:  kubectl --context addons-686490 delete -n volcano-system job volcano-admission-init
addons_test.go:938: (dbg) Run:  kubectl --context addons-686490 create -f testdata/vcjob.yaml
addons_test.go:946: (dbg) Run:  kubectl --context addons-686490 get vcjob -n my-volcano
addons_test.go:964: (dbg) TestAddons/serial/Volcano: waiting 3m0s for pods matching "volcano.sh/job-name=test-job" in namespace "my-volcano" ...
helpers_test.go:344: "test-job-nginx-0" [4fe03932-d62f-4a75-a1d2-fdf4333cfbab] Pending
helpers_test.go:344: "test-job-nginx-0" [4fe03932-d62f-4a75-a1d2-fdf4333cfbab] Pending: PodScheduled:Unschedulable (0/1 nodes are unavailable: 1 Insufficient cpu.)
helpers_test.go:329: TestAddons/serial/Volcano: WARNING: pod list for "my-volcano" "volcano.sh/job-name=test-job" returned: client rate limiter Wait returned an error: context deadline exceeded
addons_test.go:964: ***** TestAddons/serial/Volcano: pod "volcano.sh/job-name=test-job" failed to start within 3m0s: context deadline exceeded ****
addons_test.go:964: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p addons-686490 -n addons-686490
addons_test.go:964: TestAddons/serial/Volcano: showing logs for failed pods as of 2024-09-15 06:42:49.952051234 +0000 UTC m=+434.273733453
addons_test.go:964: (dbg) Run:  kubectl --context addons-686490 describe po test-job-nginx-0 -n my-volcano
addons_test.go:964: (dbg) kubectl --context addons-686490 describe po test-job-nginx-0 -n my-volcano:
Name:             test-job-nginx-0
Namespace:        my-volcano
Priority:         0
Service Account:  default
Node:             <none>
Labels:           volcano.sh/job-name=test-job
                  volcano.sh/job-namespace=my-volcano
                  volcano.sh/queue-name=test
                  volcano.sh/task-index=0
                  volcano.sh/task-spec=nginx
Annotations:      scheduling.k8s.io/group-name: test-job-c28da52d-5e88-4555-8398-cc4e2072c672
                  volcano.sh/job-name: test-job
                  volcano.sh/job-version: 0
                  volcano.sh/queue-name: test
                  volcano.sh/task-index: 0
                  volcano.sh/task-spec: nginx
                  volcano.sh/template-uid: test-job-nginx
Status:           Pending
IP:
IPs:              <none>
Controlled By:    Job/test-job
Containers:
  nginx:
    Image:      nginx:latest
    Port:       <none>
    Host Port:  <none>
    Command:
      sleep
      10m
    Limits:
      cpu:  1
    Requests:
      cpu:  1
    Environment:
      GOOGLE_APPLICATION_CREDENTIALS:  /google-app-creds.json
      PROJECT_ID:                      this_is_fake
      GCP_PROJECT:                     this_is_fake
      GCLOUD_PROJECT:                  this_is_fake
      GOOGLE_CLOUD_PROJECT:            this_is_fake
      CLOUDSDK_CORE_PROJECT:           this_is_fake
    Mounts:
      /google-app-creds.json from gcp-creds (ro)
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-c4m9l (ro)
Conditions:
  Type           Status
  PodScheduled   False
Volumes:
  kube-api-access-c4m9l:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    ConfigMapOptional:       <nil>
    DownwardAPI:             true
  gcp-creds:
    Type:          HostPath (bare host directory volume)
    Path:          /var/lib/minikube/google_application_credentials.json
    HostPathType:  File
QoS Class:         Burstable
Node-Selectors:    <none>
Tolerations:       node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                   node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type     Reason            Age    From     Message
  ----     ------            ----   ----     -------
  Warning  FailedScheduling  2m59s  volcano  0/1 nodes are unavailable: 1 Insufficient cpu.
addons_test.go:964: (dbg) Run:  kubectl --context addons-686490 logs test-job-nginx-0 -n my-volcano
addons_test.go:964: (dbg) kubectl --context addons-686490 logs test-job-nginx-0 -n my-volcano:
addons_test.go:965: failed waiting for test-local-path pod: volcano.sh/job-name=test-job within 3m0s: context deadline exceeded
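
The FailedScheduling event above is the root cause: the test job requests cpu: 1, but the single node's 2 CPUs are already largely claimed by the addons enabled at start. A quick way to confirm the arithmetic against a live profile is with two standard kubectl queries (the --context value matches this run's profile; the grep pattern assumes the usual "Allocated resources" section of kubectl describe output):

	kubectl --context addons-686490 describe node addons-686490 | grep -A 8 'Allocated resources'
	kubectl --context addons-686490 get pods -A -o custom-columns='NS:.metadata.namespace,NAME:.metadata.name,CPU:.spec.containers[*].resources.requests.cpu'

The first prints the node's allocatable CPU against total requests; the second lists per-pod CPU requests, making it easy to see whether a full CPU of headroom remains for test-job-nginx-0.
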
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestAddons/serial/Volcano]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect addons-686490
helpers_test.go:235: (dbg) docker inspect addons-686490:

-- stdout --
	[
	    {
	        "Id": "1f9a5ff454a0a2e7b9aff6d320e640af5427dab59c396a5396036077828d08cb",
	        "Created": "2024-09-15T06:36:19.109593731Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 3199906,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2024-09-15T06:36:19.276322376Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:a1b71fa87733590eb4674b16f6945626ae533f3af37066893e3fd70eb9476268",
	        "ResolvConfPath": "/var/lib/docker/containers/1f9a5ff454a0a2e7b9aff6d320e640af5427dab59c396a5396036077828d08cb/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/1f9a5ff454a0a2e7b9aff6d320e640af5427dab59c396a5396036077828d08cb/hostname",
	        "HostsPath": "/var/lib/docker/containers/1f9a5ff454a0a2e7b9aff6d320e640af5427dab59c396a5396036077828d08cb/hosts",
	        "LogPath": "/var/lib/docker/containers/1f9a5ff454a0a2e7b9aff6d320e640af5427dab59c396a5396036077828d08cb/1f9a5ff454a0a2e7b9aff6d320e640af5427dab59c396a5396036077828d08cb-json.log",
	        "Name": "/addons-686490",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "addons-686490:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "addons-686490",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4194304000,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8388608000,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/239b2e9109a9b7aff528d2eb808cfdbe58534dcecceb763bff9aa1805b2c0e9a-init/diff:/var/lib/docker/overlay2/31eb295c1996517842adc8af440314d53294837c66bc19c5926f12a15defbe5c/diff",
	                "MergedDir": "/var/lib/docker/overlay2/239b2e9109a9b7aff528d2eb808cfdbe58534dcecceb763bff9aa1805b2c0e9a/merged",
	                "UpperDir": "/var/lib/docker/overlay2/239b2e9109a9b7aff528d2eb808cfdbe58534dcecceb763bff9aa1805b2c0e9a/diff",
	                "WorkDir": "/var/lib/docker/overlay2/239b2e9109a9b7aff528d2eb808cfdbe58534dcecceb763bff9aa1805b2c0e9a/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "addons-686490",
	                "Source": "/var/lib/docker/volumes/addons-686490/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "addons-686490",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "addons-686490",
	                "name.minikube.sigs.k8s.io": "addons-686490",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "69d7677abbff9eda5c41ad738f95ffe1cd25772931f29309088b19dc6bfb525b",
	            "SandboxKey": "/var/run/docker/netns/69d7677abbff",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "35877"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "35878"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "35881"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "35879"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "35880"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "addons-686490": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "DriverOpts": null,
	                    "NetworkID": "44dcf73096248b90f025ee503af4f9fa367d8a59cef3aede0620f58fed5d1b32",
	                    "EndpointID": "a74c9b68cce2d49e63924ece89a93d60a37e5607ac57ccbae699595dbb305f5a",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "addons-686490",
	                        "1f9a5ff454a0"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
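
The inspect dump above is exhaustive; the fields relevant to this failure are the container's resource caps, which bound what the node can schedule. They can be pulled directly with docker inspect's Go-template formatter (field names taken from the JSON above):

	docker inspect -f 'cpus={{.HostConfig.NanoCpus}} memory={{.HostConfig.Memory}}' addons-686490

NanoCpus of 2000000000 is 2 CPUs, and Memory of 4194304000 bytes is 4000 MiB, matching the --memory=4000 start flag: the node genuinely has only 2 CPUs to divide between the enabled addons and the 1-CPU test job.
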
helpers_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p addons-686490 -n addons-686490
helpers_test.go:244: <<< TestAddons/serial/Volcano FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestAddons/serial/Volcano]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 -p addons-686490 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-arm64 -p addons-686490 logs -n 25: (1.62523734s)
helpers_test.go:252: TestAddons/serial/Volcano logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	| Command |                 Args                 |        Profile         |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only              | download-only-808336   | jenkins | v1.34.0 | 15 Sep 24 06:35 UTC |                     |
	|         | -p download-only-808336              |                        |         |         |                     |                     |
	|         | --force --alsologtostderr            |                        |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0         |                        |         |         |                     |                     |
	|         | --container-runtime=containerd       |                        |         |         |                     |                     |
	|         | --driver=docker                      |                        |         |         |                     |                     |
	|         | --container-runtime=containerd       |                        |         |         |                     |                     |
	| delete  | --all                                | minikube               | jenkins | v1.34.0 | 15 Sep 24 06:35 UTC | 15 Sep 24 06:35 UTC |
	| delete  | -p download-only-808336              | download-only-808336   | jenkins | v1.34.0 | 15 Sep 24 06:35 UTC | 15 Sep 24 06:35 UTC |
	| start   | -o=json --download-only              | download-only-841381   | jenkins | v1.34.0 | 15 Sep 24 06:35 UTC |                     |
	|         | -p download-only-841381              |                        |         |         |                     |                     |
	|         | --force --alsologtostderr            |                        |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1         |                        |         |         |                     |                     |
	|         | --container-runtime=containerd       |                        |         |         |                     |                     |
	|         | --driver=docker                      |                        |         |         |                     |                     |
	|         | --container-runtime=containerd       |                        |         |         |                     |                     |
	| delete  | --all                                | minikube               | jenkins | v1.34.0 | 15 Sep 24 06:35 UTC | 15 Sep 24 06:35 UTC |
	| delete  | -p download-only-841381              | download-only-841381   | jenkins | v1.34.0 | 15 Sep 24 06:35 UTC | 15 Sep 24 06:35 UTC |
	| delete  | -p download-only-808336              | download-only-808336   | jenkins | v1.34.0 | 15 Sep 24 06:35 UTC | 15 Sep 24 06:35 UTC |
	| delete  | -p download-only-841381              | download-only-841381   | jenkins | v1.34.0 | 15 Sep 24 06:35 UTC | 15 Sep 24 06:35 UTC |
	| start   | --download-only -p                   | download-docker-343356 | jenkins | v1.34.0 | 15 Sep 24 06:35 UTC |                     |
	|         | download-docker-343356               |                        |         |         |                     |                     |
	|         | --alsologtostderr                    |                        |         |         |                     |                     |
	|         | --driver=docker                      |                        |         |         |                     |                     |
	|         | --container-runtime=containerd       |                        |         |         |                     |                     |
	| delete  | -p download-docker-343356            | download-docker-343356 | jenkins | v1.34.0 | 15 Sep 24 06:35 UTC | 15 Sep 24 06:35 UTC |
	| start   | --download-only -p                   | binary-mirror-625101   | jenkins | v1.34.0 | 15 Sep 24 06:35 UTC |                     |
	|         | binary-mirror-625101                 |                        |         |         |                     |                     |
	|         | --alsologtostderr                    |                        |         |         |                     |                     |
	|         | --binary-mirror                      |                        |         |         |                     |                     |
	|         | http://127.0.0.1:36845               |                        |         |         |                     |                     |
	|         | --driver=docker                      |                        |         |         |                     |                     |
	|         | --container-runtime=containerd       |                        |         |         |                     |                     |
	| delete  | -p binary-mirror-625101              | binary-mirror-625101   | jenkins | v1.34.0 | 15 Sep 24 06:35 UTC | 15 Sep 24 06:35 UTC |
	| addons  | disable dashboard -p                 | addons-686490          | jenkins | v1.34.0 | 15 Sep 24 06:35 UTC |                     |
	|         | addons-686490                        |                        |         |         |                     |                     |
	| addons  | enable dashboard -p                  | addons-686490          | jenkins | v1.34.0 | 15 Sep 24 06:35 UTC |                     |
	|         | addons-686490                        |                        |         |         |                     |                     |
	| start   | -p addons-686490 --wait=true         | addons-686490          | jenkins | v1.34.0 | 15 Sep 24 06:35 UTC | 15 Sep 24 06:39 UTC |
	|         | --memory=4000 --alsologtostderr      |                        |         |         |                     |                     |
	|         | --addons=registry                    |                        |         |         |                     |                     |
	|         | --addons=metrics-server              |                        |         |         |                     |                     |
	|         | --addons=volumesnapshots             |                        |         |         |                     |                     |
	|         | --addons=csi-hostpath-driver         |                        |         |         |                     |                     |
	|         | --addons=gcp-auth                    |                        |         |         |                     |                     |
	|         | --addons=cloud-spanner               |                        |         |         |                     |                     |
	|         | --addons=inspektor-gadget            |                        |         |         |                     |                     |
	|         | --addons=storage-provisioner-rancher |                        |         |         |                     |                     |
	|         | --addons=nvidia-device-plugin        |                        |         |         |                     |                     |
	|         | --addons=yakd --addons=volcano       |                        |         |         |                     |                     |
	|         | --driver=docker                      |                        |         |         |                     |                     |
	|         | --container-runtime=containerd       |                        |         |         |                     |                     |
	|         | --addons=ingress                     |                        |         |         |                     |                     |
	|         | --addons=ingress-dns                 |                        |         |         |                     |                     |
	|---------|--------------------------------------|------------------------|---------|---------|---------------------|---------------------|
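
For reference, the failing profile's start invocation is spread across the wrapped Audit rows above; reconstructed as a single command (flags copied verbatim from those rows, with backslash continuations added for readability):

	out/minikube-linux-arm64 start -p addons-686490 --wait=true --memory=4000 --alsologtostderr \
	  --addons=registry --addons=metrics-server --addons=volumesnapshots \
	  --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner \
	  --addons=inspektor-gadget --addons=storage-provisioner-rancher \
	  --addons=nvidia-device-plugin --addons=yakd --addons=volcano \
	  --driver=docker --container-runtime=containerd --addons=ingress --addons=ingress-dns
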
	
	
	==> Last Start <==
	Log file created at: 2024/09/15 06:35:55
	Running on machine: ip-172-31-29-130
	Binary: Built with gc go1.23.0 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0915 06:35:55.162943 3199412 out.go:345] Setting OutFile to fd 1 ...
	I0915 06:35:55.163090 3199412 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0915 06:35:55.163101 3199412 out.go:358] Setting ErrFile to fd 2...
	I0915 06:35:55.163106 3199412 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0915 06:35:55.163394 3199412 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19644-3193270/.minikube/bin
	I0915 06:35:55.163917 3199412 out.go:352] Setting JSON to false
	I0915 06:35:55.164838 3199412 start.go:129] hostinfo: {"hostname":"ip-172-31-29-130","uptime":51507,"bootTime":1726330649,"procs":194,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1069-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"36adf542-ef4f-4e2d-a0c8-6868d1383ff9"}
	I0915 06:35:55.164923 3199412 start.go:139] virtualization:  
	I0915 06:35:55.168268 3199412 out.go:177] * [addons-686490] minikube v1.34.0 on Ubuntu 20.04 (arm64)
	I0915 06:35:55.171860 3199412 out.go:177]   - MINIKUBE_LOCATION=19644
	I0915 06:35:55.171970 3199412 notify.go:220] Checking for updates...
	I0915 06:35:55.177538 3199412 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0915 06:35:55.180256 3199412 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19644-3193270/kubeconfig
	I0915 06:35:55.182813 3199412 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19644-3193270/.minikube
	I0915 06:35:55.185375 3199412 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0915 06:35:55.188067 3199412 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0915 06:35:55.190934 3199412 driver.go:394] Setting default libvirt URI to qemu:///system
	I0915 06:35:55.210946 3199412 docker.go:123] docker version: linux-27.2.1:Docker Engine - Community
	I0915 06:35:55.211123 3199412 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0915 06:35:55.276210 3199412 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:25 OomKillDisable:true NGoroutines:44 SystemTime:2024-09-15 06:35:55.266646843 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1069-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:27.2.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.2]] Warnings:<nil>}}
	I0915 06:35:55.276319 3199412 docker.go:318] overlay module found
	I0915 06:35:55.279275 3199412 out.go:177] * Using the docker driver based on user configuration
	I0915 06:35:55.281904 3199412 start.go:297] selected driver: docker
	I0915 06:35:55.281929 3199412 start.go:901] validating driver "docker" against <nil>
	I0915 06:35:55.281942 3199412 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0915 06:35:55.282599 3199412 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0915 06:35:55.336016 3199412 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:25 OomKillDisable:true NGoroutines:44 SystemTime:2024-09-15 06:35:55.324254119 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1069-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:27.2.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.2]] Warnings:<nil>}}
	I0915 06:35:55.336241 3199412 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0915 06:35:55.336474 3199412 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0915 06:35:55.339131 3199412 out.go:177] * Using Docker driver with root privileges
	I0915 06:35:55.341749 3199412 cni.go:84] Creating CNI manager for ""
	I0915 06:35:55.341822 3199412 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I0915 06:35:55.341835 3199412 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0915 06:35:55.341918 3199412 start.go:340] cluster config:
	{Name:addons-686490 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-686490 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0915 06:35:55.344898 3199412 out.go:177] * Starting "addons-686490" primary control-plane node in "addons-686490" cluster
	I0915 06:35:55.347476 3199412 cache.go:121] Beginning downloading kic base image for docker with containerd
	I0915 06:35:55.350132 3199412 out.go:177] * Pulling base image v0.0.45-1726358845-19644 ...
	I0915 06:35:55.352791 3199412 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime containerd
	I0915 06:35:55.352853 3199412 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19644-3193270/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-containerd-overlay2-arm64.tar.lz4
	I0915 06:35:55.352867 3199412 cache.go:56] Caching tarball of preloaded images
	I0915 06:35:55.352870 3199412 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 in local docker daemon
	I0915 06:35:55.352948 3199412 preload.go:172] Found /home/jenkins/minikube-integration/19644-3193270/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-containerd-overlay2-arm64.tar.lz4 in cache, skipping download
	I0915 06:35:55.352959 3199412 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on containerd
	I0915 06:35:55.353323 3199412 profile.go:143] Saving config to /home/jenkins/minikube-integration/19644-3193270/.minikube/profiles/addons-686490/config.json ...
	I0915 06:35:55.353354 3199412 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19644-3193270/.minikube/profiles/addons-686490/config.json: {Name:mk462ad3fc7677a63e6504c2fa2566fa05d70167 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0915 06:35:55.367746 3199412 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 to local cache
	I0915 06:35:55.367874 3199412 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 in local cache directory
	I0915 06:35:55.367895 3199412 image.go:66] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 in local cache directory, skipping pull
	I0915 06:35:55.367900 3199412 image.go:135] gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 exists in cache, skipping pull
	I0915 06:35:55.367908 3199412 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 as a tarball
	I0915 06:35:55.367913 3199412 cache.go:162] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 from local cache
	I0915 06:36:12.641988 3199412 cache.go:164] successfully loaded and using gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 from cached tarball
	I0915 06:36:12.642025 3199412 cache.go:194] Successfully downloaded all kic artifacts
	I0915 06:36:12.642055 3199412 start.go:360] acquireMachinesLock for addons-686490: {Name:mk95927a99e0bbbb21f63ca024d75271d0b9453c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0915 06:36:12.642190 3199412 start.go:364] duration metric: took 107.953µs to acquireMachinesLock for "addons-686490"
	I0915 06:36:12.642220 3199412 start.go:93] Provisioning new machine with config: &{Name:addons-686490 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-686490 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0915 06:36:12.642312 3199412 start.go:125] createHost starting for "" (driver="docker")
	I0915 06:36:12.645457 3199412 out.go:235] * Creating docker container (CPUs=2, Memory=4000MB) ...
	I0915 06:36:12.645694 3199412 start.go:159] libmachine.API.Create for "addons-686490" (driver="docker")
	I0915 06:36:12.645729 3199412 client.go:168] LocalClient.Create starting
	I0915 06:36:12.645848 3199412 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/19644-3193270/.minikube/certs/ca.pem
	I0915 06:36:12.993891 3199412 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/19644-3193270/.minikube/certs/cert.pem
	I0915 06:36:13.363259 3199412 cli_runner.go:164] Run: docker network inspect addons-686490 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0915 06:36:13.379971 3199412 cli_runner.go:211] docker network inspect addons-686490 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0915 06:36:13.380082 3199412 network_create.go:284] running [docker network inspect addons-686490] to gather additional debugging logs...
	I0915 06:36:13.380107 3199412 cli_runner.go:164] Run: docker network inspect addons-686490
	W0915 06:36:13.395357 3199412 cli_runner.go:211] docker network inspect addons-686490 returned with exit code 1
	I0915 06:36:13.395391 3199412 network_create.go:287] error running [docker network inspect addons-686490]: docker network inspect addons-686490: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network addons-686490 not found
	I0915 06:36:13.395405 3199412 network_create.go:289] output of [docker network inspect addons-686490]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network addons-686490 not found
	
	** /stderr **
	I0915 06:36:13.395507 3199412 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0915 06:36:13.411093 3199412 network.go:206] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x4001c05be0}
	I0915 06:36:13.411134 3199412 network_create.go:124] attempt to create docker network addons-686490 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I0915 06:36:13.411189 3199412 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=addons-686490 addons-686490
	I0915 06:36:13.483066 3199412 network_create.go:108] docker network addons-686490 192.168.49.0/24 created
	I0915 06:36:13.483102 3199412 kic.go:121] calculated static IP "192.168.49.2" for the "addons-686490" container
	I0915 06:36:13.483189 3199412 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0915 06:36:13.499412 3199412 cli_runner.go:164] Run: docker volume create addons-686490 --label name.minikube.sigs.k8s.io=addons-686490 --label created_by.minikube.sigs.k8s.io=true
	I0915 06:36:13.515791 3199412 oci.go:103] Successfully created a docker volume addons-686490
	I0915 06:36:13.515897 3199412 cli_runner.go:164] Run: docker run --rm --name addons-686490-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-686490 --entrypoint /usr/bin/test -v addons-686490:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 -d /var/lib
	I0915 06:36:15.018875 3199412 cli_runner.go:217] Completed: docker run --rm --name addons-686490-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-686490 --entrypoint /usr/bin/test -v addons-686490:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 -d /var/lib: (1.50293235s)
	I0915 06:36:15.018917 3199412 oci.go:107] Successfully prepared a docker volume addons-686490
	I0915 06:36:15.018958 3199412 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime containerd
	I0915 06:36:15.018986 3199412 kic.go:194] Starting extracting preloaded images to volume ...
	I0915 06:36:15.019123 3199412 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/19644-3193270/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-containerd-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v addons-686490:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 -I lz4 -xf /preloaded.tar -C /extractDir
	I0915 06:36:19.037611 3199412 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/19644-3193270/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-containerd-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v addons-686490:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 -I lz4 -xf /preloaded.tar -C /extractDir: (4.018436823s)
	I0915 06:36:19.037644 3199412 kic.go:203] duration metric: took 4.018655313s to extract preloaded images to volume ...
	W0915 06:36:19.037791 3199412 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I0915 06:36:19.037907 3199412 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0915 06:36:19.093311 3199412 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname addons-686490 --name addons-686490 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-686490 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=addons-686490 --network addons-686490 --ip 192.168.49.2 --volume addons-686490:/var --security-opt apparmor=unconfined --memory=4000mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0
	I0915 06:36:19.454017 3199412 cli_runner.go:164] Run: docker container inspect addons-686490 --format={{.State.Running}}
	I0915 06:36:19.482503 3199412 cli_runner.go:164] Run: docker container inspect addons-686490 --format={{.State.Status}}
	I0915 06:36:19.505659 3199412 cli_runner.go:164] Run: docker exec addons-686490 stat /var/lib/dpkg/alternatives/iptables
	I0915 06:36:19.574722 3199412 oci.go:144] the created container "addons-686490" has a running status.
	I0915 06:36:19.574755 3199412 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/19644-3193270/.minikube/machines/addons-686490/id_rsa...
	I0915 06:36:19.889289 3199412 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/19644-3193270/.minikube/machines/addons-686490/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0915 06:36:19.924850 3199412 cli_runner.go:164] Run: docker container inspect addons-686490 --format={{.State.Status}}
	I0915 06:36:19.947275 3199412 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0915 06:36:19.947295 3199412 kic_runner.go:114] Args: [docker exec --privileged addons-686490 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0915 06:36:20.046003 3199412 cli_runner.go:164] Run: docker container inspect addons-686490 --format={{.State.Status}}
	I0915 06:36:20.077172 3199412 machine.go:93] provisionDockerMachine start ...
	I0915 06:36:20.077269 3199412 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-686490
	I0915 06:36:20.112397 3199412 main.go:141] libmachine: Using SSH client type: native
	I0915 06:36:20.114699 3199412 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x41abe0] 0x41d420 <nil>  [] 0s} 127.0.0.1 35877 <nil> <nil>}
	I0915 06:36:20.114722 3199412 main.go:141] libmachine: About to run SSH command:
	hostname
	I0915 06:36:20.294457 3199412 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-686490
	
	I0915 06:36:20.294533 3199412 ubuntu.go:169] provisioning hostname "addons-686490"
	I0915 06:36:20.294633 3199412 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-686490
	I0915 06:36:20.319211 3199412 main.go:141] libmachine: Using SSH client type: native
	I0915 06:36:20.319456 3199412 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x41abe0] 0x41d420 <nil>  [] 0s} 127.0.0.1 35877 <nil> <nil>}
	I0915 06:36:20.319475 3199412 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-686490 && echo "addons-686490" | sudo tee /etc/hostname
	I0915 06:36:20.477357 3199412 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-686490
	
	I0915 06:36:20.477480 3199412 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-686490
	I0915 06:36:20.497750 3199412 main.go:141] libmachine: Using SSH client type: native
	I0915 06:36:20.498096 3199412 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x41abe0] 0x41d420 <nil>  [] 0s} 127.0.0.1 35877 <nil> <nil>}
	I0915 06:36:20.498141 3199412 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-686490' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-686490/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-686490' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0915 06:36:20.635060 3199412 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0915 06:36:20.635099 3199412 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/19644-3193270/.minikube CaCertPath:/home/jenkins/minikube-integration/19644-3193270/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19644-3193270/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19644-3193270/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19644-3193270/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19644-3193270/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19644-3193270/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19644-3193270/.minikube}
	I0915 06:36:20.635141 3199412 ubuntu.go:177] setting up certificates
	I0915 06:36:20.635153 3199412 provision.go:84] configureAuth start
	I0915 06:36:20.635218 3199412 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-686490
	I0915 06:36:20.652033 3199412 provision.go:143] copyHostCerts
	I0915 06:36:20.652120 3199412 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19644-3193270/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19644-3193270/.minikube/ca.pem (1078 bytes)
	I0915 06:36:20.652245 3199412 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19644-3193270/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19644-3193270/.minikube/cert.pem (1123 bytes)
	I0915 06:36:20.652335 3199412 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19644-3193270/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19644-3193270/.minikube/key.pem (1675 bytes)
	I0915 06:36:20.652384 3199412 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19644-3193270/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19644-3193270/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19644-3193270/.minikube/certs/ca-key.pem org=jenkins.addons-686490 san=[127.0.0.1 192.168.49.2 addons-686490 localhost minikube]
	I0915 06:36:21.250219 3199412 provision.go:177] copyRemoteCerts
	I0915 06:36:21.250295 3199412 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0915 06:36:21.250337 3199412 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-686490
	I0915 06:36:21.266602 3199412 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35877 SSHKeyPath:/home/jenkins/minikube-integration/19644-3193270/.minikube/machines/addons-686490/id_rsa Username:docker}
	I0915 06:36:21.367995 3199412 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19644-3193270/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0915 06:36:21.392431 3199412 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19644-3193270/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0915 06:36:21.416774 3199412 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19644-3193270/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0915 06:36:21.440868 3199412 provision.go:87] duration metric: took 805.700034ms to configureAuth
	I0915 06:36:21.440909 3199412 ubuntu.go:193] setting minikube options for container-runtime
	I0915 06:36:21.441100 3199412 config.go:182] Loaded profile config "addons-686490": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
	I0915 06:36:21.441108 3199412 machine.go:96] duration metric: took 1.363917653s to provisionDockerMachine
	I0915 06:36:21.441115 3199412 client.go:171] duration metric: took 8.795375568s to LocalClient.Create
	I0915 06:36:21.441129 3199412 start.go:167] duration metric: took 8.795435496s to libmachine.API.Create "addons-686490"
	I0915 06:36:21.441137 3199412 start.go:293] postStartSetup for "addons-686490" (driver="docker")
	I0915 06:36:21.441146 3199412 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0915 06:36:21.441195 3199412 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0915 06:36:21.441237 3199412 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-686490
	I0915 06:36:21.457953 3199412 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35877 SSHKeyPath:/home/jenkins/minikube-integration/19644-3193270/.minikube/machines/addons-686490/id_rsa Username:docker}
	I0915 06:36:21.556211 3199412 ssh_runner.go:195] Run: cat /etc/os-release
	I0915 06:36:21.559313 3199412 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0915 06:36:21.559350 3199412 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0915 06:36:21.559363 3199412 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0915 06:36:21.559370 3199412 info.go:137] Remote host: Ubuntu 22.04.4 LTS
	I0915 06:36:21.559381 3199412 filesync.go:126] Scanning /home/jenkins/minikube-integration/19644-3193270/.minikube/addons for local assets ...
	I0915 06:36:21.559446 3199412 filesync.go:126] Scanning /home/jenkins/minikube-integration/19644-3193270/.minikube/files for local assets ...
	I0915 06:36:21.559471 3199412 start.go:296] duration metric: took 118.327766ms for postStartSetup
	I0915 06:36:21.559802 3199412 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-686490
	I0915 06:36:21.575998 3199412 profile.go:143] Saving config to /home/jenkins/minikube-integration/19644-3193270/.minikube/profiles/addons-686490/config.json ...
	I0915 06:36:21.576286 3199412 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0915 06:36:21.576354 3199412 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-686490
	I0915 06:36:21.592946 3199412 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35877 SSHKeyPath:/home/jenkins/minikube-integration/19644-3193270/.minikube/machines/addons-686490/id_rsa Username:docker}
	I0915 06:36:21.688272 3199412 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0915 06:36:21.693516 3199412 start.go:128] duration metric: took 9.051179059s to createHost
	I0915 06:36:21.693550 3199412 start.go:83] releasing machines lock for "addons-686490", held for 9.051346753s
	I0915 06:36:21.693637 3199412 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-686490
	I0915 06:36:21.709974 3199412 ssh_runner.go:195] Run: cat /version.json
	I0915 06:36:21.709994 3199412 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0915 06:36:21.710031 3199412 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-686490
	I0915 06:36:21.710069 3199412 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-686490
	I0915 06:36:21.727838 3199412 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35877 SSHKeyPath:/home/jenkins/minikube-integration/19644-3193270/.minikube/machines/addons-686490/id_rsa Username:docker}
	I0915 06:36:21.744589 3199412 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35877 SSHKeyPath:/home/jenkins/minikube-integration/19644-3193270/.minikube/machines/addons-686490/id_rsa Username:docker}
	I0915 06:36:21.954064 3199412 ssh_runner.go:195] Run: systemctl --version
	I0915 06:36:21.958940 3199412 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0915 06:36:21.963797 3199412 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
	I0915 06:36:21.989471 3199412 cni.go:230] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
	I0915 06:36:21.989564 3199412 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0915 06:36:22.023080 3199412 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
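The two find passes above first patch the loopback CNI config in place (injecting a "name" field and pinning cniVersion to 1.0.0), then park any bridge/podman configs under a .mk_disabled suffix so the CNI installed later (kindnet, per the recommendation further down) is the only active network plugin. A quick check from the host, using the container name from this run:

    docker exec addons-686490 ls /etc/cni/net.d    # disabled files keep the .mk_disabled suffix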
	I0915 06:36:22.023106 3199412 start.go:495] detecting cgroup driver to use...
	I0915 06:36:22.023140 3199412 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I0915 06:36:22.023195 3199412 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0915 06:36:22.036681 3199412 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0915 06:36:22.048993 3199412 docker.go:217] disabling cri-docker service (if available) ...
	I0915 06:36:22.049083 3199412 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0915 06:36:22.063922 3199412 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0915 06:36:22.079727 3199412 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0915 06:36:22.174958 3199412 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0915 06:36:22.269859 3199412 docker.go:233] disabling docker service ...
	I0915 06:36:22.270009 3199412 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0915 06:36:22.291114 3199412 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0915 06:36:22.303757 3199412 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0915 06:36:22.396635 3199412 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0915 06:36:22.479542 3199412 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0915 06:36:22.490982 3199412 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0915 06:36:22.507503 3199412 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
	I0915 06:36:22.517746 3199412 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0915 06:36:22.528525 3199412 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0915 06:36:22.528640 3199412 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0915 06:36:22.538737 3199412 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0915 06:36:22.548786 3199412 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0915 06:36:22.558578 3199412 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0915 06:36:22.568989 3199412 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0915 06:36:22.578565 3199412 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0915 06:36:22.588729 3199412 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0915 06:36:22.598969 3199412 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0915 06:36:22.609351 3199412 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0915 06:36:22.618468 3199412 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0915 06:36:22.627237 3199412 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0915 06:36:22.716291 3199412 ssh_runner.go:195] Run: sudo systemctl restart containerd
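The sed batch above rewrites /etc/containerd/config.toml in place: pin the pause image to registry.k8s.io/pause:3.10, set SystemdCgroup = false to match the cgroupfs driver detected on the host, normalize the runc runtime to io.containerd.runc.v2, point conf_dir at /etc/cni/net.d, and re-enable unprivileged ports, before the daemon-reload and containerd restart. A spot-check of the keys those edits target:

    docker exec addons-686490 grep -E 'sandbox_image|SystemdCgroup|conf_dir|enable_unprivileged_ports' /etc/containerd/config.toml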
	I0915 06:36:22.850209 3199412 start.go:542] Will wait 60s for socket path /run/containerd/containerd.sock
	I0915 06:36:22.850322 3199412 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I0915 06:36:22.853872 3199412 start.go:563] Will wait 60s for crictl version
	I0915 06:36:22.853961 3199412 ssh_runner.go:195] Run: which crictl
	I0915 06:36:22.857160 3199412 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0915 06:36:22.895250 3199412 start.go:579] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  1.7.22
	RuntimeApiVersion:  v1
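Equivalent manual query (crictl picks up the endpoint from the /etc/crictl.yaml written above, so no flags are needed inside the node):

    docker exec addons-686490 crictl version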
	I0915 06:36:22.895385 3199412 ssh_runner.go:195] Run: containerd --version
	I0915 06:36:22.919008 3199412 ssh_runner.go:195] Run: containerd --version
	I0915 06:36:22.944873 3199412 out.go:177] * Preparing Kubernetes v1.31.1 on containerd 1.7.22 ...
	I0915 06:36:22.947646 3199412 cli_runner.go:164] Run: docker network inspect addons-686490 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0915 06:36:22.963496 3199412 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I0915 06:36:22.967771 3199412 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
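The one-liner above is an idempotent /etc/hosts update: strip any prior line ending in the tab-separated hostname, append the fresh mapping, then sudo cp the temp file over the original (cp rather than mv keeps the target file's inode and permissions). Generalized sketch, with 10.0.0.1 and host.example as placeholder values:

    { grep -v $'\thost.example$' /etc/hosts; echo $'10.0.0.1\thost.example'; } > /tmp/h.$$
    sudo cp /tmp/h.$$ /etc/hosts && rm -f /tmp/h.$$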
	I0915 06:36:22.979038 3199412 kubeadm.go:883] updating cluster {Name:addons-686490 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-686490 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0915 06:36:22.979165 3199412 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime containerd
	I0915 06:36:22.979232 3199412 ssh_runner.go:195] Run: sudo crictl images --output json
	I0915 06:36:23.018819 3199412 containerd.go:627] all images are preloaded for containerd runtime.
	I0915 06:36:23.018845 3199412 containerd.go:534] Images already preloaded, skipping extraction
	I0915 06:36:23.018909 3199412 ssh_runner.go:195] Run: sudo crictl images --output json
	I0915 06:36:23.055049 3199412 containerd.go:627] all images are preloaded for containerd runtime.
	I0915 06:36:23.055084 3199412 cache_images.go:84] Images are preloaded, skipping loading
	I0915 06:36:23.055093 3199412 kubeadm.go:934] updating node { 192.168.49.2 8443 v1.31.1 containerd true true} ...
	I0915 06:36:23.055204 3199412 kubeadm.go:946] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=addons-686490 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:addons-686490 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0915 06:36:23.055282 3199412 ssh_runner.go:195] Run: sudo crictl info
	I0915 06:36:23.094973 3199412 cni.go:84] Creating CNI manager for ""
	I0915 06:36:23.095020 3199412 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I0915 06:36:23.095030 3199412 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0915 06:36:23.095052 3199412 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-686490 NodeName:addons-686490 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0915 06:36:23.095222 3199412 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "addons-686490"
	  kubeletExtraArgs:
	    node-ip: 192.168.49.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0915 06:36:23.095299 3199412 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0915 06:36:23.104320 3199412 binaries.go:44] Found k8s binaries, skipping transfer
	I0915 06:36:23.104414 3199412 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0915 06:36:23.113622 3199412 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (317 bytes)
	I0915 06:36:23.132559 3199412 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0915 06:36:23.151947 3199412 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2167 bytes)
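At this point the rendered kubeadm config (the YAML dumped above) sits on the node as /var/tmp/minikube/kubeadm.yaml.new. Recent kubeadm releases can sanity-check such a file offline; a hedged sketch, assuming the validate subcommand is available in this v1.31.1 binary:

    sudo /var/lib/minikube/binaries/v1.31.1/kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml.new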
	I0915 06:36:23.170803 3199412 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I0915 06:36:23.174433 3199412 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0915 06:36:23.185683 3199412 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0915 06:36:23.263882 3199412 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0915 06:36:23.280263 3199412 certs.go:68] Setting up /home/jenkins/minikube-integration/19644-3193270/.minikube/profiles/addons-686490 for IP: 192.168.49.2
	I0915 06:36:23.280286 3199412 certs.go:194] generating shared ca certs ...
	I0915 06:36:23.280303 3199412 certs.go:226] acquiring lock for ca certs: {Name:mkc5aa334004e490b788a5943d3511d48a9686f5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0915 06:36:23.280506 3199412 certs.go:240] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/19644-3193270/.minikube/ca.key
	I0915 06:36:23.533448 3199412 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19644-3193270/.minikube/ca.crt ...
	I0915 06:36:23.533484 3199412 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19644-3193270/.minikube/ca.crt: {Name:mk8c5bcc6d387c0995e66c465194ec38cc2fc78b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0915 06:36:23.533724 3199412 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19644-3193270/.minikube/ca.key ...
	I0915 06:36:23.533742 3199412 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19644-3193270/.minikube/ca.key: {Name:mk617a533adc7a69653ceac554fef5dc5eb56612 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0915 06:36:23.533835 3199412 certs.go:240] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19644-3193270/.minikube/proxy-client-ca.key
	I0915 06:36:24.450602 3199412 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19644-3193270/.minikube/proxy-client-ca.crt ...
	I0915 06:36:24.450639 3199412 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19644-3193270/.minikube/proxy-client-ca.crt: {Name:mk2d16284eb79286c5f48203ba6efd982120c687 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0915 06:36:24.450876 3199412 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19644-3193270/.minikube/proxy-client-ca.key ...
	I0915 06:36:24.450892 3199412 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19644-3193270/.minikube/proxy-client-ca.key: {Name:mk9b91f8fa16c02ff52b7efb51b73f2191560022 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0915 06:36:24.450983 3199412 certs.go:256] generating profile certs ...
	I0915 06:36:24.451060 3199412 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19644-3193270/.minikube/profiles/addons-686490/client.key
	I0915 06:36:24.451078 3199412 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19644-3193270/.minikube/profiles/addons-686490/client.crt with IP's: []
	I0915 06:36:24.878216 3199412 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19644-3193270/.minikube/profiles/addons-686490/client.crt ...
	I0915 06:36:24.878249 3199412 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19644-3193270/.minikube/profiles/addons-686490/client.crt: {Name:mk71b5c4b888e333ab908b6953d9c7918d7a52de Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0915 06:36:24.878440 3199412 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19644-3193270/.minikube/profiles/addons-686490/client.key ...
	I0915 06:36:24.878453 3199412 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19644-3193270/.minikube/profiles/addons-686490/client.key: {Name:mkb2ce20edb428a6c8bb4609eb5d91b648eec10f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0915 06:36:24.878544 3199412 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19644-3193270/.minikube/profiles/addons-686490/apiserver.key.b8fa7149
	I0915 06:36:24.878565 3199412 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19644-3193270/.minikube/profiles/addons-686490/apiserver.crt.b8fa7149 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2]
	I0915 06:36:25.124173 3199412 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19644-3193270/.minikube/profiles/addons-686490/apiserver.crt.b8fa7149 ...
	I0915 06:36:25.124204 3199412 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19644-3193270/.minikube/profiles/addons-686490/apiserver.crt.b8fa7149: {Name:mked7f648f4b66bea6079a372c63718ac814df1b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0915 06:36:25.124392 3199412 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19644-3193270/.minikube/profiles/addons-686490/apiserver.key.b8fa7149 ...
	I0915 06:36:25.124406 3199412 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19644-3193270/.minikube/profiles/addons-686490/apiserver.key.b8fa7149: {Name:mk565124f2e972667616f8192df071c2806a4366 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0915 06:36:25.124953 3199412 certs.go:381] copying /home/jenkins/minikube-integration/19644-3193270/.minikube/profiles/addons-686490/apiserver.crt.b8fa7149 -> /home/jenkins/minikube-integration/19644-3193270/.minikube/profiles/addons-686490/apiserver.crt
	I0915 06:36:25.125059 3199412 certs.go:385] copying /home/jenkins/minikube-integration/19644-3193270/.minikube/profiles/addons-686490/apiserver.key.b8fa7149 -> /home/jenkins/minikube-integration/19644-3193270/.minikube/profiles/addons-686490/apiserver.key
	I0915 06:36:25.125116 3199412 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19644-3193270/.minikube/profiles/addons-686490/proxy-client.key
	I0915 06:36:25.125136 3199412 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19644-3193270/.minikube/profiles/addons-686490/proxy-client.crt with IP's: []
	I0915 06:36:25.550898 3199412 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19644-3193270/.minikube/profiles/addons-686490/proxy-client.crt ...
	I0915 06:36:25.550931 3199412 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19644-3193270/.minikube/profiles/addons-686490/proxy-client.crt: {Name:mkebea17640fe08d37ae06c791d282a8837e0135 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0915 06:36:25.551628 3199412 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19644-3193270/.minikube/profiles/addons-686490/proxy-client.key ...
	I0915 06:36:25.551651 3199412 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19644-3193270/.minikube/profiles/addons-686490/proxy-client.key: {Name:mkfff90ab830b98413919696ccacda6e2c5ae6f5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0915 06:36:25.551859 3199412 certs.go:484] found cert: /home/jenkins/minikube-integration/19644-3193270/.minikube/certs/ca-key.pem (1675 bytes)
	I0915 06:36:25.551906 3199412 certs.go:484] found cert: /home/jenkins/minikube-integration/19644-3193270/.minikube/certs/ca.pem (1078 bytes)
	I0915 06:36:25.551935 3199412 certs.go:484] found cert: /home/jenkins/minikube-integration/19644-3193270/.minikube/certs/cert.pem (1123 bytes)
	I0915 06:36:25.551961 3199412 certs.go:484] found cert: /home/jenkins/minikube-integration/19644-3193270/.minikube/certs/key.pem (1675 bytes)
	I0915 06:36:25.552553 3199412 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19644-3193270/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0915 06:36:25.580790 3199412 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19644-3193270/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0915 06:36:25.607955 3199412 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19644-3193270/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0915 06:36:25.632000 3199412 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19644-3193270/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0915 06:36:25.656525 3199412 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19644-3193270/.minikube/profiles/addons-686490/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0915 06:36:25.681173 3199412 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19644-3193270/.minikube/profiles/addons-686490/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0915 06:36:25.706163 3199412 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19644-3193270/.minikube/profiles/addons-686490/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0915 06:36:25.730450 3199412 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19644-3193270/.minikube/profiles/addons-686490/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0915 06:36:25.754931 3199412 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19644-3193270/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0915 06:36:25.779803 3199412 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0915 06:36:25.798501 3199412 ssh_runner.go:195] Run: openssl version
	I0915 06:36:25.804394 3199412 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0915 06:36:25.814105 3199412 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0915 06:36:25.817903 3199412 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 15 06:36 /usr/share/ca-certificates/minikubeCA.pem
	I0915 06:36:25.818009 3199412 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0915 06:36:25.825099 3199412 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
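OpenSSL resolves trust through subject-hash filenames under /etc/ssl/certs, which is what the two steps above arrange: compute the hash of minikubeCA.pem, then symlink <hash>.0 to it. Reproducing the hash by hand (b5213941 matches the link name in the log):

    openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
    ls -l /etc/ssl/certs/b5213941.0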
	I0915 06:36:25.834381 3199412 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0915 06:36:25.837764 3199412 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0915 06:36:25.837813 3199412 kubeadm.go:392] StartCluster: {Name:addons-686490 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-686490 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0915 06:36:25.837894 3199412 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I0915 06:36:25.837951 3199412 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0915 06:36:25.885007 3199412 cri.go:89] found id: ""
	I0915 06:36:25.885124 3199412 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0915 06:36:25.894127 3199412 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0915 06:36:25.902986 3199412 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I0915 06:36:25.903130 3199412 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0915 06:36:25.912456 3199412 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0915 06:36:25.912518 3199412 kubeadm.go:157] found existing configuration files:
	
	I0915 06:36:25.912597 3199412 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0915 06:36:25.921423 3199412 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0915 06:36:25.921493 3199412 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0915 06:36:25.930165 3199412 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0915 06:36:25.939023 3199412 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0915 06:36:25.939122 3199412 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0915 06:36:25.947842 3199412 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0915 06:36:25.956579 3199412 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0915 06:36:25.956647 3199412 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0915 06:36:25.965182 3199412 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0915 06:36:25.973961 3199412 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0915 06:36:25.974041 3199412 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0915 06:36:25.982660 3199412 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
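The long --ignore-preflight-errors list is there because the "machine" is a container (see the SystemVerification note above): swap, CPU/memory and kernel-module checks are meaningless inside the kicbase image. A hedged way to rehearse the same init without mutating the node:

    sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" \
      kubeadm init --config /var/tmp/minikube/kubeadm.yaml --dry-run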
	I0915 06:36:26.031307 3199412 kubeadm.go:310] [init] Using Kubernetes version: v1.31.1
	I0915 06:36:26.031488 3199412 kubeadm.go:310] [preflight] Running pre-flight checks
	I0915 06:36:26.051122 3199412 kubeadm.go:310] [preflight] The system verification failed. Printing the output from the verification:
	I0915 06:36:26.051201 3199412 kubeadm.go:310] KERNEL_VERSION: 5.15.0-1069-aws
	I0915 06:36:26.051243 3199412 kubeadm.go:310] OS: Linux
	I0915 06:36:26.051293 3199412 kubeadm.go:310] CGROUPS_CPU: enabled
	I0915 06:36:26.051347 3199412 kubeadm.go:310] CGROUPS_CPUACCT: enabled
	I0915 06:36:26.051398 3199412 kubeadm.go:310] CGROUPS_CPUSET: enabled
	I0915 06:36:26.051449 3199412 kubeadm.go:310] CGROUPS_DEVICES: enabled
	I0915 06:36:26.051500 3199412 kubeadm.go:310] CGROUPS_FREEZER: enabled
	I0915 06:36:26.051553 3199412 kubeadm.go:310] CGROUPS_MEMORY: enabled
	I0915 06:36:26.051608 3199412 kubeadm.go:310] CGROUPS_PIDS: enabled
	I0915 06:36:26.051663 3199412 kubeadm.go:310] CGROUPS_HUGETLB: enabled
	I0915 06:36:26.051713 3199412 kubeadm.go:310] CGROUPS_BLKIO: enabled
	I0915 06:36:26.114661 3199412 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0915 06:36:26.114809 3199412 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0915 06:36:26.114964 3199412 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0915 06:36:26.123504 3199412 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0915 06:36:26.127232 3199412 out.go:235]   - Generating certificates and keys ...
	I0915 06:36:26.127428 3199412 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0915 06:36:26.127544 3199412 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0915 06:36:26.605064 3199412 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0915 06:36:27.252102 3199412 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0915 06:36:27.669423 3199412 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0915 06:36:28.092976 3199412 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0915 06:36:29.202487 3199412 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0915 06:36:29.202857 3199412 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [addons-686490 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0915 06:36:29.925882 3199412 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0915 06:36:29.926025 3199412 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [addons-686490 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0915 06:36:30.170486 3199412 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0915 06:36:30.504791 3199412 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0915 06:36:30.993883 3199412 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0915 06:36:30.994103 3199412 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0915 06:36:31.592272 3199412 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0915 06:36:31.856982 3199412 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0915 06:36:32.850572 3199412 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0915 06:36:33.218965 3199412 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0915 06:36:34.541551 3199412 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0915 06:36:34.542278 3199412 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0915 06:36:34.548354 3199412 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0915 06:36:34.551743 3199412 out.go:235]   - Booting up control plane ...
	I0915 06:36:34.551850 3199412 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0915 06:36:34.551926 3199412 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0915 06:36:34.553248 3199412 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0915 06:36:34.568311 3199412 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0915 06:36:34.575437 3199412 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0915 06:36:34.575515 3199412 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0915 06:36:34.670748 3199412 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0915 06:36:34.670876 3199412 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0915 06:36:36.172139 3199412 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.501724228s
	I0915 06:36:36.172243 3199412 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0915 06:36:42.674417 3199412 kubeadm.go:310] [api-check] The API server is healthy after 6.502233047s
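The api-check phase polls the apiserver's health endpoints. The default system:public-info-viewer binding leaves these reachable without credentials once the server is up, so the same probe can be run by hand (IP and port from the config above):

    curl -k https://192.168.49.2:8443/readyz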
	I0915 06:36:42.693697 3199412 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0915 06:36:42.708926 3199412 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0915 06:36:42.735251 3199412 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0915 06:36:42.735461 3199412 kubeadm.go:310] [mark-control-plane] Marking the node addons-686490 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0915 06:36:42.754046 3199412 kubeadm.go:310] [bootstrap-token] Using token: awdtln.0vp8q231ymmsqg1u
	I0915 06:36:42.756834 3199412 out.go:235]   - Configuring RBAC rules ...
	I0915 06:36:42.756967 3199412 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0915 06:36:42.761513 3199412 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0915 06:36:42.769627 3199412 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0915 06:36:42.773283 3199412 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0915 06:36:42.779208 3199412 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0915 06:36:42.783628 3199412 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0915 06:36:43.081967 3199412 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0915 06:36:43.507536 3199412 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0915 06:36:44.081819 3199412 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0915 06:36:44.082981 3199412 kubeadm.go:310] 
	I0915 06:36:44.083123 3199412 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0915 06:36:44.083138 3199412 kubeadm.go:310] 
	I0915 06:36:44.083216 3199412 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0915 06:36:44.083225 3199412 kubeadm.go:310] 
	I0915 06:36:44.083251 3199412 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0915 06:36:44.083316 3199412 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0915 06:36:44.083370 3199412 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0915 06:36:44.083378 3199412 kubeadm.go:310] 
	I0915 06:36:44.083432 3199412 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0915 06:36:44.083440 3199412 kubeadm.go:310] 
	I0915 06:36:44.083487 3199412 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0915 06:36:44.083503 3199412 kubeadm.go:310] 
	I0915 06:36:44.083555 3199412 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0915 06:36:44.083639 3199412 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0915 06:36:44.083749 3199412 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0915 06:36:44.083758 3199412 kubeadm.go:310] 
	I0915 06:36:44.083842 3199412 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0915 06:36:44.083923 3199412 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0915 06:36:44.083932 3199412 kubeadm.go:310] 
	I0915 06:36:44.084017 3199412 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token awdtln.0vp8q231ymmsqg1u \
	I0915 06:36:44.084123 3199412 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:40c7cf320af28954d47c6aad20f8c848672babdad0b1d523c7f9098bf692068a \
	I0915 06:36:44.084148 3199412 kubeadm.go:310] 	--control-plane 
	I0915 06:36:44.084156 3199412 kubeadm.go:310] 
	I0915 06:36:44.084239 3199412 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0915 06:36:44.084248 3199412 kubeadm.go:310] 
	I0915 06:36:44.084329 3199412 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token awdtln.0vp8q231ymmsqg1u \
	I0915 06:36:44.084435 3199412 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:40c7cf320af28954d47c6aad20f8c848672babdad0b1d523c7f9098bf692068a 
	I0915 06:36:44.088624 3199412 kubeadm.go:310] W0915 06:36:26.026897    1022 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0915 06:36:44.088925 3199412 kubeadm.go:310] W0915 06:36:26.028819    1022 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0915 06:36:44.089148 3199412 kubeadm.go:310] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1069-aws\n", err: exit status 1
	I0915 06:36:44.089259 3199412 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
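The --discovery-token-ca-cert-hash printed in the join commands a few lines above can be recomputed from the cluster CA with the standard kubeadm recipe (certificatesDir is /var/lib/minikube/certs per the config):

    openssl x509 -pubkey -in /var/lib/minikube/certs/ca.crt \
      | openssl rsa -pubin -outform der 2>/dev/null \
      | openssl dgst -sha256 -hex | sed 's/^.* //'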
	I0915 06:36:44.089280 3199412 cni.go:84] Creating CNI manager for ""
	I0915 06:36:44.089288 3199412 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I0915 06:36:44.092388 3199412 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0915 06:36:44.095089 3199412 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0915 06:36:44.099345 3199412 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.31.1/kubectl ...
	I0915 06:36:44.099375 3199412 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I0915 06:36:44.119122 3199412 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0915 06:36:44.396065 3199412 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0915 06:36:44.396189 3199412 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0915 06:36:44.396262 3199412 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-686490 minikube.k8s.io/updated_at=2024_09_15T06_36_44_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=7a3ca67a20528f5dabbb456e8e4ce542b58ef23a minikube.k8s.io/name=addons-686490 minikube.k8s.io/primary=true
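The two kubectl calls above tag the node with minikube metadata and bind kube-system's default service account to cluster-admin (the minikube-rbac binding the addons rely on). Quick checks with the same kubectl and kubeconfig:

    sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig get clusterrolebinding minikube-rbac
    sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig get node addons-686490 --show-labels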
	I0915 06:36:44.601073 3199412 ops.go:34] apiserver oom_adj: -16
	I0915 06:36:44.601200 3199412 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0915 06:36:45.102155 3199412 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0915 06:36:45.601988 3199412 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0915 06:36:46.102256 3199412 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0915 06:36:46.601837 3199412 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0915 06:36:47.102151 3199412 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0915 06:36:47.601876 3199412 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0915 06:36:48.102040 3199412 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0915 06:36:48.601426 3199412 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0915 06:36:48.849229 3199412 kubeadm.go:1113] duration metric: took 4.453081573s to wait for elevateKubeSystemPrivileges
	I0915 06:36:48.849258 3199412 kubeadm.go:394] duration metric: took 23.011448226s to StartCluster
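The repeated "get sa default" calls above are a half-second poll: the default ServiceAccount only appears once the controller-manager's service-account controller is live, which is what elevateKubeSystemPrivileges waits on. Equivalent shell form (sketch):

    until sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n default get sa default >/dev/null 2>&1; do sleep 0.5; done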
	I0915 06:36:48.849277 3199412 settings.go:142] acquiring lock: {Name:mkca7a8986bccd67741f09577b65c1a9eb63fddc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0915 06:36:48.849389 3199412 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19644-3193270/kubeconfig
	I0915 06:36:48.849769 3199412 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19644-3193270/kubeconfig: {Name:mkb6e748827e252b3291ddf224082d480f12d063 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0915 06:36:48.849972 3199412 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0915 06:36:48.850114 3199412 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0915 06:36:48.850396 3199412 config.go:182] Loaded profile config "addons-686490": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
	I0915 06:36:48.850338 3199412 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
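The toEnable map above drives one setup goroutine per addon, which is why the Setting/Checking lines that follow interleave and their timestamps run slightly out of order. The resulting addon state can be listed with the same binary and profile used in this run:

    out/minikube-linux-arm64 addons list -p addons-686490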
	I0915 06:36:48.850433 3199412 addons.go:69] Setting cloud-spanner=true in profile "addons-686490"
	I0915 06:36:48.850448 3199412 addons.go:234] Setting addon cloud-spanner=true in "addons-686490"
	I0915 06:36:48.850459 3199412 addons.go:69] Setting yakd=true in profile "addons-686490"
	I0915 06:36:48.850470 3199412 host.go:66] Checking if "addons-686490" exists ...
	I0915 06:36:48.850476 3199412 addons.go:234] Setting addon yakd=true in "addons-686490"
	I0915 06:36:48.850508 3199412 host.go:66] Checking if "addons-686490" exists ...
	I0915 06:36:48.850924 3199412 cli_runner.go:164] Run: docker container inspect addons-686490 --format={{.State.Status}}
	I0915 06:36:48.851175 3199412 cli_runner.go:164] Run: docker container inspect addons-686490 --format={{.State.Status}}
	I0915 06:36:48.851691 3199412 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-686490"
	I0915 06:36:48.851738 3199412 addons.go:234] Setting addon csi-hostpath-driver=true in "addons-686490"
	I0915 06:36:48.851765 3199412 host.go:66] Checking if "addons-686490" exists ...
	I0915 06:36:48.852194 3199412 cli_runner.go:164] Run: docker container inspect addons-686490 --format={{.State.Status}}
	I0915 06:36:48.857674 3199412 addons.go:69] Setting default-storageclass=true in profile "addons-686490"
	I0915 06:36:48.857713 3199412 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-686490"
	I0915 06:36:48.858075 3199412 cli_runner.go:164] Run: docker container inspect addons-686490 --format={{.State.Status}}
	I0915 06:36:48.858292 3199412 out.go:177] * Verifying Kubernetes components...
	I0915 06:36:48.859918 3199412 addons.go:69] Setting registry=true in profile "addons-686490"
	I0915 06:36:48.859945 3199412 addons.go:234] Setting addon registry=true in "addons-686490"
	I0915 06:36:48.859977 3199412 host.go:66] Checking if "addons-686490" exists ...
	I0915 06:36:48.861112 3199412 cli_runner.go:164] Run: docker container inspect addons-686490 --format={{.State.Status}}
	I0915 06:36:48.871123 3199412 addons.go:69] Setting gcp-auth=true in profile "addons-686490"
	I0915 06:36:48.871164 3199412 mustload.go:65] Loading cluster: addons-686490
	I0915 06:36:48.871349 3199412 config.go:182] Loaded profile config "addons-686490": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
	I0915 06:36:48.871624 3199412 cli_runner.go:164] Run: docker container inspect addons-686490 --format={{.State.Status}}
	I0915 06:36:48.875127 3199412 addons.go:69] Setting storage-provisioner=true in profile "addons-686490"
	I0915 06:36:48.877817 3199412 addons.go:234] Setting addon storage-provisioner=true in "addons-686490"
	I0915 06:36:48.878000 3199412 host.go:66] Checking if "addons-686490" exists ...
	I0915 06:36:48.878787 3199412 cli_runner.go:164] Run: docker container inspect addons-686490 --format={{.State.Status}}
	I0915 06:36:48.889155 3199412 addons.go:69] Setting ingress=true in profile "addons-686490"
	I0915 06:36:48.889283 3199412 addons.go:234] Setting addon ingress=true in "addons-686490"
	I0915 06:36:48.889400 3199412 host.go:66] Checking if "addons-686490" exists ...
	I0915 06:36:48.890208 3199412 cli_runner.go:164] Run: docker container inspect addons-686490 --format={{.State.Status}}
	I0915 06:36:48.877420 3199412 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-686490"
	I0915 06:36:48.916858 3199412 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-686490"
	I0915 06:36:48.917265 3199412 cli_runner.go:164] Run: docker container inspect addons-686490 --format={{.State.Status}}
	I0915 06:36:48.919339 3199412 addons.go:69] Setting ingress-dns=true in profile "addons-686490"
	I0915 06:36:48.919426 3199412 addons.go:234] Setting addon ingress-dns=true in "addons-686490"
	I0915 06:36:48.919502 3199412 host.go:66] Checking if "addons-686490" exists ...
	I0915 06:36:48.920148 3199412 cli_runner.go:164] Run: docker container inspect addons-686490 --format={{.State.Status}}
	I0915 06:36:48.877440 3199412 addons.go:69] Setting volcano=true in profile "addons-686490"
	I0915 06:36:48.959334 3199412 addons.go:234] Setting addon volcano=true in "addons-686490"
	I0915 06:36:48.959380 3199412 host.go:66] Checking if "addons-686490" exists ...
	I0915 06:36:48.959896 3199412 cli_runner.go:164] Run: docker container inspect addons-686490 --format={{.State.Status}}
	I0915 06:36:48.967117 3199412 addons.go:69] Setting inspektor-gadget=true in profile "addons-686490"
	I0915 06:36:48.967206 3199412 addons.go:234] Setting addon inspektor-gadget=true in "addons-686490"
	I0915 06:36:48.967268 3199412 host.go:66] Checking if "addons-686490" exists ...
	I0915 06:36:48.967813 3199412 cli_runner.go:164] Run: docker container inspect addons-686490 --format={{.State.Status}}
	I0915 06:36:48.877467 3199412 addons.go:69] Setting volumesnapshots=true in profile "addons-686490"
	I0915 06:36:48.982924 3199412 addons.go:234] Setting addon volumesnapshots=true in "addons-686490"
	I0915 06:36:48.982983 3199412 host.go:66] Checking if "addons-686490" exists ...
	I0915 06:36:48.983540 3199412 cli_runner.go:164] Run: docker container inspect addons-686490 --format={{.State.Status}}
	I0915 06:36:48.877620 3199412 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0915 06:36:49.003891 3199412 addons.go:69] Setting metrics-server=true in profile "addons-686490"
	I0915 06:36:49.003992 3199412 addons.go:234] Setting addon metrics-server=true in "addons-686490"
	I0915 06:36:49.004362 3199412 host.go:66] Checking if "addons-686490" exists ...
	I0915 06:36:49.004965 3199412 cli_runner.go:164] Run: docker container inspect addons-686490 --format={{.State.Status}}
	I0915 06:36:49.026410 3199412 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-686490"
	I0915 06:36:49.026449 3199412 addons.go:234] Setting addon nvidia-device-plugin=true in "addons-686490"
	I0915 06:36:49.026485 3199412 host.go:66] Checking if "addons-686490" exists ...
	I0915 06:36:49.026986 3199412 cli_runner.go:164] Run: docker container inspect addons-686490 --format={{.State.Status}}
	I0915 06:36:49.041888 3199412 addons.go:234] Setting addon default-storageclass=true in "addons-686490"
	I0915 06:36:49.042008 3199412 host.go:66] Checking if "addons-686490" exists ...
	I0915 06:36:49.042597 3199412 cli_runner.go:164] Run: docker container inspect addons-686490 --format={{.State.Status}}
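
Note: the interleaved "Setting addon ... in profile" lines above show minikube toggling every requested addon on the "addons-686490" profile; each toggle re-checks that the host exists and then shells out to docker container inspect to read the container's state. A minimal Go sketch of that state probe follows (the container name is taken from the log; the helper name containerState is illustrative, not minikube's actual code):

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    // containerState mirrors the cli_runner calls in the log: shell out to
    // `docker container inspect` with a Go template and return the raw
    // status string, e.g. "running". Sketch only.
    func containerState(name string) (string, error) {
        out, err := exec.Command("docker", "container", "inspect", name,
            "--format", "{{.State.Status}}").Output()
        if err != nil {
            return "", fmt.Errorf("inspect %s: %w", name, err)
        }
        return strings.TrimSpace(string(out)), nil
    }

    func main() {
        state, err := containerState("addons-686490")
        if err != nil {
            fmt.Println("error:", err)
            return
        }
        fmt.Println("state:", state)
    }
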
	I0915 06:36:49.063558 3199412 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I0915 06:36:49.068948 3199412 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I0915 06:36:49.069728 3199412 out.go:177]   - Using image docker.io/marcnuri/yakd:0.0.5
	I0915 06:36:49.078454 3199412 out.go:177]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.23
	I0915 06:36:49.081191 3199412 addons.go:431] installing /etc/kubernetes/addons/yakd-ns.yaml
	I0915 06:36:49.081229 3199412 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I0915 06:36:49.081357 3199412 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-686490
	I0915 06:36:49.090690 3199412 out.go:177]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.6
	I0915 06:36:49.098395 3199412 addons.go:431] installing /etc/kubernetes/addons/deployment.yaml
	I0915 06:36:49.098426 3199412 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I0915 06:36:49.098576 3199412 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-686490
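
Note: before each file copy, minikube resolves which host port Docker mapped to the container's SSH port (22/tcp), using the nested-index template seen in the line above. A self-contained sketch of the same lookup, assuming nothing beyond the docker CLI being on PATH:

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    // sshHostPort resolves the host port mapped to the container's 22/tcp,
    // using the same template as the log line above. Sketch only.
    func sshHostPort(name string) (string, error) {
        out, err := exec.Command("docker", "container", "inspect", "-f",
            `{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`, name).Output()
        if err != nil {
            return "", err
        }
        return strings.TrimSpace(string(out)), nil
    }

    func main() {
        port, _ := sshHostPort("addons-686490")
        fmt.Println("ssh port:", port) // 35877 in this run, per the sshutil lines below
    }
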
	I0915 06:36:49.133337 3199412 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I0915 06:36:49.139064 3199412 out.go:177]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I0915 06:36:49.140106 3199412 host.go:66] Checking if "addons-686490" exists ...
	I0915 06:36:49.148083 3199412 out.go:177]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I0915 06:36:49.181991 3199412 out.go:177]   - Using image docker.io/registry:2.8.3
	I0915 06:36:49.188458 3199412 addons.go:234] Setting addon storage-provisioner-rancher=true in "addons-686490"
	I0915 06:36:49.188512 3199412 host.go:66] Checking if "addons-686490" exists ...
	I0915 06:36:49.188977 3199412 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I0915 06:36:49.190318 3199412 cli_runner.go:164] Run: docker container inspect addons-686490 --format={{.State.Status}}
	I0915 06:36:49.194034 3199412 addons.go:431] installing /etc/kubernetes/addons/registry-rc.yaml
	I0915 06:36:49.194104 3199412 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I0915 06:36:49.194216 3199412 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-686490
	I0915 06:36:49.213278 3199412 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0915 06:36:49.216160 3199412 out.go:177]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.32.0
	I0915 06:36:49.216424 3199412 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0915 06:36:49.216466 3199412 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0915 06:36:49.216577 3199412 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-686490
	I0915 06:36:49.220067 3199412 addons.go:431] installing /etc/kubernetes/addons/ig-namespace.yaml
	I0915 06:36:49.220104 3199412 ssh_runner.go:362] scp inspektor-gadget/ig-namespace.yaml --> /etc/kubernetes/addons/ig-namespace.yaml (55 bytes)
	I0915 06:36:49.220215 3199412 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-686490
	I0915 06:36:49.247250 3199412 out.go:177]   - Using image registry.k8s.io/ingress-nginx/controller:v1.11.2
	I0915 06:36:49.261634 3199412 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I0915 06:36:49.264441 3199412 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I0915 06:36:49.264554 3199412 out.go:177]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.7.2
	I0915 06:36:49.266609 3199412 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I0915 06:36:49.286420 3199412 addons.go:431] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I0915 06:36:49.286515 3199412 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I0915 06:36:49.286621 3199412 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-686490
	I0915 06:36:49.268345 3199412 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
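
Note: the one-liner above fetches the coredns ConfigMap, uses sed to splice a hosts block (plus a log directive) into the Corefile, and kubectl-replaces the result, so cluster pods can resolve host.minikube.internal to the gateway address 192.168.49.1. Reconstructed from the sed expression, the injected stanza is:

        hosts {
           192.168.49.1 host.minikube.internal
           fallthrough
        }
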
	I0915 06:36:49.289984 3199412 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I0915 06:36:49.294556 3199412 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I0915 06:36:49.305295 3199412 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I0915 06:36:49.305325 3199412 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I0915 06:36:49.305423 3199412 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-686490
	I0915 06:36:49.317757 3199412 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0915 06:36:49.317837 3199412 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0915 06:36:49.317947 3199412 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-686490
	I0915 06:36:49.328714 3199412 addons.go:431] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I0915 06:36:49.328740 3199412 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I0915 06:36:49.328825 3199412 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-686490
	I0915 06:36:49.345054 3199412 out.go:177]   - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.3
	I0915 06:36:49.348541 3199412 addons.go:431] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0915 06:36:49.348642 3199412 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2442 bytes)
	I0915 06:36:49.348771 3199412 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-686490
	I0915 06:36:49.367745 3199412 out.go:177]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.16.2
	I0915 06:36:49.376226 3199412 addons.go:431] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0915 06:36:49.376260 3199412 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I0915 06:36:49.376352 3199412 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-686490
	I0915 06:36:49.379404 3199412 out.go:177]   - Using image docker.io/volcanosh/vc-webhook-manager:v1.9.0
	I0915 06:36:49.382265 3199412 out.go:177]   - Using image docker.io/volcanosh/vc-controller-manager:v1.9.0
	I0915 06:36:49.390875 3199412 out.go:177]   - Using image docker.io/volcanosh/vc-scheduler:v1.9.0
	I0915 06:36:49.400662 3199412 addons.go:431] installing /etc/kubernetes/addons/volcano-deployment.yaml
	I0915 06:36:49.400805 3199412 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volcano-deployment.yaml (434001 bytes)
	I0915 06:36:49.400945 3199412 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-686490
	I0915 06:36:49.449804 3199412 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35877 SSHKeyPath:/home/jenkins/minikube-integration/19644-3193270/.minikube/machines/addons-686490/id_rsa Username:docker}
	I0915 06:36:49.463331 3199412 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35877 SSHKeyPath:/home/jenkins/minikube-integration/19644-3193270/.minikube/machines/addons-686490/id_rsa Username:docker}
	I0915 06:36:49.464084 3199412 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0915 06:36:49.464099 3199412 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0915 06:36:49.464159 3199412 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-686490
	I0915 06:36:49.464726 3199412 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35877 SSHKeyPath:/home/jenkins/minikube-integration/19644-3193270/.minikube/machines/addons-686490/id_rsa Username:docker}
	I0915 06:36:49.474705 3199412 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35877 SSHKeyPath:/home/jenkins/minikube-integration/19644-3193270/.minikube/machines/addons-686490/id_rsa Username:docker}
	I0915 06:36:49.481956 3199412 out.go:177]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I0915 06:36:49.485009 3199412 out.go:177]   - Using image docker.io/busybox:stable
	I0915 06:36:49.487751 3199412 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0915 06:36:49.487781 3199412 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I0915 06:36:49.487907 3199412 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-686490
	I0915 06:36:49.514272 3199412 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0915 06:36:49.597810 3199412 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35877 SSHKeyPath:/home/jenkins/minikube-integration/19644-3193270/.minikube/machines/addons-686490/id_rsa Username:docker}
	I0915 06:36:49.613637 3199412 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35877 SSHKeyPath:/home/jenkins/minikube-integration/19644-3193270/.minikube/machines/addons-686490/id_rsa Username:docker}
	I0915 06:36:49.622456 3199412 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35877 SSHKeyPath:/home/jenkins/minikube-integration/19644-3193270/.minikube/machines/addons-686490/id_rsa Username:docker}
	I0915 06:36:49.655420 3199412 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35877 SSHKeyPath:/home/jenkins/minikube-integration/19644-3193270/.minikube/machines/addons-686490/id_rsa Username:docker}
	I0915 06:36:49.663748 3199412 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35877 SSHKeyPath:/home/jenkins/minikube-integration/19644-3193270/.minikube/machines/addons-686490/id_rsa Username:docker}
	I0915 06:36:49.664609 3199412 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35877 SSHKeyPath:/home/jenkins/minikube-integration/19644-3193270/.minikube/machines/addons-686490/id_rsa Username:docker}
	I0915 06:36:49.686042 3199412 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35877 SSHKeyPath:/home/jenkins/minikube-integration/19644-3193270/.minikube/machines/addons-686490/id_rsa Username:docker}
	I0915 06:36:49.690080 3199412 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35877 SSHKeyPath:/home/jenkins/minikube-integration/19644-3193270/.minikube/machines/addons-686490/id_rsa Username:docker}
	I0915 06:36:49.700020 3199412 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35877 SSHKeyPath:/home/jenkins/minikube-integration/19644-3193270/.minikube/machines/addons-686490/id_rsa Username:docker}
	I0915 06:36:49.719301 3199412 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35877 SSHKeyPath:/home/jenkins/minikube-integration/19644-3193270/.minikube/machines/addons-686490/id_rsa Username:docker}
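
Note: the burst of sshutil lines above records one SSH client per pending copy, all dialing 127.0.0.1:35877 with the machine's id_rsa key; the "scp memory -->" entries stream manifest bytes embedded in the minikube binary over these sessions rather than copying local files. A hedged sketch of such a client using golang.org/x/crypto/ssh (user, port, and key path are verbatim from the log; the command run is illustrative):

    package main

    import (
        "fmt"
        "log"
        "os"

        "golang.org/x/crypto/ssh"
    )

    func main() {
        key, err := os.ReadFile("/home/jenkins/minikube-integration/19644-3193270/.minikube/machines/addons-686490/id_rsa")
        if err != nil {
            log.Fatal(err)
        }
        signer, err := ssh.ParsePrivateKey(key)
        if err != nil {
            log.Fatal(err)
        }
        cfg := &ssh.ClientConfig{
            User:            "docker",
            Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
            HostKeyCallback: ssh.InsecureIgnoreHostKey(), // acceptable for a local test container
        }
        client, err := ssh.Dial("tcp", "127.0.0.1:35877", cfg)
        if err != nil {
            log.Fatal(err)
        }
        defer client.Close()
        sess, err := client.NewSession()
        if err != nil {
            log.Fatal(err)
        }
        defer sess.Close()
        out, err := sess.CombinedOutput("echo connected") // illustrative command
        if err != nil {
            log.Fatal(err)
        }
        fmt.Print(string(out))
    }
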
	I0915 06:36:50.203077 3199412 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0915 06:36:50.214510 3199412 addons.go:431] installing /etc/kubernetes/addons/registry-svc.yaml
	I0915 06:36:50.214537 3199412 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I0915 06:36:50.270805 3199412 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I0915 06:36:50.282118 3199412 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I0915 06:36:50.282144 3199412 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I0915 06:36:50.306089 3199412 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0915 06:36:50.351822 3199412 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0915 06:36:50.351899 3199412 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I0915 06:36:50.373382 3199412 addons.go:431] installing /etc/kubernetes/addons/ig-serviceaccount.yaml
	I0915 06:36:50.373463 3199412 ssh_runner.go:362] scp inspektor-gadget/ig-serviceaccount.yaml --> /etc/kubernetes/addons/ig-serviceaccount.yaml (80 bytes)
	I0915 06:36:50.381054 3199412 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0915 06:36:50.384169 3199412 addons.go:431] installing /etc/kubernetes/addons/yakd-sa.yaml
	I0915 06:36:50.384239 3199412 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I0915 06:36:50.413345 3199412 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0915 06:36:50.418320 3199412 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0915 06:36:50.420668 3199412 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/volcano-deployment.yaml
	I0915 06:36:50.422594 3199412 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I0915 06:36:50.424242 3199412 addons.go:431] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I0915 06:36:50.424295 3199412 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I0915 06:36:50.432536 3199412 addons.go:431] installing /etc/kubernetes/addons/registry-proxy.yaml
	I0915 06:36:50.432608 3199412 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I0915 06:36:50.519747 3199412 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I0915 06:36:50.519829 3199412 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I0915 06:36:50.545181 3199412 addons.go:431] installing /etc/kubernetes/addons/ig-role.yaml
	I0915 06:36:50.545264 3199412 ssh_runner.go:362] scp inspektor-gadget/ig-role.yaml --> /etc/kubernetes/addons/ig-role.yaml (210 bytes)
	I0915 06:36:50.555806 3199412 addons.go:431] installing /etc/kubernetes/addons/yakd-crb.yaml
	I0915 06:36:50.555895 3199412 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I0915 06:36:50.556212 3199412 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0915 06:36:50.556268 3199412 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0915 06:36:50.623080 3199412 addons.go:431] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I0915 06:36:50.623108 3199412 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I0915 06:36:50.641735 3199412 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I0915 06:36:50.757773 3199412 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0915 06:36:50.757803 3199412 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0915 06:36:50.780099 3199412 addons.go:431] installing /etc/kubernetes/addons/ig-rolebinding.yaml
	I0915 06:36:50.780127 3199412 ssh_runner.go:362] scp inspektor-gadget/ig-rolebinding.yaml --> /etc/kubernetes/addons/ig-rolebinding.yaml (244 bytes)
	I0915 06:36:50.786319 3199412 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I0915 06:36:50.786346 3199412 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I0915 06:36:50.790887 3199412 addons.go:431] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I0915 06:36:50.790915 3199412 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I0915 06:36:50.826303 3199412 addons.go:431] installing /etc/kubernetes/addons/yakd-svc.yaml
	I0915 06:36:50.826330 3199412 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I0915 06:36:50.906209 3199412 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0915 06:36:51.022535 3199412 addons.go:431] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I0915 06:36:51.022564 3199412 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I0915 06:36:51.037219 3199412 addons.go:431] installing /etc/kubernetes/addons/yakd-dp.yaml
	I0915 06:36:51.037247 3199412 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I0915 06:36:51.038072 3199412 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrole.yaml
	I0915 06:36:51.038092 3199412 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrole.yaml --> /etc/kubernetes/addons/ig-clusterrole.yaml (1485 bytes)
	I0915 06:36:51.058347 3199412 addons.go:431] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I0915 06:36:51.058375 3199412 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I0915 06:36:51.227541 3199412 addons.go:431] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I0915 06:36:51.227570 3199412 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I0915 06:36:51.237736 3199412 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrolebinding.yaml
	I0915 06:36:51.237764 3199412 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrolebinding.yaml --> /etc/kubernetes/addons/ig-clusterrolebinding.yaml (274 bytes)
	I0915 06:36:51.311288 3199412 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I0915 06:36:51.342684 3199412 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I0915 06:36:51.342711 3199412 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I0915 06:36:51.408829 3199412 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (1.894508715s)
	I0915 06:36:51.409054 3199412 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (2.122133923s)
	I0915 06:36:51.409100 3199412 start.go:971] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS's ConfigMap
	I0915 06:36:51.409665 3199412 node_ready.go:35] waiting up to 6m0s for node "addons-686490" to be "Ready" ...
	I0915 06:36:51.414223 3199412 node_ready.go:49] node "addons-686490" has status "Ready":"True"
	I0915 06:36:51.414292 3199412 node_ready.go:38] duration metric: took 4.594936ms for node "addons-686490" to be "Ready" ...
	I0915 06:36:51.414317 3199412 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0915 06:36:51.436073 3199412 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-9mpdr" in "kube-system" namespace to be "Ready" ...
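
Note: node_ready and pod_ready poll the API server until the node reports Ready and each system-critical pod does too. A sketch of the node half with client-go, assuming the kubeconfig path from the log and a 6m budget matching the "waiting up to 6m0s" line (not minikube's exact implementation):

    package main

    import (
        "context"
        "fmt"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        deadline := time.Now().Add(6 * time.Minute)
        for time.Now().Before(deadline) {
            node, err := cs.CoreV1().Nodes().Get(context.TODO(), "addons-686490", metav1.GetOptions{})
            if err == nil {
                for _, c := range node.Status.Conditions {
                    // NodeReady=True is what the `"Ready":"True"` log line reflects.
                    if c.Type == corev1.NodeReady && c.Status == corev1.ConditionTrue {
                        fmt.Println("node Ready")
                        return
                    }
                }
            }
            time.Sleep(2 * time.Second)
        }
        fmt.Println("timed out waiting for node Ready")
    }
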
	I0915 06:36:51.568480 3199412 addons.go:431] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0915 06:36:51.568554 3199412 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I0915 06:36:51.689622 3199412 addons.go:431] installing /etc/kubernetes/addons/ig-crd.yaml
	I0915 06:36:51.689648 3199412 ssh_runner.go:362] scp inspektor-gadget/ig-crd.yaml --> /etc/kubernetes/addons/ig-crd.yaml (5216 bytes)
	I0915 06:36:51.770881 3199412 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I0915 06:36:51.770905 3199412 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I0915 06:36:51.797191 3199412 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0915 06:36:51.859943 3199412 addons.go:431] installing /etc/kubernetes/addons/ig-daemonset.yaml
	I0915 06:36:51.860019 3199412 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-daemonset.yaml (7735 bytes)
	I0915 06:36:51.912638 3199412 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-686490" context rescaled to 1 replicas
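
Note: the "rescaled to 1 replicas" line corresponds to shrinking the coredns Deployment to a single replica. With client-go that is a GetScale/UpdateScale round trip; the function below is a sketch of the equivalent operation, not minikube's kapi code:

    package kapi

    import (
        "context"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
    )

    // rescaleCoreDNS shrinks the coredns Deployment to one replica, the
    // operation behind the "rescaled to 1 replicas" log line. Sketch only.
    func rescaleCoreDNS(cs *kubernetes.Clientset) error {
        ctx := context.TODO()
        scale, err := cs.AppsV1().Deployments("kube-system").GetScale(ctx, "coredns", metav1.GetOptions{})
        if err != nil {
            return err
        }
        scale.Spec.Replicas = 1
        _, err = cs.AppsV1().Deployments("kube-system").UpdateScale(ctx, "coredns", scale, metav1.UpdateOptions{})
        return err
    }
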
	I0915 06:36:52.039284 3199412 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I0915 06:36:52.039357 3199412 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I0915 06:36:52.080158 3199412 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml
	I0915 06:36:52.401557 3199412 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I0915 06:36:52.401633 3199412 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I0915 06:36:52.812701 3199412 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0915 06:36:52.812729 3199412 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I0915 06:36:53.379151 3199412 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0915 06:36:53.446806 3199412 pod_ready.go:103] pod "coredns-7c65d6cfc9-9mpdr" in "kube-system" namespace has status "Ready":"False"
	I0915 06:36:53.853877 3199412 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (3.650760484s)
	I0915 06:36:53.854140 3199412 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (3.583308713s)
	I0915 06:36:53.854204 3199412 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (3.548042926s)
	W0915 06:36:53.869882 3199412 out.go:270] ! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error while marking storage class local-path as non-default: Operation cannot be fulfilled on storageclasses.storage.k8s.io "local-path": the object has been modified; please apply your changes to the latest version and try again]
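
Note: the default-storageclass failure above is an optimistic-concurrency conflict: two writers raced on the local-path StorageClass and the second update carried a stale resourceVersion. client-go ships a standard remedy, retry.RetryOnConflict, which re-reads the object and re-applies the mutation. A sketch (the annotation key is the real Kubernetes default-class marker; the function name is illustrative):

    package kapi

    import (
        "context"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/util/retry"
    )

    // markNonDefault clears the default-class annotation on a StorageClass,
    // retrying on resourceVersion conflicts like the one logged above.
    func markNonDefault(cs *kubernetes.Clientset, name string) error {
        return retry.RetryOnConflict(retry.DefaultRetry, func() error {
            ctx := context.TODO()
            sc, err := cs.StorageV1().StorageClasses().Get(ctx, name, metav1.GetOptions{})
            if err != nil {
                return err
            }
            if sc.Annotations == nil {
                sc.Annotations = map[string]string{}
            }
            sc.Annotations["storageclass.kubernetes.io/is-default-class"] = "false"
            _, err = cs.StorageV1().StorageClasses().Update(ctx, sc, metav1.UpdateOptions{})
            return err
        })
    }
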
	I0915 06:36:54.301801 3199412 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (3.920660868s)
	I0915 06:36:54.301943 3199412 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (3.88853272s)
	I0915 06:36:54.301991 3199412 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (3.88360641s)
	I0915 06:36:55.495360 3199412 pod_ready.go:103] pod "coredns-7c65d6cfc9-9mpdr" in "kube-system" namespace has status "Ready":"False"
	I0915 06:36:56.410503 3199412 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I0915 06:36:56.410639 3199412 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-686490
	I0915 06:36:56.435249 3199412 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35877 SSHKeyPath:/home/jenkins/minikube-integration/19644-3193270/.minikube/machines/addons-686490/id_rsa Username:docker}
	I0915 06:36:56.896983 3199412 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I0915 06:36:57.122865 3199412 addons.go:234] Setting addon gcp-auth=true in "addons-686490"
	I0915 06:36:57.122934 3199412 host.go:66] Checking if "addons-686490" exists ...
	I0915 06:36:57.123487 3199412 cli_runner.go:164] Run: docker container inspect addons-686490 --format={{.State.Status}}
	I0915 06:36:57.149957 3199412 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I0915 06:36:57.150012 3199412 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-686490
	I0915 06:36:57.189922 3199412 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35877 SSHKeyPath:/home/jenkins/minikube-integration/19644-3193270/.minikube/machines/addons-686490/id_rsa Username:docker}
	I0915 06:36:57.964789 3199412 pod_ready.go:103] pod "coredns-7c65d6cfc9-9mpdr" in "kube-system" namespace has status "Ready":"False"
	I0915 06:36:59.135483 3199412 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/volcano-deployment.yaml: (8.714743435s)
	I0915 06:36:59.135673 3199412 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (8.713014291s)
	I0915 06:36:59.136076 3199412 addons.go:475] Verifying addon ingress=true in "addons-686490"
	I0915 06:36:59.135766 3199412 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (8.229533531s)
	I0915 06:36:59.136308 3199412 addons.go:475] Verifying addon metrics-server=true in "addons-686490"
	I0915 06:36:59.135796 3199412 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (7.824478797s)
	I0915 06:36:59.135870 3199412 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (7.338597895s)
	W0915 06:36:59.136574 3199412 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0915 06:36:59.136602 3199412 retry.go:31] will retry after 242.45228ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
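
Note: the stderr above pinpoints the failure mode. The volumesnapshots bundle creates the VolumeSnapshot CRDs and, in the same apply, a VolumeSnapshotClass custom resource; the CR reaches the API server before the freshly created CRD is established, hence "ensure CRDs are installed first". minikube therefore retries, here after 242.45228ms, and ultimately re-applies with --force (see the apply at 06:36:59.379266 below). A generic backoff retry in the same spirit, as a sketch rather than minikube's actual retry.go:

    package kapi

    import "time"

    // applyWithRetry re-runs apply with exponential backoff, the pattern the
    // retry.go line above records. attempts and base are illustrative knobs.
    func applyWithRetry(attempts int, base time.Duration, apply func() error) error {
        var err error
        for i := 0; i < attempts; i++ {
            if err = apply(); err == nil {
                return nil
            }
            time.Sleep(base << uint(i)) // e.g. 250ms, 500ms, 1s, ...
        }
        return err
    }
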
	I0915 06:36:59.135924 3199412 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml: (7.055683477s)
	I0915 06:36:59.135988 3199412 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (8.493939996s)
	I0915 06:36:59.136668 3199412 addons.go:475] Verifying addon registry=true in "addons-686490"
	I0915 06:36:59.140883 3199412 out.go:177] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-686490 service yakd-dashboard -n yakd-dashboard
	
	I0915 06:36:59.140977 3199412 out.go:177] * Verifying ingress addon...
	I0915 06:36:59.146095 3199412 out.go:177] * Verifying registry addon...
	I0915 06:36:59.163983 3199412 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I0915 06:36:59.165871 3199412 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I0915 06:36:59.202355 3199412 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I0915 06:36:59.202388 3199412 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:36:59.203537 3199412 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I0915 06:36:59.203564 3199412 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:36:59.379266 3199412 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0915 06:36:59.696387 3199412 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:36:59.697409 3199412 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:37:00.063088 3199412 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (6.683832701s)
	I0915 06:37:00.063179 3199412 addons.go:475] Verifying addon csi-hostpath-driver=true in "addons-686490"
	I0915 06:37:00.063459 3199412 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (2.913476141s)
	I0915 06:37:00.066525 3199412 out.go:177] * Verifying csi-hostpath-driver addon...
	I0915 06:37:00.066694 3199412 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I0915 06:37:00.070554 3199412 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I0915 06:37:00.078819 3199412 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.2
	I0915 06:37:00.087500 3199412 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I0915 06:37:00.087589 3199412 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I0915 06:37:00.116215 3199412 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0915 06:37:00.116316 3199412 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
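
Note: each kapi.go:96 line that follows is one poll tick: list the pods matching the addon's label selector and keep waiting while any is still Pending. A sketch of a single tick with client-go, assuming a configured clientset as in the earlier sketches:

    package kapi

    import (
        "context"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
    )

    // allRunning reports whether every pod matching selector is Running,
    // one poll tick behind the repeated "waiting for pod" lines. Sketch only.
    func allRunning(cs *kubernetes.Clientset, ns, selector string) (bool, error) {
        pods, err := cs.CoreV1().Pods(ns).List(context.TODO(),
            metav1.ListOptions{LabelSelector: selector})
        if err != nil || len(pods.Items) == 0 {
            return false, err
        }
        for _, p := range pods.Items {
            if p.Status.Phase != corev1.PodRunning {
                return false, nil
            }
        }
        return true, nil
    }

For this run, a call would look like allRunning(cs, "kube-system", "kubernetes.io/minikube-addons=csi-hostpath-driver"), matching the selector and namespace in the surrounding lines.
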
	I0915 06:37:00.200981 3199412 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:37:00.205066 3199412 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:37:00.239146 3199412 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I0915 06:37:00.239238 3199412 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I0915 06:37:00.307535 3199412 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0915 06:37:00.307684 3199412 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I0915 06:37:00.368895 3199412 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0915 06:37:00.444068 3199412 pod_ready.go:103] pod "coredns-7c65d6cfc9-9mpdr" in "kube-system" namespace has status "Ready":"False"
	I0915 06:37:00.581995 3199412 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:37:00.668861 3199412 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:37:00.671756 3199412 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:37:01.077786 3199412 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:37:01.171395 3199412 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:37:01.173155 3199412 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:37:01.285841 3199412 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (1.906511308s)
	I0915 06:37:01.521579 3199412 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml: (1.152640274s)
	I0915 06:37:01.524712 3199412 addons.go:475] Verifying addon gcp-auth=true in "addons-686490"
	I0915 06:37:01.529598 3199412 out.go:177] * Verifying gcp-auth addon...
	I0915 06:37:01.543882 3199412 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I0915 06:37:01.548951 3199412 kapi.go:86] Found 0 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0915 06:37:01.575924 3199412 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:37:01.670693 3199412 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:37:01.671784 3199412 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:37:02.076217 3199412 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:37:02.169195 3199412 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:37:02.172466 3199412 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:37:02.575558 3199412 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:37:02.668304 3199412 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:37:02.671426 3199412 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:37:02.943511 3199412 pod_ready.go:103] pod "coredns-7c65d6cfc9-9mpdr" in "kube-system" namespace has status "Ready":"False"
	I0915 06:37:03.076015 3199412 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:37:03.169701 3199412 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:37:03.170695 3199412 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:37:03.576688 3199412 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:37:03.669134 3199412 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:37:03.672449 3199412 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:37:04.076405 3199412 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:37:04.171061 3199412 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:37:04.172635 3199412 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:37:04.575498 3199412 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:37:04.669704 3199412 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:37:04.670750 3199412 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:37:04.944079 3199412 pod_ready.go:103] pod "coredns-7c65d6cfc9-9mpdr" in "kube-system" namespace has status "Ready":"False"
	I0915 06:37:05.151383 3199412 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:37:05.168449 3199412 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:37:05.171534 3199412 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:37:05.577907 3199412 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:37:05.669766 3199412 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:37:05.670667 3199412 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:37:06.076464 3199412 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:37:06.169879 3199412 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:37:06.172239 3199412 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:37:06.577734 3199412 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:37:06.669331 3199412 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:37:06.671284 3199412 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:37:07.075922 3199412 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:37:07.169344 3199412 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:37:07.171239 3199412 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:37:07.443102 3199412 pod_ready.go:103] pod "coredns-7c65d6cfc9-9mpdr" in "kube-system" namespace has status "Ready":"False"
	I0915 06:37:07.576081 3199412 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:37:07.669260 3199412 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:37:07.671453 3199412 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:37:08.076124 3199412 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:37:08.168370 3199412 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:37:08.170632 3199412 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:37:08.575906 3199412 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:37:08.668519 3199412 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:37:08.675754 3199412 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:37:09.076039 3199412 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:37:09.169730 3199412 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:37:09.171346 3199412 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:37:09.578649 3199412 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:37:09.671064 3199412 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:37:09.672237 3199412 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:37:09.942666 3199412 pod_ready.go:103] pod "coredns-7c65d6cfc9-9mpdr" in "kube-system" namespace has status "Ready":"False"
	I0915 06:37:10.075503 3199412 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:37:10.168927 3199412 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:37:10.171550 3199412 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:37:10.575499 3199412 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:37:10.669116 3199412 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:37:10.670175 3199412 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:37:11.075760 3199412 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:37:11.169331 3199412 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:37:11.170364 3199412 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:37:11.575621 3199412 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:37:11.668547 3199412 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:37:11.670480 3199412 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:37:11.942880 3199412 pod_ready.go:103] pod "coredns-7c65d6cfc9-9mpdr" in "kube-system" namespace has status "Ready":"False"
	I0915 06:37:12.076271 3199412 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:37:12.169315 3199412 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:37:12.170836 3199412 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:37:12.575946 3199412 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:37:12.669160 3199412 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:37:12.669936 3199412 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:37:13.076192 3199412 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:37:13.169727 3199412 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:37:13.171074 3199412 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:37:13.575234 3199412 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:37:13.670668 3199412 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:37:13.671327 3199412 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:37:13.943150 3199412 pod_ready.go:103] pod "coredns-7c65d6cfc9-9mpdr" in "kube-system" namespace has status "Ready":"False"
	I0915 06:37:14.076065 3199412 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:37:14.168920 3199412 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:37:14.171203 3199412 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:37:14.576201 3199412 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:37:14.669585 3199412 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:37:14.671383 3199412 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:37:15.075473 3199412 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:37:15.169157 3199412 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:37:15.170656 3199412 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:37:15.575825 3199412 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:37:15.668759 3199412 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:37:15.671124 3199412 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:37:16.075862 3199412 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:37:16.168348 3199412 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:37:16.171105 3199412 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:37:16.442864 3199412 pod_ready.go:103] pod "coredns-7c65d6cfc9-9mpdr" in "kube-system" namespace has status "Ready":"False"
	I0915 06:37:16.575861 3199412 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:37:16.668961 3199412 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:37:16.671122 3199412 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:37:17.076421 3199412 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:37:17.168659 3199412 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:37:17.183979 3199412 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:37:17.577476 3199412 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:37:17.669239 3199412 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:37:17.671415 3199412 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:37:18.076244 3199412 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:37:18.168661 3199412 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:37:18.171337 3199412 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:37:18.574932 3199412 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:37:18.668371 3199412 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:37:18.670460 3199412 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:37:18.942922 3199412 pod_ready.go:103] pod "coredns-7c65d6cfc9-9mpdr" in "kube-system" namespace has status "Ready":"False"
	I0915 06:37:19.076237 3199412 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:37:19.168460 3199412 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:37:19.171349 3199412 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:37:19.576323 3199412 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:37:19.668694 3199412 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:37:19.671226 3199412 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:37:20.075688 3199412 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:37:20.169151 3199412 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:37:20.171163 3199412 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:37:20.576154 3199412 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:37:20.668424 3199412 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:37:20.670216 3199412 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:37:21.075603 3199412 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:37:21.168282 3199412 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:37:21.170312 3199412 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:37:21.443531 3199412 pod_ready.go:103] pod "coredns-7c65d6cfc9-9mpdr" in "kube-system" namespace has status "Ready":"False"
	I0915 06:37:21.576042 3199412 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:37:21.669153 3199412 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:37:21.670411 3199412 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:37:22.075199 3199412 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:37:22.168401 3199412 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:37:22.170507 3199412 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:37:22.575772 3199412 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:37:22.669280 3199412 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:37:22.671436 3199412 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:37:23.075661 3199412 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:37:23.168226 3199412 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:37:23.169611 3199412 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:37:23.445896 3199412 pod_ready.go:103] pod "coredns-7c65d6cfc9-9mpdr" in "kube-system" namespace has status "Ready":"False"
	I0915 06:37:23.576451 3199412 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:37:23.669435 3199412 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:37:23.671512 3199412 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:37:24.075971 3199412 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:37:24.169326 3199412 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:37:24.170251 3199412 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:37:24.575589 3199412 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:37:24.669299 3199412 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:37:24.669827 3199412 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:37:25.076382 3199412 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:37:25.168240 3199412 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:37:25.170437 3199412 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:37:25.576148 3199412 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:37:25.669131 3199412 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:37:25.672024 3199412 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:37:25.941837 3199412 pod_ready.go:103] pod "coredns-7c65d6cfc9-9mpdr" in "kube-system" namespace has status "Ready":"False"
	I0915 06:37:26.075397 3199412 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:37:26.168188 3199412 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:37:26.169809 3199412 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:37:26.575664 3199412 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:37:26.669767 3199412 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:37:26.671299 3199412 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:37:27.075772 3199412 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:37:27.168460 3199412 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:37:27.170453 3199412 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:37:27.651939 3199412 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:37:27.730430 3199412 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:37:27.731177 3199412 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:37:27.943829 3199412 pod_ready.go:103] pod "coredns-7c65d6cfc9-9mpdr" in "kube-system" namespace has status "Ready":"False"
	I0915 06:37:28.077508 3199412 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:37:28.172992 3199412 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:37:28.174760 3199412 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:37:28.576025 3199412 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:37:28.669557 3199412 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:37:28.672814 3199412 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:37:29.075692 3199412 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:37:29.168862 3199412 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:37:29.169945 3199412 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:37:29.574975 3199412 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:37:29.672932 3199412 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:37:29.674045 3199412 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:37:30.076423 3199412 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:37:30.168907 3199412 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:37:30.171309 3199412 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:37:30.445159 3199412 pod_ready.go:103] pod "coredns-7c65d6cfc9-9mpdr" in "kube-system" namespace has status "Ready":"False"
	I0915 06:37:30.651637 3199412 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:37:30.668378 3199412 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:37:30.670449 3199412 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:37:31.075791 3199412 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:37:31.169024 3199412 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:37:31.170622 3199412 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:37:31.576334 3199412 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:37:31.671076 3199412 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:37:31.671815 3199412 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:37:32.075811 3199412 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:37:32.168420 3199412 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:37:32.170144 3199412 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:37:32.651536 3199412 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:37:32.669567 3199412 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:37:32.672969 3199412 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:37:32.943572 3199412 pod_ready.go:103] pod "coredns-7c65d6cfc9-9mpdr" in "kube-system" namespace has status "Ready":"False"
	I0915 06:37:33.076522 3199412 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:37:33.170025 3199412 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:37:33.171037 3199412 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:37:33.576202 3199412 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:37:33.672932 3199412 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:37:33.674578 3199412 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:37:34.075618 3199412 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:37:34.169287 3199412 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:37:34.170366 3199412 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:37:34.575438 3199412 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:37:34.669533 3199412 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:37:34.670876 3199412 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:37:35.076390 3199412 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:37:35.169244 3199412 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:37:35.171590 3199412 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:37:35.443749 3199412 pod_ready.go:103] pod "coredns-7c65d6cfc9-9mpdr" in "kube-system" namespace has status "Ready":"False"
	I0915 06:37:35.578038 3199412 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:37:35.670913 3199412 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:37:35.672908 3199412 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:37:36.076683 3199412 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:37:36.168853 3199412 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:37:36.171746 3199412 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:37:36.575588 3199412 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:37:36.670510 3199412 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:37:36.671967 3199412 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:37:37.075555 3199412 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:37:37.169679 3199412 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:37:37.173630 3199412 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:37:37.580918 3199412 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:37:37.668440 3199412 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:37:37.671993 3199412 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:37:37.942596 3199412 pod_ready.go:103] pod "coredns-7c65d6cfc9-9mpdr" in "kube-system" namespace has status "Ready":"False"
	I0915 06:37:38.076409 3199412 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:37:38.170251 3199412 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:37:38.172825 3199412 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:37:38.650200 3199412 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:37:38.669340 3199412 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:37:38.670878 3199412 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:37:39.076536 3199412 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:37:39.168814 3199412 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:37:39.171537 3199412 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:37:39.576182 3199412 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:37:39.669471 3199412 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:37:39.669647 3199412 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:37:39.943871 3199412 pod_ready.go:103] pod "coredns-7c65d6cfc9-9mpdr" in "kube-system" namespace has status "Ready":"False"
	I0915 06:37:40.075509 3199412 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:37:40.170399 3199412 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:37:40.171059 3199412 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:37:40.575338 3199412 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:37:40.670686 3199412 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:37:40.672157 3199412 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:37:41.075319 3199412 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:37:41.168824 3199412 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:37:41.170854 3199412 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:37:41.576508 3199412 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:37:41.671330 3199412 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:37:41.673145 3199412 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:37:42.077755 3199412 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:37:42.169014 3199412 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:37:42.171641 3199412 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:37:42.443351 3199412 pod_ready.go:103] pod "coredns-7c65d6cfc9-9mpdr" in "kube-system" namespace has status "Ready":"False"
	I0915 06:37:42.576259 3199412 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:37:42.670898 3199412 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:37:42.672824 3199412 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:37:43.075528 3199412 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:37:43.170825 3199412 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:37:43.172389 3199412 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:37:43.576687 3199412 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:37:43.671629 3199412 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:37:43.673504 3199412 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:37:43.944675 3199412 pod_ready.go:93] pod "coredns-7c65d6cfc9-9mpdr" in "kube-system" namespace has status "Ready":"True"
	I0915 06:37:43.944775 3199412 pod_ready.go:82] duration metric: took 52.508626138s for pod "coredns-7c65d6cfc9-9mpdr" in "kube-system" namespace to be "Ready" ...
	I0915 06:37:43.944814 3199412 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-dwj9z" in "kube-system" namespace to be "Ready" ...
	I0915 06:37:43.948066 3199412 pod_ready.go:98] error getting pod "coredns-7c65d6cfc9-dwj9z" in "kube-system" namespace (skipping!): pods "coredns-7c65d6cfc9-dwj9z" not found
	I0915 06:37:43.948159 3199412 pod_ready.go:82] duration metric: took 3.302467ms for pod "coredns-7c65d6cfc9-dwj9z" in "kube-system" namespace to be "Ready" ...
	E0915 06:37:43.948219 3199412 pod_ready.go:67] WaitExtra: waitPodCondition: error getting pod "coredns-7c65d6cfc9-dwj9z" in "kube-system" namespace (skipping!): pods "coredns-7c65d6cfc9-dwj9z" not found
	I0915 06:37:43.948256 3199412 pod_ready.go:79] waiting up to 6m0s for pod "etcd-addons-686490" in "kube-system" namespace to be "Ready" ...
	I0915 06:37:43.957585 3199412 pod_ready.go:93] pod "etcd-addons-686490" in "kube-system" namespace has status "Ready":"True"
	I0915 06:37:43.957664 3199412 pod_ready.go:82] duration metric: took 9.377927ms for pod "etcd-addons-686490" in "kube-system" namespace to be "Ready" ...
	I0915 06:37:43.957712 3199412 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-addons-686490" in "kube-system" namespace to be "Ready" ...
	I0915 06:37:43.967197 3199412 pod_ready.go:93] pod "kube-apiserver-addons-686490" in "kube-system" namespace has status "Ready":"True"
	I0915 06:37:43.967277 3199412 pod_ready.go:82] duration metric: took 9.512981ms for pod "kube-apiserver-addons-686490" in "kube-system" namespace to be "Ready" ...
	I0915 06:37:43.967307 3199412 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-addons-686490" in "kube-system" namespace to be "Ready" ...
	I0915 06:37:43.973332 3199412 pod_ready.go:93] pod "kube-controller-manager-addons-686490" in "kube-system" namespace has status "Ready":"True"
	I0915 06:37:43.973426 3199412 pod_ready.go:82] duration metric: took 6.078685ms for pod "kube-controller-manager-addons-686490" in "kube-system" namespace to be "Ready" ...
	I0915 06:37:43.973484 3199412 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-2297r" in "kube-system" namespace to be "Ready" ...
	I0915 06:37:44.075419 3199412 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:37:44.140092 3199412 pod_ready.go:93] pod "kube-proxy-2297r" in "kube-system" namespace has status "Ready":"True"
	I0915 06:37:44.140159 3199412 pod_ready.go:82] duration metric: took 166.652338ms for pod "kube-proxy-2297r" in "kube-system" namespace to be "Ready" ...
	I0915 06:37:44.140186 3199412 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-addons-686490" in "kube-system" namespace to be "Ready" ...
	I0915 06:37:44.169649 3199412 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:37:44.172697 3199412 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:37:44.540456 3199412 pod_ready.go:93] pod "kube-scheduler-addons-686490" in "kube-system" namespace has status "Ready":"True"
	I0915 06:37:44.540482 3199412 pod_ready.go:82] duration metric: took 400.273313ms for pod "kube-scheduler-addons-686490" in "kube-system" namespace to be "Ready" ...
	I0915 06:37:44.540495 3199412 pod_ready.go:79] waiting up to 6m0s for pod "nvidia-device-plugin-daemonset-w56rc" in "kube-system" namespace to be "Ready" ...
	I0915 06:37:44.576386 3199412 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:37:44.672114 3199412 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:37:44.674070 3199412 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:37:45.076678 3199412 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:37:45.170042 3199412 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:37:45.171177 3199412 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:37:45.575819 3199412 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:37:45.669925 3199412 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:37:45.671277 3199412 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:37:46.075916 3199412 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:37:46.140678 3199412 pod_ready.go:93] pod "nvidia-device-plugin-daemonset-w56rc" in "kube-system" namespace has status "Ready":"True"
	I0915 06:37:46.140703 3199412 pod_ready.go:82] duration metric: took 1.600199795s for pod "nvidia-device-plugin-daemonset-w56rc" in "kube-system" namespace to be "Ready" ...
	I0915 06:37:46.140719 3199412 pod_ready.go:39] duration metric: took 54.726355387s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
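The pod_ready.go lines above show the "extra waiting" phase: each system-critical pod is polled until its Ready condition reports True, and a pod that has been deleted (like coredns-7c65d6cfc9-dwj9z) is logged and skipped. A minimal sketch of that polling pattern using client-go follows; the function name, package name, and 2-second interval are illustrative assumptions, not minikube's actual code.

```go
package kverify

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// waitPodReady polls a named pod until its Ready condition is True or the
// timeout elapses. A missing pod is surfaced as an error so the caller can
// skip it, mirroring the "not found (skipping!)" lines in the log above.
func waitPodReady(ctx context.Context, c kubernetes.Interface, ns, name string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		pod, err := c.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
		if err != nil {
			return fmt.Errorf("error getting pod %q in %q (skipping): %w", name, ns, err)
		}
		for _, cond := range pod.Status.Conditions {
			if cond.Type == corev1.PodReady && cond.Status == corev1.ConditionTrue {
				return nil
			}
		}
		time.Sleep(2 * time.Second) // assumed poll interval, for illustration
	}
	return fmt.Errorf("pod %q not Ready within %v", name, timeout)
}
```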
	I0915 06:37:46.140738 3199412 api_server.go:52] waiting for apiserver process to appear ...
	I0915 06:37:46.140807 3199412 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0915 06:37:46.156585 3199412 api_server.go:72] duration metric: took 57.306574172s to wait for apiserver process to appear ...
	I0915 06:37:46.156611 3199412 api_server.go:88] waiting for apiserver healthz status ...
	I0915 06:37:46.156631 3199412 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0915 06:37:46.165632 3199412 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I0915 06:37:46.166641 3199412 api_server.go:141] control plane version: v1.31.1
	I0915 06:37:46.166667 3199412 api_server.go:131] duration metric: took 10.049207ms to wait for apiserver health ...
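The healthz wait logged just above is a plain HTTPS GET against https://192.168.49.2:8443/healthz that expects a 200 response with body "ok". A self-contained sketch of that probe is below; the InsecureSkipVerify transport stands in for minikube's real client-certificate/CA handling and is an assumption for illustration only.

```go
package health

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

// checkHealthz issues GET <endpoint>/healthz and requires HTTP 200.
// On success the body is typically "ok", as shown in the log above.
func checkHealthz(endpoint string) error {
	client := &http.Client{
		Timeout: 5 * time.Second,
		Transport: &http.Transport{
			// Illustrative shortcut; real code would trust the cluster CA.
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	resp, err := client.Get(endpoint + "/healthz")
	if err != nil {
		return err
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	if resp.StatusCode != http.StatusOK {
		return fmt.Errorf("healthz returned %d: %s", resp.StatusCode, body)
	}
	return nil
}
```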
	I0915 06:37:46.166716 3199412 system_pods.go:43] waiting for kube-system pods to appear ...
	I0915 06:37:46.168641 3199412 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:37:46.170523 3199412 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:37:46.346402 3199412 system_pods.go:59] 18 kube-system pods found
	I0915 06:37:46.346441 3199412 system_pods.go:61] "coredns-7c65d6cfc9-9mpdr" [b6e47f91-3cfe-4cab-b19b-5451fcc5ee46] Running
	I0915 06:37:46.346452 3199412 system_pods.go:61] "csi-hostpath-attacher-0" [5fd5b1cd-8e15-4078-bf48-38e0fb38a753] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0915 06:37:46.346460 3199412 system_pods.go:61] "csi-hostpath-resizer-0" [76fee9f7-3f5b-48e4-aaa6-6b337e68e83a] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0915 06:37:46.346469 3199412 system_pods.go:61] "csi-hostpathplugin-p6nhc" [42fbed30-eb12-4687-afb0-04e4d19d39c1] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0915 06:37:46.346474 3199412 system_pods.go:61] "etcd-addons-686490" [21a16d89-bfc7-4c3c-bf64-d5f0e33eb791] Running
	I0915 06:37:46.346479 3199412 system_pods.go:61] "kindnet-cj6zz" [c3de5731-7b59-4a38-92e3-4a3a4bddf9e4] Running
	I0915 06:37:46.346490 3199412 system_pods.go:61] "kube-apiserver-addons-686490" [11380aca-4df4-4c5f-be96-b634fa30c099] Running
	I0915 06:37:46.346494 3199412 system_pods.go:61] "kube-controller-manager-addons-686490" [6e753175-8219-4f8d-975f-d60ed07eb723] Running
	I0915 06:37:46.346503 3199412 system_pods.go:61] "kube-ingress-dns-minikube" [2832099d-e581-4789-9a0e-37be38999603] Running
	I0915 06:37:46.346507 3199412 system_pods.go:61] "kube-proxy-2297r" [aaa302ff-2fc6-4aa0-8a6c-896b5bae8397] Running
	I0915 06:37:46.346511 3199412 system_pods.go:61] "kube-scheduler-addons-686490" [818bee60-a44f-44af-91b8-b7f8b70f9c9e] Running
	I0915 06:37:46.346517 3199412 system_pods.go:61] "metrics-server-84c5f94fbc-fpnkx" [0ac54330-326c-4b01-a7a3-f324be97b1bd] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0915 06:37:46.346524 3199412 system_pods.go:61] "nvidia-device-plugin-daemonset-w56rc" [ba99ac53-6596-49f5-a42f-38380f636f28] Running
	I0915 06:37:46.346530 3199412 system_pods.go:61] "registry-66c9cd494c-shzcc" [25bbd5d5-6b38-48ac-b11e-55a962241296] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0915 06:37:46.346535 3199412 system_pods.go:61] "registry-proxy-r5v2g" [58cfd80d-2cf0-4456-841f-2268a328e3a0] Running
	I0915 06:37:46.346542 3199412 system_pods.go:61] "snapshot-controller-56fcc65765-fw8tc" [b3b9a951-2de3-4a7c-8b22-f945a3bd3255] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0915 06:37:46.346550 3199412 system_pods.go:61] "snapshot-controller-56fcc65765-jjw79" [ff541fa3-d0bf-464f-bb8a-808696d8a598] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0915 06:37:46.346555 3199412 system_pods.go:61] "storage-provisioner" [4aae9560-2f19-40e3-9a4c-6ef3d41a4931] Running
	I0915 06:37:46.346564 3199412 system_pods.go:74] duration metric: took 179.840281ms to wait for pod list to return data ...
	I0915 06:37:46.346573 3199412 default_sa.go:34] waiting for default service account to be created ...
	I0915 06:37:46.539566 3199412 default_sa.go:45] found service account: "default"
	I0915 06:37:46.539596 3199412 default_sa.go:55] duration metric: took 193.012569ms for default service account to be created ...
	I0915 06:37:46.539617 3199412 system_pods.go:116] waiting for k8s-apps to be running ...
	I0915 06:37:46.575515 3199412 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:37:46.669798 3199412 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0915 06:37:46.670048 3199412 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:37:46.746635 3199412 system_pods.go:86] 18 kube-system pods found
	I0915 06:37:46.746678 3199412 system_pods.go:89] "coredns-7c65d6cfc9-9mpdr" [b6e47f91-3cfe-4cab-b19b-5451fcc5ee46] Running
	I0915 06:37:46.746691 3199412 system_pods.go:89] "csi-hostpath-attacher-0" [5fd5b1cd-8e15-4078-bf48-38e0fb38a753] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0915 06:37:46.746699 3199412 system_pods.go:89] "csi-hostpath-resizer-0" [76fee9f7-3f5b-48e4-aaa6-6b337e68e83a] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0915 06:37:46.746718 3199412 system_pods.go:89] "csi-hostpathplugin-p6nhc" [42fbed30-eb12-4687-afb0-04e4d19d39c1] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0915 06:37:46.746724 3199412 system_pods.go:89] "etcd-addons-686490" [21a16d89-bfc7-4c3c-bf64-d5f0e33eb791] Running
	I0915 06:37:46.746729 3199412 system_pods.go:89] "kindnet-cj6zz" [c3de5731-7b59-4a38-92e3-4a3a4bddf9e4] Running
	I0915 06:37:46.746741 3199412 system_pods.go:89] "kube-apiserver-addons-686490" [11380aca-4df4-4c5f-be96-b634fa30c099] Running
	I0915 06:37:46.746746 3199412 system_pods.go:89] "kube-controller-manager-addons-686490" [6e753175-8219-4f8d-975f-d60ed07eb723] Running
	I0915 06:37:46.746758 3199412 system_pods.go:89] "kube-ingress-dns-minikube" [2832099d-e581-4789-9a0e-37be38999603] Running
	I0915 06:37:46.746764 3199412 system_pods.go:89] "kube-proxy-2297r" [aaa302ff-2fc6-4aa0-8a6c-896b5bae8397] Running
	I0915 06:37:46.746769 3199412 system_pods.go:89] "kube-scheduler-addons-686490" [818bee60-a44f-44af-91b8-b7f8b70f9c9e] Running
	I0915 06:37:46.746781 3199412 system_pods.go:89] "metrics-server-84c5f94fbc-fpnkx" [0ac54330-326c-4b01-a7a3-f324be97b1bd] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0915 06:37:46.746785 3199412 system_pods.go:89] "nvidia-device-plugin-daemonset-w56rc" [ba99ac53-6596-49f5-a42f-38380f636f28] Running
	I0915 06:37:46.746800 3199412 system_pods.go:89] "registry-66c9cd494c-shzcc" [25bbd5d5-6b38-48ac-b11e-55a962241296] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0915 06:37:46.746804 3199412 system_pods.go:89] "registry-proxy-r5v2g" [58cfd80d-2cf0-4456-841f-2268a328e3a0] Running
	I0915 06:37:46.746811 3199412 system_pods.go:89] "snapshot-controller-56fcc65765-fw8tc" [b3b9a951-2de3-4a7c-8b22-f945a3bd3255] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0915 06:37:46.746818 3199412 system_pods.go:89] "snapshot-controller-56fcc65765-jjw79" [ff541fa3-d0bf-464f-bb8a-808696d8a598] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0915 06:37:46.746824 3199412 system_pods.go:89] "storage-provisioner" [4aae9560-2f19-40e3-9a4c-6ef3d41a4931] Running
	I0915 06:37:46.746834 3199412 system_pods.go:126] duration metric: took 207.208704ms to wait for k8s-apps to be running ...
	I0915 06:37:46.746849 3199412 system_svc.go:44] waiting for kubelet service to be running ....
	I0915 06:37:46.746913 3199412 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0915 06:37:46.759447 3199412 system_svc.go:56] duration metric: took 12.588154ms WaitForService to wait for kubelet
	I0915 06:37:46.759476 3199412 kubeadm.go:582] duration metric: took 57.909470014s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0915 06:37:46.759496 3199412 node_conditions.go:102] verifying NodePressure condition ...
	I0915 06:37:46.940161 3199412 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I0915 06:37:46.940195 3199412 node_conditions.go:123] node cpu capacity is 2
	I0915 06:37:46.940208 3199412 node_conditions.go:105] duration metric: took 180.706723ms to run NodePressure ...
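The NodePressure lines above read the node's reported capacity (ephemeral storage 203034800Ki, 2 CPUs on this runner). A short sketch of reading those fields with client-go; the package and function names are hypothetical.

```go
package nodes

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// printNodeCapacity lists every node and prints the two capacity fields
// that appear in the node_conditions log lines above.
func printNodeCapacity(ctx context.Context, c kubernetes.Interface) error {
	nodes, err := c.CoreV1().Nodes().List(ctx, metav1.ListOptions{})
	if err != nil {
		return err
	}
	for _, n := range nodes.Items {
		cpu := n.Status.Capacity[corev1.ResourceCPU]
		eph := n.Status.Capacity[corev1.ResourceEphemeralStorage]
		fmt.Printf("%s: cpu=%s ephemeral-storage=%s\n", n.Name, cpu.String(), eph.String())
	}
	return nil
}
```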
	I0915 06:37:46.940239 3199412 start.go:241] waiting for startup goroutines ...
	I0915 06:37:47.076163 3199412 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:37:47.170185 3199412 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:37:47.172081 3199412 kapi.go:107] duration metric: took 48.006211941s to wait for kubernetes.io/minikube-addons=registry ...
	I0915 06:37:47.650058 3199412 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:37:47.669350 3199412 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:37:48.076264 3199412 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:37:48.169248 3199412 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:37:48.576216 3199412 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:37:48.668660 3199412 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:37:49.076783 3199412 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:37:49.169896 3199412 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:37:49.649488 3199412 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:37:49.668157 3199412 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:37:50.082102 3199412 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:37:50.170074 3199412 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:37:50.575481 3199412 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:37:50.668295 3199412 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:37:51.084553 3199412 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:37:51.169882 3199412 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:37:51.577160 3199412 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:37:51.678944 3199412 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:37:52.150512 3199412 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:37:52.168377 3199412 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:37:52.576003 3199412 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:37:52.669092 3199412 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:37:53.075814 3199412 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:37:53.169193 3199412 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:37:53.576022 3199412 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:37:53.668612 3199412 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:37:54.075077 3199412 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:37:54.170894 3199412 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:37:54.651344 3199412 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:37:54.668854 3199412 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:37:55.078939 3199412 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:37:55.169335 3199412 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:37:55.576122 3199412 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:37:55.668969 3199412 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:37:56.077167 3199412 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:37:56.168559 3199412 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:37:56.576981 3199412 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:37:56.668234 3199412 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:37:57.153910 3199412 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:37:57.168808 3199412 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:37:57.575491 3199412 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:37:57.669067 3199412 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:37:58.077263 3199412 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:37:58.168597 3199412 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:37:58.575769 3199412 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:37:58.669301 3199412 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:37:59.076798 3199412 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:37:59.173214 3199412 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:37:59.576352 3199412 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:37:59.670504 3199412 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:38:00.125970 3199412 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:38:00.186325 3199412 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:38:00.575219 3199412 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:38:00.668843 3199412 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:38:01.076038 3199412 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:38:01.169743 3199412 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:38:01.575925 3199412 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:38:01.669120 3199412 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:38:02.076595 3199412 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:38:02.169737 3199412 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:38:02.576094 3199412 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:38:02.669068 3199412 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:38:03.075493 3199412 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:38:03.169117 3199412 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:38:03.575749 3199412 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:38:03.669430 3199412 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:38:04.076818 3199412 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:38:04.169064 3199412 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:38:04.650727 3199412 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:38:04.671403 3199412 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:38:05.076941 3199412 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:38:05.168884 3199412 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:38:05.576471 3199412 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:38:05.668761 3199412 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:38:06.081373 3199412 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:38:06.168487 3199412 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:38:06.583720 3199412 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:38:06.670753 3199412 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:38:07.079503 3199412 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:38:07.168929 3199412 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:38:07.576388 3199412 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:38:07.669730 3199412 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:38:08.076777 3199412 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:38:08.168881 3199412 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:38:08.575586 3199412 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:38:08.668414 3199412 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:38:09.076812 3199412 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:38:09.172132 3199412 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:38:09.575920 3199412 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:38:09.669108 3199412 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0915 06:38:10.079432 3199412 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0915 06:38:10.171436 3199412 kapi.go:107] duration metric: took 1m11.00744972s to wait for app.kubernetes.io/name=ingress-nginx ...
	I0915 06:38:10.575999 3199412 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	[... identical "waiting for pod" entries for kubernetes.io/minikube-addons=csi-hostpath-driver repeat at ~500ms intervals through 06:38:17 ...]
	I0915 06:38:17.575495 3199412 kapi.go:107] duration metric: took 1m17.504943727s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I0915 06:38:24.547544 3199412 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0915 06:38:24.547572 3199412 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	[... identical "waiting for pod" entries for kubernetes.io/minikube-addons=gcp-auth repeat at ~500ms intervals through 06:39:32 ...]
	I0915 06:39:32.548006 3199412 kapi.go:107] duration metric: took 2m31.004124137s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I0915 06:39:32.550770 3199412 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-686490 cluster.
	I0915 06:39:32.553267 3199412 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I0915 06:39:32.555805 3199412 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I0915 06:39:32.558462 3199412 out.go:177] * Enabled addons: cloud-spanner, storage-provisioner-rancher, storage-provisioner, ingress-dns, nvidia-device-plugin, volcano, metrics-server, inspektor-gadget, yakd, volumesnapshots, registry, ingress, csi-hostpath-driver, gcp-auth
	I0915 06:39:32.561201 3199412 addons.go:510] duration metric: took 2m43.710856295s for enable addons: enabled=[cloud-spanner storage-provisioner-rancher storage-provisioner ingress-dns nvidia-device-plugin volcano metrics-server inspektor-gadget yakd volumesnapshots registry ingress csi-hostpath-driver gcp-auth]
	I0915 06:39:32.561260 3199412 start.go:246] waiting for cluster config update ...
	I0915 06:39:32.561283 3199412 start.go:255] writing updated cluster config ...
	I0915 06:39:32.562045 3199412 ssh_runner.go:195] Run: rm -f paused
	I0915 06:39:32.912490 3199412 start.go:600] kubectl: 1.31.0, cluster: 1.31.1 (minor skew: 0)
	I0915 06:39:32.915237 3199412 out.go:177] * Done! kubectl is now configured to use "addons-686490" cluster and "default" namespace by default
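
	The kapi.go:96 / kapi.go:107 lines above are minikube polling each addon's pods by label selector until they leave Pending, then reporting the total wait as a duration metric. A minimal sketch of that pattern, assuming client-go and apimachinery; the function name waitForPodsRunning, the 500ms interval, and the kubeconfig location are illustrative, not minikube's actual implementation:

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// waitForPodsRunning polls pods matching selector in ns every interval
// until all of them are Running or the timeout expires, logging each
// poll the way the kapi.go output above does.
func waitForPodsRunning(ctx context.Context, cs kubernetes.Interface, ns, selector string, interval, timeout time.Duration) error {
	return wait.PollUntilContextTimeout(ctx, interval, timeout, true, func(ctx context.Context) (bool, error) {
		pods, err := cs.CoreV1().Pods(ns).List(ctx, metav1.ListOptions{LabelSelector: selector})
		if err != nil {
			return false, err // stop polling on API errors
		}
		if len(pods.Items) == 0 {
			return false, nil // nothing matching yet, keep polling
		}
		for _, p := range pods.Items {
			if p.Status.Phase != corev1.PodRunning {
				fmt.Printf("waiting for pod %q, current state: %s\n", selector, p.Status.Phase)
				return false, nil
			}
		}
		return true, nil // every matching pod is Running
	})
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	start := time.Now()
	if err := waitForPodsRunning(context.Background(), cs, "gcp-auth",
		"kubernetes.io/minikube-addons=gcp-auth", 500*time.Millisecond, 6*time.Minute); err != nil {
		panic(err)
	}
	fmt.Printf("duration metric: took %s to wait for kubernetes.io/minikube-addons=gcp-auth\n", time.Since(start))
}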
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                                     ATTEMPT             POD ID              POD
	c9d9d48efc885       4f725bf50aaa5       2 minutes ago       Exited              gadget                                   5                   36c61f5c6a503       gadget-jhcdf
	653bd458ed033       6ef582f3ec844       3 minutes ago       Running             gcp-auth                                 0                   d0eb6e32e5cb7       gcp-auth-89d5ffd79-58ldw
	7974c1bd19796       ee6d597e62dc8       4 minutes ago       Running             csi-snapshotter                          0                   2421c361ad9d0       csi-hostpathplugin-p6nhc
	4faab1ef09df7       642ded511e141       4 minutes ago       Running             csi-provisioner                          0                   2421c361ad9d0       csi-hostpathplugin-p6nhc
	f9989622a02ad       922312104da8a       4 minutes ago       Running             liveness-probe                           0                   2421c361ad9d0       csi-hostpathplugin-p6nhc
	1104e178bde78       08f6b2990811a       4 minutes ago       Running             hostpath                                 0                   2421c361ad9d0       csi-hostpathplugin-p6nhc
	8917711a03d9a       0107d56dbc0be       4 minutes ago       Running             node-driver-registrar                    0                   2421c361ad9d0       csi-hostpathplugin-p6nhc
	9f6c8cb2e123f       8b46b1cd48760       4 minutes ago       Running             admission                                0                   d56afdcdb2b4e       volcano-admission-77d7d48b68-44hlz
	6b47f6197e53f       289a818c8d9c5       4 minutes ago       Running             controller                               0                   c5d4c77ea9516       ingress-nginx-controller-bc57996ff-4lvvm
	bf580ae358fc8       d9c7ad4c226bf       4 minutes ago       Running             volcano-scheduler                        0                   26ae849e349fe       volcano-scheduler-576bc46687-knjtl
	fcd50b579591c       1461903ec4fe9       4 minutes ago       Running             csi-external-health-monitor-controller   0                   2421c361ad9d0       csi-hostpathplugin-p6nhc
	dac1698fbdd49       1505f556b3a7b       4 minutes ago       Running             volcano-controllers                      0                   720cfb8fd77fa       volcano-controllers-56675bb4d5-jx4lp
	a7937ffa79bed       487fa743e1e22       4 minutes ago       Running             csi-resizer                              0                   06ff23b4ab0b5       csi-hostpath-resizer-0
	cdae09ed6e51e       9a80d518f102c       4 minutes ago       Running             csi-attacher                             0                   a1755b434c2f3       csi-hostpath-attacher-0
	2ad1926faf58d       420193b27261a       4 minutes ago       Exited              patch                                    0                   4b607360c2072       ingress-nginx-admission-patch-65s9f
	a0b39790682b0       7ce2150c8929b       4 minutes ago       Running             local-path-provisioner                   0                   01b597175c76b       local-path-provisioner-86d989889c-88rv7
	979d4c5cc44ed       4d1e5c3e97420       4 minutes ago       Running             volume-snapshot-controller               0                   f5b958989ae7d       snapshot-controller-56fcc65765-fw8tc
	af94272704f1d       420193b27261a       4 minutes ago       Exited              create                                   0                   b6524fa430b14       ingress-nginx-admission-create-jphzg
	d51b03da586d0       77bdba588b953       4 minutes ago       Running             yakd                                     0                   7942fd5183f32       yakd-dashboard-67d98fc6b-dms56
	97deb3e731e14       4d1e5c3e97420       5 minutes ago       Running             volume-snapshot-controller               0                   6959c70919bfe       snapshot-controller-56fcc65765-jjw79
	2ee6dfb51c298       5548a49bb60ba       5 minutes ago       Running             metrics-server                           0                   e04ea30fe333e       metrics-server-84c5f94fbc-fpnkx
	ccd9a06e377a0       c9cf76bb104e1       5 minutes ago       Running             registry                                 0                   ce1f9db96919f       registry-66c9cd494c-shzcc
	861eb816faaf6       a9bac31a5be8d       5 minutes ago       Running             nvidia-device-plugin-ctr                 0                   4b86963747396       nvidia-device-plugin-daemonset-w56rc
	23a4d02b3eb8c       2f6c962e7b831       5 minutes ago       Running             coredns                                  0                   5afcc110fe232       coredns-7c65d6cfc9-9mpdr
	18b2b4efd729a       3410e1561990a       5 minutes ago       Running             registry-proxy                           0                   5915be886ead6       registry-proxy-r5v2g
	9ae822402feb9       8be4bcf8ec607       5 minutes ago       Running             cloud-spanner-emulator                   0                   bdf73c523051f       cloud-spanner-emulator-769b77f747-tgszx
	86523a04e0e3f       35508c2f890c4       5 minutes ago       Running             minikube-ingress-dns                     0                   e26d4df21cfc9       kube-ingress-dns-minikube
	b13f66ecc2185       ba04bb24b9575       5 minutes ago       Running             storage-provisioner                      0                   dcc0bfefdc617       storage-provisioner
	990aad65ad167       6a23fa8fd2b78       6 minutes ago       Running             kindnet-cni                              0                   e22ce6ea535ed       kindnet-cj6zz
	27fcca6e3bf1a       24a140c548c07       6 minutes ago       Running             kube-proxy                               0                   d5041773d116a       kube-proxy-2297r
	a9606056e5592       279f381cb3736       6 minutes ago       Running             kube-controller-manager                  0                   aafb30e1e6544       kube-controller-manager-addons-686490
	dc7e13e7f6de4       d3f53a98c0a9d       6 minutes ago       Running             kube-apiserver                           0                   429025b5d304b       kube-apiserver-addons-686490
	46962b840eb53       27e3830e14027       6 minutes ago       Running             etcd                                     0                   8f63117ab582c       etcd-addons-686490
	fe25bda02288f       7f8aa378bb47d       6 minutes ago       Running             kube-scheduler                           0                   0f1fee5434dbe       kube-scheduler-addons-686490
	
	
	==> containerd <==
	Sep 15 06:39:43 addons-686490 containerd[810]: time="2024-09-15T06:39:43.513080912Z" level=info msg="RemovePodSandbox \"7d133cd2471f829886059309419da00ce8a1883549da6f2612b032d6e5e1fe5f\" returns successfully"
	Sep 15 06:40:10 addons-686490 containerd[810]: time="2024-09-15T06:40:10.410577078Z" level=info msg="PullImage \"ghcr.io/inspektor-gadget/inspektor-gadget:v0.32.0@sha256:03e677e1cf9d2c9bea454e3dbcbcef20b3022e987534a2874eb1abc5bc3e73ec\""
	Sep 15 06:40:10 addons-686490 containerd[810]: time="2024-09-15T06:40:10.538318923Z" level=info msg="ImageUpdate event name:\"ghcr.io/inspektor-gadget/inspektor-gadget@sha256:03e677e1cf9d2c9bea454e3dbcbcef20b3022e987534a2874eb1abc5bc3e73ec\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
	Sep 15 06:40:10 addons-686490 containerd[810]: time="2024-09-15T06:40:10.540091806Z" level=info msg="stop pulling image ghcr.io/inspektor-gadget/inspektor-gadget@sha256:03e677e1cf9d2c9bea454e3dbcbcef20b3022e987534a2874eb1abc5bc3e73ec: active requests=0, bytes read=89"
	Sep 15 06:40:10 addons-686490 containerd[810]: time="2024-09-15T06:40:10.543874892Z" level=info msg="Pulled image \"ghcr.io/inspektor-gadget/inspektor-gadget:v0.32.0@sha256:03e677e1cf9d2c9bea454e3dbcbcef20b3022e987534a2874eb1abc5bc3e73ec\" with image id \"sha256:4f725bf50aaa5c697fbb84c107e9c7a3766f0f85f514ffce712d03ee5f62e8dd\", repo tag \"\", repo digest \"ghcr.io/inspektor-gadget/inspektor-gadget@sha256:03e677e1cf9d2c9bea454e3dbcbcef20b3022e987534a2874eb1abc5bc3e73ec\", size \"72524105\" in 133.247ms"
	Sep 15 06:40:10 addons-686490 containerd[810]: time="2024-09-15T06:40:10.543927494Z" level=info msg="PullImage \"ghcr.io/inspektor-gadget/inspektor-gadget:v0.32.0@sha256:03e677e1cf9d2c9bea454e3dbcbcef20b3022e987534a2874eb1abc5bc3e73ec\" returns image reference \"sha256:4f725bf50aaa5c697fbb84c107e9c7a3766f0f85f514ffce712d03ee5f62e8dd\""
	Sep 15 06:40:10 addons-686490 containerd[810]: time="2024-09-15T06:40:10.546013649Z" level=info msg="CreateContainer within sandbox \"36c61f5c6a503c155b239d8f871495fb735f1075e0e48a160ecccc35abf4c5fd\" for container &ContainerMetadata{Name:gadget,Attempt:5,}"
	Sep 15 06:40:10 addons-686490 containerd[810]: time="2024-09-15T06:40:10.568125421Z" level=info msg="CreateContainer within sandbox \"36c61f5c6a503c155b239d8f871495fb735f1075e0e48a160ecccc35abf4c5fd\" for &ContainerMetadata{Name:gadget,Attempt:5,} returns container id \"c9d9d48efc885fa19d26d128b432ab78d0cfeca46e58313d2fae9827f55bee87\""
	Sep 15 06:40:10 addons-686490 containerd[810]: time="2024-09-15T06:40:10.569736929Z" level=info msg="StartContainer for \"c9d9d48efc885fa19d26d128b432ab78d0cfeca46e58313d2fae9827f55bee87\""
	Sep 15 06:40:10 addons-686490 containerd[810]: time="2024-09-15T06:40:10.634414459Z" level=info msg="StartContainer for \"c9d9d48efc885fa19d26d128b432ab78d0cfeca46e58313d2fae9827f55bee87\" returns successfully"
	Sep 15 06:40:12 addons-686490 containerd[810]: time="2024-09-15T06:40:12.319524834Z" level=info msg="shim disconnected" id=c9d9d48efc885fa19d26d128b432ab78d0cfeca46e58313d2fae9827f55bee87 namespace=k8s.io
	Sep 15 06:40:12 addons-686490 containerd[810]: time="2024-09-15T06:40:12.319592328Z" level=warning msg="cleaning up after shim disconnected" id=c9d9d48efc885fa19d26d128b432ab78d0cfeca46e58313d2fae9827f55bee87 namespace=k8s.io
	Sep 15 06:40:12 addons-686490 containerd[810]: time="2024-09-15T06:40:12.319604053Z" level=info msg="cleaning up dead shim" namespace=k8s.io
	Sep 15 06:40:12 addons-686490 containerd[810]: time="2024-09-15T06:40:12.493607293Z" level=info msg="RemoveContainer for \"133cc8b3e3497c081bdc1a06347826c8a5407511f87992faca91f48e9be7a83b\""
	Sep 15 06:40:12 addons-686490 containerd[810]: time="2024-09-15T06:40:12.502483391Z" level=info msg="RemoveContainer for \"133cc8b3e3497c081bdc1a06347826c8a5407511f87992faca91f48e9be7a83b\" returns successfully"
	Sep 15 06:40:43 addons-686490 containerd[810]: time="2024-09-15T06:40:43.517163539Z" level=info msg="RemoveContainer for \"d5a9443847918ce0bb0a2807cf1721bbb631a7bd76885aa347aee60d6d9a1207\""
	Sep 15 06:40:43 addons-686490 containerd[810]: time="2024-09-15T06:40:43.524430369Z" level=info msg="RemoveContainer for \"d5a9443847918ce0bb0a2807cf1721bbb631a7bd76885aa347aee60d6d9a1207\" returns successfully"
	Sep 15 06:40:43 addons-686490 containerd[810]: time="2024-09-15T06:40:43.526671286Z" level=info msg="StopPodSandbox for \"a4ab4d97e21e8e2ebeaceb85b485dbb63084c06f334ac3d4b73bb7d5bf9d9af3\""
	Sep 15 06:40:43 addons-686490 containerd[810]: time="2024-09-15T06:40:43.534536371Z" level=info msg="TearDown network for sandbox \"a4ab4d97e21e8e2ebeaceb85b485dbb63084c06f334ac3d4b73bb7d5bf9d9af3\" successfully"
	Sep 15 06:40:43 addons-686490 containerd[810]: time="2024-09-15T06:40:43.534578914Z" level=info msg="StopPodSandbox for \"a4ab4d97e21e8e2ebeaceb85b485dbb63084c06f334ac3d4b73bb7d5bf9d9af3\" returns successfully"
	Sep 15 06:40:43 addons-686490 containerd[810]: time="2024-09-15T06:40:43.535107394Z" level=info msg="RemovePodSandbox for \"a4ab4d97e21e8e2ebeaceb85b485dbb63084c06f334ac3d4b73bb7d5bf9d9af3\""
	Sep 15 06:40:43 addons-686490 containerd[810]: time="2024-09-15T06:40:43.535148255Z" level=info msg="Forcibly stopping sandbox \"a4ab4d97e21e8e2ebeaceb85b485dbb63084c06f334ac3d4b73bb7d5bf9d9af3\""
	Sep 15 06:40:43 addons-686490 containerd[810]: time="2024-09-15T06:40:43.542707624Z" level=info msg="TearDown network for sandbox \"a4ab4d97e21e8e2ebeaceb85b485dbb63084c06f334ac3d4b73bb7d5bf9d9af3\" successfully"
	Sep 15 06:40:43 addons-686490 containerd[810]: time="2024-09-15T06:40:43.549393380Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"a4ab4d97e21e8e2ebeaceb85b485dbb63084c06f334ac3d4b73bb7d5bf9d9af3\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
	Sep 15 06:40:43 addons-686490 containerd[810]: time="2024-09-15T06:40:43.549513049Z" level=info msg="RemovePodSandbox \"a4ab4d97e21e8e2ebeaceb85b485dbb63084c06f334ac3d4b73bb7d5bf9d9af3\" returns successfully"
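
	Read alongside the container status table above, these containerd entries show the gadget container being recreated for a fifth attempt at 06:40:10 and its shim disconnecting about two seconds later, matching the Exited gadget entry with 5 attempts in that table; the inspektor-gadget pod appears to be crash-looping.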
	
	
	==> coredns [23a4d02b3eb8cb83c646afb73ef04a60b387546d8486bda03b96cf91b0f86470] <==
	[INFO] 10.244.0.2:44893 - 54938 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000053192s
	[INFO] 10.244.0.2:60144 - 45849 "A IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 83 false 512" NXDOMAIN qr,rd,ra 83 0.00175919s
	[INFO] 10.244.0.2:60144 - 32278 "AAAA IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 83 false 512" NXDOMAIN qr,rd,ra 83 0.001563535s
	[INFO] 10.244.0.2:57817 - 8991 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000062456s
	[INFO] 10.244.0.2:57817 - 53021 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000041139s
	[INFO] 10.244.0.2:45062 - 39598 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000099133s
	[INFO] 10.244.0.2:45062 - 417 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000060052s
	[INFO] 10.244.0.2:42719 - 53996 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.00008063s
	[INFO] 10.244.0.2:42719 - 5864 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000285302s
	[INFO] 10.244.0.2:43080 - 15970 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000111973s
	[INFO] 10.244.0.2:43080 - 2656 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000095957s
	[INFO] 10.244.0.2:43551 - 4466 "A IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 83 false 512" NXDOMAIN qr,rd,ra 83 0.003056122s
	[INFO] 10.244.0.2:43551 - 16752 "AAAA IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 83 false 512" NXDOMAIN qr,rd,ra 83 0.003061635s
	[INFO] 10.244.0.2:50554 - 1527 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000080228s
	[INFO] 10.244.0.2:50554 - 59188 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000047917s
	[INFO] 10.244.0.24:53225 - 44378 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000219417s
	[INFO] 10.244.0.24:32988 - 10458 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000152899s
	[INFO] 10.244.0.24:54209 - 36125 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000191356s
	[INFO] 10.244.0.24:39981 - 29498 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000091682s
	[INFO] 10.244.0.24:40410 - 51144 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000131894s
	[INFO] 10.244.0.24:56476 - 9086 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000158979s
	[INFO] 10.244.0.24:54250 - 19224 "A IN storage.googleapis.com.us-east-2.compute.internal. udp 78 false 1232" NXDOMAIN qr,rd,ra 67 0.003049819s
	[INFO] 10.244.0.24:50684 - 459 "AAAA IN storage.googleapis.com.us-east-2.compute.internal. udp 78 false 1232" NXDOMAIN qr,rd,ra 67 0.003207025s
	[INFO] 10.244.0.24:44202 - 4019 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 610 0.002667969s
	[INFO] 10.244.0.24:53907 - 12818 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.00375162s
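
	The NXDOMAIN/NOERROR pairs above are ordinary resolver search-path expansion: each lookup (registry.kube-system.svc.cluster.local, storage.googleapis.com) is first tried with the pod's search domains appended (.kube-system.svc.cluster.local, .svc.cluster.local, .cluster.local, .us-east-2.compute.internal), each of which returns NXDOMAIN, before the intended name resolves NOERROR.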
	
	
	==> describe nodes <==
	Name:               addons-686490
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=addons-686490
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=7a3ca67a20528f5dabbb456e8e4ce542b58ef23a
	                    minikube.k8s.io/name=addons-686490
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_09_15T06_36_44_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-686490
	Annotations:        csi.volume.kubernetes.io/nodeid: {"hostpath.csi.k8s.io":"addons-686490"}
	                    kubeadm.alpha.kubernetes.io/cri-socket: unix:///run/containerd/containerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sun, 15 Sep 2024 06:36:40 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-686490
	  AcquireTime:     <unset>
	  RenewTime:       Sun, 15 Sep 2024 06:42:50 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sun, 15 Sep 2024 06:39:48 +0000   Sun, 15 Sep 2024 06:36:37 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sun, 15 Sep 2024 06:39:48 +0000   Sun, 15 Sep 2024 06:36:37 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sun, 15 Sep 2024 06:39:48 +0000   Sun, 15 Sep 2024 06:36:37 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sun, 15 Sep 2024 06:39:48 +0000   Sun, 15 Sep 2024 06:36:41 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    addons-686490
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022304Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022304Ki
	  pods:               110
	System Info:
	  Machine ID:                 5b53861fce9a47bf87d0b484003e7181
	  System UUID:                fd1cc281-9a31-4545-832a-b62369577add
	  Boot ID:                    641d344d-3095-4acf-ad25-2210e6e532b0
	  Kernel Version:             5.15.0-1069-aws
	  OS Image:                   Ubuntu 22.04.4 LTS
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  containerd://1.7.22
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (27 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  default                     cloud-spanner-emulator-769b77f747-tgszx     0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m59s
	  gadget                      gadget-jhcdf                                0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m56s
	  gcp-auth                    gcp-auth-89d5ffd79-58ldw                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m27s
	  ingress-nginx               ingress-nginx-controller-bc57996ff-4lvvm    100m (5%)     0 (0%)      90Mi (1%)        0 (0%)         5m55s
	  kube-system                 coredns-7c65d6cfc9-9mpdr                    100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     6m3s
	  kube-system                 csi-hostpath-attacher-0                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m52s
	  kube-system                 csi-hostpath-resizer-0                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m51s
	  kube-system                 csi-hostpathplugin-p6nhc                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m52s
	  kube-system                 etcd-addons-686490                          100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         6m8s
	  kube-system                 kindnet-cj6zz                               100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      6m3s
	  kube-system                 kube-apiserver-addons-686490                250m (12%)    0 (0%)      0 (0%)           0 (0%)         6m8s
	  kube-system                 kube-controller-manager-addons-686490       200m (10%)    0 (0%)      0 (0%)           0 (0%)         6m8s
	  kube-system                 kube-ingress-dns-minikube                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m58s
	  kube-system                 kube-proxy-2297r                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m3s
	  kube-system                 kube-scheduler-addons-686490                100m (5%)     0 (0%)      0 (0%)           0 (0%)         6m8s
	  kube-system                 metrics-server-84c5f94fbc-fpnkx             100m (5%)     0 (0%)      200Mi (2%)       0 (0%)         5m57s
	  kube-system                 nvidia-device-plugin-daemonset-w56rc        0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m
	  kube-system                 registry-66c9cd494c-shzcc                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m58s
	  kube-system                 registry-proxy-r5v2g                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m58s
	  kube-system                 snapshot-controller-56fcc65765-fw8tc        0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m56s
	  kube-system                 snapshot-controller-56fcc65765-jjw79        0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m56s
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m57s
	  local-path-storage          local-path-provisioner-86d989889c-88rv7     0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m58s
	  volcano-system              volcano-admission-77d7d48b68-44hlz          0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m54s
	  volcano-system              volcano-controllers-56675bb4d5-jx4lp        0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m53s
	  volcano-system              volcano-scheduler-576bc46687-knjtl          0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m53s
	  yakd-dashboard              yakd-dashboard-67d98fc6b-dms56              0 (0%)        0 (0%)      128Mi (1%)       256Mi (3%)     5m56s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                1050m (52%)  100m (5%)
	  memory             638Mi (8%)   476Mi (6%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-1Gi      0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	  hugepages-32Mi     0 (0%)       0 (0%)
	  hugepages-64Ki     0 (0%)       0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 6m1s                   kube-proxy       
	  Normal   NodeAllocatableEnforced  6m15s                  kubelet          Updated Node Allocatable limit across pods
	  Warning  CgroupV1                 6m15s                  kubelet          Cgroup v1 support is in maintenance mode, please migrate to Cgroup v2.
	  Normal   NodeHasSufficientMemory  6m15s (x8 over 6m15s)  kubelet          Node addons-686490 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    6m15s (x7 over 6m15s)  kubelet          Node addons-686490 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     6m15s (x7 over 6m15s)  kubelet          Node addons-686490 status is now: NodeHasSufficientPID
	  Normal   Starting                 6m15s                  kubelet          Starting kubelet.
	  Normal   Starting                 6m8s                   kubelet          Starting kubelet.
	  Warning  CgroupV1                 6m8s                   kubelet          Cgroup v1 support is in maintenance mode, please migrate to Cgroup v2.
	  Normal   NodeAllocatableEnforced  6m8s                   kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory  6m8s                   kubelet          Node addons-686490 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    6m8s                   kubelet          Node addons-686490 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     6m8s                   kubelet          Node addons-686490 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           6m4s                   node-controller  Node addons-686490 event: Registered Node addons-686490 in Controller
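
	Worth noting in the resource tables above: the node has only 2 allocatable CPUs, and the system and addon pods already request 1050m (52%), leaving roughly 950m (2000m - 1050m) unrequested, so any single pod asking for a full CPU or more cannot fit on this node.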
	
	
	==> dmesg <==
	
	
	==> etcd [46962b840eb533ce5bc92b772e8ebe9cfc999364bbb6e9090a71bf9c15c2286e] <==
	{"level":"info","ts":"2024-09-15T06:36:37.008531Z","caller":"embed/etcd.go:728","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-09-15T06:36:37.008773Z","caller":"embed/etcd.go:279","msg":"now serving peer/client/metrics","local-member-id":"aec36adc501070cc","initial-advertise-peer-urls":["https://192.168.49.2:2380"],"listen-peer-urls":["https://192.168.49.2:2380"],"advertise-client-urls":["https://192.168.49.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.49.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-09-15T06:36:37.008796Z","caller":"embed/etcd.go:870","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-09-15T06:36:37.008900Z","caller":"embed/etcd.go:599","msg":"serving peer traffic","address":"192.168.49.2:2380"}
	{"level":"info","ts":"2024-09-15T06:36:37.008911Z","caller":"embed/etcd.go:571","msg":"cmux::serve","address":"192.168.49.2:2380"}
	{"level":"info","ts":"2024-09-15T06:36:37.679053Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc is starting a new election at term 1"}
	{"level":"info","ts":"2024-09-15T06:36:37.679104Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became pre-candidate at term 1"}
	{"level":"info","ts":"2024-09-15T06:36:37.679132Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc received MsgPreVoteResp from aec36adc501070cc at term 1"}
	{"level":"info","ts":"2024-09-15T06:36:37.679151Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became candidate at term 2"}
	{"level":"info","ts":"2024-09-15T06:36:37.679168Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc received MsgVoteResp from aec36adc501070cc at term 2"}
	{"level":"info","ts":"2024-09-15T06:36:37.679180Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became leader at term 2"}
	{"level":"info","ts":"2024-09-15T06:36:37.679194Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: aec36adc501070cc elected leader aec36adc501070cc at term 2"}
	{"level":"info","ts":"2024-09-15T06:36:37.682328Z","caller":"etcdserver/server.go:2629","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-15T06:36:37.683378Z","caller":"etcdserver/server.go:2118","msg":"published local member to cluster through raft","local-member-id":"aec36adc501070cc","local-member-attributes":"{Name:addons-686490 ClientURLs:[https://192.168.49.2:2379]}","request-path":"/0/members/aec36adc501070cc/attributes","cluster-id":"fa54960ea34d58be","publish-timeout":"7s"}
	{"level":"info","ts":"2024-09-15T06:36:37.690865Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-15T06:36:37.691204Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-15T06:36:37.691613Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"fa54960ea34d58be","local-member-id":"aec36adc501070cc","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-15T06:36:37.691903Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-15T06:36:37.692056Z","caller":"etcdserver/server.go:2653","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-15T06:36:37.694776Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-15T06:36:37.696084Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-09-15T06:36:37.715921Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-15T06:36:37.717242Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.49.2:2379"}
	{"level":"info","ts":"2024-09-15T06:36:37.723090Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-09-15T06:36:37.723262Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	
	
	==> gcp-auth [653bd458ed0330055a6fd515d000358a38068ff11aa00531a0727c03a55f65d5] <==
	2024/09/15 06:39:31 GCP Auth Webhook started!
	2024/09/15 06:39:49 Ready to marshal response ...
	2024/09/15 06:39:49 Ready to write response ...
	2024/09/15 06:39:50 Ready to marshal response ...
	2024/09/15 06:39:50 Ready to write response ...
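
	This webhook is what mounts the GCP credentials into newly created pods; per the addons-enable output earlier, a pod opts out by carrying a label with the gcp-auth-skip-secret key. A minimal sketch of creating such a pod, assuming client-go; the pod name, namespace, image, and label value are illustrative:

package main

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{
			Name:      "no-gcp-creds", // illustrative name
			Namespace: "default",
			// the gcp-auth webhook skips pods carrying this label key,
			// per the minikube message above; the value itself is arbitrary
			Labels: map[string]string{"gcp-auth-skip-secret": "true"},
		},
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{{
				Name:  "app",
				Image: "nginx", // illustrative image
			}},
		},
	}
	if _, err := cs.CoreV1().Pods(pod.Namespace).Create(context.Background(), pod, metav1.CreateOptions{}); err != nil {
		panic(err)
	}
}

	Each "Ready to marshal response ... / Ready to write response ..." pair above appears to correspond to the webhook admitting one request; pods created without the skip label keep receiving the mounted credentials.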
	
	
	==> kernel <==
	 06:42:51 up 14:25,  0 users,  load average: 0.62, 1.72, 2.91
	Linux addons-686490 5.15.0-1069-aws #75~20.04.1-Ubuntu SMP Mon Aug 19 16:22:47 UTC 2024 aarch64 aarch64 aarch64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.4 LTS"
	
	
	==> kindnet [990aad65ad1672fb0e2565b2f4424df17836e7dbc2aabeaeaf556d800b1c5c98] <==
	I0915 06:40:49.920891       1 main.go:299] handling current node
	[... the same two-line "Handling node with IPs" / "handling current node" pair repeats every 10s from 06:40:59 through 06:42:39 ...]
	I0915 06:42:49.920574       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0915 06:42:49.920623       1 main.go:299] handling current node
	
	
	==> kube-apiserver [dc7e13e7f6de4a913fead165eacdb6774ddd2714fd5632f91fce740ebd630e36] <==
	E0915 06:38:04.473230       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.101.255.231:443: connect: connection refused" logger="UnhandledError"
	W0915 06:38:04.475045       1 dispatcher.go:225] Failed calling webhook, failing closed mutatepod.volcano.sh: failed calling webhook "mutatepod.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/pods/mutate?timeout=10s": dial tcp 10.103.72.145:443: connect: connection refused
	W0915 06:38:04.555528       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.101.255.231:443: connect: connection refused
	E0915 06:38:04.555577       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.101.255.231:443: connect: connection refused" logger="UnhandledError"
	W0915 06:38:04.557218       1 dispatcher.go:225] Failed calling webhook, failing closed mutatepod.volcano.sh: failed calling webhook "mutatepod.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/pods/mutate?timeout=10s": dial tcp 10.103.72.145:443: connect: connection refused
	W0915 06:38:05.440452       1 dispatcher.go:225] Failed calling webhook, failing closed mutatequeue.volcano.sh: failed calling webhook "mutatequeue.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/queues/mutate?timeout=10s": dial tcp 10.103.72.145:443: connect: connection refused
	[... the same mutatequeue.volcano.sh "failing closed" webhook failure repeats at roughly 1s intervals through 06:38:16 ...]
	W0915 06:38:24.448713       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.101.255.231:443: connect: connection refused
	E0915 06:38:24.448760       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.101.255.231:443: connect: connection refused" logger="UnhandledError"
	W0915 06:39:04.483870       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.101.255.231:443: connect: connection refused
	E0915 06:39:04.483921       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.101.255.231:443: connect: connection refused" logger="UnhandledError"
	W0915 06:39:04.562494       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.101.255.231:443: connect: connection refused
	E0915 06:39:04.562539       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.101.255.231:443: connect: connection refused" logger="UnhandledError"
	I0915 06:39:49.462029       1 controller.go:615] quota admission added evaluator for: jobs.batch.volcano.sh
	I0915 06:39:49.517849       1 controller.go:615] quota admission added evaluator for: podgroups.scheduling.volcano.sh
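
	Two webhook behaviors are visible above: gcp-auth-mutate.k8s.io "fails open" (the request proceeds and the error is only logged), while mutatepod.volcano.sh and mutatequeue.volcano.sh "fail closed" (requests are rejected until volcano-admission-service becomes reachable). The retries stop once the admission pod is up, and by 06:39:49 quota evaluators for jobs.batch.volcano.sh and podgroups.scheduling.volcano.sh are registered. The open/closed split is governed by each webhook's failurePolicy (Ignore vs Fail); a sketch to confirm it on this profile:
	
	  kubectl --context addons-686490 get mutatingwebhookconfigurations \
	    -o custom-columns='NAME:.metadata.name,POLICY:.webhooks[*].failurePolicy'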
	
	
	==> kube-controller-manager [a9606056e559245445645aa8736da2bc84aaa350a0f8d8e66223017eae0f7852] <==
	I0915 06:39:04.527583       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-create" delay="1s"
	I0915 06:39:04.571245       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-patch" delay="1s"
	I0915 06:39:04.580274       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-patch" delay="1s"
	I0915 06:39:04.589081       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-patch" delay="1s"
	I0915 06:39:04.600836       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-patch" delay="1s"
	I0915 06:39:05.290987       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-patch" delay="1s"
	I0915 06:39:06.299069       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-patch" delay="1s"
	I0915 06:39:06.313125       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-create" delay="1s"
	I0915 06:39:07.307799       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-patch" delay="1s"
	I0915 06:39:07.421876       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-patch" delay="1s"
	I0915 06:39:07.436652       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-create" delay="1s"
	I0915 06:39:08.313335       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-patch" delay="1s"
	I0915 06:39:08.322409       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-patch" delay="1s"
	I0915 06:39:08.330317       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-patch" delay="1s"
	I0915 06:39:08.444405       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-create" delay="1s"
	I0915 06:39:08.455674       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-create" delay="1s"
	I0915 06:39:08.460649       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-create" delay="1s"
	I0915 06:39:32.414931       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="gcp-auth/gcp-auth-89d5ffd79" duration="14.891883ms"
	I0915 06:39:32.415638       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="gcp-auth/gcp-auth-89d5ffd79" duration="663.124µs"
	I0915 06:39:38.027487       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-create" delay="0s"
	I0915 06:39:38.035268       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-patch" delay="0s"
	I0915 06:39:38.078668       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-create" delay="0s"
	I0915 06:39:38.087608       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-patch" delay="0s"
	I0915 06:39:48.095699       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="addons-686490"
	I0915 06:39:49.192774       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="volcano-system/volcano-admission-init" delay="0s"
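
	The controller-manager is just backing off and re-enqueueing the gcp-auth cert jobs every second while their admission dependencies are down; at 06:39:38 both jobs sync with delay="0s", i.e. normally. A sketch to confirm both jobs completed, assuming they have not yet been garbage-collected:
	
	  kubectl --context addons-686490 -n gcp-auth get jobs gcp-auth-certs-create gcp-auth-certs-patch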
	
	
	==> kube-proxy [27fcca6e3bf1ac37b3a6bc5aa262af1fff59dad46bca1f311d9073ca29376cc5] <==
	I0915 06:36:49.774611       1 server_linux.go:66] "Using iptables proxy"
	I0915 06:36:49.923174       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.49.2"]
	E0915 06:36:49.923252       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0915 06:36:49.982965       1 server.go:243] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0915 06:36:49.983070       1 server_linux.go:169] "Using iptables Proxier"
	I0915 06:36:49.986670       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0915 06:36:49.987096       1 server.go:483] "Version info" version="v1.31.1"
	I0915 06:36:49.987111       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0915 06:36:49.992465       1 config.go:199] "Starting service config controller"
	I0915 06:36:49.992508       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0915 06:36:49.992551       1 config.go:105] "Starting endpoint slice config controller"
	I0915 06:36:49.992556       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0915 06:36:49.993373       1 config.go:328] "Starting node config controller"
	I0915 06:36:49.993443       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0915 06:36:50.095206       1 shared_informer.go:320] Caches are synced for node config
	I0915 06:36:50.095252       1 shared_informer.go:320] Caches are synced for service config
	I0915 06:36:50.095294       1 shared_informer.go:320] Caches are synced for endpoint slice config
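
	The only notable line here is the 06:36:49 warning that nodePortAddresses is unset, which is expected and harmless on a single-node minikube cluster: NodePort traffic is simply accepted on every local IP. The effective setting can be read back from the kube-proxy ConfigMap (a sketch; the ConfigMap name "kube-proxy" is the kubeadm default, assumed here):
	
	  kubectl --context addons-686490 -n kube-system get configmap kube-proxy -o yaml | grep -n nodePortAddresses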
	
	
	==> kube-scheduler [fe25bda02288fd2b88ac6623c00de8323ddfe28c513e76bd18694e2d9834e228] <==
	W0915 06:36:41.053606       1 reflector.go:561] runtime/asm_arm64.s:1222: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0915 06:36:41.054643       1 reflector.go:158] "Unhandled Error" err="runtime/asm_arm64.s:1222: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	W0915 06:36:41.053677       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0915 06:36:41.054833       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0915 06:36:41.053737       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0915 06:36:41.055174       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0915 06:36:41.053802       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0915 06:36:41.055328       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
	W0915 06:36:41.053864       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0915 06:36:41.055477       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0915 06:36:41.053933       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0915 06:36:41.055708       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0915 06:36:41.053993       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0915 06:36:41.055870       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0915 06:36:41.054039       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0915 06:36:41.056261       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0915 06:36:41.948979       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0915 06:36:41.949027       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0915 06:36:41.952334       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0915 06:36:41.952576       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
	W0915 06:36:41.999567       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0915 06:36:41.999625       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0915 06:36:42.356567       1 reflector.go:561] runtime/asm_arm64.s:1222: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0915 06:36:42.356616       1 reflector.go:158] "Unhandled Error" err="runtime/asm_arm64.s:1222: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	I0915 06:36:44.034620       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
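
	The "forbidden" list/watch errors at 06:36:41-42 are the usual scheduler startup race: its informers begin listing before the RBAC bootstrap has granted system:kube-scheduler its roles. They stop once the caches sync at 06:36:44, so they are startup noise, not a cause of the later Unschedulable pod. A quick health check, relying on the standard kubeadm control-plane label:
	
	  kubectl --context addons-686490 -n kube-system get pods -l component=kube-scheduler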
	
	
	==> kubelet <==
	Sep 15 06:40:53 addons-686490 kubelet[1499]: E0915 06:40:53.410896    1499 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"gadget\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=gadget pod=gadget-jhcdf_gadget(f2d35f18-8bc4-4f2d-95fe-ec00bf377399)\"" pod="gadget/gadget-jhcdf" podUID="f2d35f18-8bc4-4f2d-95fe-ec00bf377399"
	Sep 15 06:41:07 addons-686490 kubelet[1499]: I0915 06:41:07.409081    1499 scope.go:117] "RemoveContainer" containerID="c9d9d48efc885fa19d26d128b432ab78d0cfeca46e58313d2fae9827f55bee87"
	Sep 15 06:41:07 addons-686490 kubelet[1499]: E0915 06:41:07.409737    1499 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"gadget\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=gadget pod=gadget-jhcdf_gadget(f2d35f18-8bc4-4f2d-95fe-ec00bf377399)\"" pod="gadget/gadget-jhcdf" podUID="f2d35f18-8bc4-4f2d-95fe-ec00bf377399"
	Sep 15 06:41:20 addons-686490 kubelet[1499]: I0915 06:41:20.408969    1499 scope.go:117] "RemoveContainer" containerID="c9d9d48efc885fa19d26d128b432ab78d0cfeca46e58313d2fae9827f55bee87"
	Sep 15 06:41:20 addons-686490 kubelet[1499]: E0915 06:41:20.409199    1499 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"gadget\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=gadget pod=gadget-jhcdf_gadget(f2d35f18-8bc4-4f2d-95fe-ec00bf377399)\"" pod="gadget/gadget-jhcdf" podUID="f2d35f18-8bc4-4f2d-95fe-ec00bf377399"
	Sep 15 06:41:21 addons-686490 kubelet[1499]: I0915 06:41:21.409862    1499 kubelet_pods.go:1007] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/nvidia-device-plugin-daemonset-w56rc" secret="" err="secret \"gcp-auth\" not found"
	Sep 15 06:41:29 addons-686490 kubelet[1499]: I0915 06:41:29.409463    1499 kubelet_pods.go:1007] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-proxy-r5v2g" secret="" err="secret \"gcp-auth\" not found"
	Sep 15 06:41:30 addons-686490 kubelet[1499]: I0915 06:41:30.409920    1499 kubelet_pods.go:1007] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-66c9cd494c-shzcc" secret="" err="secret \"gcp-auth\" not found"
	Sep 15 06:41:35 addons-686490 kubelet[1499]: I0915 06:41:35.410624    1499 scope.go:117] "RemoveContainer" containerID="c9d9d48efc885fa19d26d128b432ab78d0cfeca46e58313d2fae9827f55bee87"
	Sep 15 06:41:35 addons-686490 kubelet[1499]: E0915 06:41:35.410816    1499 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"gadget\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=gadget pod=gadget-jhcdf_gadget(f2d35f18-8bc4-4f2d-95fe-ec00bf377399)\"" pod="gadget/gadget-jhcdf" podUID="f2d35f18-8bc4-4f2d-95fe-ec00bf377399"
	Sep 15 06:41:48 addons-686490 kubelet[1499]: I0915 06:41:48.409700    1499 scope.go:117] "RemoveContainer" containerID="c9d9d48efc885fa19d26d128b432ab78d0cfeca46e58313d2fae9827f55bee87"
	Sep 15 06:41:48 addons-686490 kubelet[1499]: E0915 06:41:48.409926    1499 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"gadget\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=gadget pod=gadget-jhcdf_gadget(f2d35f18-8bc4-4f2d-95fe-ec00bf377399)\"" pod="gadget/gadget-jhcdf" podUID="f2d35f18-8bc4-4f2d-95fe-ec00bf377399"
	Sep 15 06:42:03 addons-686490 kubelet[1499]: I0915 06:42:03.410352    1499 scope.go:117] "RemoveContainer" containerID="c9d9d48efc885fa19d26d128b432ab78d0cfeca46e58313d2fae9827f55bee87"
	Sep 15 06:42:03 addons-686490 kubelet[1499]: E0915 06:42:03.411126    1499 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"gadget\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=gadget pod=gadget-jhcdf_gadget(f2d35f18-8bc4-4f2d-95fe-ec00bf377399)\"" pod="gadget/gadget-jhcdf" podUID="f2d35f18-8bc4-4f2d-95fe-ec00bf377399"
	Sep 15 06:42:14 addons-686490 kubelet[1499]: I0915 06:42:14.409545    1499 scope.go:117] "RemoveContainer" containerID="c9d9d48efc885fa19d26d128b432ab78d0cfeca46e58313d2fae9827f55bee87"
	Sep 15 06:42:14 addons-686490 kubelet[1499]: E0915 06:42:14.409764    1499 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"gadget\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=gadget pod=gadget-jhcdf_gadget(f2d35f18-8bc4-4f2d-95fe-ec00bf377399)\"" pod="gadget/gadget-jhcdf" podUID="f2d35f18-8bc4-4f2d-95fe-ec00bf377399"
	Sep 15 06:42:23 addons-686490 kubelet[1499]: I0915 06:42:23.410347    1499 kubelet_pods.go:1007] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/nvidia-device-plugin-daemonset-w56rc" secret="" err="secret \"gcp-auth\" not found"
	Sep 15 06:42:26 addons-686490 kubelet[1499]: I0915 06:42:26.409134    1499 scope.go:117] "RemoveContainer" containerID="c9d9d48efc885fa19d26d128b432ab78d0cfeca46e58313d2fae9827f55bee87"
	Sep 15 06:42:26 addons-686490 kubelet[1499]: E0915 06:42:26.409356    1499 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"gadget\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=gadget pod=gadget-jhcdf_gadget(f2d35f18-8bc4-4f2d-95fe-ec00bf377399)\"" pod="gadget/gadget-jhcdf" podUID="f2d35f18-8bc4-4f2d-95fe-ec00bf377399"
	Sep 15 06:42:39 addons-686490 kubelet[1499]: I0915 06:42:39.409784    1499 scope.go:117] "RemoveContainer" containerID="c9d9d48efc885fa19d26d128b432ab78d0cfeca46e58313d2fae9827f55bee87"
	Sep 15 06:42:39 addons-686490 kubelet[1499]: E0915 06:42:39.409982    1499 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"gadget\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=gadget pod=gadget-jhcdf_gadget(f2d35f18-8bc4-4f2d-95fe-ec00bf377399)\"" pod="gadget/gadget-jhcdf" podUID="f2d35f18-8bc4-4f2d-95fe-ec00bf377399"
	Sep 15 06:42:42 addons-686490 kubelet[1499]: I0915 06:42:42.409250    1499 kubelet_pods.go:1007] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-66c9cd494c-shzcc" secret="" err="secret \"gcp-auth\" not found"
	Sep 15 06:42:43 addons-686490 kubelet[1499]: I0915 06:42:43.410526    1499 kubelet_pods.go:1007] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-proxy-r5v2g" secret="" err="secret \"gcp-auth\" not found"
	Sep 15 06:42:51 addons-686490 kubelet[1499]: I0915 06:42:51.409317    1499 scope.go:117] "RemoveContainer" containerID="c9d9d48efc885fa19d26d128b432ab78d0cfeca46e58313d2fae9827f55bee87"
	Sep 15 06:42:51 addons-686490 kubelet[1499]: E0915 06:42:51.409529    1499 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"gadget\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=gadget pod=gadget-jhcdf_gadget(f2d35f18-8bc4-4f2d-95fe-ec00bf377399)\"" pod="gadget/gadget-jhcdf" podUID="f2d35f18-8bc4-4f2d-95fe-ec00bf377399"
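
	Two unrelated patterns repeat in the kubelet log: the gadget container sits in CrashLoopBackOff with a 2m40s backoff, and several kube-system pods log "secret \"gcp-auth\" not found" pull-secret warnings. Neither touches the my-volcano namespace, so neither explains the failed test. To see why the gadget container keeps dying, its previous instance's log can be pulled (pod name taken from the entries above):
	
	  kubectl --context addons-686490 -n gadget logs gadget-jhcdf --previous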
	
	
	==> storage-provisioner [b13f66ecc21856df3bb7e5b37f832d2c23a1a4db68cd1e3cbced0bc335b2bdea] <==
	I0915 06:36:55.136706       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0915 06:36:55.151728       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0915 06:36:55.151909       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0915 06:36:55.166618       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0915 06:36:55.167169       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"46b310fa-fb06-4f5a-bbe9-8c3c627d8a94", APIVersion:"v1", ResourceVersion:"617", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' addons-686490_4b93a317-ced5-421e-adf6-16b1f6382045 became leader
	I0915 06:36:55.167282       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_addons-686490_4b93a317-ced5-421e-adf6-16b1f6382045!
	I0915 06:36:55.269943       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_addons-686490_4b93a317-ced5-421e-adf6-16b1f6382045!
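
	storage-provisioner came up cleanly: it acquired the kube-system/k8s.io-minikube-hostpath lease at 06:36:55 and started its controller. The leader-election record can be inspected directly (resource name taken from the event above):
	
	  kubectl --context addons-686490 -n kube-system get endpoints k8s.io-minikube-hostpath -o yaml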
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p addons-686490 -n addons-686490
helpers_test.go:261: (dbg) Run:  kubectl --context addons-686490 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: ingress-nginx-admission-create-jphzg ingress-nginx-admission-patch-65s9f test-job-nginx-0
helpers_test.go:274: ======> post-mortem[TestAddons/serial/Volcano]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context addons-686490 describe pod ingress-nginx-admission-create-jphzg ingress-nginx-admission-patch-65s9f test-job-nginx-0
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context addons-686490 describe pod ingress-nginx-admission-create-jphzg ingress-nginx-admission-patch-65s9f test-job-nginx-0: exit status 1 (84.880633ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "ingress-nginx-admission-create-jphzg" not found
	Error from server (NotFound): pods "ingress-nginx-admission-patch-65s9f" not found
	Error from server (NotFound): pods "test-job-nginx-0" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context addons-686490 describe pod ingress-nginx-admission-create-jphzg ingress-nginx-admission-patch-65s9f test-job-nginx-0: exit status 1
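
The NotFound errors above most likely mean the pods had already been torn down between the non-running-pod listing and the describe. A sketch that snapshots every non-running pod in a single pass, before cleanup can race it (plain shell, same context assumed):

	kubectl --context addons-686490 get po -A --field-selector=status.phase!=Running \
	  -o jsonpath='{range .items[*]}{.metadata.namespace}{" "}{.metadata.name}{"\n"}{end}' \
	  | while read -r ns name; do kubectl --context addons-686490 -n "$ns" describe po "$name"; done
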
--- FAIL: TestAddons/serial/Volcano (199.93s)

                                                
                                    

Test pass (299/328)

Order passed test Duration
3 TestDownloadOnly/v1.20.0/json-events 11.85
4 TestDownloadOnly/v1.20.0/preload-exists 0
8 TestDownloadOnly/v1.20.0/LogsDuration 0.08
9 TestDownloadOnly/v1.20.0/DeleteAll 0.21
10 TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds 0.13
12 TestDownloadOnly/v1.31.1/json-events 5.14
13 TestDownloadOnly/v1.31.1/preload-exists 0
17 TestDownloadOnly/v1.31.1/LogsDuration 0.08
18 TestDownloadOnly/v1.31.1/DeleteAll 0.22
19 TestDownloadOnly/v1.31.1/DeleteAlwaysSucceeds 0.14
21 TestBinaryMirror 0.61
25 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.07
26 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.07
27 TestAddons/Setup 217.81
31 TestAddons/serial/GCPAuth/Namespaces 0.18
33 TestAddons/parallel/Registry 15.31
34 TestAddons/parallel/Ingress 20.75
35 TestAddons/parallel/InspektorGadget 11.03
36 TestAddons/parallel/MetricsServer 5.79
39 TestAddons/parallel/CSI 45.9
40 TestAddons/parallel/Headlamp 18.86
41 TestAddons/parallel/CloudSpanner 5.7
42 TestAddons/parallel/LocalPath 52.76
43 TestAddons/parallel/NvidiaDevicePlugin 5.7
44 TestAddons/parallel/Yakd 10.87
45 TestAddons/StoppedEnableDisable 12.34
46 TestCertOptions 36.14
47 TestCertExpiration 224.64
49 TestForceSystemdFlag 40.95
50 TestForceSystemdEnv 43.6
51 TestDockerEnvContainerd 45.9
56 TestErrorSpam/setup 30.96
57 TestErrorSpam/start 0.79
58 TestErrorSpam/status 1.04
59 TestErrorSpam/pause 1.82
60 TestErrorSpam/unpause 1.87
61 TestErrorSpam/stop 2.15
64 TestFunctional/serial/CopySyncFile 0
65 TestFunctional/serial/StartWithProxy 51.74
66 TestFunctional/serial/AuditLog 0
67 TestFunctional/serial/SoftStart 6
68 TestFunctional/serial/KubeContext 0.06
69 TestFunctional/serial/KubectlGetPods 0.09
72 TestFunctional/serial/CacheCmd/cache/add_remote 4
73 TestFunctional/serial/CacheCmd/cache/add_local 1.33
74 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.05
75 TestFunctional/serial/CacheCmd/cache/list 0.05
76 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.32
77 TestFunctional/serial/CacheCmd/cache/cache_reload 2.2
78 TestFunctional/serial/CacheCmd/cache/delete 0.13
79 TestFunctional/serial/MinikubeKubectlCmd 0.15
80 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.14
81 TestFunctional/serial/ExtraConfig 43.25
82 TestFunctional/serial/ComponentHealth 0.12
83 TestFunctional/serial/LogsCmd 1.7
84 TestFunctional/serial/LogsFileCmd 1.69
85 TestFunctional/serial/InvalidService 4.49
87 TestFunctional/parallel/ConfigCmd 0.45
88 TestFunctional/parallel/DashboardCmd 9.07
89 TestFunctional/parallel/DryRun 0.48
90 TestFunctional/parallel/InternationalLanguage 0.2
91 TestFunctional/parallel/StatusCmd 1
95 TestFunctional/parallel/ServiceCmdConnect 8.62
96 TestFunctional/parallel/AddonsCmd 0.14
97 TestFunctional/parallel/PersistentVolumeClaim 24.75
99 TestFunctional/parallel/SSHCmd 0.55
100 TestFunctional/parallel/CpCmd 2.09
102 TestFunctional/parallel/FileSync 0.35
103 TestFunctional/parallel/CertSync 2.16
107 TestFunctional/parallel/NodeLabels 0.11
109 TestFunctional/parallel/NonActiveRuntimeDisabled 0.8
111 TestFunctional/parallel/License 0.22
112 TestFunctional/parallel/Version/short 0.1
113 TestFunctional/parallel/Version/components 1.3
114 TestFunctional/parallel/ImageCommands/ImageListShort 0.24
115 TestFunctional/parallel/ImageCommands/ImageListTable 0.28
116 TestFunctional/parallel/ImageCommands/ImageListJson 0.27
117 TestFunctional/parallel/ImageCommands/ImageListYaml 0.27
118 TestFunctional/parallel/ImageCommands/ImageBuild 3.8
119 TestFunctional/parallel/ImageCommands/Setup 0.76
120 TestFunctional/parallel/UpdateContextCmd/no_changes 0.22
121 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.16
122 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.16
123 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 1.41
124 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 1.41
125 TestFunctional/parallel/ServiceCmd/DeployApp 11.29
126 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 1.75
127 TestFunctional/parallel/ImageCommands/ImageSaveToFile 0.44
128 TestFunctional/parallel/ImageCommands/ImageRemove 0.59
129 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 0.74
130 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 0.44
132 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.52
133 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0
135 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 9.32
136 TestFunctional/parallel/ServiceCmd/List 0.49
137 TestFunctional/parallel/ServiceCmd/JSONOutput 0.33
138 TestFunctional/parallel/ServiceCmd/HTTPS 0.37
139 TestFunctional/parallel/ServiceCmd/Format 0.39
140 TestFunctional/parallel/ServiceCmd/URL 0.4
141 TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP 0.08
142 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0
146 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.11
147 TestFunctional/parallel/ProfileCmd/profile_not_create 0.39
148 TestFunctional/parallel/ProfileCmd/profile_list 0.4
149 TestFunctional/parallel/ProfileCmd/profile_json_output 0.4
150 TestFunctional/parallel/MountCmd/any-port 7.66
151 TestFunctional/parallel/MountCmd/specific-port 2.08
152 TestFunctional/parallel/MountCmd/VerifyCleanup 1.22
153 TestFunctional/delete_echo-server_images 0.05
154 TestFunctional/delete_my-image_image 0.01
155 TestFunctional/delete_minikube_cached_images 0.02
159 TestMultiControlPlane/serial/StartCluster 115.87
160 TestMultiControlPlane/serial/DeployApp 30.38
161 TestMultiControlPlane/serial/PingHostFromPods 1.68
162 TestMultiControlPlane/serial/AddWorkerNode 21.64
163 TestMultiControlPlane/serial/NodeLabels 0.1
164 TestMultiControlPlane/serial/HAppyAfterClusterStart 0.79
165 TestMultiControlPlane/serial/CopyFile 19.46
166 TestMultiControlPlane/serial/StopSecondaryNode 13.04
167 TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop 0.6
168 TestMultiControlPlane/serial/RestartSecondaryNode 18.31
169 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart 0.82
170 TestMultiControlPlane/serial/RestartClusterKeepsNodes 141.52
171 TestMultiControlPlane/serial/DeleteSecondaryNode 10.74
172 TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete 0.58
173 TestMultiControlPlane/serial/StopCluster 36.1
174 TestMultiControlPlane/serial/RestartCluster 77.89
175 TestMultiControlPlane/serial/DegradedAfterClusterRestart 0.55
176 TestMultiControlPlane/serial/AddSecondaryNode 44.71
177 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd 0.75
181 TestJSONOutput/start/Command 50.16
182 TestJSONOutput/start/Audit 0
184 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
185 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
187 TestJSONOutput/pause/Command 0.75
188 TestJSONOutput/pause/Audit 0
190 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
191 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
193 TestJSONOutput/unpause/Command 0.66
194 TestJSONOutput/unpause/Audit 0
196 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
197 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
199 TestJSONOutput/stop/Command 5.72
200 TestJSONOutput/stop/Audit 0
202 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
203 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
204 TestErrorJSONOutput 0.23
206 TestKicCustomNetwork/create_custom_network 38.89
207 TestKicCustomNetwork/use_default_bridge_network 36.33
208 TestKicExistingNetwork 33.71
209 TestKicCustomSubnet 33.22
210 TestKicStaticIP 36.94
211 TestMainNoArgs 0.05
212 TestMinikubeProfile 67.58
215 TestMountStart/serial/StartWithMountFirst 9
216 TestMountStart/serial/VerifyMountFirst 0.25
217 TestMountStart/serial/StartWithMountSecond 8.95
218 TestMountStart/serial/VerifyMountSecond 0.27
219 TestMountStart/serial/DeleteFirst 1.62
220 TestMountStart/serial/VerifyMountPostDelete 0.26
221 TestMountStart/serial/Stop 1.21
222 TestMountStart/serial/RestartStopped 8.23
223 TestMountStart/serial/VerifyMountPostStop 0.26
226 TestMultiNode/serial/FreshStart2Nodes 96.51
227 TestMultiNode/serial/DeployApp2Nodes 16.26
228 TestMultiNode/serial/PingHostFrom2Pods 1.03
229 TestMultiNode/serial/AddNode 16.07
230 TestMultiNode/serial/MultiNodeLabels 0.09
231 TestMultiNode/serial/ProfileList 0.35
232 TestMultiNode/serial/CopyFile 10.16
233 TestMultiNode/serial/StopNode 2.29
234 TestMultiNode/serial/StartAfterStop 10.23
235 TestMultiNode/serial/RestartKeepsNodes 103.03
236 TestMultiNode/serial/DeleteNode 5.58
237 TestMultiNode/serial/StopMultiNode 24.02
238 TestMultiNode/serial/RestartMultiNode 57.03
239 TestMultiNode/serial/ValidateNameConflict 34.52
244 TestPreload 112.26
246 TestScheduledStopUnix 110.7
249 TestInsufficientStorage 13.7
250 TestRunningBinaryUpgrade 82.18
252 TestKubernetesUpgrade 353.59
253 TestMissingContainerUpgrade 177.74
255 TestPause/serial/Start 61.96
257 TestNoKubernetes/serial/StartNoK8sWithVersion 0.11
258 TestNoKubernetes/serial/StartWithK8s 41.44
259 TestNoKubernetes/serial/StartWithStopK8s 17.72
260 TestNoKubernetes/serial/Start 9.09
261 TestPause/serial/SecondStartNoReconfiguration 7.02
262 TestNoKubernetes/serial/VerifyK8sNotRunning 0.3
263 TestNoKubernetes/serial/ProfileList 1.11
264 TestPause/serial/Pause 0.95
265 TestNoKubernetes/serial/Stop 1.29
266 TestPause/serial/VerifyStatus 0.67
267 TestPause/serial/Unpause 0.76
268 TestNoKubernetes/serial/StartNoArgs 7.12
269 TestPause/serial/PauseAgain 1.27
270 TestPause/serial/DeletePaused 2.65
271 TestPause/serial/VerifyDeletedResources 0.42
272 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.26
273 TestStoppedBinaryUpgrade/Setup 0.92
274 TestStoppedBinaryUpgrade/Upgrade 77.57
275 TestStoppedBinaryUpgrade/MinikubeLogs 1.09
290 TestNetworkPlugins/group/false 4.74
295 TestStartStop/group/old-k8s-version/serial/FirstStart 147.36
296 TestStartStop/group/old-k8s-version/serial/DeployApp 9.97
298 TestStartStop/group/no-preload/serial/FirstStart 78.36
299 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 1.33
300 TestStartStop/group/old-k8s-version/serial/Stop 13.9
301 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.21
302 TestStartStop/group/old-k8s-version/serial/SecondStart 149.42
303 TestStartStop/group/no-preload/serial/DeployApp 10.38
304 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 1.35
305 TestStartStop/group/no-preload/serial/Stop 12.11
306 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.24
307 TestStartStop/group/no-preload/serial/SecondStart 267.11
308 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 6.01
309 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 5.11
310 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.26
311 TestStartStop/group/old-k8s-version/serial/Pause 3.01
313 TestStartStop/group/embed-certs/serial/FirstStart 80.42
314 TestStartStop/group/embed-certs/serial/DeployApp 8.35
315 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 1.26
316 TestStartStop/group/embed-certs/serial/Stop 12.04
317 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.19
318 TestStartStop/group/embed-certs/serial/SecondStart 288.46
319 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 6.01
320 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 5.11
321 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.26
322 TestStartStop/group/no-preload/serial/Pause 3.16
324 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 49.73
325 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 8.35
326 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 1.2
327 TestStartStop/group/default-k8s-diff-port/serial/Stop 12.04
328 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.18
329 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 267.72
330 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 6.01
331 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 5.1
332 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.27
333 TestStartStop/group/embed-certs/serial/Pause 3.11
335 TestStartStop/group/newest-cni/serial/FirstStart 40.12
336 TestStartStop/group/newest-cni/serial/DeployApp 0
337 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 1.19
338 TestStartStop/group/newest-cni/serial/Stop 1.24
339 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.2
340 TestStartStop/group/newest-cni/serial/SecondStart 16
341 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
342 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
343 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.28
344 TestStartStop/group/newest-cni/serial/Pause 3.2
345 TestNetworkPlugins/group/auto/Start 52.87
346 TestNetworkPlugins/group/auto/KubeletFlags 0.29
347 TestNetworkPlugins/group/auto/NetCatPod 10.28
348 TestNetworkPlugins/group/auto/DNS 0.21
349 TestNetworkPlugins/group/auto/Localhost 0.17
350 TestNetworkPlugins/group/auto/HairPin 0.2
351 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 6.01
352 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 5.15
353 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.3
354 TestStartStop/group/default-k8s-diff-port/serial/Pause 4.37
355 TestNetworkPlugins/group/kindnet/Start 88.16
356 TestNetworkPlugins/group/calico/Start 69.06
357 TestNetworkPlugins/group/calico/ControllerPod 6.01
358 TestNetworkPlugins/group/calico/KubeletFlags 0.29
359 TestNetworkPlugins/group/calico/NetCatPod 10.26
360 TestNetworkPlugins/group/calico/DNS 0.2
361 TestNetworkPlugins/group/calico/Localhost 0.16
362 TestNetworkPlugins/group/calico/HairPin 0.19
363 TestNetworkPlugins/group/kindnet/ControllerPod 6.01
364 TestNetworkPlugins/group/kindnet/KubeletFlags 0.4
365 TestNetworkPlugins/group/kindnet/NetCatPod 10.49
366 TestNetworkPlugins/group/kindnet/DNS 0.25
367 TestNetworkPlugins/group/kindnet/Localhost 0.21
368 TestNetworkPlugins/group/kindnet/HairPin 0.22
369 TestNetworkPlugins/group/custom-flannel/Start 59.09
370 TestNetworkPlugins/group/enable-default-cni/Start 76.68
371 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.34
372 TestNetworkPlugins/group/custom-flannel/NetCatPod 10.38
373 TestNetworkPlugins/group/custom-flannel/DNS 0.18
374 TestNetworkPlugins/group/custom-flannel/Localhost 0.14
375 TestNetworkPlugins/group/custom-flannel/HairPin 0.15
376 TestNetworkPlugins/group/flannel/Start 53.59
377 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.34
378 TestNetworkPlugins/group/enable-default-cni/NetCatPod 11.38
379 TestNetworkPlugins/group/enable-default-cni/DNS 0.27
380 TestNetworkPlugins/group/enable-default-cni/Localhost 0.26
381 TestNetworkPlugins/group/enable-default-cni/HairPin 0.23
382 TestNetworkPlugins/group/bridge/Start 78.67
383 TestNetworkPlugins/group/flannel/ControllerPod 6.01
384 TestNetworkPlugins/group/flannel/KubeletFlags 0.36
385 TestNetworkPlugins/group/flannel/NetCatPod 10.37
386 TestNetworkPlugins/group/flannel/DNS 0.33
387 TestNetworkPlugins/group/flannel/Localhost 0.21
388 TestNetworkPlugins/group/flannel/HairPin 0.22
389 TestNetworkPlugins/group/bridge/KubeletFlags 0.27
390 TestNetworkPlugins/group/bridge/NetCatPod 10.27
391 TestNetworkPlugins/group/bridge/DNS 0.19
392 TestNetworkPlugins/group/bridge/Localhost 0.18
393 TestNetworkPlugins/group/bridge/HairPin 0.14
TestDownloadOnly/v1.20.0/json-events (11.85s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-arm64 start -o=json --download-only -p download-only-808336 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=containerd --driver=docker  --container-runtime=containerd
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-arm64 start -o=json --download-only -p download-only-808336 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=containerd --driver=docker  --container-runtime=containerd: (11.848522171s)
--- PASS: TestDownloadOnly/v1.20.0/json-events (11.85s)

                                                
                                    
TestDownloadOnly/v1.20.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/preload-exists
--- PASS: TestDownloadOnly/v1.20.0/preload-exists (0.00s)

                                                
                                    
TestDownloadOnly/v1.20.0/LogsDuration (0.08s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-arm64 logs -p download-only-808336
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-arm64 logs -p download-only-808336: exit status 85 (80.994596ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-808336 | jenkins | v1.34.0 | 15 Sep 24 06:35 UTC |          |
	|         | -p download-only-808336        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |          |
	|         | --container-runtime=containerd |                      |         |         |                     |          |
	|         | --driver=docker                |                      |         |         |                     |          |
	|         | --container-runtime=containerd |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/15 06:35:35
	Running on machine: ip-172-31-29-130
	Binary: Built with gc go1.23.0 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0915 06:35:35.764893 3198657 out.go:345] Setting OutFile to fd 1 ...
	I0915 06:35:35.765019 3198657 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0915 06:35:35.765030 3198657 out.go:358] Setting ErrFile to fd 2...
	I0915 06:35:35.765035 3198657 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0915 06:35:35.765270 3198657 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19644-3193270/.minikube/bin
	W0915 06:35:35.765406 3198657 root.go:314] Error reading config file at /home/jenkins/minikube-integration/19644-3193270/.minikube/config/config.json: open /home/jenkins/minikube-integration/19644-3193270/.minikube/config/config.json: no such file or directory
	I0915 06:35:35.765794 3198657 out.go:352] Setting JSON to true
	I0915 06:35:35.766705 3198657 start.go:129] hostinfo: {"hostname":"ip-172-31-29-130","uptime":51487,"bootTime":1726330649,"procs":196,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1069-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"36adf542-ef4f-4e2d-a0c8-6868d1383ff9"}
	I0915 06:35:35.766782 3198657 start.go:139] virtualization:  
	I0915 06:35:35.770544 3198657 out.go:97] [download-only-808336] minikube v1.34.0 on Ubuntu 20.04 (arm64)
	W0915 06:35:35.770719 3198657 preload.go:293] Failed to list preload files: open /home/jenkins/minikube-integration/19644-3193270/.minikube/cache/preloaded-tarball: no such file or directory
	I0915 06:35:35.770856 3198657 notify.go:220] Checking for updates...
	I0915 06:35:35.773740 3198657 out.go:169] MINIKUBE_LOCATION=19644
	I0915 06:35:35.777241 3198657 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0915 06:35:35.780025 3198657 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/19644-3193270/kubeconfig
	I0915 06:35:35.782630 3198657 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/19644-3193270/.minikube
	I0915 06:35:35.785317 3198657 out.go:169] MINIKUBE_BIN=out/minikube-linux-arm64
	W0915 06:35:35.790465 3198657 out.go:321] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0915 06:35:35.790766 3198657 driver.go:394] Setting default libvirt URI to qemu:///system
	I0915 06:35:35.813424 3198657 docker.go:123] docker version: linux-27.2.1:Docker Engine - Community
	I0915 06:35:35.813556 3198657 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0915 06:35:35.873143 3198657 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:29 OomKillDisable:true NGoroutines:52 SystemTime:2024-09-15 06:35:35.863530788 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1069-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:27.2.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.2]] Warnings:<nil>}}
	I0915 06:35:35.873270 3198657 docker.go:318] overlay module found
	I0915 06:35:35.876018 3198657 out.go:97] Using the docker driver based on user configuration
	I0915 06:35:35.876053 3198657 start.go:297] selected driver: docker
	I0915 06:35:35.876062 3198657 start.go:901] validating driver "docker" against <nil>
	I0915 06:35:35.876181 3198657 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0915 06:35:35.926843 3198657 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:29 OomKillDisable:true NGoroutines:52 SystemTime:2024-09-15 06:35:35.917043168 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1069-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:27.2.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.2]] Warnings:<nil>}}
	I0915 06:35:35.927077 3198657 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0915 06:35:35.927383 3198657 start_flags.go:393] Using suggested 2200MB memory alloc based on sys=7834MB, container=7834MB
	I0915 06:35:35.927550 3198657 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0915 06:35:35.930287 3198657 out.go:169] Using Docker driver with root privileges
	I0915 06:35:35.932905 3198657 cni.go:84] Creating CNI manager for ""
	I0915 06:35:35.932976 3198657 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I0915 06:35:35.932990 3198657 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0915 06:35:35.933081 3198657 start.go:340] cluster config:
	{Name:download-only-808336 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:download-only-808336 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0915 06:35:35.935849 3198657 out.go:97] Starting "download-only-808336" primary control-plane node in "download-only-808336" cluster
	I0915 06:35:35.935875 3198657 cache.go:121] Beginning downloading kic base image for docker with containerd
	I0915 06:35:35.938401 3198657 out.go:97] Pulling base image v0.0.45-1726358845-19644 ...
	I0915 06:35:35.938429 3198657 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime containerd
	I0915 06:35:35.938530 3198657 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 in local docker daemon
	I0915 06:35:35.954228 3198657 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 to local cache
	I0915 06:35:35.954416 3198657 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 in local cache directory
	I0915 06:35:35.954513 3198657 image.go:148] Writing gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 to local cache
	I0915 06:35:35.999709 3198657 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-containerd-overlay2-arm64.tar.lz4
	I0915 06:35:35.999740 3198657 cache.go:56] Caching tarball of preloaded images
	I0915 06:35:35.999919 3198657 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime containerd
	I0915 06:35:36.005672 3198657 out.go:97] Downloading Kubernetes v1.20.0 preload ...
	I0915 06:35:36.005737 3198657 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.20.0-containerd-overlay2-arm64.tar.lz4 ...
	I0915 06:35:36.093193 3198657 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-containerd-overlay2-arm64.tar.lz4?checksum=md5:7e3d48ccb9f143791669d02e14ce1643 -> /home/jenkins/minikube-integration/19644-3193270/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-containerd-overlay2-arm64.tar.lz4
	I0915 06:35:40.647318 3198657 preload.go:247] saving checksum for preloaded-images-k8s-v18-v1.20.0-containerd-overlay2-arm64.tar.lz4 ...
	I0915 06:35:40.647423 3198657 preload.go:254] verifying checksum of /home/jenkins/minikube-integration/19644-3193270/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-containerd-overlay2-arm64.tar.lz4 ...
	I0915 06:35:41.802471 3198657 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on containerd
	I0915 06:35:41.802908 3198657 profile.go:143] Saving config to /home/jenkins/minikube-integration/19644-3193270/.minikube/profiles/download-only-808336/config.json ...
	I0915 06:35:41.802944 3198657 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19644-3193270/.minikube/profiles/download-only-808336/config.json: {Name:mkdedc3236ab05e0ca04b4ec441a8a74a43fac23 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0915 06:35:41.803155 3198657 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime containerd
	I0915 06:35:41.803347 3198657 download.go:107] Downloading: https://dl.k8s.io/release/v1.20.0/bin/linux/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/linux/arm64/kubectl.sha256 -> /home/jenkins/minikube-integration/19644-3193270/.minikube/cache/linux/arm64/v1.20.0/kubectl
	
	
	* The control-plane node download-only-808336 host does not exist
	  To start a cluster, run: "minikube start -p download-only-808336"

-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.20.0/LogsDuration (0.08s)
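
Editor's note: this PASS encodes an expectation worth spelling out. A --download-only run primes the kicbase image, the preload tarball, and the kubectl binary, but never creates a host, so "minikube logs" is expected to fail with exit status 85. A minimal reproduction sketch, reusing the exact flags from this run:

  # Prime the caches only; no container is created.
  out/minikube-linux-arm64 start -p download-only-808336 --download-only --force \
    --kubernetes-version=v1.20.0 --container-runtime=containerd --driver=docker
  # With no host, logs must fail; the harness asserts exit status 85.
  out/minikube-linux-arm64 logs -p download-only-808336 || echo "exit status: $?"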

TestDownloadOnly/v1.20.0/DeleteAll (0.21s)

=== RUN   TestDownloadOnly/v1.20.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-arm64 delete --all
--- PASS: TestDownloadOnly/v1.20.0/DeleteAll (0.21s)

TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.13s)

=== RUN   TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-arm64 delete -p download-only-808336
--- PASS: TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.13s)

TestDownloadOnly/v1.31.1/json-events (5.14s)

=== RUN   TestDownloadOnly/v1.31.1/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-arm64 start -o=json --download-only -p download-only-841381 --force --alsologtostderr --kubernetes-version=v1.31.1 --container-runtime=containerd --driver=docker  --container-runtime=containerd
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-arm64 start -o=json --download-only -p download-only-841381 --force --alsologtostderr --kubernetes-version=v1.31.1 --container-runtime=containerd --driver=docker  --container-runtime=containerd: (5.137313229s)
--- PASS: TestDownloadOnly/v1.31.1/json-events (5.14s)

TestDownloadOnly/v1.31.1/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.31.1/preload-exists
--- PASS: TestDownloadOnly/v1.31.1/preload-exists (0.00s)

TestDownloadOnly/v1.31.1/LogsDuration (0.08s)

=== RUN   TestDownloadOnly/v1.31.1/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-arm64 logs -p download-only-841381
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-arm64 logs -p download-only-841381: exit status 85 (80.527346ms)

-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only        | download-only-808336 | jenkins | v1.34.0 | 15 Sep 24 06:35 UTC |                     |
	|         | -p download-only-808336        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |                     |
	|         | --container-runtime=containerd |                      |         |         |                     |                     |
	|         | --driver=docker                |                      |         |         |                     |                     |
	|         | --container-runtime=containerd |                      |         |         |                     |                     |
	| delete  | --all                          | minikube             | jenkins | v1.34.0 | 15 Sep 24 06:35 UTC | 15 Sep 24 06:35 UTC |
	| delete  | -p download-only-808336        | download-only-808336 | jenkins | v1.34.0 | 15 Sep 24 06:35 UTC | 15 Sep 24 06:35 UTC |
	| start   | -o=json --download-only        | download-only-841381 | jenkins | v1.34.0 | 15 Sep 24 06:35 UTC |                     |
	|         | -p download-only-841381        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1   |                      |         |         |                     |                     |
	|         | --container-runtime=containerd |                      |         |         |                     |                     |
	|         | --driver=docker                |                      |         |         |                     |                     |
	|         | --container-runtime=containerd |                      |         |         |                     |                     |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/15 06:35:48
	Running on machine: ip-172-31-29-130
	Binary: Built with gc go1.23.0 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0915 06:35:48.038285 3198858 out.go:345] Setting OutFile to fd 1 ...
	I0915 06:35:48.038432 3198858 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0915 06:35:48.038443 3198858 out.go:358] Setting ErrFile to fd 2...
	I0915 06:35:48.038449 3198858 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0915 06:35:48.038702 3198858 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19644-3193270/.minikube/bin
	I0915 06:35:48.039148 3198858 out.go:352] Setting JSON to true
	I0915 06:35:48.040112 3198858 start.go:129] hostinfo: {"hostname":"ip-172-31-29-130","uptime":51499,"bootTime":1726330649,"procs":194,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1069-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"36adf542-ef4f-4e2d-a0c8-6868d1383ff9"}
	I0915 06:35:48.040189 3198858 start.go:139] virtualization:  
	I0915 06:35:48.043506 3198858 out.go:97] [download-only-841381] minikube v1.34.0 on Ubuntu 20.04 (arm64)
	I0915 06:35:48.043794 3198858 notify.go:220] Checking for updates...
	I0915 06:35:48.046488 3198858 out.go:169] MINIKUBE_LOCATION=19644
	I0915 06:35:48.049545 3198858 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0915 06:35:48.052248 3198858 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/19644-3193270/kubeconfig
	I0915 06:35:48.055041 3198858 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/19644-3193270/.minikube
	I0915 06:35:48.057777 3198858 out.go:169] MINIKUBE_BIN=out/minikube-linux-arm64
	W0915 06:35:48.063271 3198858 out.go:321] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0915 06:35:48.063567 3198858 driver.go:394] Setting default libvirt URI to qemu:///system
	I0915 06:35:48.086593 3198858 docker.go:123] docker version: linux-27.2.1:Docker Engine - Community
	I0915 06:35:48.086732 3198858 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0915 06:35:48.147698 3198858 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:29 OomKillDisable:true NGoroutines:44 SystemTime:2024-09-15 06:35:48.138312497 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1069-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:27.2.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.2]] Warnings:<nil>}}
	I0915 06:35:48.147819 3198858 docker.go:318] overlay module found
	I0915 06:35:48.150610 3198858 out.go:97] Using the docker driver based on user configuration
	I0915 06:35:48.150642 3198858 start.go:297] selected driver: docker
	I0915 06:35:48.150649 3198858 start.go:901] validating driver "docker" against <nil>
	I0915 06:35:48.150776 3198858 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0915 06:35:48.205828 3198858 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:29 OomKillDisable:true NGoroutines:44 SystemTime:2024-09-15 06:35:48.195959407 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1069-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:27.2.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.2]] Warnings:<nil>}}
	I0915 06:35:48.206061 3198858 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0915 06:35:48.206350 3198858 start_flags.go:393] Using suggested 2200MB memory alloc based on sys=7834MB, container=7834MB
	I0915 06:35:48.206511 3198858 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0915 06:35:48.209276 3198858 out.go:169] Using Docker driver with root privileges
	I0915 06:35:48.211795 3198858 cni.go:84] Creating CNI manager for ""
	I0915 06:35:48.211856 3198858 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I0915 06:35:48.211870 3198858 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0915 06:35:48.211946 3198858 start.go:340] cluster config:
	{Name:download-only-841381 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:download-only-841381 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0915 06:35:48.214734 3198858 out.go:97] Starting "download-only-841381" primary control-plane node in "download-only-841381" cluster
	I0915 06:35:48.214754 3198858 cache.go:121] Beginning downloading kic base image for docker with containerd
	I0915 06:35:48.217338 3198858 out.go:97] Pulling base image v0.0.45-1726358845-19644 ...
	I0915 06:35:48.217364 3198858 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime containerd
	I0915 06:35:48.217541 3198858 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 in local docker daemon
	I0915 06:35:48.233075 3198858 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 to local cache
	I0915 06:35:48.233190 3198858 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 in local cache directory
	I0915 06:35:48.233213 3198858 image.go:66] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 in local cache directory, skipping pull
	I0915 06:35:48.233223 3198858 image.go:135] gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 exists in cache, skipping pull
	I0915 06:35:48.233230 3198858 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 as a tarball
	I0915 06:35:48.276223 3198858 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.31.1/preloaded-images-k8s-v18-v1.31.1-containerd-overlay2-arm64.tar.lz4
	I0915 06:35:48.276247 3198858 cache.go:56] Caching tarball of preloaded images
	I0915 06:35:48.276399 3198858 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime containerd
	I0915 06:35:48.279276 3198858 out.go:97] Downloading Kubernetes v1.31.1 preload ...
	I0915 06:35:48.279301 3198858 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.31.1-containerd-overlay2-arm64.tar.lz4 ...
	I0915 06:35:48.370523 3198858 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.31.1/preloaded-images-k8s-v18-v1.31.1-containerd-overlay2-arm64.tar.lz4?checksum=md5:b0cdb5ac9449e6e1388c2153988f76f5 -> /home/jenkins/minikube-integration/19644-3193270/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-containerd-overlay2-arm64.tar.lz4
	I0915 06:35:51.597051 3198858 preload.go:247] saving checksum for preloaded-images-k8s-v18-v1.31.1-containerd-overlay2-arm64.tar.lz4 ...
	I0915 06:35:51.597160 3198858 preload.go:254] verifying checksum of /home/jenkins/minikube-integration/19644-3193270/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-containerd-overlay2-arm64.tar.lz4 ...
	I0915 06:35:52.463252 3198858 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on containerd
	I0915 06:35:52.463657 3198858 profile.go:143] Saving config to /home/jenkins/minikube-integration/19644-3193270/.minikube/profiles/download-only-841381/config.json ...
	I0915 06:35:52.463704 3198858 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19644-3193270/.minikube/profiles/download-only-841381/config.json: {Name:mk861749f8ef7531cc69bc148f8e55718e2526c4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0915 06:35:52.463904 3198858 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime containerd
	I0915 06:35:52.464058 3198858 download.go:107] Downloading: https://dl.k8s.io/release/v1.31.1/bin/linux/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.31.1/bin/linux/arm64/kubectl.sha256 -> /home/jenkins/minikube-integration/19644-3193270/.minikube/cache/linux/arm64/v1.31.1/kubectl
	
	
	* The control-plane node download-only-841381 host does not exist
	  To start a cluster, run: "minikube start -p download-only-841381"

-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.31.1/LogsDuration (0.08s)

TestDownloadOnly/v1.31.1/DeleteAll (0.22s)

=== RUN   TestDownloadOnly/v1.31.1/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-arm64 delete --all
--- PASS: TestDownloadOnly/v1.31.1/DeleteAll (0.22s)

TestDownloadOnly/v1.31.1/DeleteAlwaysSucceeds (0.14s)

=== RUN   TestDownloadOnly/v1.31.1/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-arm64 delete -p download-only-841381
--- PASS: TestDownloadOnly/v1.31.1/DeleteAlwaysSucceeds (0.14s)

TestBinaryMirror (0.61s)

=== RUN   TestBinaryMirror
aaa_download_only_test.go:314: (dbg) Run:  out/minikube-linux-arm64 start --download-only -p binary-mirror-625101 --alsologtostderr --binary-mirror http://127.0.0.1:36845 --driver=docker  --container-runtime=containerd
helpers_test.go:175: Cleaning up "binary-mirror-625101" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p binary-mirror-625101
--- PASS: TestBinaryMirror (0.61s)
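
Editor's note: TestBinaryMirror verifies that the kubectl, kubelet, and kubeadm binaries can be fetched from an alternate HTTP endpoint instead of dl.k8s.io. The port below is the ephemeral one from this run; any mirror serving the same release layout would do. A sketch, not part of the harness:

  out/minikube-linux-arm64 start --download-only -p binary-mirror-625101 \
    --binary-mirror http://127.0.0.1:36845 --driver=docker --container-runtime=containerd
  out/minikube-linux-arm64 delete -p binary-mirror-625101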

TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.07s)

=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:1037: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p addons-686490
addons_test.go:1037: (dbg) Non-zero exit: out/minikube-linux-arm64 addons enable dashboard -p addons-686490: exit status 85 (73.853872ms)

-- stdout --
	* Profile "addons-686490" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-686490"

-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.07s)

TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.07s)

=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:1048: (dbg) Run:  out/minikube-linux-arm64 addons disable dashboard -p addons-686490
addons_test.go:1048: (dbg) Non-zero exit: out/minikube-linux-arm64 addons disable dashboard -p addons-686490: exit status 85 (70.999024ms)

-- stdout --
	* Profile "addons-686490" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-686490"

-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.07s)

TestAddons/Setup (217.81s)

=== RUN   TestAddons/Setup
addons_test.go:110: (dbg) Run:  out/minikube-linux-arm64 start -p addons-686490 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --addons=volcano --driver=docker  --container-runtime=containerd --addons=ingress --addons=ingress-dns
addons_test.go:110: (dbg) Done: out/minikube-linux-arm64 start -p addons-686490 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --addons=volcano --driver=docker  --container-runtime=containerd --addons=ingress --addons=ingress-dns: (3m37.81218836s)
--- PASS: TestAddons/Setup (217.81s)
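
Editor's note: the full start invocation above is long; the pattern it exercises is a single "minikube start" enabling many addons at once and blocking until they settle (--wait=true). A trimmed sketch with a representative subset of the same flags:

  out/minikube-linux-arm64 start -p addons-686490 --wait=true --memory=4000 \
    --driver=docker --container-runtime=containerd \
    --addons=registry --addons=metrics-server --addons=ingress --addons=ingress-dns \
    --addons=csi-hostpath-driver --addons=volumesnapshots --addons=volcano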

TestAddons/serial/GCPAuth/Namespaces (0.18s)

=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:656: (dbg) Run:  kubectl --context addons-686490 create ns new-namespace
addons_test.go:670: (dbg) Run:  kubectl --context addons-686490 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.18s)
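
Editor's note: what this check asserts is that the gcp-auth webhook replicates its credential Secret into namespaces created after the addon is enabled. The same two commands by hand:

  kubectl --context addons-686490 create ns new-namespace
  kubectl --context addons-686490 get secret gcp-auth -n new-namespace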

TestAddons/parallel/Registry (15.31s)

=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

=== CONT  TestAddons/parallel/Registry
addons_test.go:332: registry stabilized in 4.005225ms
addons_test.go:334: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-66c9cd494c-shzcc" [25bbd5d5-6b38-48ac-b11e-55a962241296] Running
addons_test.go:334: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 5.00732912s
addons_test.go:337: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-proxy-r5v2g" [58cfd80d-2cf0-4456-841f-2268a328e3a0] Running
addons_test.go:337: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.00454645s
addons_test.go:342: (dbg) Run:  kubectl --context addons-686490 delete po -l run=registry-test --now
addons_test.go:347: (dbg) Run:  kubectl --context addons-686490 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:347: (dbg) Done: kubectl --context addons-686490 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (4.300646976s)
addons_test.go:361: (dbg) Run:  out/minikube-linux-arm64 -p addons-686490 ip
2024/09/15 06:43:28 [DEBUG] GET http://192.168.49.2:5000
addons_test.go:390: (dbg) Run:  out/minikube-linux-arm64 -p addons-686490 addons disable registry --alsologtostderr -v=1
--- PASS: TestAddons/parallel/Registry (15.31s)
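
Editor's note: the registry addon is reachable two ways, both exercised above: in-cluster via the kube-system Service DNS name, and from the host via the node IP on port 5000 (the DEBUG GET line). A sketch of both probes:

  # In-cluster probe, the test's own busybox one-liner.
  kubectl --context addons-686490 run --rm registry-test --restart=Never \
    --image=gcr.io/k8s-minikube/busybox -it \
    -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
  # Host-side endpoint, matching the DEBUG GET above.
  curl -v "http://$(out/minikube-linux-arm64 -p addons-686490 ip):5000"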

TestAddons/parallel/Ingress (20.75s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

=== CONT  TestAddons/parallel/Ingress
addons_test.go:209: (dbg) Run:  kubectl --context addons-686490 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:234: (dbg) Run:  kubectl --context addons-686490 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:247: (dbg) Run:  kubectl --context addons-686490 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [edf35fff-7cbb-4e5c-baac-eba4ca15ccfe] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [edf35fff-7cbb-4e5c-baac-eba4ca15ccfe] Running
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 10.003359193s
addons_test.go:264: (dbg) Run:  out/minikube-linux-arm64 -p addons-686490 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:288: (dbg) Run:  kubectl --context addons-686490 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:293: (dbg) Run:  out/minikube-linux-arm64 -p addons-686490 ip
addons_test.go:299: (dbg) Run:  nslookup hello-john.test 192.168.49.2
addons_test.go:308: (dbg) Run:  out/minikube-linux-arm64 -p addons-686490 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:308: (dbg) Done: out/minikube-linux-arm64 -p addons-686490 addons disable ingress-dns --alsologtostderr -v=1: (1.20460352s)
addons_test.go:313: (dbg) Run:  out/minikube-linux-arm64 -p addons-686490 addons disable ingress --alsologtostderr -v=1
addons_test.go:313: (dbg) Done: out/minikube-linux-arm64 -p addons-686490 addons disable ingress --alsologtostderr -v=1: (7.858064726s)
--- PASS: TestAddons/parallel/Ingress (20.75s)
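
Editor's note: two routing paths are validated here: ingress-nginx answering on the node for the Host header nginx.example.com, and ingress-dns resolving test hostnames straight against the node IP. Reproduced by hand:

  out/minikube-linux-arm64 -p addons-686490 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
  nslookup hello-john.test "$(out/minikube-linux-arm64 -p addons-686490 ip)"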

TestAddons/parallel/InspektorGadget (11.03s)

=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:848: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:344: "gadget-jhcdf" [f2d35f18-8bc4-4f2d-95fe-ec00bf377399] Running / Ready:ContainersNotReady (containers with unready status: [gadget]) / ContainersReady:ContainersNotReady (containers with unready status: [gadget])
addons_test.go:848: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 5.008013621s
addons_test.go:851: (dbg) Run:  out/minikube-linux-arm64 addons disable inspektor-gadget -p addons-686490
addons_test.go:851: (dbg) Done: out/minikube-linux-arm64 addons disable inspektor-gadget -p addons-686490: (6.016022308s)
--- PASS: TestAddons/parallel/InspektorGadget (11.03s)

TestAddons/parallel/MetricsServer (5.79s)

=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:409: metrics-server stabilized in 6.15696ms
addons_test.go:411: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:344: "metrics-server-84c5f94fbc-fpnkx" [0ac54330-326c-4b01-a7a3-f324be97b1bd] Running
addons_test.go:411: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 5.003780069s
addons_test.go:417: (dbg) Run:  kubectl --context addons-686490 top pods -n kube-system
addons_test.go:434: (dbg) Run:  out/minikube-linux-arm64 -p addons-686490 addons disable metrics-server --alsologtostderr -v=1
--- PASS: TestAddons/parallel/MetricsServer (5.79s)
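
Editor's note: the functional check is simply that the Metrics API starts answering once the metrics-server pod is healthy. The same queries by hand:

  kubectl --context addons-686490 top pods -n kube-system
  kubectl --context addons-686490 top nodes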

TestAddons/parallel/CSI (45.9s)

=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

=== CONT  TestAddons/parallel/CSI
addons_test.go:567: csi-hostpath-driver pods stabilized in 8.638986ms
addons_test.go:570: (dbg) Run:  kubectl --context addons-686490 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:575: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-686490 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-686490 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-686490 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-686490 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-686490 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-686490 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-686490 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-686490 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:580: (dbg) Run:  kubectl --context addons-686490 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:585: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:344: "task-pv-pod" [ea8fdff1-a271-4804-b357-7cb295e04b6c] Pending
helpers_test.go:344: "task-pv-pod" [ea8fdff1-a271-4804-b357-7cb295e04b6c] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod" [ea8fdff1-a271-4804-b357-7cb295e04b6c] Running
addons_test.go:585: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 9.003873474s
addons_test.go:590: (dbg) Run:  kubectl --context addons-686490 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:595: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:419: (dbg) Run:  kubectl --context addons-686490 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Run:  kubectl --context addons-686490 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:600: (dbg) Run:  kubectl --context addons-686490 delete pod task-pv-pod
addons_test.go:606: (dbg) Run:  kubectl --context addons-686490 delete pvc hpvc
addons_test.go:612: (dbg) Run:  kubectl --context addons-686490 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:617: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-686490 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-686490 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-686490 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-686490 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-686490 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-686490 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-686490 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-686490 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-686490 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-686490 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-686490 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:622: (dbg) Run:  kubectl --context addons-686490 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:627: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:344: "task-pv-pod-restore" [1eda7d83-8ccc-4d03-8fe2-9254024facb0] Pending
helpers_test.go:344: "task-pv-pod-restore" [1eda7d83-8ccc-4d03-8fe2-9254024facb0] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod-restore" [1eda7d83-8ccc-4d03-8fe2-9254024facb0] Running
addons_test.go:627: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 8.003455966s
addons_test.go:632: (dbg) Run:  kubectl --context addons-686490 delete pod task-pv-pod-restore
addons_test.go:636: (dbg) Run:  kubectl --context addons-686490 delete pvc hpvc-restore
addons_test.go:640: (dbg) Run:  kubectl --context addons-686490 delete volumesnapshot new-snapshot-demo
addons_test.go:644: (dbg) Run:  out/minikube-linux-arm64 -p addons-686490 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:644: (dbg) Done: out/minikube-linux-arm64 -p addons-686490 addons disable csi-hostpath-driver --alsologtostderr -v=1: (7.053490756s)
addons_test.go:648: (dbg) Run:  out/minikube-linux-arm64 -p addons-686490 addons disable volumesnapshots --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CSI (45.90s)
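
Editor's note: the CSI flow above follows the canonical snapshot/restore sequence, and the order matters: the PVC must bind, the writer pod must run, the VolumeSnapshot must report readyToUse, and only then can a new PVC restore from it. Its kubectl skeleton, condensed (the testdata/ paths are relative to minikube's integration-test tree):

  kubectl --context addons-686490 create -f testdata/csi-hostpath-driver/pvc.yaml
  kubectl --context addons-686490 create -f testdata/csi-hostpath-driver/pv-pod.yaml
  kubectl --context addons-686490 create -f testdata/csi-hostpath-driver/snapshot.yaml
  kubectl --context addons-686490 get volumesnapshot new-snapshot-demo \
    -o jsonpath='{.status.readyToUse}'
  kubectl --context addons-686490 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
  kubectl --context addons-686490 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml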

TestAddons/parallel/Headlamp (18.86s)

=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

=== CONT  TestAddons/parallel/Headlamp
addons_test.go:830: (dbg) Run:  out/minikube-linux-arm64 addons enable headlamp -p addons-686490 --alsologtostderr -v=1
addons_test.go:830: (dbg) Done: out/minikube-linux-arm64 addons enable headlamp -p addons-686490 --alsologtostderr -v=1: (1.053484556s)
addons_test.go:835: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:344: "headlamp-57fb76fcdb-tgrbt" [a17c85dc-2980-49e1-b5e3-db09e3beab49] Pending
helpers_test.go:344: "headlamp-57fb76fcdb-tgrbt" [a17c85dc-2980-49e1-b5e3-db09e3beab49] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:344: "headlamp-57fb76fcdb-tgrbt" [a17c85dc-2980-49e1-b5e3-db09e3beab49] Running
addons_test.go:835: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 12.004217441s
addons_test.go:839: (dbg) Run:  out/minikube-linux-arm64 -p addons-686490 addons disable headlamp --alsologtostderr -v=1
addons_test.go:839: (dbg) Done: out/minikube-linux-arm64 -p addons-686490 addons disable headlamp --alsologtostderr -v=1: (5.80448506s)
--- PASS: TestAddons/parallel/Headlamp (18.86s)

TestAddons/parallel/CloudSpanner (5.7s)

=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:867: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:344: "cloud-spanner-emulator-769b77f747-tgszx" [b30a5d14-a8a2-4214-8243-c9ff33020fdd] Running
addons_test.go:867: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 5.004920189s
addons_test.go:870: (dbg) Run:  out/minikube-linux-arm64 addons disable cloud-spanner -p addons-686490
--- PASS: TestAddons/parallel/CloudSpanner (5.70s)

TestAddons/parallel/LocalPath (52.76s)

=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath

=== CONT  TestAddons/parallel/LocalPath
addons_test.go:982: (dbg) Run:  kubectl --context addons-686490 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:988: (dbg) Run:  kubectl --context addons-686490 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:992: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-686490 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-686490 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-686490 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-686490 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-686490 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-686490 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:995: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:344: "test-local-path" [a8bf8f86-7558-41e3-b0db-02a11ae2913c] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "test-local-path" [a8bf8f86-7558-41e3-b0db-02a11ae2913c] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "test-local-path" [a8bf8f86-7558-41e3-b0db-02a11ae2913c] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:995: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 3.003634974s
addons_test.go:1000: (dbg) Run:  kubectl --context addons-686490 get pvc test-pvc -o=json
addons_test.go:1009: (dbg) Run:  out/minikube-linux-arm64 -p addons-686490 ssh "cat /opt/local-path-provisioner/pvc-d2af47d8-9031-413a-a107-73089e5d7e03_default_test-pvc/file1"
addons_test.go:1021: (dbg) Run:  kubectl --context addons-686490 delete pod test-local-path
addons_test.go:1025: (dbg) Run:  kubectl --context addons-686490 delete pvc test-pvc
addons_test.go:1029: (dbg) Run:  out/minikube-linux-arm64 -p addons-686490 addons disable storage-provisioner-rancher --alsologtostderr -v=1
addons_test.go:1029: (dbg) Done: out/minikube-linux-arm64 -p addons-686490 addons disable storage-provisioner-rancher --alsologtostderr -v=1: (43.493758438s)
--- PASS: TestAddons/parallel/LocalPath (52.76s)
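
Editor's note: the rancher local-path provisioner writes volume data onto the node filesystem under /opt/local-path-provisioner/<pv-name>_<namespace>_<pvc-name>/, which is exactly what the ssh "cat ..." step reads back. A sketch of the verification half:

  kubectl --context addons-686490 apply -f testdata/storage-provisioner-rancher/pvc.yaml
  kubectl --context addons-686490 apply -f testdata/storage-provisioner-rancher/pod.yaml
  # Inspect the backing directory on the node; the PV name varies per run.
  out/minikube-linux-arm64 -p addons-686490 ssh "ls /opt/local-path-provisioner"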

TestAddons/parallel/NvidiaDevicePlugin (5.7s)

=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin

=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:1061: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:344: "nvidia-device-plugin-daemonset-w56rc" [ba99ac53-6596-49f5-a42f-38380f636f28] Running
addons_test.go:1061: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 5.009678943s
addons_test.go:1064: (dbg) Run:  out/minikube-linux-arm64 addons disable nvidia-device-plugin -p addons-686490
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (5.70s)

TestAddons/parallel/Yakd (10.87s)

=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd

=== CONT  TestAddons/parallel/Yakd
addons_test.go:1072: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:344: "yakd-dashboard-67d98fc6b-dms56" [bc3a9cc9-46db-4468-a724-4cb4cc788660] Running
addons_test.go:1072: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 5.005578244s
addons_test.go:1076: (dbg) Run:  out/minikube-linux-arm64 -p addons-686490 addons disable yakd --alsologtostderr -v=1
addons_test.go:1076: (dbg) Done: out/minikube-linux-arm64 -p addons-686490 addons disable yakd --alsologtostderr -v=1: (5.864781732s)
--- PASS: TestAddons/parallel/Yakd (10.87s)

TestAddons/StoppedEnableDisable (12.34s)

=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:174: (dbg) Run:  out/minikube-linux-arm64 stop -p addons-686490
addons_test.go:174: (dbg) Done: out/minikube-linux-arm64 stop -p addons-686490: (12.05882729s)
addons_test.go:178: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p addons-686490
addons_test.go:182: (dbg) Run:  out/minikube-linux-arm64 addons disable dashboard -p addons-686490
addons_test.go:187: (dbg) Run:  out/minikube-linux-arm64 addons disable gvisor -p addons-686490
--- PASS: TestAddons/StoppedEnableDisable (12.34s)
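
Editor's note: the point of this test is that addon enable/disable must work against a stopped cluster; the addon state is persisted in the profile config rather than applied live. By hand:

  out/minikube-linux-arm64 stop -p addons-686490
  out/minikube-linux-arm64 addons enable dashboard -p addons-686490
  out/minikube-linux-arm64 addons disable dashboard -p addons-686490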

TestCertOptions (36.14s)

=== RUN   TestCertOptions
=== PAUSE TestCertOptions

=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-arm64 start -p cert-options-200132 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=containerd
cert_options_test.go:49: (dbg) Done: out/minikube-linux-arm64 start -p cert-options-200132 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=containerd: (33.493742463s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-arm64 -p cert-options-200132 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-200132 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-arm64 ssh -p cert-options-200132 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:175: Cleaning up "cert-options-200132" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p cert-options-200132
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p cert-options-200132: (1.96864168s)
--- PASS: TestCertOptions (36.14s)
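
Editor's note: to confirm by eye what the openssl step above verifies, grep the SANs out of the serving certificate and check that the kubeconfig picked up the custom port. A sketch (the jsonpath assumes this profile is the first cluster entry in the merged kubeconfig):

  out/minikube-linux-arm64 -p cert-options-200132 ssh \
    "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt" \
    | grep -A2 'Subject Alternative Name'
  # Expect 192.168.15.15 and www.google.com among the SANs, and :8555 here:
  kubectl --context cert-options-200132 config view -o jsonpath='{.clusters[0].cluster.server}'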

TestCertExpiration (224.64s)

=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-arm64 start -p cert-expiration-057266 --memory=2048 --cert-expiration=3m --driver=docker  --container-runtime=containerd
E0915 07:22:36.035579 3198652 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19644-3193270/.minikube/profiles/addons-686490/client.crt: no such file or directory" logger="UnhandledError"
cert_options_test.go:123: (dbg) Done: out/minikube-linux-arm64 start -p cert-expiration-057266 --memory=2048 --cert-expiration=3m --driver=docker  --container-runtime=containerd: (34.82011266s)
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-arm64 start -p cert-expiration-057266 --memory=2048 --cert-expiration=8760h --driver=docker  --container-runtime=containerd
cert_options_test.go:131: (dbg) Done: out/minikube-linux-arm64 start -p cert-expiration-057266 --memory=2048 --cert-expiration=8760h --driver=docker  --container-runtime=containerd: (7.543409683s)
helpers_test.go:175: Cleaning up "cert-expiration-057266" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p cert-expiration-057266
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p cert-expiration-057266: (2.271491029s)
--- PASS: TestCertExpiration (224.64s)
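
Editor's note: the two-phase shape of this test is the interesting part: start with --cert-expiration=3m, wait out the window (hence most of the 224s runtime), then a plain restart with a new value regenerates the certificates instead of failing. Condensed:

  out/minikube-linux-arm64 start -p cert-expiration-057266 --memory=2048 \
    --cert-expiration=3m --driver=docker --container-runtime=containerd
  # ...after the certificates have expired, the harness waits roughly 3m...
  out/minikube-linux-arm64 start -p cert-expiration-057266 --memory=2048 \
    --cert-expiration=8760h --driver=docker --container-runtime=containerd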

TestForceSystemdFlag (40.95s)

=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-arm64 start -p force-systemd-flag-909328 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd
docker_test.go:91: (dbg) Done: out/minikube-linux-arm64 start -p force-systemd-flag-909328 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd: (38.263957311s)
docker_test.go:121: (dbg) Run:  out/minikube-linux-arm64 -p force-systemd-flag-909328 ssh "cat /etc/containerd/config.toml"
helpers_test.go:175: Cleaning up "force-systemd-flag-909328" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p force-systemd-flag-909328
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p force-systemd-flag-909328: (2.128410448s)
--- PASS: TestForceSystemdFlag (40.95s)
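TestForceSystemdFlag and TestForceSystemdEnv below reduce to the same check: start with systemd forced as the cgroup manager and confirm the rendered containerd config agrees. A rough equivalent (profile name hypothetical):

minikube start -p systemd-demo --memory=2048 --force-systemd \
  --driver=docker --container-runtime=containerd

# The generated containerd config should enable the systemd cgroup driver.
minikube -p systemd-demo ssh "cat /etc/containerd/config.toml" | grep SystemdCgroup
# expected: SystemdCgroup = true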

                                                
                                    
TestForceSystemdEnv (43.6s)

=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-arm64 start -p force-systemd-env-333957 --memory=2048 --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd
docker_test.go:155: (dbg) Done: out/minikube-linux-arm64 start -p force-systemd-env-333957 --memory=2048 --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd: (40.619547105s)
docker_test.go:121: (dbg) Run:  out/minikube-linux-arm64 -p force-systemd-env-333957 ssh "cat /etc/containerd/config.toml"
helpers_test.go:175: Cleaning up "force-systemd-env-333957" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p force-systemd-env-333957
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p force-systemd-env-333957: (2.571688472s)
--- PASS: TestForceSystemdEnv (43.60s)

TestDockerEnvContainerd (45.9s)

=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with containerd true linux arm64
docker_test.go:181: (dbg) Run:  out/minikube-linux-arm64 start -p dockerenv-056722 --driver=docker  --container-runtime=containerd
docker_test.go:181: (dbg) Done: out/minikube-linux-arm64 start -p dockerenv-056722 --driver=docker  --container-runtime=containerd: (30.153371758s)
docker_test.go:189: (dbg) Run:  /bin/bash -c "out/minikube-linux-arm64 docker-env --ssh-host --ssh-add -p dockerenv-056722"
docker_test.go:220: (dbg) Run:  /bin/bash -c "SSH_AUTH_SOCK="/tmp/ssh-swIkpQfpX4Wp/agent.3218703" SSH_AGENT_PID="3218704" DOCKER_HOST=ssh://docker@127.0.0.1:35882 docker version"
docker_test.go:243: (dbg) Run:  /bin/bash -c "SSH_AUTH_SOCK="/tmp/ssh-swIkpQfpX4Wp/agent.3218703" SSH_AGENT_PID="3218704" DOCKER_HOST=ssh://docker@127.0.0.1:35882 DOCKER_BUILDKIT=0 docker build -t local/minikube-dockerenv-containerd-test:latest testdata/docker-env"
docker_test.go:243: (dbg) Done: /bin/bash -c "SSH_AUTH_SOCK="/tmp/ssh-swIkpQfpX4Wp/agent.3218703" SSH_AGENT_PID="3218704" DOCKER_HOST=ssh://docker@127.0.0.1:35882 DOCKER_BUILDKIT=0 docker build -t local/minikube-dockerenv-containerd-test:latest testdata/docker-env": (1.271268725s)
docker_test.go:250: (dbg) Run:  /bin/bash -c "SSH_AUTH_SOCK="/tmp/ssh-swIkpQfpX4Wp/agent.3218703" SSH_AGENT_PID="3218704" DOCKER_HOST=ssh://docker@127.0.0.1:35882 docker image ls"
helpers_test.go:175: Cleaning up "dockerenv-056722" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p dockerenv-056722
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p dockerenv-056722: (2.010467334s)
--- PASS: TestDockerEnvContainerd (45.90s)
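The docker-env flow above is scriptable: with --ssh-host --ssh-add the command prints export lines (DOCKER_HOST=ssh://..., SSH_AUTH_SOCK, SSH_AGENT_PID), so eval-ing its output points the host docker CLI at the engine inside the node. A sketch (profile name, image tag, and build context are placeholders):

minikube start -p dockerenv-demo --driver=docker --container-runtime=containerd
eval "$(minikube docker-env --ssh-host --ssh-add -p dockerenv-demo)"

docker version                                   # now served by the node's engine over SSH
DOCKER_BUILDKIT=0 docker build -t demo/env-test:latest ./build-context
docker image ls | grep demo/env-test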

                                                
                                    
TestErrorSpam/setup (30.96s)

=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-arm64 start -p nospam-221906 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-221906 --driver=docker  --container-runtime=containerd
error_spam_test.go:81: (dbg) Done: out/minikube-linux-arm64 start -p nospam-221906 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-221906 --driver=docker  --container-runtime=containerd: (30.95694015s)
--- PASS: TestErrorSpam/setup (30.96s)

TestErrorSpam/start (0.79s)

=== RUN   TestErrorSpam/start
error_spam_test.go:216: Cleaning up 1 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-221906 --log_dir /tmp/nospam-221906 start --dry-run
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-221906 --log_dir /tmp/nospam-221906 start --dry-run
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-221906 --log_dir /tmp/nospam-221906 start --dry-run
--- PASS: TestErrorSpam/start (0.79s)

TestErrorSpam/status (1.04s)

=== RUN   TestErrorSpam/status
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-221906 --log_dir /tmp/nospam-221906 status
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-221906 --log_dir /tmp/nospam-221906 status
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-221906 --log_dir /tmp/nospam-221906 status
--- PASS: TestErrorSpam/status (1.04s)

TestErrorSpam/pause (1.82s)

=== RUN   TestErrorSpam/pause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-221906 --log_dir /tmp/nospam-221906 pause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-221906 --log_dir /tmp/nospam-221906 pause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-221906 --log_dir /tmp/nospam-221906 pause
--- PASS: TestErrorSpam/pause (1.82s)

TestErrorSpam/unpause (1.87s)

=== RUN   TestErrorSpam/unpause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-221906 --log_dir /tmp/nospam-221906 unpause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-221906 --log_dir /tmp/nospam-221906 unpause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-221906 --log_dir /tmp/nospam-221906 unpause
--- PASS: TestErrorSpam/unpause (1.87s)

TestErrorSpam/stop (2.15s)

=== RUN   TestErrorSpam/stop
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-221906 --log_dir /tmp/nospam-221906 stop
error_spam_test.go:159: (dbg) Done: out/minikube-linux-arm64 -p nospam-221906 --log_dir /tmp/nospam-221906 stop: (1.928808898s)
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-221906 --log_dir /tmp/nospam-221906 stop
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-221906 --log_dir /tmp/nospam-221906 stop
--- PASS: TestErrorSpam/stop (2.15s)

TestFunctional/serial/CopySyncFile (0s)

=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1855: local sync path: /home/jenkins/minikube-integration/19644-3193270/.minikube/files/etc/test/nested/copy/3198652/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

TestFunctional/serial/StartWithProxy (51.74s)

=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2234: (dbg) Run:  out/minikube-linux-arm64 start -p functional-840758 --memory=4000 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=containerd
functional_test.go:2234: (dbg) Done: out/minikube-linux-arm64 start -p functional-840758 --memory=4000 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=containerd: (51.734706872s)
--- PASS: TestFunctional/serial/StartWithProxy (51.74s)

TestFunctional/serial/AuditLog (0s)

=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

TestFunctional/serial/SoftStart (6s)

=== RUN   TestFunctional/serial/SoftStart
functional_test.go:659: (dbg) Run:  out/minikube-linux-arm64 start -p functional-840758 --alsologtostderr -v=8
functional_test.go:659: (dbg) Done: out/minikube-linux-arm64 start -p functional-840758 --alsologtostderr -v=8: (5.993392008s)
functional_test.go:663: soft start took 6.00325214s for "functional-840758" cluster.
--- PASS: TestFunctional/serial/SoftStart (6.00s)

TestFunctional/serial/KubeContext (0.06s)

=== RUN   TestFunctional/serial/KubeContext
functional_test.go:681: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.06s)

TestFunctional/serial/KubectlGetPods (0.09s)

=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:696: (dbg) Run:  kubectl --context functional-840758 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.09s)

TestFunctional/serial/CacheCmd/cache/add_remote (4s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1049: (dbg) Run:  out/minikube-linux-arm64 -p functional-840758 cache add registry.k8s.io/pause:3.1
functional_test.go:1049: (dbg) Done: out/minikube-linux-arm64 -p functional-840758 cache add registry.k8s.io/pause:3.1: (1.399010905s)
functional_test.go:1049: (dbg) Run:  out/minikube-linux-arm64 -p functional-840758 cache add registry.k8s.io/pause:3.3
functional_test.go:1049: (dbg) Done: out/minikube-linux-arm64 -p functional-840758 cache add registry.k8s.io/pause:3.3: (1.35712615s)
functional_test.go:1049: (dbg) Run:  out/minikube-linux-arm64 -p functional-840758 cache add registry.k8s.io/pause:latest
functional_test.go:1049: (dbg) Done: out/minikube-linux-arm64 -p functional-840758 cache add registry.k8s.io/pause:latest: (1.246229977s)
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (4.00s)
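The cache subcommands used here pull an image onto the host-side cache and load it into the node's container runtime; the verify_cache_inside_node step below checks the second half with crictl. In short, using this run's profile:

minikube -p functional-840758 cache add registry.k8s.io/pause:3.1   # pull + load into the node
minikube cache list                                                 # host-side cache contents
minikube -p functional-840758 ssh sudo crictl images                # image visible in the node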

                                                
                                    
TestFunctional/serial/CacheCmd/cache/add_local (1.33s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1077: (dbg) Run:  docker build -t minikube-local-cache-test:functional-840758 /tmp/TestFunctionalserialCacheCmdcacheadd_local367276532/001
functional_test.go:1089: (dbg) Run:  out/minikube-linux-arm64 -p functional-840758 cache add minikube-local-cache-test:functional-840758
functional_test.go:1094: (dbg) Run:  out/minikube-linux-arm64 -p functional-840758 cache delete minikube-local-cache-test:functional-840758
functional_test.go:1083: (dbg) Run:  docker rmi minikube-local-cache-test:functional-840758
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (1.33s)

TestFunctional/serial/CacheCmd/cache/CacheDelete (0.05s)

=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1102: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.05s)

TestFunctional/serial/CacheCmd/cache/list (0.05s)

=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1110: (dbg) Run:  out/minikube-linux-arm64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.05s)

TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.32s)

=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1124: (dbg) Run:  out/minikube-linux-arm64 -p functional-840758 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.32s)

TestFunctional/serial/CacheCmd/cache/cache_reload (2.2s)

=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1147: (dbg) Run:  out/minikube-linux-arm64 -p functional-840758 ssh sudo crictl rmi registry.k8s.io/pause:latest
functional_test.go:1153: (dbg) Run:  out/minikube-linux-arm64 -p functional-840758 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1153: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-840758 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (317.975825ms)
-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:1158: (dbg) Run:  out/minikube-linux-arm64 -p functional-840758 cache reload
functional_test.go:1158: (dbg) Done: out/minikube-linux-arm64 -p functional-840758 cache reload: (1.253865375s)
functional_test.go:1163: (dbg) Run:  out/minikube-linux-arm64 -p functional-840758 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (2.20s)
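Spelled out, the reload check is: remove the image inside the node, confirm crictl no longer finds it, then let cache reload push everything in the host cache back in:

minikube -p functional-840758 ssh sudo crictl rmi registry.k8s.io/pause:latest
minikube -p functional-840758 ssh sudo crictl inspecti registry.k8s.io/pause:latest
# exit status 1: no such image "registry.k8s.io/pause:latest" present
minikube -p functional-840758 cache reload
minikube -p functional-840758 ssh sudo crictl inspecti registry.k8s.io/pause:latest
# succeeds again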

                                                
                                    
TestFunctional/serial/CacheCmd/cache/delete (0.13s)

=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1172: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1172: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.13s)

TestFunctional/serial/MinikubeKubectlCmd (0.15s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:716: (dbg) Run:  out/minikube-linux-arm64 -p functional-840758 kubectl -- --context functional-840758 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.15s)

TestFunctional/serial/MinikubeKubectlCmdDirectly (0.14s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:741: (dbg) Run:  out/kubectl --context functional-840758 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.14s)

TestFunctional/serial/ExtraConfig (43.25s)

=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:757: (dbg) Run:  out/minikube-linux-arm64 start -p functional-840758 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
functional_test.go:757: (dbg) Done: out/minikube-linux-arm64 start -p functional-840758 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (43.25096914s)
functional_test.go:761: restart took 43.25106198s for "functional-840758" cluster.
--- PASS: TestFunctional/serial/ExtraConfig (43.25s)
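--extra-config forwards component flags through to kubeadm, so the restart above re-provisions the same cluster with an extra admission plugin enabled. One way to confirm it took effect (the pgrep probe is illustrative, not part of the test):

minikube start -p functional-840758 \
  --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all

minikube -p functional-840758 ssh \
  "pgrep -af kube-apiserver | grep -o 'enable-admission-plugins=[^ ]*'"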

                                                
                                    
TestFunctional/serial/ComponentHealth (0.12s)

=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:810: (dbg) Run:  kubectl --context functional-840758 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:825: etcd phase: Running
functional_test.go:835: etcd status: Ready
functional_test.go:825: kube-apiserver phase: Running
functional_test.go:835: kube-apiserver status: Ready
functional_test.go:825: kube-controller-manager phase: Running
functional_test.go:835: kube-controller-manager status: Ready
functional_test.go:825: kube-scheduler phase: Running
functional_test.go:835: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.12s)

TestFunctional/serial/LogsCmd (1.7s)

=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1236: (dbg) Run:  out/minikube-linux-arm64 -p functional-840758 logs
functional_test.go:1236: (dbg) Done: out/minikube-linux-arm64 -p functional-840758 logs: (1.702868812s)
--- PASS: TestFunctional/serial/LogsCmd (1.70s)

TestFunctional/serial/LogsFileCmd (1.69s)

=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1250: (dbg) Run:  out/minikube-linux-arm64 -p functional-840758 logs --file /tmp/TestFunctionalserialLogsFileCmd1911028521/001/logs.txt
functional_test.go:1250: (dbg) Done: out/minikube-linux-arm64 -p functional-840758 logs --file /tmp/TestFunctionalserialLogsFileCmd1911028521/001/logs.txt: (1.688748437s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.69s)

TestFunctional/serial/InvalidService (4.49s)

=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2321: (dbg) Run:  kubectl --context functional-840758 apply -f testdata/invalidsvc.yaml
functional_test.go:2335: (dbg) Run:  out/minikube-linux-arm64 service invalid-svc -p functional-840758
functional_test.go:2335: (dbg) Non-zero exit: out/minikube-linux-arm64 service invalid-svc -p functional-840758: exit status 115 (623.259662ms)
-- stdout --
	|-----------|-------------|-------------|---------------------------|
	| NAMESPACE |    NAME     | TARGET PORT |            URL            |
	|-----------|-------------|-------------|---------------------------|
	| default   | invalid-svc |          80 | http://192.168.49.2:30401 |
	|-----------|-------------|-------------|---------------------------|
	
	
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
functional_test.go:2327: (dbg) Run:  kubectl --context functional-840758 delete -f testdata/invalidsvc.yaml
--- PASS: TestFunctional/serial/InvalidService (4.49s)
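The failure mode is easy to reproduce without the bundled testdata: any NodePort service whose selector matches no running pod makes minikube service exit with SVC_UNREACHABLE (status 115). A minimal stand-in for testdata/invalidsvc.yaml (names hypothetical):

cat <<'EOF' | kubectl --context functional-840758 apply -f -
apiVersion: v1
kind: Service
metadata:
  name: invalid-svc
spec:
  type: NodePort
  selector:
    app: does-not-exist    # no pod carries this label
  ports:
  - port: 80
EOF

minikube -p functional-840758 service invalid-svc    # exits 115, SVC_UNREACHABLE
kubectl --context functional-840758 delete service invalid-svc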

                                                
                                    
TestFunctional/parallel/ConfigCmd (0.45s)

=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1199: (dbg) Run:  out/minikube-linux-arm64 -p functional-840758 config unset cpus
functional_test.go:1199: (dbg) Run:  out/minikube-linux-arm64 -p functional-840758 config get cpus
functional_test.go:1199: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-840758 config get cpus: exit status 14 (72.777905ms)
** stderr **
	Error: specified key could not be found in config

** /stderr **
functional_test.go:1199: (dbg) Run:  out/minikube-linux-arm64 -p functional-840758 config set cpus 2
functional_test.go:1199: (dbg) Run:  out/minikube-linux-arm64 -p functional-840758 config get cpus
functional_test.go:1199: (dbg) Run:  out/minikube-linux-arm64 -p functional-840758 config unset cpus
functional_test.go:1199: (dbg) Run:  out/minikube-linux-arm64 -p functional-840758 config get cpus
functional_test.go:1199: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-840758 config get cpus: exit status 14 (78.491435ms)
** stderr **
	Error: specified key could not be found in config

** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.45s)

TestFunctional/parallel/DashboardCmd (9.07s)

=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:905: (dbg) daemon: [out/minikube-linux-arm64 dashboard --url --port 36195 -p functional-840758 --alsologtostderr -v=1]
functional_test.go:910: (dbg) stopping [out/minikube-linux-arm64 dashboard --url --port 36195 -p functional-840758 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to kill pid 3235744: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (9.07s)

TestFunctional/parallel/DryRun (0.48s)

=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

=== CONT  TestFunctional/parallel/DryRun
functional_test.go:974: (dbg) Run:  out/minikube-linux-arm64 start -p functional-840758 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=containerd
functional_test.go:974: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p functional-840758 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=containerd: exit status 23 (182.29854ms)
-- stdout --
	* [functional-840758] minikube v1.34.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=19644
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19644-3193270/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19644-3193270/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	
	
-- /stdout --
** stderr ** 
	I0915 06:49:25.601656 3235337 out.go:345] Setting OutFile to fd 1 ...
	I0915 06:49:25.601850 3235337 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0915 06:49:25.601881 3235337 out.go:358] Setting ErrFile to fd 2...
	I0915 06:49:25.601902 3235337 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0915 06:49:25.602209 3235337 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19644-3193270/.minikube/bin
	I0915 06:49:25.602632 3235337 out.go:352] Setting JSON to false
	I0915 06:49:25.604279 3235337 start.go:129] hostinfo: {"hostname":"ip-172-31-29-130","uptime":52317,"bootTime":1726330649,"procs":215,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1069-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"36adf542-ef4f-4e2d-a0c8-6868d1383ff9"}
	I0915 06:49:25.604414 3235337 start.go:139] virtualization:  
	I0915 06:49:25.608001 3235337 out.go:177] * [functional-840758] minikube v1.34.0 on Ubuntu 20.04 (arm64)
	I0915 06:49:25.611791 3235337 out.go:177]   - MINIKUBE_LOCATION=19644
	I0915 06:49:25.611943 3235337 notify.go:220] Checking for updates...
	I0915 06:49:25.620566 3235337 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0915 06:49:25.623475 3235337 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19644-3193270/kubeconfig
	I0915 06:49:25.626015 3235337 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19644-3193270/.minikube
	I0915 06:49:25.628862 3235337 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0915 06:49:25.631510 3235337 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0915 06:49:25.634758 3235337 config.go:182] Loaded profile config "functional-840758": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
	I0915 06:49:25.635344 3235337 driver.go:394] Setting default libvirt URI to qemu:///system
	I0915 06:49:25.662885 3235337 docker.go:123] docker version: linux-27.2.1:Docker Engine - Community
	I0915 06:49:25.663053 3235337 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0915 06:49:25.716311 3235337 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:32 OomKillDisable:true NGoroutines:52 SystemTime:2024-09-15 06:49:25.706505591 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1069-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:27.2.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.2]] Warnings:<nil>}}
	I0915 06:49:25.716427 3235337 docker.go:318] overlay module found
	I0915 06:49:25.719300 3235337 out.go:177] * Using the docker driver based on existing profile
	I0915 06:49:25.721770 3235337 start.go:297] selected driver: docker
	I0915 06:49:25.721790 3235337 start.go:901] validating driver "docker" against &{Name:functional-840758 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:functional-840758 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.31.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0915 06:49:25.721897 3235337 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0915 06:49:25.725369 3235337 out.go:201] 
	W0915 06:49:25.727991 3235337 out.go:270] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I0915 06:49:25.730854 3235337 out.go:201] 

** /stderr **
functional_test.go:991: (dbg) Run:  out/minikube-linux-arm64 start -p functional-840758 --dry-run --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
--- PASS: TestFunctional/parallel/DryRun (0.48s)
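--dry-run walks the full validation path without creating or mutating anything, which is why the undersized request above fails fast: 250MB is checked against the 1800MB floor and start exits with code 23 (RSRC_INSUFFICIENT_REQ_MEMORY) before touching the existing profile. For example:

minikube start -p functional-840758 --dry-run --memory 250MB \
  --driver=docker --container-runtime=containerd
echo $?    # 23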

                                                
                                    
TestFunctional/parallel/InternationalLanguage (0.2s)

=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1020: (dbg) Run:  out/minikube-linux-arm64 start -p functional-840758 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=containerd
functional_test.go:1020: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p functional-840758 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=containerd: exit status 23 (204.268503ms)
-- stdout --
	* [functional-840758] minikube v1.34.0 sur Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=19644
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19644-3193270/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19644-3193270/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote docker basé sur le profil existant
	
	
-- /stdout --
** stderr ** 
	I0915 06:49:26.097825 3235456 out.go:345] Setting OutFile to fd 1 ...
	I0915 06:49:26.098120 3235456 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0915 06:49:26.098152 3235456 out.go:358] Setting ErrFile to fd 2...
	I0915 06:49:26.098173 3235456 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0915 06:49:26.099762 3235456 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19644-3193270/.minikube/bin
	I0915 06:49:26.100324 3235456 out.go:352] Setting JSON to false
	I0915 06:49:26.101704 3235456 start.go:129] hostinfo: {"hostname":"ip-172-31-29-130","uptime":52317,"bootTime":1726330649,"procs":215,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1069-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"36adf542-ef4f-4e2d-a0c8-6868d1383ff9"}
	I0915 06:49:26.101822 3235456 start.go:139] virtualization:  
	I0915 06:49:26.105749 3235456 out.go:177] * [functional-840758] minikube v1.34.0 sur Ubuntu 20.04 (arm64)
	I0915 06:49:26.108678 3235456 out.go:177]   - MINIKUBE_LOCATION=19644
	I0915 06:49:26.108687 3235456 notify.go:220] Checking for updates...
	I0915 06:49:26.113849 3235456 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0915 06:49:26.116444 3235456 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19644-3193270/kubeconfig
	I0915 06:49:26.119214 3235456 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19644-3193270/.minikube
	I0915 06:49:26.121812 3235456 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0915 06:49:26.124497 3235456 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0915 06:49:26.127862 3235456 config.go:182] Loaded profile config "functional-840758": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
	I0915 06:49:26.128713 3235456 driver.go:394] Setting default libvirt URI to qemu:///system
	I0915 06:49:26.152466 3235456 docker.go:123] docker version: linux-27.2.1:Docker Engine - Community
	I0915 06:49:26.152592 3235456 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0915 06:49:26.216215 3235456 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:32 OomKillDisable:true NGoroutines:52 SystemTime:2024-09-15 06:49:26.206318519 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1069-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:27.2.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.2]] Warnings:<nil>}}
	I0915 06:49:26.216337 3235456 docker.go:318] overlay module found
	I0915 06:49:26.219419 3235456 out.go:177] * Utilisation du pilote docker basé sur le profil existant
	I0915 06:49:26.222120 3235456 start.go:297] selected driver: docker
	I0915 06:49:26.222141 3235456 start.go:901] validating driver "docker" against &{Name:functional-840758 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726358845-19644@sha256:4c67a32a16c2d4f824f00267c172fd225757ca75441e363d925dc9583137f0b0 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:functional-840758 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.31.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0915 06:49:26.222267 3235456 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0915 06:49:26.225931 3235456 out.go:201] 
	W0915 06:49:26.228637 3235456 out.go:270] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I0915 06:49:26.231432 3235456 out.go:201] 

** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.20s)

TestFunctional/parallel/StatusCmd (1s)

=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:854: (dbg) Run:  out/minikube-linux-arm64 -p functional-840758 status
functional_test.go:860: (dbg) Run:  out/minikube-linux-arm64 -p functional-840758 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:872: (dbg) Run:  out/minikube-linux-arm64 -p functional-840758 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (1.00s)

TestFunctional/parallel/ServiceCmdConnect (8.62s)

=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1627: (dbg) Run:  kubectl --context functional-840758 create deployment hello-node-connect --image=registry.k8s.io/echoserver-arm:1.8
functional_test.go:1635: (dbg) Run:  kubectl --context functional-840758 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1640: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:344: "hello-node-connect-65d86f57f4-nw5bv" [25211265-305f-4821-bf63-106f2f15c455] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver-arm]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver-arm])
helpers_test.go:344: "hello-node-connect-65d86f57f4-nw5bv" [25211265-305f-4821-bf63-106f2f15c455] Running
functional_test.go:1640: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 8.007102359s
functional_test.go:1649: (dbg) Run:  out/minikube-linux-arm64 -p functional-840758 service hello-node-connect --url
functional_test.go:1655: found endpoint for hello-node-connect: http://192.168.49.2:30582
functional_test.go:1675: http://192.168.49.2:30582: success! body:

Hostname: hello-node-connect-65d86f57f4-nw5bv

Pod Information:
	-no pod information available-

Server values:
	server_version=nginx: 1.13.3 - lua: 10008

Request Information:
	client_address=10.244.0.1
	method=GET
	real path=/
	query=
	request_version=1.1
	request_uri=http://192.168.49.2:8080/

Request Headers:
	accept-encoding=gzip
	host=192.168.49.2:30582
	user-agent=Go-http-client/1.1

Request Body:
	-no body in request-

                                                
                                                
--- PASS: TestFunctional/parallel/ServiceCmdConnect (8.62s)
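The connectivity check is ordinary kubectl plus minikube service: deploy, expose as a NodePort, resolve the node-level URL, and fetch it. Replayed by hand (image as in the test):

kubectl --context functional-840758 create deployment hello-node-connect \
  --image=registry.k8s.io/echoserver-arm:1.8
kubectl --context functional-840758 expose deployment hello-node-connect \
  --type=NodePort --port=8080

URL=$(minikube -p functional-840758 service hello-node-connect --url)
curl -s "$URL"    # echoserver answers with hostname, request headers, etc.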

                                                
                                    
TestFunctional/parallel/AddonsCmd (0.14s)

=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1690: (dbg) Run:  out/minikube-linux-arm64 -p functional-840758 addons list
functional_test.go:1702: (dbg) Run:  out/minikube-linux-arm64 -p functional-840758 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.14s)

TestFunctional/parallel/PersistentVolumeClaim (24.75s)

=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:344: "storage-provisioner" [67f919ea-b739-4996-b905-87f539e5234a] Running
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 6.004090811s
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context functional-840758 get storageclass -o=json
functional_test_pvc_test.go:69: (dbg) Run:  kubectl --context functional-840758 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-840758 get pvc myclaim -o=json
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-840758 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [2c14e47e-d2d1-47ae-a39b-1c743a83c69b] Pending
helpers_test.go:344: "sp-pod" [2c14e47e-d2d1-47ae-a39b-1c743a83c69b] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [2c14e47e-d2d1-47ae-a39b-1c743a83c69b] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 10.00414665s
functional_test_pvc_test.go:100: (dbg) Run:  kubectl --context functional-840758 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-840758 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-840758 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [1d217c5c-3bff-4cb6-9777-6c616469f705] Pending
helpers_test.go:344: "sp-pod" [1d217c5c-3bff-4cb6-9777-6c616469f705] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [1d217c5c-3bff-4cb6-9777-6c616469f705] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 7.004146006s
functional_test_pvc_test.go:114: (dbg) Run:  kubectl --context functional-840758 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (24.75s)
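The persistence claim here is: a file written to the PVC-backed mount survives deleting and recreating the pod. A minimal stand-in for the testdata manifests (size hypothetical; pod.yaml is any pod named sp-pod that mounts the claim at /tmp/mount):

cat <<'EOF' | kubectl --context functional-840758 apply -f -
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: myclaim
spec:
  accessModes: ["ReadWriteOnce"]
  resources:
    requests:
      storage: 500Mi
EOF

kubectl --context functional-840758 apply -f pod.yaml
kubectl --context functional-840758 exec sp-pod -- touch /tmp/mount/foo
kubectl --context functional-840758 delete pod sp-pod
kubectl --context functional-840758 apply -f pod.yaml
kubectl --context functional-840758 exec sp-pod -- ls /tmp/mount    # foo persists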

                                                
                                    
TestFunctional/parallel/SSHCmd (0.55s)

=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1725: (dbg) Run:  out/minikube-linux-arm64 -p functional-840758 ssh "echo hello"
functional_test.go:1742: (dbg) Run:  out/minikube-linux-arm64 -p functional-840758 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.55s)

TestFunctional/parallel/CpCmd (2.09s)

=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p functional-840758 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p functional-840758 ssh -n functional-840758 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p functional-840758 cp functional-840758:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd3446159132/001/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p functional-840758 ssh -n functional-840758 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p functional-840758 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p functional-840758 ssh -n functional-840758 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (2.09s)
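minikube cp copies host-to-node and node-to-host, creating missing target directories along the way (the /tmp/does/not/exist case above). For instance:

minikube -p functional-840758 cp testdata/cp-test.txt /home/docker/cp-test.txt
minikube -p functional-840758 ssh "sudo cat /home/docker/cp-test.txt"
minikube -p functional-840758 cp functional-840758:/home/docker/cp-test.txt /tmp/cp-test.txt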

                                                
                                    
TestFunctional/parallel/FileSync (0.35s)

=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1929: Checking for existence of /etc/test/nested/copy/3198652/hosts within VM
functional_test.go:1931: (dbg) Run:  out/minikube-linux-arm64 -p functional-840758 ssh "sudo cat /etc/test/nested/copy/3198652/hosts"
functional_test.go:1936: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.35s)

TestFunctional/parallel/CertSync (2.16s)

=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1972: Checking for existence of /etc/ssl/certs/3198652.pem within VM
functional_test.go:1973: (dbg) Run:  out/minikube-linux-arm64 -p functional-840758 ssh "sudo cat /etc/ssl/certs/3198652.pem"
functional_test.go:1972: Checking for existence of /usr/share/ca-certificates/3198652.pem within VM
functional_test.go:1973: (dbg) Run:  out/minikube-linux-arm64 -p functional-840758 ssh "sudo cat /usr/share/ca-certificates/3198652.pem"
functional_test.go:1972: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1973: (dbg) Run:  out/minikube-linux-arm64 -p functional-840758 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:1999: Checking for existence of /etc/ssl/certs/31986522.pem within VM
functional_test.go:2000: (dbg) Run:  out/minikube-linux-arm64 -p functional-840758 ssh "sudo cat /etc/ssl/certs/31986522.pem"
functional_test.go:1999: Checking for existence of /usr/share/ca-certificates/31986522.pem within VM
functional_test.go:2000: (dbg) Run:  out/minikube-linux-arm64 -p functional-840758 ssh "sudo cat /usr/share/ca-certificates/31986522.pem"
functional_test.go:1999: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:2000: (dbg) Run:  out/minikube-linux-arm64 -p functional-840758 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (2.16s)
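The hash-named paths above (/etc/ssl/certs/51391683.0 and /etc/ssl/certs/3ec20f2e.0) follow the OpenSSL convention of exposing a CA certificate under its subject hash so TLS clients can locate it, which is why the test checks both the .pem copy and the hash-named copy. A sketch of computing that filename, assuming the openssl CLI is installed and cert.pem is a placeholder path:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// `openssl x509 -hash -noout` prints the subject hash that names
	// the cert under /etc/ssl/certs (e.g. 51391683 -> 51391683.0).
	out, err := exec.Command("openssl", "x509", "-hash", "-noout",
		"-in", "cert.pem").Output()
	if err != nil {
		panic(err)
	}
	fmt.Printf("/etc/ssl/certs/%s.0\n", strings.TrimSpace(string(out)))
}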

TestFunctional/parallel/NodeLabels (0.11s)

=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels
=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:219: (dbg) Run:  kubectl --context functional-840758 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.11s)

TestFunctional/parallel/NonActiveRuntimeDisabled (0.8s)

=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled
=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2027: (dbg) Run:  out/minikube-linux-arm64 -p functional-840758 ssh "sudo systemctl is-active docker"
functional_test.go:2027: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-840758 ssh "sudo systemctl is-active docker": exit status 1 (376.630628ms)

-- stdout --
	inactive

-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

** /stderr **
functional_test.go:2027: (dbg) Run:  out/minikube-linux-arm64 -p functional-840758 ssh "sudo systemctl is-active crio"
functional_test.go:2027: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-840758 ssh "sudo systemctl is-active crio": exit status 1 (420.659439ms)

-- stdout --
	inactive

-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.80s)
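The assertion above leans on `systemctl is-active`, which prints the unit state and exits 0 only when the unit is active; the "status 3" in the stderr blocks is the conventional exit code for an inactive unit. A sketch of reading both the state and the exit code on a systemd host:

package main

import (
	"errors"
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// Exit code 0 means active; non-zero (typically 3) means the unit
	// is inactive or absent, which is what the test expects for the
	// runtimes that containerd replaces.
	out, err := exec.Command("systemctl", "is-active", "docker").Output()
	state := strings.TrimSpace(string(out))
	var ee *exec.ExitError
	if errors.As(err, &ee) {
		fmt.Printf("state=%s exit=%d\n", state, ee.ExitCode())
		return
	}
	if err != nil {
		panic(err) // e.g. systemctl not present
	}
	fmt.Println("state =", state)
}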

TestFunctional/parallel/License (0.22s)

=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License
=== CONT  TestFunctional/parallel/License
functional_test.go:2288: (dbg) Run:  out/minikube-linux-arm64 license
--- PASS: TestFunctional/parallel/License (0.22s)

TestFunctional/parallel/Version/short (0.1s)

=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short
=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2256: (dbg) Run:  out/minikube-linux-arm64 -p functional-840758 version --short
--- PASS: TestFunctional/parallel/Version/short (0.10s)

TestFunctional/parallel/Version/components (1.3s)

=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components
=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2270: (dbg) Run:  out/minikube-linux-arm64 -p functional-840758 version -o=json --components
functional_test.go:2270: (dbg) Done: out/minikube-linux-arm64 -p functional-840758 version -o=json --components: (1.303742046s)
--- PASS: TestFunctional/parallel/Version/components (1.30s)

TestFunctional/parallel/ImageCommands/ImageListShort (0.24s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort
=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:261: (dbg) Run:  out/minikube-linux-arm64 -p functional-840758 image ls --format short --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-arm64 -p functional-840758 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.10
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.31.1
registry.k8s.io/kube-proxy:v1.31.1
registry.k8s.io/kube-controller-manager:v1.31.1
registry.k8s.io/kube-apiserver:v1.31.1
registry.k8s.io/etcd:3.5.15-0
registry.k8s.io/echoserver-arm:1.8
registry.k8s.io/coredns/coredns:v1.11.3
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
docker.io/library/nginx:latest
docker.io/library/nginx:alpine
docker.io/library/minikube-local-cache-test:functional-840758
docker.io/kindest/kindnetd:v20240813-c6f155d6
docker.io/kicbase/echo-server:functional-840758
functional_test.go:269: (dbg) Stderr: out/minikube-linux-arm64 -p functional-840758 image ls --format short --alsologtostderr:
I0915 06:49:28.763832 3236004 out.go:345] Setting OutFile to fd 1 ...
I0915 06:49:28.763994 3236004 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0915 06:49:28.764007 3236004 out.go:358] Setting ErrFile to fd 2...
I0915 06:49:28.764012 3236004 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0915 06:49:28.764298 3236004 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19644-3193270/.minikube/bin
I0915 06:49:28.765029 3236004 config.go:182] Loaded profile config "functional-840758": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
I0915 06:49:28.765154 3236004 config.go:182] Loaded profile config "functional-840758": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
I0915 06:49:28.765679 3236004 cli_runner.go:164] Run: docker container inspect functional-840758 --format={{.State.Status}}
I0915 06:49:28.785522 3236004 ssh_runner.go:195] Run: systemctl --version
I0915 06:49:28.785587 3236004 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-840758
I0915 06:49:28.801890 3236004 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35892 SSHKeyPath:/home/jenkins/minikube-integration/19644-3193270/.minikube/machines/functional-840758/id_rsa Username:docker}
I0915 06:49:28.895762 3236004 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.24s)

TestFunctional/parallel/ImageCommands/ImageListTable (0.28s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable
=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:261: (dbg) Run:  out/minikube-linux-arm64 -p functional-840758 image ls --format table --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-arm64 -p functional-840758 image ls --format table --alsologtostderr:
|---------------------------------------------|--------------------|---------------|--------|
|                    Image                    |        Tag         |   Image ID    |  Size  |
|---------------------------------------------|--------------------|---------------|--------|
| registry.k8s.io/kube-scheduler              | v1.31.1            | sha256:7f8aa3 | 18.5MB |
| docker.io/kindest/kindnetd                  | v20240813-c6f155d6 | sha256:6a23fa | 33.3MB |
| docker.io/library/minikube-local-cache-test | functional-840758  | sha256:847e66 | 990B   |
| registry.k8s.io/coredns/coredns             | v1.11.3            | sha256:2f6c96 | 16.9MB |
| registry.k8s.io/echoserver-arm              | 1.8                | sha256:72565b | 45.3MB |
| docker.io/kicbase/echo-server               | functional-840758  | sha256:ce2d2c | 2.17MB |
| docker.io/library/nginx                     | alpine             | sha256:b887ac | 19.6MB |
| registry.k8s.io/pause                       | 3.1                | sha256:8057e0 | 262kB  |
| registry.k8s.io/pause                       | 3.10               | sha256:afb617 | 268kB  |
| gcr.io/k8s-minikube/busybox                 | 1.28.4-glibc       | sha256:1611cd | 1.94MB |
| localhost/my-image                          | functional-840758  | sha256:daac07 | 831kB  |
| registry.k8s.io/etcd                        | 3.5.15-0           | sha256:27e383 | 66.5MB |
| registry.k8s.io/kube-apiserver              | v1.31.1            | sha256:d3f53a | 25.7MB |
| registry.k8s.io/kube-controller-manager     | v1.31.1            | sha256:279f38 | 23.9MB |
| docker.io/library/nginx                     | latest             | sha256:195245 | 67.7MB |
| gcr.io/k8s-minikube/storage-provisioner     | v5                 | sha256:ba04bb | 8.03MB |
| registry.k8s.io/kube-proxy                  | v1.31.1            | sha256:24a140 | 26.8MB |
| registry.k8s.io/pause                       | 3.3                | sha256:3d1873 | 249kB  |
| registry.k8s.io/pause                       | latest             | sha256:8cb209 | 71.3kB |
|---------------------------------------------|--------------------|---------------|--------|
functional_test.go:269: (dbg) Stderr: out/minikube-linux-arm64 -p functional-840758 image ls --format table --alsologtostderr:
I0915 06:49:33.378449 3236385 out.go:345] Setting OutFile to fd 1 ...
I0915 06:49:33.378658 3236385 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0915 06:49:33.378671 3236385 out.go:358] Setting ErrFile to fd 2...
I0915 06:49:33.378676 3236385 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0915 06:49:33.378931 3236385 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19644-3193270/.minikube/bin
I0915 06:49:33.379640 3236385 config.go:182] Loaded profile config "functional-840758": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
I0915 06:49:33.379834 3236385 config.go:182] Loaded profile config "functional-840758": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
I0915 06:49:33.380497 3236385 cli_runner.go:164] Run: docker container inspect functional-840758 --format={{.State.Status}}
I0915 06:49:33.406781 3236385 ssh_runner.go:195] Run: systemctl --version
I0915 06:49:33.406840 3236385 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-840758
I0915 06:49:33.428520 3236385 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35892 SSHKeyPath:/home/jenkins/minikube-integration/19644-3193270/.minikube/machines/functional-840758/id_rsa Username:docker}
I0915 06:49:33.523535 3236385 ssh_runner.go:195] Run: sudo crictl images --output json
E0915 06:49:33.616245 3198652 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19644-3193270/.minikube/profiles/addons-686490/client.crt: no such file or directory" logger="UnhandledError"
E0915 06:49:34.258262 3198652 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19644-3193270/.minikube/profiles/addons-686490/client.crt: no such file or directory" logger="UnhandledError"
2024/09/15 06:49:35 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.28s)

TestFunctional/parallel/ImageCommands/ImageListJson (0.27s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson
=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:261: (dbg) Run:  out/minikube-linux-arm64 -p functional-840758 image ls --format json --alsologtostderr
E0915 06:49:33.051141 3198652 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19644-3193270/.minikube/profiles/addons-686490/client.crt: no such file or directory" logger="UnhandledError"
E0915 06:49:33.132467 3198652 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19644-3193270/.minikube/profiles/addons-686490/client.crt: no such file or directory" logger="UnhandledError"
E0915 06:49:33.294385 3198652 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19644-3193270/.minikube/profiles/addons-686490/client.crt: no such file or directory" logger="UnhandledError"
functional_test.go:266: (dbg) Stdout: out/minikube-linux-arm64 -p functional-840758 image ls --format json --alsologtostderr:
[{"id":"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8","repoDigests":["registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a"],"repoTags":["registry.k8s.io/pause:3.10"],"size":"267933"},
{"id":"sha256:3d18732f8686cc3c878055d99a05fa80289502fa496b36b6a0fe0f77206a7300","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.3"],"size":"249461"},
{"id":"sha256:6a23fa8fd2b78ab58e42ba273808edc936a9c53d8ac4a919f6337be094843a51","repoDigests":["docker.io/kindest/kindnetd@sha256:e59a687ca28ae274a2fc92f1e2f5f1c739f353178a43a23aafc71adb802ed166"],"repoTags":["docker.io/kindest/kindnetd:v20240813-c6f155d6"],"size":"33309097"},
{"id":"sha256:847e6618f427b2fd1ec6d3399e688e57dc50c21dabbeffc120f3a3bb1b001fe5","repoDigests":[],"repoTags":["docker.io/library/minikube-local-cache-test:functional-840758"],"size":"990"},
{"id":"sha256:b887aca7aed6134b029401507d27ac9c8fbfc5a6cf510d254bdf4ac841cf1552","repoDigests":["docker.io/library/nginx@sha256:a5127daff3d6f4606be3100a252419bfa84fd6ee5cd74d0feaca1a5068f97dcf"],"repoTags":["docker.io/library/nginx:alpine"],"size":"19621732"},
{"id":"sha256:72565bf5bbedfb62e9d21afa2b1221b2c7a5e05b746dae33430bc550d3f87beb","repoDigests":["registry.k8s.io/echoserver-arm@sha256:b33d4cdf6ed097f4e9b77b135d83a596ab73c6268b0342648818eb85f5edfdb5"],"repoTags":["registry.k8s.io/echoserver-arm:1.8"],"size":"45324675"},
{"id":"sha256:24a140c548c075e487e45d0ee73b1aa89f8bfb40c08a57e05975559728822b1d","repoDigests":["registry.k8s.io/kube-proxy@sha256:4ee50b00484d7f39a90fc4cda92251177ef5ad8fdf2f2a0c768f9e634b4c6d44"],"repoTags":["registry.k8s.io/kube-proxy:v1.31.1"],"size":"26756812"},
{"id":"sha256:a422e0e982356f6c1cf0e5bb7b733363caae3992a07c99951fbcc73e58ed656a","repoDigests":["docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c"],"repoTags":[],"size":"18306114"},
{"id":"sha256:195245f0c79279e8b8e012efa02c91dad4cf7d0e44c0f4382fea68cd93088e6c","repoDigests":["docker.io/library/nginx@sha256:04ba374043ccd2fc5c593885c0eacddebabd5ca375f9323666f28dfd5a9710e3"],"repoTags":["docker.io/library/nginx:latest"],"size":"67695038"},
{"id":"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4","repoDigests":["registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e"],"repoTags":["registry.k8s.io/coredns/coredns:v1.11.3"],"size":"16948420"},
{"id":"sha256:8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.1"],"size":"262191"},
{"id":"sha256:8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a","repoDigests":[],"repoTags":["registry.k8s.io/pause:latest"],"size":"71300"},
{"id":"sha256:ce2d2cda2d858fdaea84129deb86d18e5dbf1c548f230b79fdca74cc91729d17","repoDigests":[],"repoTags":["docker.io/kicbase/echo-server:functional-840758"],"size":"2173567"},
{"id":"sha256:1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e"],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"1935750"},
{"id":"sha256:d3f53a98c0a9d9163c4848bcf34b2d2f5e1e3691b79f3d1dd6d0206809e02853","repoDigests":["registry.k8s.io/kube-apiserver@sha256:2409c23dbb5a2b7a81adbb184d3eac43ac653e9b97a7c0ee121b89bb3ef61fdb"],"repoTags":["registry.k8s.io/kube-apiserver:v1.31.1"],"size":"25687130"},
{"id":"sha256:ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6","repoDigests":["gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944"],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"8034419"},
{"id":"sha256:daac07b8e9d994775774fdd5d53cdc1a7bd92656df9087865a27f626bb81b2f4","repoDigests":[],"repoTags":["localhost/my-image:functional-840758"],"size":"830616"},
{"id":"sha256:27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da","repoDigests":["registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a"],"repoTags":["registry.k8s.io/etcd:3.5.15-0"],"size":"66535646"},
{"id":"sha256:279f381cb37365bbbcd133c9531fba9c2beb0f38dbbe6ddfcd0b1b1643d3450e","repoDigests":["registry.k8s.io/kube-controller-manager@sha256:9f9da5b27e03f89599cc40ba89150aebf3b4cff001e6db6d998674b34181e1a1"],"repoTags":["registry.k8s.io/kube-controller-manager:v1.31.1"],"size":"23948670"},
{"id":"sha256:7f8aa378bb47dffcf430f3a601abe39137e88aee0238e23ed8530fdd18dab82d","repoDigests":["registry.k8s.io/kube-scheduler@sha256:969a7e96340f3a927b3d652582edec2d6d82a083871d81ef5064b7edaab430d0"],"repoTags":["registry.k8s.io/kube-scheduler:v1.31.1"],"size":"18507674"}]
functional_test.go:269: (dbg) Stderr: out/minikube-linux-arm64 -p functional-840758 image ls --format json --alsologtostderr:
I0915 06:49:33.098235 3236353 out.go:345] Setting OutFile to fd 1 ...
I0915 06:49:33.098466 3236353 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0915 06:49:33.098493 3236353 out.go:358] Setting ErrFile to fd 2...
I0915 06:49:33.098516 3236353 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0915 06:49:33.098799 3236353 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19644-3193270/.minikube/bin
I0915 06:49:33.099574 3236353 config.go:182] Loaded profile config "functional-840758": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
I0915 06:49:33.099787 3236353 config.go:182] Loaded profile config "functional-840758": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
I0915 06:49:33.100314 3236353 cli_runner.go:164] Run: docker container inspect functional-840758 --format={{.State.Status}}
I0915 06:49:33.125430 3236353 ssh_runner.go:195] Run: systemctl --version
I0915 06:49:33.125479 3236353 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-840758
I0915 06:49:33.148518 3236353 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35892 SSHKeyPath:/home/jenkins/minikube-integration/19644-3193270/.minikube/machines/functional-840758/id_rsa Username:docker}
I0915 06:49:33.243817 3236353 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.27s)
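The stdout above is a flat JSON array, so the listing is easy to consume programmatically. A sketch that decodes it, modelling only the fields visible in this run (the full schema may carry more):

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// Fields as they appear in the `image ls --format json` output above.
type image struct {
	ID          string   `json:"id"`
	RepoDigests []string `json:"repoDigests"`
	RepoTags    []string `json:"repoTags"`
	Size        string   `json:"size"` // size in bytes, encoded as a string
}

func main() {
	out, err := exec.Command("out/minikube-linux-arm64",
		"-p", "functional-840758", "image", "ls", "--format", "json").Output()
	if err != nil {
		panic(err)
	}
	var images []image
	if err := json.Unmarshal(out, &images); err != nil {
		panic(err)
	}
	for _, img := range images {
		fmt.Println(img.ID, img.RepoTags)
	}
}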

TestFunctional/parallel/ImageCommands/ImageListYaml (0.27s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml
=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:261: (dbg) Run:  out/minikube-linux-arm64 -p functional-840758 image ls --format yaml --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-arm64 -p functional-840758 image ls --format yaml --alsologtostderr:
- id: sha256:8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.1
size: "262191"
- id: sha256:3d18732f8686cc3c878055d99a05fa80289502fa496b36b6a0fe0f77206a7300
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.3
size: "249461"
- id: sha256:6a23fa8fd2b78ab58e42ba273808edc936a9c53d8ac4a919f6337be094843a51
repoDigests:
- docker.io/kindest/kindnetd@sha256:e59a687ca28ae274a2fc92f1e2f5f1c739f353178a43a23aafc71adb802ed166
repoTags:
- docker.io/kindest/kindnetd:v20240813-c6f155d6
size: "33309097"
- id: sha256:27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da
repoDigests:
- registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a
repoTags:
- registry.k8s.io/etcd:3.5.15-0
size: "66535646"
- id: sha256:24a140c548c075e487e45d0ee73b1aa89f8bfb40c08a57e05975559728822b1d
repoDigests:
- registry.k8s.io/kube-proxy@sha256:4ee50b00484d7f39a90fc4cda92251177ef5ad8fdf2f2a0c768f9e634b4c6d44
repoTags:
- registry.k8s.io/kube-proxy:v1.31.1
size: "26756812"
- id: sha256:1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c
repoDigests:
- gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "1935750"
- id: sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4
repoDigests:
- registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e
repoTags:
- registry.k8s.io/coredns/coredns:v1.11.3
size: "16948420"
- id: sha256:72565bf5bbedfb62e9d21afa2b1221b2c7a5e05b746dae33430bc550d3f87beb
repoDigests:
- registry.k8s.io/echoserver-arm@sha256:b33d4cdf6ed097f4e9b77b135d83a596ab73c6268b0342648818eb85f5edfdb5
repoTags:
- registry.k8s.io/echoserver-arm:1.8
size: "45324675"
- id: sha256:7f8aa378bb47dffcf430f3a601abe39137e88aee0238e23ed8530fdd18dab82d
repoDigests:
- registry.k8s.io/kube-scheduler@sha256:969a7e96340f3a927b3d652582edec2d6d82a083871d81ef5064b7edaab430d0
repoTags:
- registry.k8s.io/kube-scheduler:v1.31.1
size: "18507674"
- id: sha256:ce2d2cda2d858fdaea84129deb86d18e5dbf1c548f230b79fdca74cc91729d17
repoDigests: []
repoTags:
- docker.io/kicbase/echo-server:functional-840758
size: "2173567"
- id: sha256:847e6618f427b2fd1ec6d3399e688e57dc50c21dabbeffc120f3a3bb1b001fe5
repoDigests: []
repoTags:
- docker.io/library/minikube-local-cache-test:functional-840758
size: "990"
- id: sha256:195245f0c79279e8b8e012efa02c91dad4cf7d0e44c0f4382fea68cd93088e6c
repoDigests:
- docker.io/library/nginx@sha256:04ba374043ccd2fc5c593885c0eacddebabd5ca375f9323666f28dfd5a9710e3
repoTags:
- docker.io/library/nginx:latest
size: "67695038"
- id: sha256:b887aca7aed6134b029401507d27ac9c8fbfc5a6cf510d254bdf4ac841cf1552
repoDigests:
- docker.io/library/nginx@sha256:a5127daff3d6f4606be3100a252419bfa84fd6ee5cd74d0feaca1a5068f97dcf
repoTags:
- docker.io/library/nginx:alpine
size: "19621732"
- id: sha256:8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a
repoDigests: []
repoTags:
- registry.k8s.io/pause:latest
size: "71300"
- id: sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8
repoDigests:
- registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a
repoTags:
- registry.k8s.io/pause:3.10
size: "267933"
- id: sha256:ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6
repoDigests:
- gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "8034419"
- id: sha256:d3f53a98c0a9d9163c4848bcf34b2d2f5e1e3691b79f3d1dd6d0206809e02853
repoDigests:
- registry.k8s.io/kube-apiserver@sha256:2409c23dbb5a2b7a81adbb184d3eac43ac653e9b97a7c0ee121b89bb3ef61fdb
repoTags:
- registry.k8s.io/kube-apiserver:v1.31.1
size: "25687130"
- id: sha256:279f381cb37365bbbcd133c9531fba9c2beb0f38dbbe6ddfcd0b1b1643d3450e
repoDigests:
- registry.k8s.io/kube-controller-manager@sha256:9f9da5b27e03f89599cc40ba89150aebf3b4cff001e6db6d998674b34181e1a1
repoTags:
- registry.k8s.io/kube-controller-manager:v1.31.1
size: "23948670"

functional_test.go:269: (dbg) Stderr: out/minikube-linux-arm64 -p functional-840758 image ls --format yaml --alsologtostderr:
I0915 06:49:29.012325 3236036 out.go:345] Setting OutFile to fd 1 ...
I0915 06:49:29.012442 3236036 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0915 06:49:29.012452 3236036 out.go:358] Setting ErrFile to fd 2...
I0915 06:49:29.012458 3236036 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0915 06:49:29.012699 3236036 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19644-3193270/.minikube/bin
I0915 06:49:29.013347 3236036 config.go:182] Loaded profile config "functional-840758": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
I0915 06:49:29.013472 3236036 config.go:182] Loaded profile config "functional-840758": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
I0915 06:49:29.013953 3236036 cli_runner.go:164] Run: docker container inspect functional-840758 --format={{.State.Status}}
I0915 06:49:29.041490 3236036 ssh_runner.go:195] Run: systemctl --version
I0915 06:49:29.041566 3236036 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-840758
I0915 06:49:29.076846 3236036 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35892 SSHKeyPath:/home/jenkins/minikube-integration/19644-3193270/.minikube/machines/functional-840758/id_rsa Username:docker}
I0915 06:49:29.172251 3236036 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.27s)

TestFunctional/parallel/ImageCommands/ImageBuild (3.8s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild
=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:308: (dbg) Run:  out/minikube-linux-arm64 -p functional-840758 ssh pgrep buildkitd
functional_test.go:308: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-840758 ssh pgrep buildkitd: exit status 1 (342.842849ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:315: (dbg) Run:  out/minikube-linux-arm64 -p functional-840758 image build -t localhost/my-image:functional-840758 testdata/build --alsologtostderr
functional_test.go:315: (dbg) Done: out/minikube-linux-arm64 -p functional-840758 image build -t localhost/my-image:functional-840758 testdata/build --alsologtostderr: (3.177412825s)
functional_test.go:323: (dbg) Stderr: out/minikube-linux-arm64 -p functional-840758 image build -t localhost/my-image:functional-840758 testdata/build --alsologtostderr:
I0915 06:49:29.622640 3236131 out.go:345] Setting OutFile to fd 1 ...
I0915 06:49:29.623618 3236131 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0915 06:49:29.623650 3236131 out.go:358] Setting ErrFile to fd 2...
I0915 06:49:29.623668 3236131 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0915 06:49:29.623928 3236131 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19644-3193270/.minikube/bin
I0915 06:49:29.624603 3236131 config.go:182] Loaded profile config "functional-840758": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
I0915 06:49:29.625698 3236131 config.go:182] Loaded profile config "functional-840758": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
I0915 06:49:29.626220 3236131 cli_runner.go:164] Run: docker container inspect functional-840758 --format={{.State.Status}}
I0915 06:49:29.645296 3236131 ssh_runner.go:195] Run: systemctl --version
I0915 06:49:29.645370 3236131 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-840758
I0915 06:49:29.665796 3236131 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35892 SSHKeyPath:/home/jenkins/minikube-integration/19644-3193270/.minikube/machines/functional-840758/id_rsa Username:docker}
I0915 06:49:29.772964 3236131 build_images.go:161] Building image from path: /tmp/build.1095782846.tar
I0915 06:49:29.773031 3236131 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I0915 06:49:29.782625 3236131 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.1095782846.tar
I0915 06:49:29.790836 3236131 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.1095782846.tar: stat -c "%s %y" /var/lib/minikube/build/build.1095782846.tar: Process exited with status 1
stdout:

stderr:
stat: cannot statx '/var/lib/minikube/build/build.1095782846.tar': No such file or directory
I0915 06:49:29.790865 3236131 ssh_runner.go:362] scp /tmp/build.1095782846.tar --> /var/lib/minikube/build/build.1095782846.tar (3072 bytes)
I0915 06:49:29.832939 3236131 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.1095782846
I0915 06:49:29.842588 3236131 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.1095782846 -xf /var/lib/minikube/build/build.1095782846.tar
I0915 06:49:29.852625 3236131 containerd.go:394] Building image: /var/lib/minikube/build/build.1095782846
I0915 06:49:29.852706 3236131 ssh_runner.go:195] Run: sudo buildctl build --frontend dockerfile.v0 --local context=/var/lib/minikube/build/build.1095782846 --local dockerfile=/var/lib/minikube/build/build.1095782846 --output type=image,name=localhost/my-image:functional-840758
#1 [internal] load build definition from Dockerfile
#1 transferring dockerfile: 97B done
#1 DONE 0.0s

#2 [internal] load metadata for gcr.io/k8s-minikube/busybox:latest
#2 DONE 1.2s

#3 [internal] load .dockerignore
#3 transferring context: 2B done
#3 DONE 0.0s

#4 [internal] load build context
#4 transferring context: 62B done
#4 DONE 0.0s

#5 [1/3] FROM gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b
#5 resolve gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b 0.0s done
#5 DONE 0.1s

#5 [1/3] FROM gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b
#5 sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34 0B / 828.50kB 0.2s
#5 sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34 828.50kB / 828.50kB 0.4s done
#5 extracting sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34 0.1s done
#5 DONE 0.6s

#6 [2/3] RUN true
#6 DONE 0.5s

#7 [3/3] ADD content.txt /
#7 DONE 0.1s

#8 exporting to image
#8 exporting layers 0.1s done
#8 exporting manifest sha256:1f77a724da4e099c0f5148726baea5e3c052066222213107f0d0ff0afd881a79 0.0s done
#8 exporting config sha256:daac07b8e9d994775774fdd5d53cdc1a7bd92656df9087865a27f626bb81b2f4 0.0s done
#8 naming to localhost/my-image:functional-840758 done
#8 DONE 0.2s
I0915 06:49:32.703817 3236131 ssh_runner.go:235] Completed: sudo buildctl build --frontend dockerfile.v0 --local context=/var/lib/minikube/build/build.1095782846 --local dockerfile=/var/lib/minikube/build/build.1095782846 --output type=image,name=localhost/my-image:functional-840758: (2.851081755s)
I0915 06:49:32.703891 3236131 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.1095782846
I0915 06:49:32.716488 3236131 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.1095782846.tar
I0915 06:49:32.734842 3236131 build_images.go:217] Built localhost/my-image:functional-840758 from /tmp/build.1095782846.tar
I0915 06:49:32.734870 3236131 build_images.go:133] succeeded building to: functional-840758
I0915 06:49:32.734875 3236131 build_images.go:134] failed building to: 
functional_test.go:451: (dbg) Run:  out/minikube-linux-arm64 -p functional-840758 image ls
E0915 06:49:32.969290 3198652 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19644-3193270/.minikube/profiles/addons-686490/client.crt: no such file or directory" logger="UnhandledError"
E0915 06:49:32.976400 3198652 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19644-3193270/.minikube/profiles/addons-686490/client.crt: no such file or directory" logger="UnhandledError"
E0915 06:49:32.987782 3198652 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19644-3193270/.minikube/profiles/addons-686490/client.crt: no such file or directory" logger="UnhandledError"
E0915 06:49:33.009139 3198652 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19644-3193270/.minikube/profiles/addons-686490/client.crt: no such file or directory" logger="UnhandledError"
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (3.80s)
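The buildkit steps above imply a three-instruction Dockerfile (FROM gcr.io/k8s-minikube/busybox, RUN true, ADD content.txt /). A sketch that recreates an equivalent context and runs the same `image build` invocation; the real testdata/build directory may differ in detail:

package main

import (
	"os"
	"os/exec"
	"path/filepath"
)

func main() {
	// Reconstructed from build steps #5..#7 in the log above; the
	// contents of content.txt are a placeholder.
	dir, err := os.MkdirTemp("", "build")
	if err != nil {
		panic(err)
	}
	defer os.RemoveAll(dir)

	dockerfile := "FROM gcr.io/k8s-minikube/busybox\nRUN true\nADD content.txt /\n"
	if err := os.WriteFile(filepath.Join(dir, "Dockerfile"), []byte(dockerfile), 0o644); err != nil {
		panic(err)
	}
	if err := os.WriteFile(filepath.Join(dir, "content.txt"), []byte("test\n"), 0o644); err != nil {
		panic(err)
	}

	// Same shape as the invocation at functional_test.go:315, pointed
	// at the temporary context instead of testdata/build.
	cmd := exec.Command("out/minikube-linux-arm64", "-p", "functional-840758",
		"image", "build", "-t", "localhost/my-image:functional-840758", dir)
	cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
	if err := cmd.Run(); err != nil {
		panic(err)
	}
}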

TestFunctional/parallel/ImageCommands/Setup (0.76s)

=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:342: (dbg) Run:  docker pull kicbase/echo-server:1.0
functional_test.go:347: (dbg) Run:  docker tag kicbase/echo-server:1.0 kicbase/echo-server:functional-840758
--- PASS: TestFunctional/parallel/ImageCommands/Setup (0.76s)

TestFunctional/parallel/UpdateContextCmd/no_changes (0.22s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2119: (dbg) Run:  out/minikube-linux-arm64 -p functional-840758 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.22s)

TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.16s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2119: (dbg) Run:  out/minikube-linux-arm64 -p functional-840758 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.16s)

TestFunctional/parallel/UpdateContextCmd/no_clusters (0.16s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2119: (dbg) Run:  out/minikube-linux-arm64 -p functional-840758 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.16s)

TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.41s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:355: (dbg) Run:  out/minikube-linux-arm64 -p functional-840758 image load --daemon kicbase/echo-server:functional-840758 --alsologtostderr
functional_test.go:355: (dbg) Done: out/minikube-linux-arm64 -p functional-840758 image load --daemon kicbase/echo-server:functional-840758 --alsologtostderr: (1.113151946s)
functional_test.go:451: (dbg) Run:  out/minikube-linux-arm64 -p functional-840758 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.41s)

TestFunctional/parallel/ImageCommands/ImageReloadDaemon (1.41s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:365: (dbg) Run:  out/minikube-linux-arm64 -p functional-840758 image load --daemon kicbase/echo-server:functional-840758 --alsologtostderr
functional_test.go:365: (dbg) Done: out/minikube-linux-arm64 -p functional-840758 image load --daemon kicbase/echo-server:functional-840758 --alsologtostderr: (1.081051218s)
functional_test.go:451: (dbg) Run:  out/minikube-linux-arm64 -p functional-840758 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (1.41s)

TestFunctional/parallel/ServiceCmd/DeployApp (11.29s)

=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1437: (dbg) Run:  kubectl --context functional-840758 create deployment hello-node --image=registry.k8s.io/echoserver-arm:1.8
functional_test.go:1445: (dbg) Run:  kubectl --context functional-840758 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1450: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:344: "hello-node-64b4f8f9ff-8vhr6" [d766e5d0-fd95-4a1f-a628-369e0b9ab936] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver-arm]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver-arm])
helpers_test.go:344: "hello-node-64b4f8f9ff-8vhr6" [d766e5d0-fd95-4a1f-a628-369e0b9ab936] Running
functional_test.go:1450: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 11.013642192s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (11.29s)

TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.75s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:235: (dbg) Run:  docker pull kicbase/echo-server:latest
functional_test.go:240: (dbg) Run:  docker tag kicbase/echo-server:latest kicbase/echo-server:functional-840758
functional_test.go:245: (dbg) Run:  out/minikube-linux-arm64 -p functional-840758 image load --daemon kicbase/echo-server:functional-840758 --alsologtostderr
functional_test.go:245: (dbg) Done: out/minikube-linux-arm64 -p functional-840758 image load --daemon kicbase/echo-server:functional-840758 --alsologtostderr: (1.219937628s)
functional_test.go:451: (dbg) Run:  out/minikube-linux-arm64 -p functional-840758 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.75s)

TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.44s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:380: (dbg) Run:  out/minikube-linux-arm64 -p functional-840758 image save kicbase/echo-server:functional-840758 /home/jenkins/workspace/Docker_Linux_containerd_arm64/echo-server-save.tar --alsologtostderr
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.44s)

TestFunctional/parallel/ImageCommands/ImageRemove (0.59s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:392: (dbg) Run:  out/minikube-linux-arm64 -p functional-840758 image rm kicbase/echo-server:functional-840758 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-linux-arm64 -p functional-840758 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.59s)

TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.74s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:409: (dbg) Run:  out/minikube-linux-arm64 -p functional-840758 image load /home/jenkins/workspace/Docker_Linux_containerd_arm64/echo-server-save.tar --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-linux-arm64 -p functional-840758 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.74s)

TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.44s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:419: (dbg) Run:  docker rmi kicbase/echo-server:functional-840758
functional_test.go:424: (dbg) Run:  out/minikube-linux-arm64 -p functional-840758 image save --daemon kicbase/echo-server:functional-840758 --alsologtostderr
functional_test.go:432: (dbg) Run:  docker image inspect kicbase/echo-server:functional-840758
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.44s)

TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.52s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-arm64 -p functional-840758 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-arm64 -p functional-840758 tunnel --alsologtostderr]
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-arm64 -p functional-840758 tunnel --alsologtostderr] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-arm64 -p functional-840758 tunnel --alsologtostderr] ...
helpers_test.go:508: unable to kill pid 3232163: os: process already finished
--- PASS: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.52s)

TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-linux-arm64 -p functional-840758 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.00s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (9.32s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:212: (dbg) Run:  kubectl --context functional-840758 apply -f testdata/testsvc.yaml
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: waiting 4m0s for pods matching "run=nginx-svc" in namespace "default" ...
helpers_test.go:344: "nginx-svc" [67b969b3-082e-4976-8e1d-47f0c642ec6f] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx-svc" [67b969b3-082e-4976-8e1d-47f0c642ec6f] Running
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: run=nginx-svc healthy within 9.003298911s
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (9.32s)

TestFunctional/parallel/ServiceCmd/List (0.49s)

=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1459: (dbg) Run:  out/minikube-linux-arm64 -p functional-840758 service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.49s)

TestFunctional/parallel/ServiceCmd/JSONOutput (0.33s)

=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1489: (dbg) Run:  out/minikube-linux-arm64 -p functional-840758 service list -o json
functional_test.go:1494: Took "330.998155ms" to run "out/minikube-linux-arm64 -p functional-840758 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.33s)

TestFunctional/parallel/ServiceCmd/HTTPS (0.37s)

=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1509: (dbg) Run:  out/minikube-linux-arm64 -p functional-840758 service --namespace=default --https --url hello-node
functional_test.go:1522: found endpoint: https://192.168.49.2:32424
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.37s)

TestFunctional/parallel/ServiceCmd/Format (0.39s)

=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1540: (dbg) Run:  out/minikube-linux-arm64 -p functional-840758 service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.39s)

TestFunctional/parallel/ServiceCmd/URL (0.4s)

=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1559: (dbg) Run:  out/minikube-linux-arm64 -p functional-840758 service hello-node --url
functional_test.go:1565: found endpoint for hello-node: http://192.168.49.2:32424
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.40s)
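`service <name> --url` resolves a reachable NodePort endpoint the same way the HTTPS variant above does (http://192.168.49.2:32424 in this run). A sketch that captures the URL and issues a GET, assuming the hello-node deployment is still live:

package main

import (
	"fmt"
	"io"
	"net/http"
	"os/exec"
	"strings"
)

func main() {
	out, err := exec.Command("out/minikube-linux-arm64",
		"-p", "functional-840758", "service", "hello-node", "--url").Output()
	if err != nil {
		panic(err)
	}
	url := strings.TrimSpace(string(out))
	resp, err := http.Get(url)
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Printf("GET %s -> %d\n%s", url, resp.StatusCode, body)
}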

TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.08s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:234: (dbg) Run:  kubectl --context functional-840758 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.08s)

TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:299: tunnel at http://10.109.243.137 is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-linux-arm64 -p functional-840758 tunnel --alsologtostderr] ...
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

TestFunctional/parallel/ProfileCmd/profile_not_create (0.39s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1270: (dbg) Run:  out/minikube-linux-arm64 profile lis
functional_test.go:1275: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.39s)

TestFunctional/parallel/ProfileCmd/profile_list (0.4s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1310: (dbg) Run:  out/minikube-linux-arm64 profile list
functional_test.go:1315: Took "336.586907ms" to run "out/minikube-linux-arm64 profile list"
functional_test.go:1324: (dbg) Run:  out/minikube-linux-arm64 profile list -l
functional_test.go:1329: Took "59.815511ms" to run "out/minikube-linux-arm64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.40s)

TestFunctional/parallel/ProfileCmd/profile_json_output (0.4s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1361: (dbg) Run:  out/minikube-linux-arm64 profile list -o json
functional_test.go:1366: Took "335.936403ms" to run "out/minikube-linux-arm64 profile list -o json"
functional_test.go:1374: (dbg) Run:  out/minikube-linux-arm64 profile list -o json --light
functional_test.go:1379: Took "59.100689ms" to run "out/minikube-linux-arm64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.40s)
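
The timings explain why both forms are exercised: the plain listing (~336ms) probes each cluster's live status, while --light (~59ms) appears to read only the stored profile configs (inferred from the timings above, not asserted by the test itself):

$ out/minikube-linux-arm64 profile list -o json          # full listing, queries cluster state
$ out/minikube-linux-arm64 profile list -o json --light  # fast path, config only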

TestFunctional/parallel/MountCmd/any-port (7.66s)
=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-840758 /tmp/TestFunctionalparallelMountCmdany-port1461621789/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1726382954595843513" to /tmp/TestFunctionalparallelMountCmdany-port1461621789/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1726382954595843513" to /tmp/TestFunctionalparallelMountCmdany-port1461621789/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1726382954595843513" to /tmp/TestFunctionalparallelMountCmdany-port1461621789/001/test-1726382954595843513
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-arm64 -p functional-840758 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-840758 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (312.842791ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-arm64 -p functional-840758 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-arm64 -p functional-840758 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Sep 15 06:49 created-by-test
-rw-r--r-- 1 docker docker 24 Sep 15 06:49 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Sep 15 06:49 test-1726382954595843513
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-arm64 -p functional-840758 ssh cat /mount-9p/test-1726382954595843513
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-840758 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:344: "busybox-mount" [8a8d77ec-fd9b-48d5-8aee-c699808bb360] Pending
helpers_test.go:344: "busybox-mount" [8a8d77ec-fd9b-48d5-8aee-c699808bb360] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:344: "busybox-mount" [8a8d77ec-fd9b-48d5-8aee-c699808bb360] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "busybox-mount" [8a8d77ec-fd9b-48d5-8aee-c699808bb360] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 5.003806519s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-840758 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p functional-840758 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p functional-840758 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-arm64 -p functional-840758 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-840758 /tmp/TestFunctionalparallelMountCmdany-port1461621789/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (7.66s)
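
Note that the first findmnt probe above exits non-zero: it races the mount daemon, and the helper simply retries until the 9p share appears. The round-trip can be reproduced with the same commands the test drives (the host directory below is illustrative):

$ out/minikube-linux-arm64 mount -p functional-840758 /tmp/mnt:/mount-9p --alsologtostderr -v=1 &
$ out/minikube-linux-arm64 -p functional-840758 ssh "findmnt -T /mount-9p | grep 9p"   # retry until mounted
$ out/minikube-linux-arm64 -p functional-840758 ssh -- ls -la /mount-9p
$ out/minikube-linux-arm64 -p functional-840758 ssh "sudo umount -f /mount-9p"         # cleanup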

TestFunctional/parallel/MountCmd/specific-port (2.08s)
=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-840758 /tmp/TestFunctionalparallelMountCmdspecific-port2211240482/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-arm64 -p functional-840758 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-840758 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (382.425113ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-arm64 -p functional-840758 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-arm64 -p functional-840758 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-840758 /tmp/TestFunctionalparallelMountCmdspecific-port2211240482/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-arm64 -p functional-840758 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-840758 ssh "sudo umount -f /mount-9p": exit status 1 (284.099899ms)

-- stdout --
	umount: /mount-9p: not mounted.

-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-arm64 -p functional-840758 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-840758 /tmp/TestFunctionalparallelMountCmdspecific-port2211240482/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (2.08s)
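
Same flow, but with the 9p server pinned to a fixed port via --port 46464. The failing `umount -f` near the end (exit status 32, "not mounted") is expected here: the share was already gone when the daemon stopped, and the test records the exit status rather than failing on it.

$ out/minikube-linux-arm64 mount -p functional-840758 /tmp/mnt:/mount-9p --port 46464 &   # host directory illustrative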

TestFunctional/parallel/MountCmd/VerifyCleanup (1.22s)
=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-840758 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2933360886/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-840758 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2933360886/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-840758 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2933360886/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-840758 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-840758 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-840758 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-arm64 mount -p functional-840758 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-840758 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2933360886/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-840758 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2933360886/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-840758 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2933360886/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (1.22s)

TestFunctional/delete_echo-server_images (0.05s)
=== RUN   TestFunctional/delete_echo-server_images
functional_test.go:190: (dbg) Run:  docker rmi -f kicbase/echo-server:1.0
functional_test.go:190: (dbg) Run:  docker rmi -f kicbase/echo-server:functional-840758
--- PASS: TestFunctional/delete_echo-server_images (0.05s)

TestFunctional/delete_my-image_image (0.01s)
=== RUN   TestFunctional/delete_my-image_image
functional_test.go:198: (dbg) Run:  docker rmi -f localhost/my-image:functional-840758
--- PASS: TestFunctional/delete_my-image_image (0.01s)

TestFunctional/delete_minikube_cached_images (0.02s)
=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:206: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-840758
--- PASS: TestFunctional/delete_minikube_cached_images (0.02s)

TestMultiControlPlane/serial/StartCluster (115.87s)
=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-linux-arm64 start -p ha-175574 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=docker  --container-runtime=containerd
E0915 06:49:38.101815 3198652 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19644-3193270/.minikube/profiles/addons-686490/client.crt: no such file or directory" logger="UnhandledError"
E0915 06:49:43.223976 3198652 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19644-3193270/.minikube/profiles/addons-686490/client.crt: no such file or directory" logger="UnhandledError"
E0915 06:49:53.465321 3198652 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19644-3193270/.minikube/profiles/addons-686490/client.crt: no such file or directory" logger="UnhandledError"
E0915 06:50:13.947272 3198652 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19644-3193270/.minikube/profiles/addons-686490/client.crt: no such file or directory" logger="UnhandledError"
E0915 06:50:54.908597 3198652 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19644-3193270/.minikube/profiles/addons-686490/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:101: (dbg) Done: out/minikube-linux-arm64 start -p ha-175574 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=docker  --container-runtime=containerd: (1m55.014465558s)
ha_test.go:107: (dbg) Run:  out/minikube-linux-arm64 -p ha-175574 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/StartCluster (115.87s)
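
--ha provisions a multi-control-plane topology: the status output later in this log shows three control-plane nodes (ha-175574, -m02, -m03) behind the shared endpoint https://192.168.49.254:8443. As invoked here:

$ out/minikube-linux-arm64 start -p ha-175574 --wait=true --memory=2200 --ha \
    -v=7 --alsologtostderr --driver=docker --container-runtime=containerd
$ out/minikube-linux-arm64 -p ha-175574 status -v=7 --alsologtostderr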

TestMultiControlPlane/serial/DeployApp (30.38s)
=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-175574 -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:133: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-175574 -- rollout status deployment/busybox
ha_test.go:133: (dbg) Done: out/minikube-linux-arm64 kubectl -p ha-175574 -- rollout status deployment/busybox: (27.255752683s)
ha_test.go:140: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-175574 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:163: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-175574 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:171: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-175574 -- exec busybox-7dff88458-2nggd -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-175574 -- exec busybox-7dff88458-xbczg -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-175574 -- exec busybox-7dff88458-xmdx7 -- nslookup kubernetes.io
ha_test.go:181: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-175574 -- exec busybox-7dff88458-2nggd -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-175574 -- exec busybox-7dff88458-xbczg -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-175574 -- exec busybox-7dff88458-xmdx7 -- nslookup kubernetes.default
ha_test.go:189: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-175574 -- exec busybox-7dff88458-2nggd -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-175574 -- exec busybox-7dff88458-xbczg -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-175574 -- exec busybox-7dff88458-xmdx7 -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiControlPlane/serial/DeployApp (30.38s)
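
DeployApp fans the same three nslookup probes out to one busybox replica per node, proving that cluster DNS resolves an external name, the kubernetes service short name, and its FQDN from every member. One leg of the matrix, by hand:

$ out/minikube-linux-arm64 kubectl -p ha-175574 -- get pods -o jsonpath='{.items[*].metadata.name}'
$ out/minikube-linux-arm64 kubectl -p ha-175574 -- exec busybox-7dff88458-2nggd -- nslookup kubernetes.default.svc.cluster.local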

TestMultiControlPlane/serial/PingHostFromPods (1.68s)
=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-175574 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:207: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-175574 -- exec busybox-7dff88458-2nggd -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-175574 -- exec busybox-7dff88458-2nggd -- sh -c "ping -c 1 192.168.49.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-175574 -- exec busybox-7dff88458-xbczg -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-175574 -- exec busybox-7dff88458-xbczg -- sh -c "ping -c 1 192.168.49.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-175574 -- exec busybox-7dff88458-xmdx7 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-175574 -- exec busybox-7dff88458-xmdx7 -- sh -c "ping -c 1 192.168.49.1"
--- PASS: TestMultiControlPlane/serial/PingHostFromPods (1.68s)
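
The shell pipeline extracts the address busybox resolves for host.minikube.internal (nslookup's answer lands on line 5, third space-separated field), and the follow-up ping confirms each pod can reach it; 192.168.49.1 is the host side of the docker bridge. The extraction step in isolation:

$ out/minikube-linux-arm64 kubectl -p ha-175574 -- exec busybox-7dff88458-2nggd -- \
    sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
192.168.49.1   # expected; the test then pings this address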

TestMultiControlPlane/serial/AddWorkerNode (21.64s)
=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-linux-arm64 node add -p ha-175574 -v=7 --alsologtostderr
E0915 06:52:16.830052 3198652 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19644-3193270/.minikube/profiles/addons-686490/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:228: (dbg) Done: out/minikube-linux-arm64 node add -p ha-175574 -v=7 --alsologtostderr: (20.63128508s)
ha_test.go:234: (dbg) Run:  out/minikube-linux-arm64 -p ha-175574 status -v=7 --alsologtostderr
ha_test.go:234: (dbg) Done: out/minikube-linux-arm64 -p ha-175574 status -v=7 --alsologtostderr: (1.008989604s)
--- PASS: TestMultiControlPlane/serial/AddWorkerNode (21.64s)

TestMultiControlPlane/serial/NodeLabels (0.1s)
=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-175574 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiControlPlane/serial/NodeLabels (0.10s)

TestMultiControlPlane/serial/HAppyAfterClusterStart (0.79s)
=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterClusterStart (0.79s)

TestMultiControlPlane/serial/CopyFile (19.46s)
=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:326: (dbg) Run:  out/minikube-linux-arm64 -p ha-175574 status --output json -v=7 --alsologtostderr
ha_test.go:326: (dbg) Done: out/minikube-linux-arm64 -p ha-175574 status --output json -v=7 --alsologtostderr: (1.012542059s)
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-175574 cp testdata/cp-test.txt ha-175574:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-175574 ssh -n ha-175574 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-175574 cp ha-175574:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile2823778289/001/cp-test_ha-175574.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-175574 ssh -n ha-175574 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-175574 cp ha-175574:/home/docker/cp-test.txt ha-175574-m02:/home/docker/cp-test_ha-175574_ha-175574-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-175574 ssh -n ha-175574 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-175574 ssh -n ha-175574-m02 "sudo cat /home/docker/cp-test_ha-175574_ha-175574-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-175574 cp ha-175574:/home/docker/cp-test.txt ha-175574-m03:/home/docker/cp-test_ha-175574_ha-175574-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-175574 ssh -n ha-175574 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-175574 ssh -n ha-175574-m03 "sudo cat /home/docker/cp-test_ha-175574_ha-175574-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-175574 cp ha-175574:/home/docker/cp-test.txt ha-175574-m04:/home/docker/cp-test_ha-175574_ha-175574-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-175574 ssh -n ha-175574 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-175574 ssh -n ha-175574-m04 "sudo cat /home/docker/cp-test_ha-175574_ha-175574-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-175574 cp testdata/cp-test.txt ha-175574-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-175574 ssh -n ha-175574-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-175574 cp ha-175574-m02:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile2823778289/001/cp-test_ha-175574-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-175574 ssh -n ha-175574-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-175574 cp ha-175574-m02:/home/docker/cp-test.txt ha-175574:/home/docker/cp-test_ha-175574-m02_ha-175574.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-175574 ssh -n ha-175574-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-175574 ssh -n ha-175574 "sudo cat /home/docker/cp-test_ha-175574-m02_ha-175574.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-175574 cp ha-175574-m02:/home/docker/cp-test.txt ha-175574-m03:/home/docker/cp-test_ha-175574-m02_ha-175574-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-175574 ssh -n ha-175574-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-175574 ssh -n ha-175574-m03 "sudo cat /home/docker/cp-test_ha-175574-m02_ha-175574-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-175574 cp ha-175574-m02:/home/docker/cp-test.txt ha-175574-m04:/home/docker/cp-test_ha-175574-m02_ha-175574-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-175574 ssh -n ha-175574-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-175574 ssh -n ha-175574-m04 "sudo cat /home/docker/cp-test_ha-175574-m02_ha-175574-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-175574 cp testdata/cp-test.txt ha-175574-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-175574 ssh -n ha-175574-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-175574 cp ha-175574-m03:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile2823778289/001/cp-test_ha-175574-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-175574 ssh -n ha-175574-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-175574 cp ha-175574-m03:/home/docker/cp-test.txt ha-175574:/home/docker/cp-test_ha-175574-m03_ha-175574.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-175574 ssh -n ha-175574-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-175574 ssh -n ha-175574 "sudo cat /home/docker/cp-test_ha-175574-m03_ha-175574.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-175574 cp ha-175574-m03:/home/docker/cp-test.txt ha-175574-m02:/home/docker/cp-test_ha-175574-m03_ha-175574-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-175574 ssh -n ha-175574-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-175574 ssh -n ha-175574-m02 "sudo cat /home/docker/cp-test_ha-175574-m03_ha-175574-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-175574 cp ha-175574-m03:/home/docker/cp-test.txt ha-175574-m04:/home/docker/cp-test_ha-175574-m03_ha-175574-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-175574 ssh -n ha-175574-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-175574 ssh -n ha-175574-m04 "sudo cat /home/docker/cp-test_ha-175574-m03_ha-175574-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-175574 cp testdata/cp-test.txt ha-175574-m04:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-175574 ssh -n ha-175574-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-175574 cp ha-175574-m04:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile2823778289/001/cp-test_ha-175574-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-175574 ssh -n ha-175574-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-175574 cp ha-175574-m04:/home/docker/cp-test.txt ha-175574:/home/docker/cp-test_ha-175574-m04_ha-175574.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-175574 ssh -n ha-175574-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-175574 ssh -n ha-175574 "sudo cat /home/docker/cp-test_ha-175574-m04_ha-175574.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-175574 cp ha-175574-m04:/home/docker/cp-test.txt ha-175574-m02:/home/docker/cp-test_ha-175574-m04_ha-175574-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-175574 ssh -n ha-175574-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-175574 ssh -n ha-175574-m02 "sudo cat /home/docker/cp-test_ha-175574-m04_ha-175574-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-175574 cp ha-175574-m04:/home/docker/cp-test.txt ha-175574-m03:/home/docker/cp-test_ha-175574-m04_ha-175574-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-175574 ssh -n ha-175574-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-175574 ssh -n ha-175574-m03 "sudo cat /home/docker/cp-test_ha-175574-m04_ha-175574-m03.txt"
--- PASS: TestMultiControlPlane/serial/CopyFile (19.46s)
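
CopyFile walks the full node matrix because `minikube cp` accepts node-scoped paths (<node>:<path>) on either end. The three shapes exercised above, reduced to one example each (the /tmp target path is illustrative):

$ out/minikube-linux-arm64 -p ha-175574 cp testdata/cp-test.txt ha-175574:/home/docker/cp-test.txt        # host -> node
$ out/minikube-linux-arm64 -p ha-175574 cp ha-175574-m02:/home/docker/cp-test.txt /tmp/cp-test_m02.txt    # node -> host
$ out/minikube-linux-arm64 -p ha-175574 cp ha-175574-m03:/home/docker/cp-test.txt ha-175574-m04:/home/docker/cp-test.txt  # node -> node
$ out/minikube-linux-arm64 -p ha-175574 ssh -n ha-175574-m04 "sudo cat /home/docker/cp-test.txt"          # verify on target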

TestMultiControlPlane/serial/StopSecondaryNode (13.04s)
=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:363: (dbg) Run:  out/minikube-linux-arm64 -p ha-175574 node stop m02 -v=7 --alsologtostderr
ha_test.go:363: (dbg) Done: out/minikube-linux-arm64 -p ha-175574 node stop m02 -v=7 --alsologtostderr: (12.245062075s)
ha_test.go:369: (dbg) Run:  out/minikube-linux-arm64 -p ha-175574 status -v=7 --alsologtostderr
ha_test.go:369: (dbg) Non-zero exit: out/minikube-linux-arm64 -p ha-175574 status -v=7 --alsologtostderr: exit status 7 (792.64601ms)

-- stdout --
	ha-175574
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-175574-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-175574-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-175574-m04
	type: Worker
	host: Running
	kubelet: Running
	

-- /stdout --
** stderr ** 
	I0915 06:53:00.324781 3252371 out.go:345] Setting OutFile to fd 1 ...
	I0915 06:53:00.325064 3252371 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0915 06:53:00.325099 3252371 out.go:358] Setting ErrFile to fd 2...
	I0915 06:53:00.325132 3252371 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0915 06:53:00.325465 3252371 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19644-3193270/.minikube/bin
	I0915 06:53:00.325712 3252371 out.go:352] Setting JSON to false
	I0915 06:53:00.325783 3252371 mustload.go:65] Loading cluster: ha-175574
	I0915 06:53:00.325867 3252371 notify.go:220] Checking for updates...
	I0915 06:53:00.327466 3252371 config.go:182] Loaded profile config "ha-175574": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
	I0915 06:53:00.327611 3252371 status.go:255] checking status of ha-175574 ...
	I0915 06:53:00.329466 3252371 cli_runner.go:164] Run: docker container inspect ha-175574 --format={{.State.Status}}
	I0915 06:53:00.355225 3252371 status.go:330] ha-175574 host status = "Running" (err=<nil>)
	I0915 06:53:00.355252 3252371 host.go:66] Checking if "ha-175574" exists ...
	I0915 06:53:00.355581 3252371 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-175574
	I0915 06:53:00.403546 3252371 host.go:66] Checking if "ha-175574" exists ...
	I0915 06:53:00.403919 3252371 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0915 06:53:00.404093 3252371 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-175574
	I0915 06:53:00.423944 3252371 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35897 SSHKeyPath:/home/jenkins/minikube-integration/19644-3193270/.minikube/machines/ha-175574/id_rsa Username:docker}
	I0915 06:53:00.520608 3252371 ssh_runner.go:195] Run: systemctl --version
	I0915 06:53:00.526199 3252371 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0915 06:53:00.538970 3252371 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0915 06:53:00.592859 3252371 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:55 OomKillDisable:true NGoroutines:71 SystemTime:2024-09-15 06:53:00.58214963 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1069-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:27.2.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.2]] Warnings:<nil>}}
	I0915 06:53:00.593538 3252371 kubeconfig.go:125] found "ha-175574" server: "https://192.168.49.254:8443"
	I0915 06:53:00.593570 3252371 api_server.go:166] Checking apiserver status ...
	I0915 06:53:00.593613 3252371 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0915 06:53:00.605633 3252371 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1445/cgroup
	I0915 06:53:00.615704 3252371 api_server.go:182] apiserver freezer: "10:freezer:/docker/9cd666e1098e33202ba72f333a1801a33b4bd4c1cd6d7bcd967d1966b1167b3c/kubepods/burstable/poddc41d7bff31941196c0ed47755c48ff9/bc80d25fa4fc7cb001ba590d411e24c97d478f4635babb7aa56b2dd6d36a4537"
	I0915 06:53:00.615782 3252371 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/9cd666e1098e33202ba72f333a1801a33b4bd4c1cd6d7bcd967d1966b1167b3c/kubepods/burstable/poddc41d7bff31941196c0ed47755c48ff9/bc80d25fa4fc7cb001ba590d411e24c97d478f4635babb7aa56b2dd6d36a4537/freezer.state
	I0915 06:53:00.624904 3252371 api_server.go:204] freezer state: "THAWED"
	I0915 06:53:00.624951 3252371 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I0915 06:53:00.633090 3252371 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I0915 06:53:00.633120 3252371 status.go:422] ha-175574 apiserver status = Running (err=<nil>)
	I0915 06:53:00.633132 3252371 status.go:257] ha-175574 status: &{Name:ha-175574 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0915 06:53:00.633148 3252371 status.go:255] checking status of ha-175574-m02 ...
	I0915 06:53:00.633470 3252371 cli_runner.go:164] Run: docker container inspect ha-175574-m02 --format={{.State.Status}}
	I0915 06:53:00.650457 3252371 status.go:330] ha-175574-m02 host status = "Stopped" (err=<nil>)
	I0915 06:53:00.650480 3252371 status.go:343] host is not running, skipping remaining checks
	I0915 06:53:00.650487 3252371 status.go:257] ha-175574-m02 status: &{Name:ha-175574-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0915 06:53:00.650508 3252371 status.go:255] checking status of ha-175574-m03 ...
	I0915 06:53:00.650835 3252371 cli_runner.go:164] Run: docker container inspect ha-175574-m03 --format={{.State.Status}}
	I0915 06:53:00.671446 3252371 status.go:330] ha-175574-m03 host status = "Running" (err=<nil>)
	I0915 06:53:00.671480 3252371 host.go:66] Checking if "ha-175574-m03" exists ...
	I0915 06:53:00.671892 3252371 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-175574-m03
	I0915 06:53:00.691398 3252371 host.go:66] Checking if "ha-175574-m03" exists ...
	I0915 06:53:00.691812 3252371 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0915 06:53:00.691898 3252371 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-175574-m03
	I0915 06:53:00.712558 3252371 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35907 SSHKeyPath:/home/jenkins/minikube-integration/19644-3193270/.minikube/machines/ha-175574-m03/id_rsa Username:docker}
	I0915 06:53:00.809257 3252371 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0915 06:53:00.822253 3252371 kubeconfig.go:125] found "ha-175574" server: "https://192.168.49.254:8443"
	I0915 06:53:00.822286 3252371 api_server.go:166] Checking apiserver status ...
	I0915 06:53:00.822368 3252371 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0915 06:53:00.833760 3252371 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1337/cgroup
	I0915 06:53:00.843629 3252371 api_server.go:182] apiserver freezer: "10:freezer:/docker/2cd269638ab9565db4fc996a1dfbae64b9a8ffb5a967c8b012573d8a112c6afa/kubepods/burstable/pod152674ad26dcb0760b2e77206a76333b/fe6b51feab40ecdd93161c2771e4d583b4c85b42c50cbbf75502cbee96ff311a"
	I0915 06:53:00.843703 3252371 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/2cd269638ab9565db4fc996a1dfbae64b9a8ffb5a967c8b012573d8a112c6afa/kubepods/burstable/pod152674ad26dcb0760b2e77206a76333b/fe6b51feab40ecdd93161c2771e4d583b4c85b42c50cbbf75502cbee96ff311a/freezer.state
	I0915 06:53:00.852203 3252371 api_server.go:204] freezer state: "THAWED"
	I0915 06:53:00.852232 3252371 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I0915 06:53:00.860081 3252371 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I0915 06:53:00.860120 3252371 status.go:422] ha-175574-m03 apiserver status = Running (err=<nil>)
	I0915 06:53:00.860138 3252371 status.go:257] ha-175574-m03 status: &{Name:ha-175574-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0915 06:53:00.860171 3252371 status.go:255] checking status of ha-175574-m04 ...
	I0915 06:53:00.860499 3252371 cli_runner.go:164] Run: docker container inspect ha-175574-m04 --format={{.State.Status}}
	I0915 06:53:00.877452 3252371 status.go:330] ha-175574-m04 host status = "Running" (err=<nil>)
	I0915 06:53:00.877476 3252371 host.go:66] Checking if "ha-175574-m04" exists ...
	I0915 06:53:00.877774 3252371 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-175574-m04
	I0915 06:53:00.894668 3252371 host.go:66] Checking if "ha-175574-m04" exists ...
	I0915 06:53:00.895223 3252371 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0915 06:53:00.895267 3252371 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-175574-m04
	I0915 06:53:00.913409 3252371 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35912 SSHKeyPath:/home/jenkins/minikube-integration/19644-3193270/.minikube/machines/ha-175574-m04/id_rsa Username:docker}
	I0915 06:53:01.013593 3252371 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0915 06:53:01.027749 3252371 status.go:257] ha-175574-m04 status: &{Name:ha-175574-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiControlPlane/serial/StopSecondaryNode (13.04s)
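
Stopping one control plane degrades but does not break the cluster: here `status` exits 7 once a node reports Stopped, while the surviving control planes keep answering /healthz through the shared endpoint (see the 200 responses in the stderr above). By hand:

$ out/minikube-linux-arm64 -p ha-175574 node stop m02 -v=7 --alsologtostderr
$ out/minikube-linux-arm64 -p ha-175574 status -v=7 --alsologtostderr; echo "exit=$?"   # expect exit=7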

TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.6s)
=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:390: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.60s)

TestMultiControlPlane/serial/RestartSecondaryNode (18.31s)
=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:420: (dbg) Run:  out/minikube-linux-arm64 -p ha-175574 node start m02 -v=7 --alsologtostderr
ha_test.go:420: (dbg) Done: out/minikube-linux-arm64 -p ha-175574 node start m02 -v=7 --alsologtostderr: (17.179934594s)
ha_test.go:428: (dbg) Run:  out/minikube-linux-arm64 -p ha-175574 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Done: out/minikube-linux-arm64 -p ha-175574 status -v=7 --alsologtostderr: (1.022273689s)
ha_test.go:448: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiControlPlane/serial/RestartSecondaryNode (18.31s)

TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.82s)
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.82s)

TestMultiControlPlane/serial/RestartClusterKeepsNodes (141.52s)
=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:456: (dbg) Run:  out/minikube-linux-arm64 node list -p ha-175574 -v=7 --alsologtostderr
ha_test.go:462: (dbg) Run:  out/minikube-linux-arm64 stop -p ha-175574 -v=7 --alsologtostderr
E0915 06:53:49.039550 3198652 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19644-3193270/.minikube/profiles/functional-840758/client.crt: no such file or directory" logger="UnhandledError"
E0915 06:53:49.045946 3198652 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19644-3193270/.minikube/profiles/functional-840758/client.crt: no such file or directory" logger="UnhandledError"
E0915 06:53:49.057446 3198652 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19644-3193270/.minikube/profiles/functional-840758/client.crt: no such file or directory" logger="UnhandledError"
E0915 06:53:49.078951 3198652 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19644-3193270/.minikube/profiles/functional-840758/client.crt: no such file or directory" logger="UnhandledError"
E0915 06:53:49.120345 3198652 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19644-3193270/.minikube/profiles/functional-840758/client.crt: no such file or directory" logger="UnhandledError"
E0915 06:53:49.201892 3198652 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19644-3193270/.minikube/profiles/functional-840758/client.crt: no such file or directory" logger="UnhandledError"
E0915 06:53:49.363477 3198652 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19644-3193270/.minikube/profiles/functional-840758/client.crt: no such file or directory" logger="UnhandledError"
E0915 06:53:49.685179 3198652 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19644-3193270/.minikube/profiles/functional-840758/client.crt: no such file or directory" logger="UnhandledError"
E0915 06:53:50.327405 3198652 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19644-3193270/.minikube/profiles/functional-840758/client.crt: no such file or directory" logger="UnhandledError"
E0915 06:53:51.609107 3198652 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19644-3193270/.minikube/profiles/functional-840758/client.crt: no such file or directory" logger="UnhandledError"
E0915 06:53:54.170581 3198652 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19644-3193270/.minikube/profiles/functional-840758/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:462: (dbg) Done: out/minikube-linux-arm64 stop -p ha-175574 -v=7 --alsologtostderr: (37.384152386s)
ha_test.go:467: (dbg) Run:  out/minikube-linux-arm64 start -p ha-175574 --wait=true -v=7 --alsologtostderr
E0915 06:53:59.292255 3198652 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19644-3193270/.minikube/profiles/functional-840758/client.crt: no such file or directory" logger="UnhandledError"
E0915 06:54:09.534460 3198652 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19644-3193270/.minikube/profiles/functional-840758/client.crt: no such file or directory" logger="UnhandledError"
E0915 06:54:30.020857 3198652 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19644-3193270/.minikube/profiles/functional-840758/client.crt: no such file or directory" logger="UnhandledError"
E0915 06:54:32.968306 3198652 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19644-3193270/.minikube/profiles/addons-686490/client.crt: no such file or directory" logger="UnhandledError"
E0915 06:55:00.672016 3198652 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19644-3193270/.minikube/profiles/addons-686490/client.crt: no such file or directory" logger="UnhandledError"
E0915 06:55:10.982278 3198652 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19644-3193270/.minikube/profiles/functional-840758/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:467: (dbg) Done: out/minikube-linux-arm64 start -p ha-175574 --wait=true -v=7 --alsologtostderr: (1m43.95250099s)
ha_test.go:472: (dbg) Run:  out/minikube-linux-arm64 node list -p ha-175574
--- PASS: TestMultiControlPlane/serial/RestartClusterKeepsNodes (141.52s)
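
The test brackets a full stop/start with `node list` to assert that a restart reprovisions the same four-node set rather than collapsing back to a single node:

$ out/minikube-linux-arm64 node list -p ha-175574 -v=7 --alsologtostderr   # before
$ out/minikube-linux-arm64 stop -p ha-175574 -v=7 --alsologtostderr
$ out/minikube-linux-arm64 start -p ha-175574 --wait=true -v=7 --alsologtostderr
$ out/minikube-linux-arm64 node list -p ha-175574                          # after: same nodes expected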

TestMultiControlPlane/serial/DeleteSecondaryNode (10.74s)
=== RUN   TestMultiControlPlane/serial/DeleteSecondaryNode
ha_test.go:487: (dbg) Run:  out/minikube-linux-arm64 -p ha-175574 node delete m03 -v=7 --alsologtostderr
ha_test.go:487: (dbg) Done: out/minikube-linux-arm64 -p ha-175574 node delete m03 -v=7 --alsologtostderr: (9.802872401s)
ha_test.go:493: (dbg) Run:  out/minikube-linux-arm64 -p ha-175574 status -v=7 --alsologtostderr
ha_test.go:511: (dbg) Run:  kubectl get nodes
ha_test.go:519: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/DeleteSecondaryNode (10.74s)

TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.58s)
=== RUN   TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:390: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.58s)

TestMultiControlPlane/serial/StopCluster (36.1s)
=== RUN   TestMultiControlPlane/serial/StopCluster
ha_test.go:531: (dbg) Run:  out/minikube-linux-arm64 -p ha-175574 stop -v=7 --alsologtostderr
ha_test.go:531: (dbg) Done: out/minikube-linux-arm64 -p ha-175574 stop -v=7 --alsologtostderr: (35.992791524s)
ha_test.go:537: (dbg) Run:  out/minikube-linux-arm64 -p ha-175574 status -v=7 --alsologtostderr
ha_test.go:537: (dbg) Non-zero exit: out/minikube-linux-arm64 -p ha-175574 status -v=7 --alsologtostderr: exit status 7 (111.564063ms)

-- stdout --
	ha-175574
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-175574-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-175574-m04
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I0915 06:56:29.627391 3266572 out.go:345] Setting OutFile to fd 1 ...
	I0915 06:56:29.627600 3266572 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0915 06:56:29.627630 3266572 out.go:358] Setting ErrFile to fd 2...
	I0915 06:56:29.627650 3266572 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0915 06:56:29.627934 3266572 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19644-3193270/.minikube/bin
	I0915 06:56:29.628157 3266572 out.go:352] Setting JSON to false
	I0915 06:56:29.628220 3266572 mustload.go:65] Loading cluster: ha-175574
	I0915 06:56:29.628305 3266572 notify.go:220] Checking for updates...
	I0915 06:56:29.628739 3266572 config.go:182] Loaded profile config "ha-175574": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
	I0915 06:56:29.628783 3266572 status.go:255] checking status of ha-175574 ...
	I0915 06:56:29.629375 3266572 cli_runner.go:164] Run: docker container inspect ha-175574 --format={{.State.Status}}
	I0915 06:56:29.647947 3266572 status.go:330] ha-175574 host status = "Stopped" (err=<nil>)
	I0915 06:56:29.647971 3266572 status.go:343] host is not running, skipping remaining checks
	I0915 06:56:29.647980 3266572 status.go:257] ha-175574 status: &{Name:ha-175574 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0915 06:56:29.648005 3266572 status.go:255] checking status of ha-175574-m02 ...
	I0915 06:56:29.648316 3266572 cli_runner.go:164] Run: docker container inspect ha-175574-m02 --format={{.State.Status}}
	I0915 06:56:29.673167 3266572 status.go:330] ha-175574-m02 host status = "Stopped" (err=<nil>)
	I0915 06:56:29.673194 3266572 status.go:343] host is not running, skipping remaining checks
	I0915 06:56:29.673202 3266572 status.go:257] ha-175574-m02 status: &{Name:ha-175574-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0915 06:56:29.673220 3266572 status.go:255] checking status of ha-175574-m04 ...
	I0915 06:56:29.673525 3266572 cli_runner.go:164] Run: docker container inspect ha-175574-m04 --format={{.State.Status}}
	I0915 06:56:29.691045 3266572 status.go:330] ha-175574-m04 host status = "Stopped" (err=<nil>)
	I0915 06:56:29.691069 3266572 status.go:343] host is not running, skipping remaining checks
	I0915 06:56:29.691077 3266572 status.go:257] ha-175574-m04 status: &{Name:ha-175574-m04 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiControlPlane/serial/StopCluster (36.10s)

TestMultiControlPlane/serial/RestartCluster (77.89s)
=== RUN   TestMultiControlPlane/serial/RestartCluster
ha_test.go:560: (dbg) Run:  out/minikube-linux-arm64 start -p ha-175574 --wait=true -v=7 --alsologtostderr --driver=docker  --container-runtime=containerd
E0915 06:56:32.904720 3198652 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19644-3193270/.minikube/profiles/functional-840758/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:560: (dbg) Done: out/minikube-linux-arm64 start -p ha-175574 --wait=true -v=7 --alsologtostderr --driver=docker  --container-runtime=containerd: (1m16.88892514s)
ha_test.go:566: (dbg) Run:  out/minikube-linux-arm64 -p ha-175574 status -v=7 --alsologtostderr
ha_test.go:584: (dbg) Run:  kubectl get nodes
ha_test.go:592: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/RestartCluster (77.89s)

TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.55s)
=== RUN   TestMultiControlPlane/serial/DegradedAfterClusterRestart
ha_test.go:390: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.55s)

TestMultiControlPlane/serial/AddSecondaryNode (44.71s)
=== RUN   TestMultiControlPlane/serial/AddSecondaryNode
ha_test.go:605: (dbg) Run:  out/minikube-linux-arm64 node add -p ha-175574 --control-plane -v=7 --alsologtostderr
ha_test.go:605: (dbg) Done: out/minikube-linux-arm64 node add -p ha-175574 --control-plane -v=7 --alsologtostderr: (43.678333891s)
ha_test.go:611: (dbg) Run:  out/minikube-linux-arm64 -p ha-175574 status -v=7 --alsologtostderr
ha_test.go:611: (dbg) Done: out/minikube-linux-arm64 -p ha-175574 status -v=7 --alsologtostderr: (1.036395629s)
--- PASS: TestMultiControlPlane/serial/AddSecondaryNode (44.71s)

TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.75s)
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd
ha_test.go:281: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.75s)

TestJSONOutput/start/Command (50.16s)
=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 start -p json-output-468527 --output=json --user=testUser --memory=2200 --wait=true --driver=docker  --container-runtime=containerd
E0915 06:58:49.039136 3198652 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19644-3193270/.minikube/profiles/functional-840758/client.crt: no such file or directory" logger="UnhandledError"
E0915 06:59:16.746098 3198652 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19644-3193270/.minikube/profiles/functional-840758/client.crt: no such file or directory" logger="UnhandledError"
json_output_test.go:63: (dbg) Done: out/minikube-linux-arm64 start -p json-output-468527 --output=json --user=testUser --memory=2200 --wait=true --driver=docker  --container-runtime=containerd: (50.156309147s)
--- PASS: TestJSONOutput/start/Command (50.16s)
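
With --output=json, start progress is emitted as one CloudEvents-style JSON record per line, which the Audit and parallel step-ordering subtests below parse. A hand inspection of the step stream (jq is illustrative, not part of the test; field names per recent minikube releases):

$ out/minikube-linux-arm64 start -p json-output-468527 --output=json --user=testUser --memory=2200 \
    --wait=true --driver=docker --container-runtime=containerd \
  | jq -r 'select(.data.currentstep != null) | "\(.data.currentstep)/\(.data.totalsteps) \(.data.name)"'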

TestJSONOutput/start/Audit (0s)
=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)
=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)
=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/pause/Command (0.75s)
=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 pause -p json-output-468527 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.75s)

TestJSONOutput/pause/Audit (0s)
=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)
=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)
=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/unpause/Command (0.66s)
=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 unpause -p json-output-468527 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.66s)

TestJSONOutput/unpause/Audit (0s)
=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)
=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)
=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/stop/Command (5.72s)
=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 stop -p json-output-468527 --output=json --user=testUser
E0915 06:59:32.967930 3198652 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19644-3193270/.minikube/profiles/addons-686490/client.crt: no such file or directory" logger="UnhandledError"
json_output_test.go:63: (dbg) Done: out/minikube-linux-arm64 stop -p json-output-468527 --output=json --user=testUser: (5.723898649s)
--- PASS: TestJSONOutput/stop/Command (5.72s)

TestJSONOutput/stop/Audit (0s)
=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)
=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)
=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

TestErrorJSONOutput (0.23s)
=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-arm64 start -p json-output-error-341006 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p json-output-error-341006 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (90.116902ms)
-- stdout --
	{"specversion":"1.0","id":"a9d39a4c-3dc2-49f5-bf7c-b01b0792b930","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-341006] minikube v1.34.0 on Ubuntu 20.04 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"e540eb6f-69e1-4166-b1da-2b849aaf3b70","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=19644"}}
	{"specversion":"1.0","id":"af7c2dad-5cb6-4a4b-9db2-a6621594fb03","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"5efe8a3c-e2bb-4757-8064-fa3394696fab","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/19644-3193270/kubeconfig"}}
	{"specversion":"1.0","id":"3594ad7c-8e6b-464e-8bf3-25530193aa3b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/19644-3193270/.minikube"}}
	{"specversion":"1.0","id":"20a364f6-c005-4ef1-a179-c062c44b682f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-arm64"}}
	{"specversion":"1.0","id":"426cfbad-bb93-4b9e-bdcc-b81be8d12095","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"a542d474-42e9-4bca-912c-019e21de9367","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/arm64","name":"DRV_UNSUPPORTED_OS","url":""}}
-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-341006" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p json-output-error-341006
--- PASS: TestErrorJSONOutput (0.23s)
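
Note: each stdout line above is a CloudEvents-style JSON event, one object per line; the event types io.k8s.sigs.minikube.step, io.k8s.sigs.minikube.info and io.k8s.sigs.minikube.error appear verbatim above. A hedged Go sketch of consuming such a stream (the field set is copied from these events, where every data value arrives as a string; the filter loop itself is illustrative):

	package main

	import (
		"bufio"
		"encoding/json"
		"fmt"
		"os"
	)

	// minikubeEvent mirrors the fields visible in the events above.
	type minikubeEvent struct {
		SpecVersion string            `json:"specversion"`
		ID          string            `json:"id"`
		Source      string            `json:"source"`
		Type        string            `json:"type"`
		Data        map[string]string `json:"data"` // message, currentstep, exitcode, ...
	}

	func main() {
		// e.g.  minikube start --output=json ... | thisprogram
		sc := bufio.NewScanner(os.Stdin)
		for sc.Scan() {
			var ev minikubeEvent
			if err := json.Unmarshal(sc.Bytes(), &ev); err != nil {
				continue // ignore any non-JSON noise on the stream
			}
			if ev.Type == "io.k8s.sigs.minikube.error" {
				fmt.Printf("exit %s: %s\n", ev.Data["exitcode"], ev.Data["message"])
			}
		}
	}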

TestKicCustomNetwork/create_custom_network (38.89s)
=== RUN   TestKicCustomNetwork/create_custom_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-arm64 start -p docker-network-304516 --network=
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-arm64 start -p docker-network-304516 --network=: (36.785813088s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-304516" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p docker-network-304516
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p docker-network-304516: (2.084934664s)
--- PASS: TestKicCustomNetwork/create_custom_network (38.89s)

TestKicCustomNetwork/use_default_bridge_network (36.33s)
=== RUN   TestKicCustomNetwork/use_default_bridge_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-arm64 start -p docker-network-141151 --network=bridge
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-arm64 start -p docker-network-141151 --network=bridge: (34.302960596s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-141151" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p docker-network-141151
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p docker-network-141151: (1.990944463s)
--- PASS: TestKicCustomNetwork/use_default_bridge_network (36.33s)

TestKicExistingNetwork (33.71s)
=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
kic_custom_network_test.go:93: (dbg) Run:  out/minikube-linux-arm64 start -p existing-network-427170 --network=existing-network
kic_custom_network_test.go:93: (dbg) Done: out/minikube-linux-arm64 start -p existing-network-427170 --network=existing-network: (31.565886844s)
helpers_test.go:175: Cleaning up "existing-network-427170" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p existing-network-427170
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p existing-network-427170: (1.997195178s)
--- PASS: TestKicExistingNetwork (33.71s)

TestKicCustomSubnet (33.22s)
=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p custom-subnet-575292 --subnet=192.168.60.0/24
kic_custom_network_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p custom-subnet-575292 --subnet=192.168.60.0/24: (31.029836978s)
kic_custom_network_test.go:161: (dbg) Run:  docker network inspect custom-subnet-575292 --format "{{(index .IPAM.Config 0).Subnet}}"
helpers_test.go:175: Cleaning up "custom-subnet-575292" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p custom-subnet-575292
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p custom-subnet-575292: (2.166647514s)
--- PASS: TestKicCustomSubnet (33.22s)
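
Note: the assertion behind this test pairs `start --subnet` with the `docker network inspect --format "{{(index .IPAM.Config 0).Subnet}}"` read-back shown above. A small Go sketch of that read-back via os/exec (profile name and expected subnet are the values from this run):

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func main() {
		// Same read-back the test performs after `start --subnet=192.168.60.0/24`.
		out, err := exec.Command("docker", "network", "inspect", "custom-subnet-575292",
			"--format", "{{(index .IPAM.Config 0).Subnet}}").Output()
		if err != nil {
			fmt.Println("inspect failed:", err)
			return
		}
		if got := strings.TrimSpace(string(out)); got != "192.168.60.0/24" {
			fmt.Printf("unexpected subnet: %q\n", got)
		}
	}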

TestKicStaticIP (36.94s)
=== RUN   TestKicStaticIP
kic_custom_network_test.go:132: (dbg) Run:  out/minikube-linux-arm64 start -p static-ip-605380 --static-ip=192.168.200.200
kic_custom_network_test.go:132: (dbg) Done: out/minikube-linux-arm64 start -p static-ip-605380 --static-ip=192.168.200.200: (34.629484023s)
kic_custom_network_test.go:138: (dbg) Run:  out/minikube-linux-arm64 -p static-ip-605380 ip
helpers_test.go:175: Cleaning up "static-ip-605380" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p static-ip-605380
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p static-ip-605380: (2.16614851s)
--- PASS: TestKicStaticIP (36.94s)

TestMainNoArgs (0.05s)
=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-linux-arm64
--- PASS: TestMainNoArgs (0.05s)

TestMinikubeProfile (67.58s)
=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-arm64 start -p first-825691 --driver=docker  --container-runtime=containerd
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-arm64 start -p first-825691 --driver=docker  --container-runtime=containerd: (31.513036925s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-arm64 start -p second-828662 --driver=docker  --container-runtime=containerd
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-arm64 start -p second-828662 --driver=docker  --container-runtime=containerd: (30.401238897s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-arm64 profile first-825691
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-arm64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-arm64 profile second-828662
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-arm64 profile list -ojson
helpers_test.go:175: Cleaning up "second-828662" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p second-828662
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p second-828662: (2.03404025s)
helpers_test.go:175: Cleaning up "first-825691" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p first-825691
E0915 07:03:49.039135 3198652 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19644-3193270/.minikube/profiles/functional-840758/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p first-825691: (2.320842606s)
--- PASS: TestMinikubeProfile (67.58s)

TestMountStart/serial/StartWithMountFirst (9s)
=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-1-821608 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=containerd
mount_start_test.go:98: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-1-821608 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=containerd: (7.997223213s)
--- PASS: TestMountStart/serial/StartWithMountFirst (9.00s)

TestMountStart/serial/VerifyMountFirst (0.25s)
=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-1-821608 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountFirst (0.25s)

TestMountStart/serial/StartWithMountSecond (8.95s)
=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-2-823520 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=containerd
mount_start_test.go:98: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-2-823520 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=containerd: (7.948129565s)
--- PASS: TestMountStart/serial/StartWithMountSecond (8.95s)

TestMountStart/serial/VerifyMountSecond (0.27s)
=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-823520 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountSecond (0.27s)

TestMountStart/serial/DeleteFirst (1.62s)
=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-arm64 delete -p mount-start-1-821608 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-arm64 delete -p mount-start-1-821608 --alsologtostderr -v=5: (1.614583259s)
--- PASS: TestMountStart/serial/DeleteFirst (1.62s)

TestMountStart/serial/VerifyMountPostDelete (0.26s)
=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-823520 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.26s)

TestMountStart/serial/Stop (1.21s)
=== RUN   TestMountStart/serial/Stop
mount_start_test.go:155: (dbg) Run:  out/minikube-linux-arm64 stop -p mount-start-2-823520
mount_start_test.go:155: (dbg) Done: out/minikube-linux-arm64 stop -p mount-start-2-823520: (1.205889568s)
--- PASS: TestMountStart/serial/Stop (1.21s)

TestMountStart/serial/RestartStopped (8.23s)
=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:166: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-2-823520
mount_start_test.go:166: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-2-823520: (7.232193882s)
--- PASS: TestMountStart/serial/RestartStopped (8.23s)

TestMountStart/serial/VerifyMountPostStop (0.26s)
=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-823520 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.26s)

TestMultiNode/serial/FreshStart2Nodes (96.51s)
=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-176990 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker  --container-runtime=containerd
E0915 07:04:32.968068 3198652 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19644-3193270/.minikube/profiles/addons-686490/client.crt: no such file or directory" logger="UnhandledError"
E0915 07:05:56.033903 3198652 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19644-3193270/.minikube/profiles/addons-686490/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:96: (dbg) Done: out/minikube-linux-arm64 start -p multinode-176990 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker  --container-runtime=containerd: (1m36.012839763s)
multinode_test.go:102: (dbg) Run:  out/minikube-linux-arm64 -p multinode-176990 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (96.51s)

TestMultiNode/serial/DeployApp2Nodes (16.26s)
=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-176990 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:498: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-176990 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Done: out/minikube-linux-arm64 kubectl -p multinode-176990 -- rollout status deployment/busybox: (14.17309315s)
multinode_test.go:505: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-176990 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:528: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-176990 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:536: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-176990 -- exec busybox-7dff88458-c4vmf -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-176990 -- exec busybox-7dff88458-ds64d -- nslookup kubernetes.io
multinode_test.go:546: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-176990 -- exec busybox-7dff88458-c4vmf -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-176990 -- exec busybox-7dff88458-ds64d -- nslookup kubernetes.default
multinode_test.go:554: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-176990 -- exec busybox-7dff88458-c4vmf -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-176990 -- exec busybox-7dff88458-ds64d -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (16.26s)

TestMultiNode/serial/PingHostFrom2Pods (1.03s)
=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-176990 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:572: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-176990 -- exec busybox-7dff88458-c4vmf -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-176990 -- exec busybox-7dff88458-c4vmf -- sh -c "ping -c 1 192.168.67.1"
multinode_test.go:572: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-176990 -- exec busybox-7dff88458-ds64d -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-176990 -- exec busybox-7dff88458-ds64d -- sh -c "ping -c 1 192.168.67.1"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (1.03s)

TestMultiNode/serial/AddNode (16.07s)
=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-linux-arm64 node add -p multinode-176990 -v 3 --alsologtostderr
multinode_test.go:121: (dbg) Done: out/minikube-linux-arm64 node add -p multinode-176990 -v 3 --alsologtostderr: (15.379463271s)
multinode_test.go:127: (dbg) Run:  out/minikube-linux-arm64 -p multinode-176990 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (16.07s)

TestMultiNode/serial/MultiNodeLabels (0.09s)
=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-176990 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.09s)

TestMultiNode/serial/ProfileList (0.35s)
=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.35s)

TestMultiNode/serial/CopyFile (10.16s)
=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-linux-arm64 -p multinode-176990 status --output json --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-176990 cp testdata/cp-test.txt multinode-176990:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-176990 ssh -n multinode-176990 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-176990 cp multinode-176990:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile2996503126/001/cp-test_multinode-176990.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-176990 ssh -n multinode-176990 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-176990 cp multinode-176990:/home/docker/cp-test.txt multinode-176990-m02:/home/docker/cp-test_multinode-176990_multinode-176990-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-176990 ssh -n multinode-176990 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-176990 ssh -n multinode-176990-m02 "sudo cat /home/docker/cp-test_multinode-176990_multinode-176990-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-176990 cp multinode-176990:/home/docker/cp-test.txt multinode-176990-m03:/home/docker/cp-test_multinode-176990_multinode-176990-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-176990 ssh -n multinode-176990 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-176990 ssh -n multinode-176990-m03 "sudo cat /home/docker/cp-test_multinode-176990_multinode-176990-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-176990 cp testdata/cp-test.txt multinode-176990-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-176990 ssh -n multinode-176990-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-176990 cp multinode-176990-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile2996503126/001/cp-test_multinode-176990-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-176990 ssh -n multinode-176990-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-176990 cp multinode-176990-m02:/home/docker/cp-test.txt multinode-176990:/home/docker/cp-test_multinode-176990-m02_multinode-176990.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-176990 ssh -n multinode-176990-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-176990 ssh -n multinode-176990 "sudo cat /home/docker/cp-test_multinode-176990-m02_multinode-176990.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-176990 cp multinode-176990-m02:/home/docker/cp-test.txt multinode-176990-m03:/home/docker/cp-test_multinode-176990-m02_multinode-176990-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-176990 ssh -n multinode-176990-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-176990 ssh -n multinode-176990-m03 "sudo cat /home/docker/cp-test_multinode-176990-m02_multinode-176990-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-176990 cp testdata/cp-test.txt multinode-176990-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-176990 ssh -n multinode-176990-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-176990 cp multinode-176990-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile2996503126/001/cp-test_multinode-176990-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-176990 ssh -n multinode-176990-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-176990 cp multinode-176990-m03:/home/docker/cp-test.txt multinode-176990:/home/docker/cp-test_multinode-176990-m03_multinode-176990.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-176990 ssh -n multinode-176990-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-176990 ssh -n multinode-176990 "sudo cat /home/docker/cp-test_multinode-176990-m03_multinode-176990.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-176990 cp multinode-176990-m03:/home/docker/cp-test.txt multinode-176990-m02:/home/docker/cp-test_multinode-176990-m03_multinode-176990-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-176990 ssh -n multinode-176990-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-176990 ssh -n multinode-176990-m02 "sudo cat /home/docker/cp-test_multinode-176990-m03_multinode-176990-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (10.16s)
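
Note: every cp above is verified by ssh'ing into the target node and cat'ing the file back. A compressed Go sketch of one such round trip, mirroring the commands in the log (binary path, profile and node names are taken from this run; error handling is simplified):

	package main

	import (
		"bytes"
		"fmt"
		"os"
		"os/exec"
	)

	func main() {
		const bin, profile, node = "out/minikube-linux-arm64", "multinode-176990", "multinode-176990-m02"
		want, err := os.ReadFile("testdata/cp-test.txt")
		if err != nil {
			fmt.Println(err)
			return
		}
		// minikube -p <profile> cp testdata/cp-test.txt <node>:/home/docker/cp-test.txt
		if err := exec.Command(bin, "-p", profile, "cp",
			"testdata/cp-test.txt", node+":/home/docker/cp-test.txt").Run(); err != nil {
			fmt.Println("cp failed:", err)
			return
		}
		// minikube -p <profile> ssh -n <node> "sudo cat /home/docker/cp-test.txt"
		got, err := exec.Command(bin, "-p", profile, "ssh",
			"-n", node, "sudo cat /home/docker/cp-test.txt").Output()
		if err != nil {
			fmt.Println("ssh failed:", err)
			return
		}
		if !bytes.Equal(bytes.TrimSpace(got), bytes.TrimSpace(want)) {
			fmt.Println("content mismatch after copy")
		}
	}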

TestMultiNode/serial/StopNode (2.29s)
=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-linux-arm64 -p multinode-176990 node stop m03
multinode_test.go:248: (dbg) Done: out/minikube-linux-arm64 -p multinode-176990 node stop m03: (1.237495773s)
multinode_test.go:254: (dbg) Run:  out/minikube-linux-arm64 -p multinode-176990 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-176990 status: exit status 7 (539.60298ms)
-- stdout --
	multinode-176990
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-176990-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-176990-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	
-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-linux-arm64 -p multinode-176990 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-176990 status --alsologtostderr: exit status 7 (512.70321ms)
-- stdout --
	multinode-176990
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-176990-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-176990-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	
-- /stdout --
** stderr ** 
	I0915 07:06:43.799908 3319907 out.go:345] Setting OutFile to fd 1 ...
	I0915 07:06:43.800071 3319907 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0915 07:06:43.800082 3319907 out.go:358] Setting ErrFile to fd 2...
	I0915 07:06:43.800088 3319907 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0915 07:06:43.800319 3319907 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19644-3193270/.minikube/bin
	I0915 07:06:43.800503 3319907 out.go:352] Setting JSON to false
	I0915 07:06:43.800536 3319907 mustload.go:65] Loading cluster: multinode-176990
	I0915 07:06:43.800636 3319907 notify.go:220] Checking for updates...
	I0915 07:06:43.800976 3319907 config.go:182] Loaded profile config "multinode-176990": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
	I0915 07:06:43.800990 3319907 status.go:255] checking status of multinode-176990 ...
	I0915 07:06:43.801846 3319907 cli_runner.go:164] Run: docker container inspect multinode-176990 --format={{.State.Status}}
	I0915 07:06:43.821495 3319907 status.go:330] multinode-176990 host status = "Running" (err=<nil>)
	I0915 07:06:43.821524 3319907 host.go:66] Checking if "multinode-176990" exists ...
	I0915 07:06:43.821868 3319907 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-176990
	I0915 07:06:43.843944 3319907 host.go:66] Checking if "multinode-176990" exists ...
	I0915 07:06:43.844268 3319907 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0915 07:06:43.844325 3319907 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-176990
	I0915 07:06:43.862620 3319907 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:36017 SSHKeyPath:/home/jenkins/minikube-integration/19644-3193270/.minikube/machines/multinode-176990/id_rsa Username:docker}
	I0915 07:06:43.960243 3319907 ssh_runner.go:195] Run: systemctl --version
	I0915 07:06:43.965205 3319907 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0915 07:06:43.977342 3319907 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0915 07:06:44.036096 3319907 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:41 OomKillDisable:true NGoroutines:61 SystemTime:2024-09-15 07:06:44.025802146 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1069-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:27.2.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.2]] Warnings:<nil>}}
	I0915 07:06:44.036793 3319907 kubeconfig.go:125] found "multinode-176990" server: "https://192.168.67.2:8443"
	I0915 07:06:44.036831 3319907 api_server.go:166] Checking apiserver status ...
	I0915 07:06:44.036880 3319907 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0915 07:06:44.048954 3319907 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1367/cgroup
	I0915 07:06:44.059323 3319907 api_server.go:182] apiserver freezer: "10:freezer:/docker/be155320a2269ae68f510e01fd58e4e04e779b215211feed211412a3e7b52e2a/kubepods/burstable/pod382919805449396e60db82a4a495c2f6/86c924218768bcda03d667a0010291d213f8b4333a47486d5cba5cb7afb5cedd"
	I0915 07:06:44.059405 3319907 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/be155320a2269ae68f510e01fd58e4e04e779b215211feed211412a3e7b52e2a/kubepods/burstable/pod382919805449396e60db82a4a495c2f6/86c924218768bcda03d667a0010291d213f8b4333a47486d5cba5cb7afb5cedd/freezer.state
	I0915 07:06:44.068421 3319907 api_server.go:204] freezer state: "THAWED"
	I0915 07:06:44.068452 3319907 api_server.go:253] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
	I0915 07:06:44.076430 3319907 api_server.go:279] https://192.168.67.2:8443/healthz returned 200:
	ok
	I0915 07:06:44.076465 3319907 status.go:422] multinode-176990 apiserver status = Running (err=<nil>)
	I0915 07:06:44.076477 3319907 status.go:257] multinode-176990 status: &{Name:multinode-176990 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0915 07:06:44.076523 3319907 status.go:255] checking status of multinode-176990-m02 ...
	I0915 07:06:44.076863 3319907 cli_runner.go:164] Run: docker container inspect multinode-176990-m02 --format={{.State.Status}}
	I0915 07:06:44.095878 3319907 status.go:330] multinode-176990-m02 host status = "Running" (err=<nil>)
	I0915 07:06:44.095916 3319907 host.go:66] Checking if "multinode-176990-m02" exists ...
	I0915 07:06:44.096237 3319907 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-176990-m02
	I0915 07:06:44.113198 3319907 host.go:66] Checking if "multinode-176990-m02" exists ...
	I0915 07:06:44.113512 3319907 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0915 07:06:44.113558 3319907 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-176990-m02
	I0915 07:06:44.131105 3319907 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:36022 SSHKeyPath:/home/jenkins/minikube-integration/19644-3193270/.minikube/machines/multinode-176990-m02/id_rsa Username:docker}
	I0915 07:06:44.224648 3319907 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0915 07:06:44.237736 3319907 status.go:257] multinode-176990-m02 status: &{Name:multinode-176990-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I0915 07:06:44.237769 3319907 status.go:255] checking status of multinode-176990-m03 ...
	I0915 07:06:44.238090 3319907 cli_runner.go:164] Run: docker container inspect multinode-176990-m03 --format={{.State.Status}}
	I0915 07:06:44.258981 3319907 status.go:330] multinode-176990-m03 host status = "Stopped" (err=<nil>)
	I0915 07:06:44.259054 3319907 status.go:343] host is not running, skipping remaining checks
	I0915 07:06:44.259062 3319907 status.go:257] multinode-176990-m03 status: &{Name:multinode-176990-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}
** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.29s)
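
Note: the trailing status lines in the stderr block show minikube's per-node status struct (Name, Host, Kubelet, APIServer, Kubeconfig, Worker). A hedged Go sketch of reading the same fields from `status --output json` (as CopyFile does above), assuming the JSON keys match the struct field names and that multi-node profiles emit an array:

	package main

	import (
		"encoding/json"
		"fmt"
		"os/exec"
	)

	// nodeStatus carries the field set shown in the status dump above; the
	// JSON key names are assumed to equal these field names.
	type nodeStatus struct {
		Name       string
		Host       string // "Running" or "Stopped"
		Kubelet    string
		APIServer  string
		Kubeconfig string
		Worker     bool
	}

	func main() {
		// status exits non-zero (7 above) when any node is down but still
		// prints the body, so decode before acting on the error.
		out, _ := exec.Command("out/minikube-linux-arm64", "-p", "multinode-176990",
			"status", "--output", "json").Output()
		var nodes []nodeStatus
		if json.Unmarshal(out, &nodes) != nil {
			var one nodeStatus // single-node profiles emit one object (assumption)
			if err := json.Unmarshal(out, &one); err != nil {
				fmt.Println("decode failed:", err)
				return
			}
			nodes = append(nodes, one)
		}
		for _, n := range nodes {
			fmt.Printf("%s: host=%s kubelet=%s apiserver=%s\n", n.Name, n.Host, n.Kubelet, n.APIServer)
		}
	}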

TestMultiNode/serial/StartAfterStop (10.23s)
=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-linux-arm64 -p multinode-176990 node start m03 -v=7 --alsologtostderr
multinode_test.go:282: (dbg) Done: out/minikube-linux-arm64 -p multinode-176990 node start m03 -v=7 --alsologtostderr: (9.450998793s)
multinode_test.go:290: (dbg) Run:  out/minikube-linux-arm64 -p multinode-176990 status -v=7 --alsologtostderr
multinode_test.go:306: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (10.23s)

TestMultiNode/serial/RestartKeepsNodes (103.03s)
=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-176990
multinode_test.go:321: (dbg) Run:  out/minikube-linux-arm64 stop -p multinode-176990
multinode_test.go:321: (dbg) Done: out/minikube-linux-arm64 stop -p multinode-176990: (24.992924686s)
multinode_test.go:326: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-176990 --wait=true -v=8 --alsologtostderr
multinode_test.go:326: (dbg) Done: out/minikube-linux-arm64 start -p multinode-176990 --wait=true -v=8 --alsologtostderr: (1m17.890237204s)
multinode_test.go:331: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-176990
--- PASS: TestMultiNode/serial/RestartKeepsNodes (103.03s)

TestMultiNode/serial/DeleteNode (5.58s)
=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-linux-arm64 -p multinode-176990 node delete m03
multinode_test.go:416: (dbg) Done: out/minikube-linux-arm64 -p multinode-176990 node delete m03: (4.904069264s)
multinode_test.go:422: (dbg) Run:  out/minikube-linux-arm64 -p multinode-176990 status --alsologtostderr
multinode_test.go:436: (dbg) Run:  kubectl get nodes
multinode_test.go:444: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (5.58s)

TestMultiNode/serial/StopMultiNode (24.02s)
=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-linux-arm64 -p multinode-176990 stop
E0915 07:08:49.039623 3198652 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19644-3193270/.minikube/profiles/functional-840758/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:345: (dbg) Done: out/minikube-linux-arm64 -p multinode-176990 stop: (23.818505486s)
multinode_test.go:351: (dbg) Run:  out/minikube-linux-arm64 -p multinode-176990 status
multinode_test.go:351: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-176990 status: exit status 7 (103.896471ms)
-- stdout --
	multinode-176990
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-176990-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	
-- /stdout --
multinode_test.go:358: (dbg) Run:  out/minikube-linux-arm64 -p multinode-176990 status --alsologtostderr
multinode_test.go:358: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-176990 status --alsologtostderr: exit status 7 (100.055758ms)
-- stdout --
	multinode-176990
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-176990-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	
-- /stdout --
** stderr ** 
	I0915 07:09:07.083049 3328361 out.go:345] Setting OutFile to fd 1 ...
	I0915 07:09:07.083244 3328361 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0915 07:09:07.083258 3328361 out.go:358] Setting ErrFile to fd 2...
	I0915 07:09:07.083264 3328361 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0915 07:09:07.083529 3328361 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19644-3193270/.minikube/bin
	I0915 07:09:07.083745 3328361 out.go:352] Setting JSON to false
	I0915 07:09:07.083784 3328361 mustload.go:65] Loading cluster: multinode-176990
	I0915 07:09:07.083931 3328361 notify.go:220] Checking for updates...
	I0915 07:09:07.084270 3328361 config.go:182] Loaded profile config "multinode-176990": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
	I0915 07:09:07.084291 3328361 status.go:255] checking status of multinode-176990 ...
	I0915 07:09:07.085181 3328361 cli_runner.go:164] Run: docker container inspect multinode-176990 --format={{.State.Status}}
	I0915 07:09:07.103885 3328361 status.go:330] multinode-176990 host status = "Stopped" (err=<nil>)
	I0915 07:09:07.103905 3328361 status.go:343] host is not running, skipping remaining checks
	I0915 07:09:07.103913 3328361 status.go:257] multinode-176990 status: &{Name:multinode-176990 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0915 07:09:07.103952 3328361 status.go:255] checking status of multinode-176990-m02 ...
	I0915 07:09:07.104283 3328361 cli_runner.go:164] Run: docker container inspect multinode-176990-m02 --format={{.State.Status}}
	I0915 07:09:07.132049 3328361 status.go:330] multinode-176990-m02 host status = "Stopped" (err=<nil>)
	I0915 07:09:07.132070 3328361 status.go:343] host is not running, skipping remaining checks
	I0915 07:09:07.132077 3328361 status.go:257] multinode-176990-m02 status: &{Name:multinode-176990-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}
** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (24.02s)

TestMultiNode/serial/RestartMultiNode (57.03s)
=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-176990 --wait=true -v=8 --alsologtostderr --driver=docker  --container-runtime=containerd
E0915 07:09:32.967687 3198652 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19644-3193270/.minikube/profiles/addons-686490/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:376: (dbg) Done: out/minikube-linux-arm64 start -p multinode-176990 --wait=true -v=8 --alsologtostderr --driver=docker  --container-runtime=containerd: (56.354922744s)
multinode_test.go:382: (dbg) Run:  out/minikube-linux-arm64 -p multinode-176990 status --alsologtostderr
multinode_test.go:396: (dbg) Run:  kubectl get nodes
multinode_test.go:404: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (57.03s)

TestMultiNode/serial/ValidateNameConflict (34.52s)
=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-176990
multinode_test.go:464: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-176990-m02 --driver=docker  --container-runtime=containerd
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p multinode-176990-m02 --driver=docker  --container-runtime=containerd: exit status 14 (89.540621ms)
-- stdout --
	* [multinode-176990-m02] minikube v1.34.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=19644
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19644-3193270/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19644-3193270/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	
-- /stdout --
** stderr ** 
	! Profile name 'multinode-176990-m02' is duplicated with machine name 'multinode-176990-m02' in profile 'multinode-176990'
	X Exiting due to MK_USAGE: Profile name should be unique
** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-176990-m03 --driver=docker  --container-runtime=containerd
E0915 07:10:12.108361 3198652 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19644-3193270/.minikube/profiles/functional-840758/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:472: (dbg) Done: out/minikube-linux-arm64 start -p multinode-176990-m03 --driver=docker  --container-runtime=containerd: (32.077354377s)
multinode_test.go:479: (dbg) Run:  out/minikube-linux-arm64 node add -p multinode-176990
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-linux-arm64 node add -p multinode-176990: exit status 80 (334.287603ms)
-- stdout --
	* Adding node m03 to cluster multinode-176990 as [worker]
	
	
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-176990-m03 already exists in multinode-176990-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_2.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
multinode_test.go:484: (dbg) Run:  out/minikube-linux-arm64 delete -p multinode-176990-m03
multinode_test.go:484: (dbg) Done: out/minikube-linux-arm64 delete -p multinode-176990-m03: (1.964690119s)
--- PASS: TestMultiNode/serial/ValidateNameConflict (34.52s)
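
Note: the conflict this subtest exercises can be reproduced by hand with the same commands the test runs. A minimal sketch, assuming a hypothetical two-node profile named "demo" (minikube names its second machine "demo-m02"):

	$ minikube start -p demo --nodes=2 --driver=docker --container-runtime=containerd
	$ minikube start -p demo-m02 --driver=docker --container-runtime=containerd
	# exits 14 (MK_USAGE): the new profile name collides with machine "demo-m02" in profile "demo"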

TestPreload (112.26s)

=== RUN   TestPreload
preload_test.go:44: (dbg) Run:  out/minikube-linux-arm64 start -p test-preload-699245 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.24.4
preload_test.go:44: (dbg) Done: out/minikube-linux-arm64 start -p test-preload-699245 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.24.4: (1m15.448687034s)
preload_test.go:52: (dbg) Run:  out/minikube-linux-arm64 -p test-preload-699245 image pull gcr.io/k8s-minikube/busybox
preload_test.go:52: (dbg) Done: out/minikube-linux-arm64 -p test-preload-699245 image pull gcr.io/k8s-minikube/busybox: (2.004952473s)
preload_test.go:58: (dbg) Run:  out/minikube-linux-arm64 stop -p test-preload-699245
preload_test.go:58: (dbg) Done: out/minikube-linux-arm64 stop -p test-preload-699245: (12.296823743s)
preload_test.go:66: (dbg) Run:  out/minikube-linux-arm64 start -p test-preload-699245 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=containerd
preload_test.go:66: (dbg) Done: out/minikube-linux-arm64 start -p test-preload-699245 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=containerd: (19.690427087s)
preload_test.go:71: (dbg) Run:  out/minikube-linux-arm64 -p test-preload-699245 image list
helpers_test.go:175: Cleaning up "test-preload-699245" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p test-preload-699245
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p test-preload-699245: (2.521904391s)
--- PASS: TestPreload (112.26s)
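
Note: the preload check above can be replayed manually: start a non-preloaded cluster, pull an extra image, stop, restart, and confirm the image survived. A minimal sketch with a hypothetical profile "demo":

	$ minikube start -p demo --preload=false --kubernetes-version=v1.24.4 --driver=docker --container-runtime=containerd
	$ minikube -p demo image pull gcr.io/k8s-minikube/busybox
	$ minikube stop -p demo
	$ minikube start -p demo --driver=docker --container-runtime=containerd
	$ minikube -p demo image list    # busybox should still be listed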

TestScheduledStopUnix (110.7s)

=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-arm64 start -p scheduled-stop-678604 --memory=2048 --driver=docker  --container-runtime=containerd
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-arm64 start -p scheduled-stop-678604 --memory=2048 --driver=docker  --container-runtime=containerd: (34.312534575s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-678604 --schedule 5m
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-linux-arm64 status --format={{.TimeToStop}} -p scheduled-stop-678604 -n scheduled-stop-678604
scheduled_stop_test.go:169: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-678604 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-678604 --cancel-scheduled
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-678604 -n scheduled-stop-678604
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-arm64 status -p scheduled-stop-678604
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-678604 --schedule 15s
E0915 07:13:49.039134 3198652 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19644-3193270/.minikube/profiles/functional-840758/client.crt: no such file or directory" logger="UnhandledError"
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-arm64 status -p scheduled-stop-678604
scheduled_stop_test.go:205: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p scheduled-stop-678604: exit status 7 (65.381314ms)

-- stdout --
	scheduled-stop-678604
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-678604 -n scheduled-stop-678604
scheduled_stop_test.go:176: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-678604 -n scheduled-stop-678604: exit status 7 (70.653862ms)

-- stdout --
	Stopped

-- /stdout --
scheduled_stop_test.go:176: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-678604" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p scheduled-stop-678604
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p scheduled-stop-678604: (4.822444679s)
--- PASS: TestScheduledStopUnix (110.70s)
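
Note: the scheduled-stop flow above maps onto three commands: --schedule arms a delayed stop, --cancel-scheduled disarms it, and status exits 7 once the host has stopped. A minimal sketch with a hypothetical profile "demo":

	$ minikube stop -p demo --schedule 5m
	$ minikube status --format={{.TimeToStop}} -p demo    # pending stop is visible
	$ minikube stop -p demo --cancel-scheduled            # disarm
	$ minikube stop -p demo --schedule 15s                # re-arm; after it fires:
	$ minikube status -p demo                             # exit status 7, host: Stopped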

TestInsufficientStorage (13.7s)

=== RUN   TestInsufficientStorage
status_test.go:50: (dbg) Run:  out/minikube-linux-arm64 start -p insufficient-storage-482766 --memory=2048 --output=json --wait=true --driver=docker  --container-runtime=containerd
E0915 07:14:32.967471 3198652 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19644-3193270/.minikube/profiles/addons-686490/client.crt: no such file or directory" logger="UnhandledError"
status_test.go:50: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p insufficient-storage-482766 --memory=2048 --output=json --wait=true --driver=docker  --container-runtime=containerd: exit status 26 (11.2142563s)

-- stdout --
	{"specversion":"1.0","id":"fb1c1404-d612-4396-8940-b331aa413e46","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[insufficient-storage-482766] minikube v1.34.0 on Ubuntu 20.04 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"413b3cc3-5885-4301-a68c-f8fd3fbd7a00","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=19644"}}
	{"specversion":"1.0","id":"52cfa087-b72c-4828-a670-7333690966bc","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"df57e30e-6de2-4f43-87fc-c791b8719600","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/19644-3193270/kubeconfig"}}
	{"specversion":"1.0","id":"9974d8ee-a30e-430d-88fb-a4f8c88148a7","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/19644-3193270/.minikube"}}
	{"specversion":"1.0","id":"2e82da31-0baa-40e6-8207-846a9bf88c83","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-arm64"}}
	{"specversion":"1.0","id":"973368a2-d7ae-4e51-b850-8e2cdcb73b53","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"035ad9a7-4606-42a5-bc42-00efaa841651","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_STORAGE_CAPACITY=100"}}
	{"specversion":"1.0","id":"3e3cc971-60fd-4fd5-b191-e7464c5569e5","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_AVAILABLE_STORAGE=19"}}
	{"specversion":"1.0","id":"1005a624-10af-4c01-aeae-7e1e8a6c26cd","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the docker driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"636bf63d-5398-412e-bb0f-a33d8331239b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Using Docker driver with root privileges"}}
	{"specversion":"1.0","id":"200fcce3-48bb-42ed-8da8-6a7e69a5727d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting \"insufficient-storage-482766\" primary control-plane node in \"insufficient-storage-482766\" cluster","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"1fd0969e-03df-4716-912f-ef81ffe1983b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"5","message":"Pulling base image v0.0.45-1726358845-19644 ...","name":"Pulling Base Image","totalsteps":"19"}}
	{"specversion":"1.0","id":"d1c82221-28dc-4eb7-aec6-0335650627e3","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"8","message":"Creating docker container (CPUs=2, Memory=2048MB) ...","name":"Creating Container","totalsteps":"19"}}
	{"specversion":"1.0","id":"135dd79a-10b5-465c-9598-96448a000c4f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"Try one or more of the following to free up space on the device:\n\t\n\t\t\t1. Run \"docker system prune\" to remove unused Docker data (optionally with \"-a\")\n\t\t\t2. Increase the storage allocated to Docker for Desktop by clicking on:\n\t\t\t\tDocker icon \u003e Preferences \u003e Resources \u003e Disk Image Size\n\t\t\t3. Run \"minikube ssh -- docker system prune\" if using the Docker container runtime","exitcode":"26","issues":"https://github.com/kubernetes/minikube/issues/9024","message":"Docker is out of disk space! (/var is at 100% of capacity). You can pass '--force' to skip this check.","name":"RSRC_DOCKER_STORAGE","url":""}}

-- /stdout --
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p insufficient-storage-482766 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p insufficient-storage-482766 --output=json --layout=cluster: exit status 7 (289.617016ms)

-- stdout --
	{"Name":"insufficient-storage-482766","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","Step":"Creating Container","StepDetail":"Creating docker container (CPUs=2, Memory=2048MB) ...","BinaryVersion":"v1.34.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-482766","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

-- /stdout --
** stderr ** 
	E0915 07:14:37.093686 3346944 status.go:417] kubeconfig endpoint: get endpoint: "insufficient-storage-482766" does not appear in /home/jenkins/minikube-integration/19644-3193270/kubeconfig

** /stderr **
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p insufficient-storage-482766 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p insufficient-storage-482766 --output=json --layout=cluster: exit status 7 (283.865129ms)

-- stdout --
	{"Name":"insufficient-storage-482766","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","BinaryVersion":"v1.34.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-482766","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

-- /stdout --
** stderr ** 
	E0915 07:14:37.383049 3347006 status.go:417] kubeconfig endpoint: get endpoint: "insufficient-storage-482766" does not appear in /home/jenkins/minikube-integration/19644-3193270/kubeconfig
	E0915 07:14:37.393336 3347006 status.go:560] unable to read event log: stat: stat /home/jenkins/minikube-integration/19644-3193270/.minikube/profiles/insufficient-storage-482766/events.json: no such file or directory

** /stderr **
helpers_test.go:175: Cleaning up "insufficient-storage-482766" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p insufficient-storage-482766
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p insufficient-storage-482766: (1.908845031s)
--- PASS: TestInsufficientStorage (13.70s)
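
Note: judging by the MINIKUBE_TEST_STORAGE_CAPACITY=100 and MINIKUBE_TEST_AVAILABLE_STORAGE=19 values echoed in the JSON output, the harness simulates a nearly full /var rather than exhausting a real disk. A sketch of the same check, under that assumption and with a hypothetical profile "demo":

	$ MINIKUBE_TEST_STORAGE_CAPACITY=100 MINIKUBE_TEST_AVAILABLE_STORAGE=19 \
	    minikube start -p demo --memory=2048 --output=json --driver=docker --container-runtime=containerd
	$ echo $?    # 26 (RSRC_DOCKER_STORAGE)
	$ minikube status -p demo --output=json --layout=cluster    # exit 7, "StatusCode":507 InsufficientStorage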

TestRunningBinaryUpgrade (82.18s)

=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /tmp/minikube-v1.26.0.3918909067 start -p running-upgrade-629987 --memory=2200 --vm-driver=docker  --container-runtime=containerd
version_upgrade_test.go:120: (dbg) Done: /tmp/minikube-v1.26.0.3918909067 start -p running-upgrade-629987 --memory=2200 --vm-driver=docker  --container-runtime=containerd: (34.267171812s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-linux-arm64 start -p running-upgrade-629987 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:130: (dbg) Done: out/minikube-linux-arm64 start -p running-upgrade-629987 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (43.442008656s)
helpers_test.go:175: Cleaning up "running-upgrade-629987" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p running-upgrade-629987
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p running-upgrade-629987: (3.266870412s)
--- PASS: TestRunningBinaryUpgrade (82.18s)
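
Note: this test drives the upgrade with two binaries: a released v1.26.0 minikube creates the cluster, then the freshly built binary restarts the same still-running profile in place. A minimal sketch (binary path and profile name hypothetical):

	$ /tmp/minikube-v1.26.0 start -p demo --memory=2200 --vm-driver=docker --container-runtime=containerd
	$ out/minikube-linux-arm64 start -p demo --memory=2200 --driver=docker --container-runtime=containerd
	# the new binary adopts and upgrades the running cluster without a stop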

TestKubernetesUpgrade (353.59s)

=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-419312 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:222: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-419312 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (59.347924582s)
version_upgrade_test.go:227: (dbg) Run:  out/minikube-linux-arm64 stop -p kubernetes-upgrade-419312
version_upgrade_test.go:227: (dbg) Done: out/minikube-linux-arm64 stop -p kubernetes-upgrade-419312: (1.225328044s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-linux-arm64 -p kubernetes-upgrade-419312 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-linux-arm64 -p kubernetes-upgrade-419312 status --format={{.Host}}: exit status 7 (251.194179ms)

-- stdout --
	Stopped

-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
version_upgrade_test.go:243: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-419312 --memory=2200 --kubernetes-version=v1.31.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:243: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-419312 --memory=2200 --kubernetes-version=v1.31.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (4m42.133906654s)
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-419312 version --output=json
version_upgrade_test.go:267: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:269: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-419312 --memory=2200 --kubernetes-version=v1.20.0 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:269: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p kubernetes-upgrade-419312 --memory=2200 --kubernetes-version=v1.20.0 --driver=docker  --container-runtime=containerd: exit status 106 (229.192985ms)

-- stdout --
	* [kubernetes-upgrade-419312] minikube v1.34.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=19644
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19644-3193270/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19644-3193270/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.31.1 cluster to v1.20.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.20.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-419312
	    minikube start -p kubernetes-upgrade-419312 --kubernetes-version=v1.20.0
	    
	    2) Create a second cluster with Kubernetes 1.20.0, by running:
	    
	    minikube start -p kubernetes-upgrade-4193122 --kubernetes-version=v1.20.0
	    
	    3) Use the existing cluster at version Kubernetes 1.31.1, by running:
	    
	    minikube start -p kubernetes-upgrade-419312 --kubernetes-version=v1.31.1
	    

** /stderr **
version_upgrade_test.go:273: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:275: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-419312 --memory=2200 --kubernetes-version=v1.31.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:275: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-419312 --memory=2200 --kubernetes-version=v1.31.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (7.397454978s)
helpers_test.go:175: Cleaning up "kubernetes-upgrade-419312" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p kubernetes-upgrade-419312
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p kubernetes-upgrade-419312: (2.807355391s)
--- PASS: TestKubernetesUpgrade (353.59s)
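
Note: the upgrade path above is start at v1.20.0, stop, restart at v1.31.1, then confirm that a downgrade is refused. As a sketch with a hypothetical profile "demo":

	$ minikube start -p demo --kubernetes-version=v1.20.0 --driver=docker --container-runtime=containerd
	$ minikube stop -p demo
	$ minikube start -p demo --kubernetes-version=v1.31.1 --driver=docker --container-runtime=containerd
	$ minikube start -p demo --kubernetes-version=v1.20.0 --driver=docker --container-runtime=containerd
	# exits 106 (K8S_DOWNGRADE_UNSUPPORTED); delete and recreate to go backwards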

TestMissingContainerUpgrade (177.74s)

=== RUN   TestMissingContainerUpgrade
=== PAUSE TestMissingContainerUpgrade

=== CONT  TestMissingContainerUpgrade
version_upgrade_test.go:309: (dbg) Run:  /tmp/minikube-v1.26.0.2656589795 start -p missing-upgrade-430788 --memory=2200 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:309: (dbg) Done: /tmp/minikube-v1.26.0.2656589795 start -p missing-upgrade-430788 --memory=2200 --driver=docker  --container-runtime=containerd: (1m38.220264662s)
version_upgrade_test.go:318: (dbg) Run:  docker stop missing-upgrade-430788
version_upgrade_test.go:318: (dbg) Done: docker stop missing-upgrade-430788: (10.311644388s)
version_upgrade_test.go:323: (dbg) Run:  docker rm missing-upgrade-430788
version_upgrade_test.go:329: (dbg) Run:  out/minikube-linux-arm64 start -p missing-upgrade-430788 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
E0915 07:18:49.039484 3198652 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19644-3193270/.minikube/profiles/functional-840758/client.crt: no such file or directory" logger="UnhandledError"
version_upgrade_test.go:329: (dbg) Done: out/minikube-linux-arm64 start -p missing-upgrade-430788 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (1m5.014500743s)
helpers_test.go:175: Cleaning up "missing-upgrade-430788" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p missing-upgrade-430788
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p missing-upgrade-430788: (2.437123631s)
--- PASS: TestMissingContainerUpgrade (177.74s)

TestPause/serial/Start (61.96s)

=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-arm64 start -p pause-884526 --memory=2048 --install-addons=false --wait=all --driver=docker  --container-runtime=containerd
pause_test.go:80: (dbg) Done: out/minikube-linux-arm64 start -p pause-884526 --memory=2048 --install-addons=false --wait=all --driver=docker  --container-runtime=containerd: (1m1.957816895s)
--- PASS: TestPause/serial/Start (61.96s)

TestNoKubernetes/serial/StartNoK8sWithVersion (0.11s)

=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-235533 --no-kubernetes --kubernetes-version=1.20 --driver=docker  --container-runtime=containerd
no_kubernetes_test.go:83: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p NoKubernetes-235533 --no-kubernetes --kubernetes-version=1.20 --driver=docker  --container-runtime=containerd: exit status 14 (113.672137ms)

-- stdout --
	* [NoKubernetes-235533] minikube v1.34.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=19644
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19644-3193270/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19644-3193270/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.11s)
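
Note: the rejected flag combination and its suggested fix, as a short sketch with a hypothetical profile "demo":

	$ minikube start -p demo --no-kubernetes --kubernetes-version=1.20    # exits 14 (MK_USAGE)
	$ minikube config unset kubernetes-version    # clears a globally configured version
	$ minikube start -p demo --no-kubernetes      # valid: host only, no Kubernetes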

TestNoKubernetes/serial/StartWithK8s (41.44s)

=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-235533 --driver=docker  --container-runtime=containerd
no_kubernetes_test.go:95: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-235533 --driver=docker  --container-runtime=containerd: (40.91723172s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-arm64 -p NoKubernetes-235533 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (41.44s)

TestNoKubernetes/serial/StartWithStopK8s (17.72s)

=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-235533 --no-kubernetes --driver=docker  --container-runtime=containerd
no_kubernetes_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-235533 --no-kubernetes --driver=docker  --container-runtime=containerd: (15.432754102s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-arm64 -p NoKubernetes-235533 status -o json
no_kubernetes_test.go:200: (dbg) Non-zero exit: out/minikube-linux-arm64 -p NoKubernetes-235533 status -o json: exit status 2 (315.199126ms)

-- stdout --
	{"Name":"NoKubernetes-235533","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

-- /stdout --
no_kubernetes_test.go:124: (dbg) Run:  out/minikube-linux-arm64 delete -p NoKubernetes-235533
no_kubernetes_test.go:124: (dbg) Done: out/minikube-linux-arm64 delete -p NoKubernetes-235533: (1.968622807s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (17.72s)
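
Note: re-running start with --no-kubernetes on an existing profile keeps the host but stops the Kubernetes components, which is what the JSON status above shows. Sketch with a hypothetical profile "demo":

	$ minikube start -p demo --no-kubernetes --driver=docker --container-runtime=containerd
	$ minikube -p demo status -o json
	# exits 2; "Host":"Running","Kubelet":"Stopped","APIServer":"Stopped"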

TestNoKubernetes/serial/Start (9.09s)

=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:136: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-235533 --no-kubernetes --driver=docker  --container-runtime=containerd
no_kubernetes_test.go:136: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-235533 --no-kubernetes --driver=docker  --container-runtime=containerd: (9.088628886s)
--- PASS: TestNoKubernetes/serial/Start (9.09s)

TestPause/serial/SecondStartNoReconfiguration (7.02s)

=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-arm64 start -p pause-884526 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
pause_test.go:92: (dbg) Done: out/minikube-linux-arm64 start -p pause-884526 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (6.993817024s)
--- PASS: TestPause/serial/SecondStartNoReconfiguration (7.02s)

TestNoKubernetes/serial/VerifyK8sNotRunning (0.3s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-arm64 ssh -p NoKubernetes-235533 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-arm64 ssh -p NoKubernetes-235533 "sudo systemctl is-active --quiet service kubelet": exit status 1 (304.326896ms)

** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.30s)

TestNoKubernetes/serial/ProfileList (1.11s)

=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:169: (dbg) Run:  out/minikube-linux-arm64 profile list
no_kubernetes_test.go:179: (dbg) Run:  out/minikube-linux-arm64 profile list --output=json
--- PASS: TestNoKubernetes/serial/ProfileList (1.11s)

TestPause/serial/Pause (0.95s)

=== RUN   TestPause/serial/Pause
pause_test.go:110: (dbg) Run:  out/minikube-linux-arm64 pause -p pause-884526 --alsologtostderr -v=5
--- PASS: TestPause/serial/Pause (0.95s)

TestNoKubernetes/serial/Stop (1.29s)

=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:158: (dbg) Run:  out/minikube-linux-arm64 stop -p NoKubernetes-235533
no_kubernetes_test.go:158: (dbg) Done: out/minikube-linux-arm64 stop -p NoKubernetes-235533: (1.289217106s)
--- PASS: TestNoKubernetes/serial/Stop (1.29s)

TestPause/serial/VerifyStatus (0.67s)

=== RUN   TestPause/serial/VerifyStatus
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p pause-884526 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p pause-884526 --output=json --layout=cluster: exit status 2 (671.44771ms)

-- stdout --
	{"Name":"pause-884526","StatusCode":418,"StatusName":"Paused","Step":"Done","StepDetail":"* Paused 7 containers in: kube-system, kubernetes-dashboard, storage-gluster, istio-operator","BinaryVersion":"v1.34.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":200,"StatusName":"OK"}},"Nodes":[{"Name":"pause-884526","StatusCode":200,"StatusName":"OK","Components":{"apiserver":{"Name":"apiserver","StatusCode":418,"StatusName":"Paused"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

-- /stdout --
--- PASS: TestPause/serial/VerifyStatus (0.67s)
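
Note: for a paused cluster, status reports the HTTP-style code 418 ("Paused") and exits non-zero, which is what this subtest asserts. Sketch with a hypothetical profile "demo":

	$ minikube pause -p demo
	$ minikube status -p demo --output=json --layout=cluster
	# exits 2; JSON contains "StatusCode":418,"StatusName":"Paused"
	$ minikube unpause -p demo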

TestPause/serial/Unpause (0.76s)

=== RUN   TestPause/serial/Unpause
pause_test.go:121: (dbg) Run:  out/minikube-linux-arm64 unpause -p pause-884526 --alsologtostderr -v=5
--- PASS: TestPause/serial/Unpause (0.76s)

TestNoKubernetes/serial/StartNoArgs (7.12s)

=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:191: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-235533 --driver=docker  --container-runtime=containerd
no_kubernetes_test.go:191: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-235533 --driver=docker  --container-runtime=containerd: (7.119575525s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (7.12s)

TestPause/serial/PauseAgain (1.27s)

=== RUN   TestPause/serial/PauseAgain
pause_test.go:110: (dbg) Run:  out/minikube-linux-arm64 pause -p pause-884526 --alsologtostderr -v=5
pause_test.go:110: (dbg) Done: out/minikube-linux-arm64 pause -p pause-884526 --alsologtostderr -v=5: (1.27044845s)
--- PASS: TestPause/serial/PauseAgain (1.27s)

TestPause/serial/DeletePaused (2.65s)

=== RUN   TestPause/serial/DeletePaused
pause_test.go:132: (dbg) Run:  out/minikube-linux-arm64 delete -p pause-884526 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-arm64 delete -p pause-884526 --alsologtostderr -v=5: (2.65045426s)
--- PASS: TestPause/serial/DeletePaused (2.65s)

TestPause/serial/VerifyDeletedResources (0.42s)

=== RUN   TestPause/serial/VerifyDeletedResources
pause_test.go:142: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
pause_test.go:168: (dbg) Run:  docker ps -a
pause_test.go:173: (dbg) Run:  docker volume inspect pause-884526
pause_test.go:173: (dbg) Non-zero exit: docker volume inspect pause-884526: exit status 1 (17.321433ms)

-- stdout --
	[]

-- /stdout --
** stderr ** 
	Error response from daemon: get pause-884526: no such volume

** /stderr **
pause_test.go:178: (dbg) Run:  docker network ls
--- PASS: TestPause/serial/VerifyDeletedResources (0.42s)
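
Note: deletion cleanup can be verified the same way the test does, by asking Docker directly for the profile's volume and network after the delete. Sketch with a hypothetical profile "demo":

	$ minikube delete -p demo
	$ docker volume inspect demo    # exits 1: "no such volume" once cleanup succeeded
	$ docker network ls             # the "demo" network should be gone
	$ minikube profile list --output json    # profile no longer listed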

TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.26s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-arm64 ssh -p NoKubernetes-235533 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-arm64 ssh -p NoKubernetes-235533 "sudo systemctl is-active --quiet service kubelet": exit status 1 (259.715662ms)

** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.26s)

TestStoppedBinaryUpgrade/Setup (0.92s)

=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (0.92s)

TestStoppedBinaryUpgrade/Upgrade (77.57s)

=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /tmp/minikube-v1.26.0.392559405 start -p stopped-upgrade-360910 --memory=2200 --vm-driver=docker  --container-runtime=containerd
version_upgrade_test.go:183: (dbg) Done: /tmp/minikube-v1.26.0.392559405 start -p stopped-upgrade-360910 --memory=2200 --vm-driver=docker  --container-runtime=containerd: (37.697469551s)
version_upgrade_test.go:192: (dbg) Run:  /tmp/minikube-v1.26.0.392559405 -p stopped-upgrade-360910 stop
version_upgrade_test.go:192: (dbg) Done: /tmp/minikube-v1.26.0.392559405 -p stopped-upgrade-360910 stop: (1.260084771s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-linux-arm64 start -p stopped-upgrade-360910 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
E0915 07:19:32.967902 3198652 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19644-3193270/.minikube/profiles/addons-686490/client.crt: no such file or directory" logger="UnhandledError"
version_upgrade_test.go:198: (dbg) Done: out/minikube-linux-arm64 start -p stopped-upgrade-360910 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (38.608466904s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (77.57s)

TestStoppedBinaryUpgrade/MinikubeLogs (1.09s)

=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-linux-arm64 logs -p stopped-upgrade-360910
version_upgrade_test.go:206: (dbg) Done: out/minikube-linux-arm64 logs -p stopped-upgrade-360910: (1.085246042s)
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (1.09s)

TestNetworkPlugins/group/false (4.74s)

=== RUN   TestNetworkPlugins/group/false
net_test.go:246: (dbg) Run:  out/minikube-linux-arm64 start -p false-474433 --memory=2048 --alsologtostderr --cni=false --driver=docker  --container-runtime=containerd
net_test.go:246: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p false-474433 --memory=2048 --alsologtostderr --cni=false --driver=docker  --container-runtime=containerd: exit status 14 (235.100441ms)

-- stdout --
	* [false-474433] minikube v1.34.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=19644
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19644-3193270/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19644-3193270/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on user configuration
	
	

-- /stdout --
** stderr ** 
	I0915 07:21:57.682803 3387649 out.go:345] Setting OutFile to fd 1 ...
	I0915 07:21:57.683041 3387649 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0915 07:21:57.683072 3387649 out.go:358] Setting ErrFile to fd 2...
	I0915 07:21:57.683093 3387649 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0915 07:21:57.683354 3387649 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19644-3193270/.minikube/bin
	I0915 07:21:57.683811 3387649 out.go:352] Setting JSON to false
	I0915 07:21:57.684816 3387649 start.go:129] hostinfo: {"hostname":"ip-172-31-29-130","uptime":54269,"bootTime":1726330649,"procs":187,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1069-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"36adf542-ef4f-4e2d-a0c8-6868d1383ff9"}
	I0915 07:21:57.684918 3387649 start.go:139] virtualization:  
	I0915 07:21:57.690151 3387649 out.go:177] * [false-474433] minikube v1.34.0 on Ubuntu 20.04 (arm64)
	I0915 07:21:57.692940 3387649 out.go:177]   - MINIKUBE_LOCATION=19644
	I0915 07:21:57.693009 3387649 notify.go:220] Checking for updates...
	I0915 07:21:57.698646 3387649 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0915 07:21:57.701255 3387649 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19644-3193270/kubeconfig
	I0915 07:21:57.703778 3387649 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19644-3193270/.minikube
	I0915 07:21:57.706374 3387649 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0915 07:21:57.709076 3387649 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0915 07:21:57.712340 3387649 config.go:182] Loaded profile config "force-systemd-flag-909328": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
	I0915 07:21:57.712496 3387649 driver.go:394] Setting default libvirt URI to qemu:///system
	I0915 07:21:57.744390 3387649 docker.go:123] docker version: linux-27.2.1:Docker Engine - Community
	I0915 07:21:57.744772 3387649 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0915 07:21:57.834262 3387649 info.go:266] docker info: {ID:U5VK:ZNT5:35M3:FHLW:Q7TL:ELFX:BNAG:AV4T:UD2H:SK5L:SEJV:SJJL Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:37 OomKillDisable:true NGoroutines:53 SystemTime:2024-09-15 07:21:57.824636663 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1069-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-29-130 Labels:[] ExperimentalBuild:false ServerVersion:27.2.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.2]] Warnings:<nil>}}
	I0915 07:21:57.834368 3387649 docker.go:318] overlay module found
	I0915 07:21:57.837225 3387649 out.go:177] * Using the docker driver based on user configuration
	I0915 07:21:57.839715 3387649 start.go:297] selected driver: docker
	I0915 07:21:57.839732 3387649 start.go:901] validating driver "docker" against <nil>
	I0915 07:21:57.839759 3387649 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0915 07:21:57.843112 3387649 out.go:201] 
	W0915 07:21:57.845854 3387649 out.go:270] X Exiting due to MK_USAGE: The "containerd" container runtime requires CNI
	X Exiting due to MK_USAGE: The "containerd" container runtime requires CNI
	I0915 07:21:57.848418 3387649 out.go:201] 

** /stderr **
net_test.go:88: 
----------------------- debugLogs start: false-474433 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: false-474433

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: false-474433

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: false-474433

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: false-474433

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: false-474433

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: false-474433

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: false-474433

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: false-474433

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: false-474433

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: false-474433

>>> host: /etc/nsswitch.conf:
* Profile "false-474433" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-474433"

>>> host: /etc/hosts:
* Profile "false-474433" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-474433"

>>> host: /etc/resolv.conf:
* Profile "false-474433" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-474433"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: false-474433

>>> host: crictl pods:
* Profile "false-474433" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-474433"

>>> host: crictl containers:
* Profile "false-474433" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-474433"

>>> k8s: describe netcat deployment:
error: context "false-474433" does not exist

>>> k8s: describe netcat pod(s):
error: context "false-474433" does not exist

>>> k8s: netcat logs:
error: context "false-474433" does not exist

>>> k8s: describe coredns deployment:
error: context "false-474433" does not exist

>>> k8s: describe coredns pods:
error: context "false-474433" does not exist

>>> k8s: coredns logs:
error: context "false-474433" does not exist

>>> k8s: describe api server pod(s):
error: context "false-474433" does not exist

>>> k8s: api server logs:
error: context "false-474433" does not exist

>>> host: /etc/cni:
* Profile "false-474433" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-474433"

>>> host: ip a s:
* Profile "false-474433" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-474433"

>>> host: ip r s:
* Profile "false-474433" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-474433"

>>> host: iptables-save:
* Profile "false-474433" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-474433"

>>> host: iptables table nat:
* Profile "false-474433" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-474433"

>>> k8s: describe kube-proxy daemon set:
error: context "false-474433" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "false-474433" does not exist

>>> k8s: kube-proxy logs:
error: context "false-474433" does not exist

>>> host: kubelet daemon status:
* Profile "false-474433" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-474433"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "false-474433" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-474433"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "false-474433" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-474433"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "false-474433" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-474433"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "false-474433" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-474433"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: false-474433

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "false-474433" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-474433"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "false-474433" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-474433"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "false-474433" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-474433"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "false-474433" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-474433"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "false-474433" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-474433"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "false-474433" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-474433"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "false-474433" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-474433"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "false-474433" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-474433"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "false-474433" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-474433"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "false-474433" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-474433"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "false-474433" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-474433"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "false-474433" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-474433"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "false-474433" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-474433"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "false-474433" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-474433"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "false-474433" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-474433"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "false-474433" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-474433"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "false-474433" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-474433"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "false-474433" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-474433"

                                                
                                                
----------------------- debugLogs end: false-474433 [took: 4.317886622s] --------------------------------
helpers_test.go:175: Cleaning up "false-474433" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p false-474433
--- PASS: TestNetworkPlugins/group/false (4.74s)
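Note: every debug probe above reported that profile/context "false-474433" did not exist, which matches the errors shown rather than indicating a collection bug. For reference, a minimal sketch of the manual equivalents, using only commands the log itself suggests (the delete step mirrors the cleanup recorded above):

    # list all known profiles; a deleted or never-started profile will not appear
    out/minikube-linux-arm64 profile list
    # start a cluster for the profile, then remove it when finished
    out/minikube-linux-arm64 start -p false-474433
    out/minikube-linux-arm64 delete -p false-474433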

TestStartStop/group/old-k8s-version/serial/FirstStart (147.36s)

=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p old-k8s-version-290288 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.20.0
E0915 07:23:49.044276 3198652 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19644-3193270/.minikube/profiles/functional-840758/client.crt: no such file or directory" logger="UnhandledError"
E0915 07:24:32.967352 3198652 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19644-3193270/.minikube/profiles/addons-686490/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p old-k8s-version-290288 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.20.0: (2m27.358702288s)
--- PASS: TestStartStop/group/old-k8s-version/serial/FirstStart (147.36s)

TestStartStop/group/old-k8s-version/serial/DeployApp (9.97s)

=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-290288 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [7f0c5be7-5661-4972-9368-5ea1b0b20f0a] Pending
helpers_test.go:344: "busybox" [7f0c5be7-5661-4972-9368-5ea1b0b20f0a] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [7f0c5be7-5661-4972-9368-5ea1b0b20f0a] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: integration-test=busybox healthy within 9.004741199s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-290288 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/old-k8s-version/serial/DeployApp (9.97s)
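The DeployApp step above reduces to creating a pod from a manifest, waiting for it to become Ready, and probing it with exec. A rough shell sketch of that sequence, assuming the same context and manifest as the test (kubectl wait here stands in for the test's own polling loop):

    # create the busybox pod from the test's manifest
    kubectl --context old-k8s-version-290288 create -f testdata/busybox.yaml
    # wait for it to become Ready (the test allows up to 8m0s)
    kubectl --context old-k8s-version-290288 wait --for=condition=Ready pod/busybox --timeout=8m
    # probe the running container, as the test does
    kubectl --context old-k8s-version-290288 exec busybox -- /bin/sh -c "ulimit -n"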

TestStartStop/group/no-preload/serial/FirstStart (78.36s)

=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p no-preload-626408 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.31.1
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p no-preload-626408 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.31.1: (1m18.357378784s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (78.36s)

TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (1.33s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p old-k8s-version-290288 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p old-k8s-version-290288 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.11422798s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context old-k8s-version-290288 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (1.33s)

TestStartStop/group/old-k8s-version/serial/Stop (13.9s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p old-k8s-version-290288 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p old-k8s-version-290288 --alsologtostderr -v=3: (13.895628352s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (13.90s)

TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.21s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-290288 -n old-k8s-version-290288
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-290288 -n old-k8s-version-290288: exit status 7 (85.714742ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p old-k8s-version-290288 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.21s)
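Note the "exit status 7 (may be ok)" handling above: against a stopped profile, the status command prints "Stopped" but exits non-zero, so callers must not treat the non-zero exit as a hard failure. A small shell sketch of that pattern, reusing the exact command from the log:

    # a stopped profile makes this exit non-zero even though nothing is wrong
    if ! host_state=$(out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-290288); then
        echo "status exited non-zero; reported host state: ${host_state}"
    fi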

TestStartStop/group/old-k8s-version/serial/SecondStart (149.42s)

=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p old-k8s-version-290288 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.20.0
E0915 07:26:52.110064 3198652 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19644-3193270/.minikube/profiles/functional-840758/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-arm64 start -p old-k8s-version-290288 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.20.0: (2m29.053530113s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-290288 -n old-k8s-version-290288
--- PASS: TestStartStop/group/old-k8s-version/serial/SecondStart (149.42s)

TestStartStop/group/no-preload/serial/DeployApp (10.38s)

=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-626408 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [aba2c4ea-469c-4c1f-9c9b-bba5cb0d4a72] Pending
helpers_test.go:344: "busybox" [aba2c4ea-469c-4c1f-9c9b-bba5cb0d4a72] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [aba2c4ea-469c-4c1f-9c9b-bba5cb0d4a72] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 10.004320791s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-626408 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (10.38s)

TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.35s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p no-preload-626408 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p no-preload-626408 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.191674282s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context no-preload-626408 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.35s)

TestStartStop/group/no-preload/serial/Stop (12.11s)

=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p no-preload-626408 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p no-preload-626408 --alsologtostderr -v=3: (12.107182811s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (12.11s)

TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.24s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-626408 -n no-preload-626408
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-626408 -n no-preload-626408: exit status 7 (90.44129ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p no-preload-626408 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.24s)

TestStartStop/group/no-preload/serial/SecondStart (267.11s)

=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p no-preload-626408 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.31.1
E0915 07:28:49.039855 3198652 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19644-3193270/.minikube/profiles/functional-840758/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-arm64 start -p no-preload-626408 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.31.1: (4m26.700141715s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-626408 -n no-preload-626408
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (267.11s)

TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6.01s)

=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-cd95d586-nvb9t" [4fdd1334-38ed-4f3c-bba5-951dba9f35e5] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.005045938s
--- PASS: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6.01s)

TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.11s)

=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-cd95d586-nvb9t" [4fdd1334-38ed-4f3c-bba5-951dba9f35e5] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.003444711s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context old-k8s-version-290288 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.11s)

TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.26s)

=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 -p old-k8s-version-290288 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20240813-c6f155d6
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20240202-8f1494ea
--- PASS: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.26s)

TestStartStop/group/old-k8s-version/serial/Pause (3.01s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p old-k8s-version-290288 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-290288 -n old-k8s-version-290288
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-290288 -n old-k8s-version-290288: exit status 2 (360.172195ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p old-k8s-version-290288 -n old-k8s-version-290288
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p old-k8s-version-290288 -n old-k8s-version-290288: exit status 2 (322.908947ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p old-k8s-version-290288 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-290288 -n old-k8s-version-290288
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p old-k8s-version-290288 -n old-k8s-version-290288
--- PASS: TestStartStop/group/old-k8s-version/serial/Pause (3.01s)
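The Pause test above also documents the expected status codes around pause/unpause: while paused, the APIServer field prints "Paused" and the status command exits 2, the Kubelet field reports "Stopped", and both recover after unpause. The round-trip, using only commands taken from the log:

    out/minikube-linux-arm64 pause -p old-k8s-version-290288
    out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-290288   # prints "Paused", exits 2
    out/minikube-linux-arm64 unpause -p old-k8s-version-290288
    out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-290288   # exits 0 once running again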

TestStartStop/group/embed-certs/serial/FirstStart (80.42s)

=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p embed-certs-766484 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.31.1
E0915 07:29:32.967638 3198652 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19644-3193270/.minikube/profiles/addons-686490/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p embed-certs-766484 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.31.1: (1m20.424768525s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (80.42s)

TestStartStop/group/embed-certs/serial/DeployApp (8.35s)

=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-766484 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [d50cd262-98e6-460a-a489-220af391d3c2] Pending
helpers_test.go:344: "busybox" [d50cd262-98e6-460a-a489-220af391d3c2] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [d50cd262-98e6-460a-a489-220af391d3c2] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 8.003799588s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-766484 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (8.35s)

TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.26s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p embed-certs-766484 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p embed-certs-766484 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.123368221s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context embed-certs-766484 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.26s)

TestStartStop/group/embed-certs/serial/Stop (12.04s)

=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p embed-certs-766484 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p embed-certs-766484 --alsologtostderr -v=3: (12.040726274s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (12.04s)

TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.19s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-766484 -n embed-certs-766484
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-766484 -n embed-certs-766484: exit status 7 (72.942453ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p embed-certs-766484 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.19s)

TestStartStop/group/embed-certs/serial/SecondStart (288.46s)

=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p embed-certs-766484 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.31.1
E0915 07:30:55.852542 3198652 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19644-3193270/.minikube/profiles/old-k8s-version-290288/client.crt: no such file or directory" logger="UnhandledError"
E0915 07:30:55.858962 3198652 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19644-3193270/.minikube/profiles/old-k8s-version-290288/client.crt: no such file or directory" logger="UnhandledError"
E0915 07:30:55.870366 3198652 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19644-3193270/.minikube/profiles/old-k8s-version-290288/client.crt: no such file or directory" logger="UnhandledError"
E0915 07:30:55.891770 3198652 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19644-3193270/.minikube/profiles/old-k8s-version-290288/client.crt: no such file or directory" logger="UnhandledError"
E0915 07:30:55.933315 3198652 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19644-3193270/.minikube/profiles/old-k8s-version-290288/client.crt: no such file or directory" logger="UnhandledError"
E0915 07:30:56.014691 3198652 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19644-3193270/.minikube/profiles/old-k8s-version-290288/client.crt: no such file or directory" logger="UnhandledError"
E0915 07:30:56.176830 3198652 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19644-3193270/.minikube/profiles/old-k8s-version-290288/client.crt: no such file or directory" logger="UnhandledError"
E0915 07:30:56.499040 3198652 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19644-3193270/.minikube/profiles/old-k8s-version-290288/client.crt: no such file or directory" logger="UnhandledError"
E0915 07:30:57.140915 3198652 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19644-3193270/.minikube/profiles/old-k8s-version-290288/client.crt: no such file or directory" logger="UnhandledError"
E0915 07:30:58.422090 3198652 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19644-3193270/.minikube/profiles/old-k8s-version-290288/client.crt: no such file or directory" logger="UnhandledError"
E0915 07:31:00.984204 3198652 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19644-3193270/.minikube/profiles/old-k8s-version-290288/client.crt: no such file or directory" logger="UnhandledError"
E0915 07:31:06.106395 3198652 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19644-3193270/.minikube/profiles/old-k8s-version-290288/client.crt: no such file or directory" logger="UnhandledError"
E0915 07:31:16.348089 3198652 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19644-3193270/.minikube/profiles/old-k8s-version-290288/client.crt: no such file or directory" logger="UnhandledError"
E0915 07:31:36.829453 3198652 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19644-3193270/.minikube/profiles/old-k8s-version-290288/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-arm64 start -p embed-certs-766484 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.31.1: (4m48.110007324s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-766484 -n embed-certs-766484
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (288.46s)

TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6.01s)

=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-g2xm2" [d18e997e-5afa-41db-9150-c1afa4945e23] Running
E0915 07:32:17.790986 3198652 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19644-3193270/.minikube/profiles/old-k8s-version-290288/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.004324893s
--- PASS: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6.01s)

TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.11s)

=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-g2xm2" [d18e997e-5afa-41db-9150-c1afa4945e23] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.003574089s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context no-preload-626408 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.11s)

TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.26s)

=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 -p no-preload-626408 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20240813-c6f155d6
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.26s)

TestStartStop/group/no-preload/serial/Pause (3.16s)

=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p no-preload-626408 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-626408 -n no-preload-626408
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-626408 -n no-preload-626408: exit status 2 (339.951795ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p no-preload-626408 -n no-preload-626408
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p no-preload-626408 -n no-preload-626408: exit status 2 (356.727708ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p no-preload-626408 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-626408 -n no-preload-626408
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p no-preload-626408 -n no-preload-626408
--- PASS: TestStartStop/group/no-preload/serial/Pause (3.16s)

TestStartStop/group/default-k8s-diff-port/serial/FirstStart (49.73s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p default-k8s-diff-port-188469 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.31.1
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p default-k8s-diff-port-188469 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.31.1: (49.734718838s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (49.73s)

TestStartStop/group/default-k8s-diff-port/serial/DeployApp (8.35s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-188469 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [4d5a344f-a8a2-436d-a8d5-8882dde7512e] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [4d5a344f-a8a2-436d-a8d5-8882dde7512e] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 8.003550506s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-188469 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (8.35s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.2s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p default-k8s-diff-port-188469 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p default-k8s-diff-port-188469 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.080378114s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context default-k8s-diff-port-188469 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.20s)

TestStartStop/group/default-k8s-diff-port/serial/Stop (12.04s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p default-k8s-diff-port-188469 --alsologtostderr -v=3
E0915 07:33:39.712958 3198652 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19644-3193270/.minikube/profiles/old-k8s-version-290288/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p default-k8s-diff-port-188469 --alsologtostderr -v=3: (12.041670437s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (12.04s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.18s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-188469 -n default-k8s-diff-port-188469
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-188469 -n default-k8s-diff-port-188469: exit status 7 (67.843349ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p default-k8s-diff-port-188469 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.18s)

TestStartStop/group/default-k8s-diff-port/serial/SecondStart (267.72s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p default-k8s-diff-port-188469 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.31.1
E0915 07:33:49.039549 3198652 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19644-3193270/.minikube/profiles/functional-840758/client.crt: no such file or directory" logger="UnhandledError"
E0915 07:34:32.967496 3198652 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19644-3193270/.minikube/profiles/addons-686490/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-arm64 start -p default-k8s-diff-port-188469 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.31.1: (4m27.348264208s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-188469 -n default-k8s-diff-port-188469
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (267.72s)

TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6.01s)

=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-vm7nn" [b0cedecb-c01d-4478-816d-e0f978645069] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.004308279s
--- PASS: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6.01s)

TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.1s)

=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-vm7nn" [b0cedecb-c01d-4478-816d-e0f978645069] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.004107873s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context embed-certs-766484 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.10s)

TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.27s)

=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 -p embed-certs-766484 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20240813-c6f155d6
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.27s)

TestStartStop/group/embed-certs/serial/Pause (3.11s)

=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p embed-certs-766484 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-766484 -n embed-certs-766484
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-766484 -n embed-certs-766484: exit status 2 (321.548527ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p embed-certs-766484 -n embed-certs-766484
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p embed-certs-766484 -n embed-certs-766484: exit status 2 (321.526554ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p embed-certs-766484 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-766484 -n embed-certs-766484
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p embed-certs-766484 -n embed-certs-766484
--- PASS: TestStartStop/group/embed-certs/serial/Pause (3.11s)

TestStartStop/group/newest-cni/serial/FirstStart (40.12s)

=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p newest-cni-851920 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.31.1
E0915 07:35:55.850556 3198652 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19644-3193270/.minikube/profiles/old-k8s-version-290288/client.crt: no such file or directory" logger="UnhandledError"
E0915 07:36:23.554768 3198652 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19644-3193270/.minikube/profiles/old-k8s-version-290288/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p newest-cni-851920 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.31.1: (40.118317056s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (40.12s)

TestStartStop/group/newest-cni/serial/DeployApp (0s)

=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.19s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p newest-cni-851920 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p newest-cni-851920 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.19371857s)
start_stop_delete_test.go:211: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.19s)

TestStartStop/group/newest-cni/serial/Stop (1.24s)

=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p newest-cni-851920 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p newest-cni-851920 --alsologtostderr -v=3: (1.238043438s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (1.24s)

TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.2s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-851920 -n newest-cni-851920
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-851920 -n newest-cni-851920: exit status 7 (79.991267ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p newest-cni-851920 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.20s)

TestStartStop/group/newest-cni/serial/SecondStart (16s)

=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p newest-cni-851920 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.31.1
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-arm64 start -p newest-cni-851920 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.31.1: (15.554608765s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-851920 -n newest-cni-851920
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (16.00s)

TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:273: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:284: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.28s)

=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 -p newest-cni-851920 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20240813-c6f155d6
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.28s)

TestStartStop/group/newest-cni/serial/Pause (3.2s)

=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p newest-cni-851920 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-851920 -n newest-cni-851920
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-851920 -n newest-cni-851920: exit status 2 (341.505311ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p newest-cni-851920 -n newest-cni-851920
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p newest-cni-851920 -n newest-cni-851920: exit status 2 (360.319491ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p newest-cni-851920 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-851920 -n newest-cni-851920
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p newest-cni-851920 -n newest-cni-851920
--- PASS: TestStartStop/group/newest-cni/serial/Pause (3.20s)

TestNetworkPlugins/group/auto/Start (52.87s)

=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p auto-474433 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=containerd
E0915 07:37:21.347210 3198652 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19644-3193270/.minikube/profiles/no-preload-626408/client.crt: no such file or directory" logger="UnhandledError"
E0915 07:37:21.354882 3198652 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19644-3193270/.minikube/profiles/no-preload-626408/client.crt: no such file or directory" logger="UnhandledError"
E0915 07:37:21.366232 3198652 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19644-3193270/.minikube/profiles/no-preload-626408/client.crt: no such file or directory" logger="UnhandledError"
E0915 07:37:21.387607 3198652 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19644-3193270/.minikube/profiles/no-preload-626408/client.crt: no such file or directory" logger="UnhandledError"
E0915 07:37:21.428909 3198652 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19644-3193270/.minikube/profiles/no-preload-626408/client.crt: no such file or directory" logger="UnhandledError"
E0915 07:37:21.510322 3198652 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19644-3193270/.minikube/profiles/no-preload-626408/client.crt: no such file or directory" logger="UnhandledError"
E0915 07:37:21.671686 3198652 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19644-3193270/.minikube/profiles/no-preload-626408/client.crt: no such file or directory" logger="UnhandledError"
E0915 07:37:21.993305 3198652 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19644-3193270/.minikube/profiles/no-preload-626408/client.crt: no such file or directory" logger="UnhandledError"
E0915 07:37:22.635424 3198652 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19644-3193270/.minikube/profiles/no-preload-626408/client.crt: no such file or directory" logger="UnhandledError"
E0915 07:37:23.916742 3198652 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19644-3193270/.minikube/profiles/no-preload-626408/client.crt: no such file or directory" logger="UnhandledError"
E0915 07:37:26.478674 3198652 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19644-3193270/.minikube/profiles/no-preload-626408/client.crt: no such file or directory" logger="UnhandledError"
E0915 07:37:31.600653 3198652 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19644-3193270/.minikube/profiles/no-preload-626408/client.crt: no such file or directory" logger="UnhandledError"
E0915 07:37:41.842918 3198652 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19644-3193270/.minikube/profiles/no-preload-626408/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p auto-474433 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=containerd: (52.871889279s)
--- PASS: TestNetworkPlugins/group/auto/Start (52.87s)

TestNetworkPlugins/group/auto/KubeletFlags (0.29s)

=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p auto-474433 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.29s)

TestNetworkPlugins/group/auto/NetCatPod (10.28s)

=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-474433 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-bk8qp" [381a4344-deec-4f9f-969f-96308b674278] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-bk8qp" [381a4344-deec-4f9f-969f-96308b674278] Running
E0915 07:38:02.324409 3198652 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19644-3193270/.minikube/profiles/no-preload-626408/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 10.003602052s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (10.28s)

TestNetworkPlugins/group/auto/DNS (0.21s)

=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-474433 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.21s)

TestNetworkPlugins/group/auto/Localhost (0.17s)

=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-474433 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.17s)

TestNetworkPlugins/group/auto/HairPin (0.2s)

=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-474433 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.20s)

TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6.01s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-shg8k" [b62a5df5-7e36-477e-b664-f45c00f2f198] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.005141713s
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6.01s)

TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.15s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-shg8k" [b62a5df5-7e36-477e-b664-f45c00f2f198] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.004744571s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context default-k8s-diff-port-188469 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.15s)

TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.3s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 -p default-k8s-diff-port-188469 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20240813-c6f155d6
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.30s)

TestStartStop/group/default-k8s-diff-port/serial/Pause (4.37s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p default-k8s-diff-port-188469 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Done: out/minikube-linux-arm64 pause -p default-k8s-diff-port-188469 --alsologtostderr -v=1: (1.062391927s)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-188469 -n default-k8s-diff-port-188469
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-188469 -n default-k8s-diff-port-188469: exit status 2 (422.269797ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p default-k8s-diff-port-188469 -n default-k8s-diff-port-188469
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p default-k8s-diff-port-188469 -n default-k8s-diff-port-188469: exit status 2 (403.24507ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p default-k8s-diff-port-188469 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-188469 -n default-k8s-diff-port-188469
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p default-k8s-diff-port-188469 -n default-k8s-diff-port-188469
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Pause (4.37s)
E0915 07:43:25.085514 3198652 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19644-3193270/.minikube/profiles/default-k8s-diff-port-188469/client.crt: no such file or directory" logger="UnhandledError"
E0915 07:43:30.207123 3198652 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19644-3193270/.minikube/profiles/default-k8s-diff-port-188469/client.crt: no such file or directory" logger="UnhandledError"
E0915 07:43:32.111395 3198652 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19644-3193270/.minikube/profiles/functional-840758/client.crt: no such file or directory" logger="UnhandledError"
E0915 07:43:34.980970 3198652 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19644-3193270/.minikube/profiles/auto-474433/client.crt: no such file or directory" logger="UnhandledError"
E0915 07:43:40.448470 3198652 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19644-3193270/.minikube/profiles/default-k8s-diff-port-188469/client.crt: no such file or directory" logger="UnhandledError"
E0915 07:43:49.038942 3198652 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19644-3193270/.minikube/profiles/functional-840758/client.crt: no such file or directory" logger="UnhandledError"

TestNetworkPlugins/group/kindnet/Start (88.16s)

=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p kindnet-474433 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p kindnet-474433 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=containerd: (1m28.15512293s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (88.16s)

TestNetworkPlugins/group/calico/Start (69.06s)

=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p calico-474433 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=containerd
E0915 07:38:43.285905 3198652 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19644-3193270/.minikube/profiles/no-preload-626408/client.crt: no such file or directory" logger="UnhandledError"
E0915 07:38:49.039626 3198652 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19644-3193270/.minikube/profiles/functional-840758/client.crt: no such file or directory" logger="UnhandledError"
E0915 07:39:16.037790 3198652 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19644-3193270/.minikube/profiles/addons-686490/client.crt: no such file or directory" logger="UnhandledError"
E0915 07:39:32.967765 3198652 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19644-3193270/.minikube/profiles/addons-686490/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p calico-474433 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=containerd: (1m9.060210671s)
--- PASS: TestNetworkPlugins/group/calico/Start (69.06s)

TestNetworkPlugins/group/calico/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:344: "calico-node-wglnf" [8dc84631-69aa-4ab5-affa-deb4490a8620] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 6.003780268s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (6.01s)

TestNetworkPlugins/group/calico/KubeletFlags (0.29s)

=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p calico-474433 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.29s)

TestNetworkPlugins/group/calico/NetCatPod (10.26s)

=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context calico-474433 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-p7v4v" [acad55bb-7f60-4a77-8946-db95540ef9fd] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-p7v4v" [acad55bb-7f60-4a77-8946-db95540ef9fd] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 10.003920925s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (10.26s)

TestNetworkPlugins/group/calico/DNS (0.2s)

=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:175: (dbg) Run:  kubectl --context calico-474433 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.20s)

TestNetworkPlugins/group/calico/Localhost (0.16s)

=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:194: (dbg) Run:  kubectl --context calico-474433 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.16s)

TestNetworkPlugins/group/calico/HairPin (0.19s)

=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:264: (dbg) Run:  kubectl --context calico-474433 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.19s)

TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:344: "kindnet-dc6zj" [37fb3cc8-21dd-484d-b1b2-49b9ab817b19] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 6.003902768s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

TestNetworkPlugins/group/kindnet/KubeletFlags (0.4s)

=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p kindnet-474433 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.40s)

TestNetworkPlugins/group/kindnet/NetCatPod (10.49s)

=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-474433 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-75zt7" [2788d1cb-6421-4253-8ef7-796e7fcd25f6] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E0915 07:40:05.207606 3198652 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19644-3193270/.minikube/profiles/no-preload-626408/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:344: "netcat-6fc964789b-75zt7" [2788d1cb-6421-4253-8ef7-796e7fcd25f6] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 10.005257549s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (10.49s)

TestNetworkPlugins/group/kindnet/DNS (0.25s)

=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kindnet-474433 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.25s)

TestNetworkPlugins/group/kindnet/Localhost (0.21s)

=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kindnet-474433 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.21s)

TestNetworkPlugins/group/kindnet/HairPin (0.22s)

=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kindnet-474433 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.22s)

TestNetworkPlugins/group/custom-flannel/Start (59.09s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p custom-flannel-474433 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p custom-flannel-474433 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=containerd: (59.09188538s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (59.09s)

TestNetworkPlugins/group/enable-default-cni/Start (76.68s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p enable-default-cni-474433 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=containerd
E0915 07:40:55.850493 3198652 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19644-3193270/.minikube/profiles/old-k8s-version-290288/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p enable-default-cni-474433 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=containerd: (1m16.677677092s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (76.68s)

TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.34s)

=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p custom-flannel-474433 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.34s)

TestNetworkPlugins/group/custom-flannel/NetCatPod (10.38s)

=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-474433 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-nqsmp" [a6a36168-b195-46df-b3b4-0ba59deee1b2] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-nqsmp" [a6a36168-b195-46df-b3b4-0ba59deee1b2] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 10.004460873s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (10.38s)

TestNetworkPlugins/group/custom-flannel/DNS (0.18s)

=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-474433 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.18s)

TestNetworkPlugins/group/custom-flannel/Localhost (0.14s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-474433 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.14s)

TestNetworkPlugins/group/custom-flannel/HairPin (0.15s)

=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-474433 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.15s)

TestNetworkPlugins/group/flannel/Start (53.59s)

=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p flannel-474433 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p flannel-474433 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=containerd: (53.590821594s)
--- PASS: TestNetworkPlugins/group/flannel/Start (53.59s)

TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.34s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p enable-default-cni-474433 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.34s)

TestNetworkPlugins/group/enable-default-cni/NetCatPod (11.38s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-474433 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-6wgmp" [e563bd81-049a-46fc-8e97-ca30f0805c83] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-6wgmp" [e563bd81-049a-46fc-8e97-ca30f0805c83] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 11.004640813s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (11.38s)

TestNetworkPlugins/group/enable-default-cni/DNS (0.27s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-474433 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.27s)

TestNetworkPlugins/group/enable-default-cni/Localhost (0.26s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:194: (dbg) Run:  kubectl --context enable-default-cni-474433 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.26s)

TestNetworkPlugins/group/enable-default-cni/HairPin (0.23s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:264: (dbg) Run:  kubectl --context enable-default-cni-474433 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.23s)

TestNetworkPlugins/group/bridge/Start (78.67s)

=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p bridge-474433 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p bridge-474433 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=containerd: (1m18.67441236s)
--- PASS: TestNetworkPlugins/group/bridge/Start (78.67s)

TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:344: "kube-flannel-ds-pm4rg" [a4512113-8a53-4c3d-b130-9cb7de64f8c2] Running
E0915 07:42:49.050068 3198652 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19644-3193270/.minikube/profiles/no-preload-626408/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 6.004494106s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

TestNetworkPlugins/group/flannel/KubeletFlags (0.36s)

=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p flannel-474433 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.36s)

TestNetworkPlugins/group/flannel/NetCatPod (10.37s)

=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-474433 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-2m6kn" [087d1c17-4791-44f9-a198-621b54f8f89b] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E0915 07:42:54.001159 3198652 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19644-3193270/.minikube/profiles/auto-474433/client.crt: no such file or directory" logger="UnhandledError"
E0915 07:42:54.007963 3198652 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19644-3193270/.minikube/profiles/auto-474433/client.crt: no such file or directory" logger="UnhandledError"
E0915 07:42:54.019724 3198652 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19644-3193270/.minikube/profiles/auto-474433/client.crt: no such file or directory" logger="UnhandledError"
E0915 07:42:54.042559 3198652 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19644-3193270/.minikube/profiles/auto-474433/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:344: "netcat-6fc964789b-2m6kn" [087d1c17-4791-44f9-a198-621b54f8f89b] Running
E0915 07:42:54.084423 3198652 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19644-3193270/.minikube/profiles/auto-474433/client.crt: no such file or directory" logger="UnhandledError"
E0915 07:42:54.166291 3198652 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19644-3193270/.minikube/profiles/auto-474433/client.crt: no such file or directory" logger="UnhandledError"
E0915 07:42:54.327856 3198652 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19644-3193270/.minikube/profiles/auto-474433/client.crt: no such file or directory" logger="UnhandledError"
E0915 07:42:54.649339 3198652 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19644-3193270/.minikube/profiles/auto-474433/client.crt: no such file or directory" logger="UnhandledError"
E0915 07:42:55.291380 3198652 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19644-3193270/.minikube/profiles/auto-474433/client.crt: no such file or directory" logger="UnhandledError"
E0915 07:42:56.573066 3198652 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19644-3193270/.minikube/profiles/auto-474433/client.crt: no such file or directory" logger="UnhandledError"
E0915 07:42:59.134683 3198652 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19644-3193270/.minikube/profiles/auto-474433/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 10.005526525s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (10.37s)

TestNetworkPlugins/group/flannel/DNS (0.33s)

=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context flannel-474433 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.33s)

TestNetworkPlugins/group/flannel/Localhost (0.21s)

=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context flannel-474433 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.21s)

TestNetworkPlugins/group/flannel/HairPin (0.22s)

=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context flannel-474433 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.22s)

TestNetworkPlugins/group/bridge/KubeletFlags (0.27s)

=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p bridge-474433 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.27s)

TestNetworkPlugins/group/bridge/NetCatPod (10.27s)

=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context bridge-474433 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-ptkm7" [74961eb6-6527-4339-85d3-862f3ffe5226] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-ptkm7" [74961eb6-6527-4339-85d3-862f3ffe5226] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 10.011179664s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (10.27s)

TestNetworkPlugins/group/bridge/DNS (0.19s)

=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:175: (dbg) Run:  kubectl --context bridge-474433 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.19s)

TestNetworkPlugins/group/bridge/Localhost (0.18s)

=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:194: (dbg) Run:  kubectl --context bridge-474433 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.18s)

TestNetworkPlugins/group/bridge/HairPin (0.14s)

=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:264: (dbg) Run:  kubectl --context bridge-474433 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.14s)

Test skip (28/328)

TestDownloadOnly/v1.20.0/cached-images (0s)

=== RUN   TestDownloadOnly/v1.20.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.20.0/cached-images (0.00s)

TestDownloadOnly/v1.20.0/binaries (0s)

=== RUN   TestDownloadOnly/v1.20.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.20.0/binaries (0.00s)

TestDownloadOnly/v1.20.0/kubectl (0s)

=== RUN   TestDownloadOnly/v1.20.0/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.20.0/kubectl (0.00s)

TestDownloadOnly/v1.31.1/cached-images (0s)

=== RUN   TestDownloadOnly/v1.31.1/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.31.1/cached-images (0.00s)

TestDownloadOnly/v1.31.1/binaries (0s)

=== RUN   TestDownloadOnly/v1.31.1/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.31.1/binaries (0.00s)

TestDownloadOnly/v1.31.1/kubectl (0s)

=== RUN   TestDownloadOnly/v1.31.1/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.31.1/kubectl (0.00s)

TestDownloadOnlyKic (0.59s)

=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:232: (dbg) Run:  out/minikube-linux-arm64 start --download-only -p download-docker-343356 --alsologtostderr --driver=docker  --container-runtime=containerd
aaa_download_only_test.go:244: Skip for arm64 platform. See https://github.com/kubernetes/minikube/issues/10144
helpers_test.go:175: Cleaning up "download-docker-343356" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p download-docker-343356
--- SKIP: TestDownloadOnlyKic (0.59s)

TestOffline (0s)

=== RUN   TestOffline
=== PAUSE TestOffline

=== CONT  TestOffline
aab_offline_test.go:35: skipping TestOffline - only docker runtime supported on arm64. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestOffline (0.00s)

TestAddons/parallel/HelmTiller (0s)

=== RUN   TestAddons/parallel/HelmTiller
=== PAUSE TestAddons/parallel/HelmTiller

=== CONT  TestAddons/parallel/HelmTiller
addons_test.go:446: skip Helm test on arm64
--- SKIP: TestAddons/parallel/HelmTiller (0.00s)

TestAddons/parallel/Olm (0s)

=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

=== CONT  TestAddons/parallel/Olm
addons_test.go:500: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

TestDockerFlags (0s)

=== RUN   TestDockerFlags
docker_test.go:41: skipping: only runs with docker container runtime, currently testing containerd
--- SKIP: TestDockerFlags (0.00s)

TestKVMDriverInstallOrUpdate (0s)

=== RUN   TestKVMDriverInstallOrUpdate
driver_install_or_update_test.go:45: Skip if arm64. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestKVMDriverInstallOrUpdate (0.00s)

TestHyperKitDriverInstallOrUpdate (0s)

=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:105: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

TestHyperkitDriverSkipUpgrade (0s)

=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:169: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

TestFunctional/parallel/MySQL (0s)

=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1787: arm64 is not supported by mysql. Skip the test. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestFunctional/parallel/MySQL (0.00s)
TestFunctional/parallel/DockerEnv (0s)
=== RUN   TestFunctional/parallel/DockerEnv
=== PAUSE TestFunctional/parallel/DockerEnv
=== CONT  TestFunctional/parallel/DockerEnv
functional_test.go:463: only validate docker env with docker container runtime, currently testing containerd
--- SKIP: TestFunctional/parallel/DockerEnv (0.00s)
TestFunctional/parallel/PodmanEnv (0s)
=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv
=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:550: only validate podman env with docker container runtime, currently testing containerd
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.00s)
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.00s)
TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.00s)
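The three DNS-forwarding skips above combine two conditions, the host OS and the VM driver. A sketch of that compound gate, with a hypothetical -driver flag (illustrative only):

package example

import (
	"flag"
	"runtime"
	"testing"
)

// driver is a hypothetical flag naming the VM driver under test.
var driver = flag.String("driver", "docker", "VM driver under test")

// TestDNSForwardingSketch runs only on darwin with the hyperkit driver,
// matching the skip reason printed three times above.
func TestDNSForwardingSketch(t *testing.T) {
	if runtime.GOOS != "darwin" || *driver != "hyperkit" {
		t.Skip("DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding")
	}
}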
TestGvisorAddon (0s)
=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)
TestImageBuild (0s)
=== RUN   TestImageBuild
image_test.go:33: 
--- SKIP: TestImageBuild (0.00s)
TestChangeNoneUser (0s)
=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)
TestScheduledStopWindows (0s)
=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)
TestSkaffold (0s)
=== RUN   TestSkaffold
skaffold_test.go:45: skaffold requires docker-env, currently testing containerd container runtime
--- SKIP: TestSkaffold (0.00s)
TestStartStop/group/disable-driver-mounts (0.17s)
=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts
=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:103: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-364912" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p disable-driver-mounts-364912
--- SKIP: TestStartStop/group/disable-driver-mounts (0.17s)
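Even skipped tests in this group create a profile first, which is why a cleanup step runs above. A sketch of that cleanup using t.Cleanup and the binary path from this run's log (the helper shape is assumed, not minikube's actual helpers_test.go code):

package example

import (
	"os/exec"
	"testing"
)

// cleanupProfile registers deletion of a minikube profile once the
// test and its subtests finish, matching the "delete -p" call logged above.
func cleanupProfile(t *testing.T, profile string) {
	t.Cleanup(func() {
		out, err := exec.Command("out/minikube-linux-arm64", "delete", "-p", profile).CombinedOutput()
		if err != nil {
			t.Logf("failed to delete profile %q: %v\n%s", profile, err, out)
		}
	})
}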
TestNetworkPlugins/group/kubenet (4.61s)
=== RUN   TestNetworkPlugins/group/kubenet
net_test.go:93: Skipping the test as the containerd container runtime requires CNI
panic.go:629: 
----------------------- debugLogs start: kubenet-474433 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-474433

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: kubenet-474433

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-474433

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: kubenet-474433

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: kubenet-474433

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: kubenet-474433

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: kubenet-474433

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: kubenet-474433

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: kubenet-474433

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: kubenet-474433

>>> host: /etc/nsswitch.conf:
* Profile "kubenet-474433" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-474433"

>>> host: /etc/hosts:
* Profile "kubenet-474433" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-474433"

>>> host: /etc/resolv.conf:
* Profile "kubenet-474433" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-474433"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: kubenet-474433

>>> host: crictl pods:
* Profile "kubenet-474433" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-474433"

>>> host: crictl containers:
* Profile "kubenet-474433" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-474433"

>>> k8s: describe netcat deployment:
error: context "kubenet-474433" does not exist

>>> k8s: describe netcat pod(s):
error: context "kubenet-474433" does not exist

>>> k8s: netcat logs:
error: context "kubenet-474433" does not exist

>>> k8s: describe coredns deployment:
error: context "kubenet-474433" does not exist

>>> k8s: describe coredns pods:
error: context "kubenet-474433" does not exist

>>> k8s: coredns logs:
error: context "kubenet-474433" does not exist

>>> k8s: describe api server pod(s):
error: context "kubenet-474433" does not exist

>>> k8s: api server logs:
error: context "kubenet-474433" does not exist

>>> host: /etc/cni:
* Profile "kubenet-474433" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-474433"

>>> host: ip a s:
* Profile "kubenet-474433" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-474433"

>>> host: ip r s:
* Profile "kubenet-474433" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-474433"

>>> host: iptables-save:
* Profile "kubenet-474433" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-474433"

>>> host: iptables table nat:
* Profile "kubenet-474433" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-474433"

>>> k8s: describe kube-proxy daemon set:
error: context "kubenet-474433" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "kubenet-474433" does not exist

>>> k8s: kube-proxy logs:
error: context "kubenet-474433" does not exist

>>> host: kubelet daemon status:
* Profile "kubenet-474433" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-474433"

>>> host: kubelet daemon config:
* Profile "kubenet-474433" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-474433"

>>> k8s: kubelet logs:
* Profile "kubenet-474433" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-474433"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "kubenet-474433" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-474433"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "kubenet-474433" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-474433"

>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

>>> k8s: cms:
Error in configuration: context was not found for specified context: kubenet-474433

>>> host: docker daemon status:
* Profile "kubenet-474433" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-474433"

>>> host: docker daemon config:
* Profile "kubenet-474433" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-474433"

>>> host: /etc/docker/daemon.json:
* Profile "kubenet-474433" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-474433"

>>> host: docker system info:
* Profile "kubenet-474433" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-474433"

>>> host: cri-docker daemon status:
* Profile "kubenet-474433" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-474433"

>>> host: cri-docker daemon config:
* Profile "kubenet-474433" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-474433"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "kubenet-474433" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-474433"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "kubenet-474433" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-474433"

>>> host: cri-dockerd version:
* Profile "kubenet-474433" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-474433"

>>> host: containerd daemon status:
* Profile "kubenet-474433" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-474433"

>>> host: containerd daemon config:
* Profile "kubenet-474433" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-474433"

>>> host: /lib/systemd/system/containerd.service:
* Profile "kubenet-474433" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-474433"

>>> host: /etc/containerd/config.toml:
* Profile "kubenet-474433" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-474433"

>>> host: containerd config dump:
* Profile "kubenet-474433" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-474433"

>>> host: crio daemon status:
* Profile "kubenet-474433" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-474433"

>>> host: crio daemon config:
* Profile "kubenet-474433" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-474433"

>>> host: /etc/crio:
* Profile "kubenet-474433" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-474433"

>>> host: crio config:
* Profile "kubenet-474433" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-474433"

----------------------- debugLogs end: kubenet-474433 [took: 4.429032469s] --------------------------------
helpers_test.go:175: Cleaning up "kubenet-474433" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p kubenet-474433
--- SKIP: TestNetworkPlugins/group/kubenet (4.61s)
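Every probe in the debugLogs dump above fails the same way because the test was skipped before a cluster, and hence a kubeconfig context, was ever created. A sketch of a guard that would detect this up front, using a real kubectl invocation but an assumed helper shape (not minikube's actual debugLogs code):

package example

import (
	"os/exec"
	"strings"
)

// contextExists reports whether the named context appears in
// `kubectl config get-contexts -o name` — the condition every
// kubectl probe above failed on.
func contextExists(name string) bool {
	out, err := exec.Command("kubectl", "config", "get-contexts", "-o", "name").Output()
	if err != nil {
		return false
	}
	for _, c := range strings.Split(strings.TrimSpace(string(out)), "\n") {
		if c == name {
			return true
		}
	}
	return false
}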
TestNetworkPlugins/group/cilium (5.96s)
=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:629: 
----------------------- debugLogs start: cilium-474433 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-474433

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-474433

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-474433

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-474433

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-474433

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-474433

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-474433

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-474433

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-474433

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-474433

>>> host: /etc/nsswitch.conf:
* Profile "cilium-474433" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-474433"

>>> host: /etc/hosts:
* Profile "cilium-474433" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-474433"

>>> host: /etc/resolv.conf:
* Profile "cilium-474433" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-474433"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-474433

>>> host: crictl pods:
* Profile "cilium-474433" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-474433"

>>> host: crictl containers:
* Profile "cilium-474433" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-474433"

>>> k8s: describe netcat deployment:
error: context "cilium-474433" does not exist

>>> k8s: describe netcat pod(s):
error: context "cilium-474433" does not exist

>>> k8s: netcat logs:
error: context "cilium-474433" does not exist

>>> k8s: describe coredns deployment:
error: context "cilium-474433" does not exist

>>> k8s: describe coredns pods:
error: context "cilium-474433" does not exist

>>> k8s: coredns logs:
error: context "cilium-474433" does not exist

>>> k8s: describe api server pod(s):
error: context "cilium-474433" does not exist

>>> k8s: api server logs:
error: context "cilium-474433" does not exist

>>> host: /etc/cni:
* Profile "cilium-474433" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-474433"

>>> host: ip a s:
* Profile "cilium-474433" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-474433"

>>> host: ip r s:
* Profile "cilium-474433" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-474433"

>>> host: iptables-save:
* Profile "cilium-474433" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-474433"

>>> host: iptables table nat:
* Profile "cilium-474433" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-474433"

>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-474433

>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-474433

>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-474433" does not exist

>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-474433" does not exist

>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-474433

>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-474433

>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-474433" does not exist

>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-474433" does not exist

>>> k8s: describe kube-proxy daemon set:
error: context "cilium-474433" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "cilium-474433" does not exist

>>> k8s: kube-proxy logs:
error: context "cilium-474433" does not exist

>>> host: kubelet daemon status:
* Profile "cilium-474433" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-474433"

>>> host: kubelet daemon config:
* Profile "cilium-474433" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-474433"

>>> k8s: kubelet logs:
* Profile "cilium-474433" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-474433"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-474433" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-474433"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-474433" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-474433"

>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-474433

>>> host: docker daemon status:
* Profile "cilium-474433" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-474433"

>>> host: docker daemon config:
* Profile "cilium-474433" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-474433"

>>> host: /etc/docker/daemon.json:
* Profile "cilium-474433" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-474433"

>>> host: docker system info:
* Profile "cilium-474433" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-474433"

>>> host: cri-docker daemon status:
* Profile "cilium-474433" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-474433"

>>> host: cri-docker daemon config:
* Profile "cilium-474433" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-474433"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-474433" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-474433"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-474433" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-474433"

>>> host: cri-dockerd version:
* Profile "cilium-474433" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-474433"

>>> host: containerd daemon status:
* Profile "cilium-474433" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-474433"

>>> host: containerd daemon config:
* Profile "cilium-474433" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-474433"

>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-474433" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-474433"

>>> host: /etc/containerd/config.toml:
* Profile "cilium-474433" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-474433"

>>> host: containerd config dump:
* Profile "cilium-474433" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-474433"

>>> host: crio daemon status:
* Profile "cilium-474433" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-474433"

>>> host: crio daemon config:
* Profile "cilium-474433" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-474433"

>>> host: /etc/crio:
* Profile "cilium-474433" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-474433"

>>> host: crio config:
* Profile "cilium-474433" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-474433"

----------------------- debugLogs end: cilium-474433 [took: 5.730510508s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-474433" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p cilium-474433
--- SKIP: TestNetworkPlugins/group/cilium (5.96s)