Test Report: Docker_Linux_containerd_arm64 19662

3f64d3c641e64b460ff7a3cff080aebef74ca5ca:2024-09-17:36258
Test fail (1/328)

Order  Failed test                 Duration
29     TestAddons/serial/Volcano   200.24s

TestAddons/serial/Volcano (200.24s)

=== RUN   TestAddons/serial/Volcano
addons_test.go:905: volcano-admission stabilized in 60.430407ms
addons_test.go:913: volcano-controller stabilized in 61.713379ms
addons_test.go:897: volcano-scheduler stabilized in 62.043453ms
addons_test.go:919: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-scheduler" in namespace "volcano-system" ...
helpers_test.go:344: "volcano-scheduler-576bc46687-j2xtl" [40f03a98-3cba-4e17-8104-c4a7bde68000] Running
addons_test.go:919: (dbg) TestAddons/serial/Volcano: app=volcano-scheduler healthy within 6.003680445s
addons_test.go:923: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-admission" in namespace "volcano-system" ...
helpers_test.go:344: "volcano-admission-77d7d48b68-k4gxk" [f796d81e-d65a-4f09-b5b1-902ef3c32b29] Running
addons_test.go:923: (dbg) TestAddons/serial/Volcano: app=volcano-admission healthy within 5.004403302s
addons_test.go:927: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-controller" in namespace "volcano-system" ...
helpers_test.go:344: "volcano-controllers-56675bb4d5-df26f" [47e89786-255a-4de4-a116-7c29c4b9619f] Running
addons_test.go:927: (dbg) TestAddons/serial/Volcano: app=volcano-controller healthy within 5.018487536s
addons_test.go:932: (dbg) Run:  kubectl --context addons-029117 delete -n volcano-system job volcano-admission-init
addons_test.go:938: (dbg) Run:  kubectl --context addons-029117 create -f testdata/vcjob.yaml
addons_test.go:946: (dbg) Run:  kubectl --context addons-029117 get vcjob -n my-volcano
addons_test.go:964: (dbg) TestAddons/serial/Volcano: waiting 3m0s for pods matching "volcano.sh/job-name=test-job" in namespace "my-volcano" ...
helpers_test.go:344: "test-job-nginx-0" [2e7a8930-c5f8-451c-b609-4e6ad53fd698] Pending
helpers_test.go:344: "test-job-nginx-0" [2e7a8930-c5f8-451c-b609-4e6ad53fd698] Pending: PodScheduled:Unschedulable (0/1 nodes are unavailable: 1 Insufficient cpu.)
helpers_test.go:329: TestAddons/serial/Volcano: WARNING: pod list for "my-volcano" "volcano.sh/job-name=test-job" returned: client rate limiter Wait returned an error: context deadline exceeded
addons_test.go:964: ***** TestAddons/serial/Volcano: pod "volcano.sh/job-name=test-job" failed to start within 3m0s: context deadline exceeded ****
addons_test.go:964: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p addons-029117 -n addons-029117
addons_test.go:964: TestAddons/serial/Volcano: showing logs for failed pods as of 2024-09-17 17:49:00.880945795 +0000 UTC m=+442.598590202
addons_test.go:964: (dbg) Run:  kubectl --context addons-029117 describe po test-job-nginx-0 -n my-volcano
addons_test.go:964: (dbg) kubectl --context addons-029117 describe po test-job-nginx-0 -n my-volcano:
Name:             test-job-nginx-0
Namespace:        my-volcano
Priority:         0
Service Account:  default
Node:             <none>
Labels:           volcano.sh/job-name=test-job
volcano.sh/job-namespace=my-volcano
volcano.sh/queue-name=test
volcano.sh/task-index=0
volcano.sh/task-spec=nginx
Annotations:      scheduling.k8s.io/group-name: test-job-8a90dbf2-8847-464f-b605-5d1703195e4b
volcano.sh/job-name: test-job
volcano.sh/job-version: 0
volcano.sh/queue-name: test
volcano.sh/task-index: 0
volcano.sh/task-spec: nginx
volcano.sh/template-uid: test-job-nginx
Status:           Pending
IP:               
IPs:              <none>
Controlled By:    Job/test-job
Containers:
nginx:
Image:      nginx:latest
Port:       <none>
Host Port:  <none>
Command:
sleep
10m
Limits:
cpu:  1
Requests:
cpu:  1
Environment:
GOOGLE_APPLICATION_CREDENTIALS:  /google-app-creds.json
PROJECT_ID:                      this_is_fake
GCP_PROJECT:                     this_is_fake
GCLOUD_PROJECT:                  this_is_fake
GOOGLE_CLOUD_PROJECT:            this_is_fake
CLOUDSDK_CORE_PROJECT:           this_is_fake
Mounts:
/google-app-creds.json from gcp-creds (ro)
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-bl4f9 (ro)
Conditions:
Type           Status
PodScheduled   False 
Volumes:
kube-api-access-bl4f9:
Type:                    Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds:  3607
ConfigMapName:           kube-root-ca.crt
ConfigMapOptional:       <nil>
DownwardAPI:             true
gcp-creds:
Type:          HostPath (bare host directory volume)
Path:          /var/lib/minikube/google_application_credentials.json
HostPathType:  File
QoS Class:         Burstable
Node-Selectors:    <none>
Tolerations:       node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type     Reason            Age    From     Message
----     ------            ----   ----     -------
Warning  FailedScheduling  2m58s  volcano  0/1 nodes are unavailable: 1 Insufficient cpu.
addons_test.go:964: (dbg) Run:  kubectl --context addons-029117 logs test-job-nginx-0 -n my-volcano
addons_test.go:964: (dbg) kubectl --context addons-029117 logs test-job-nginx-0 -n my-volcano:
addons_test.go:965: failed waiting for test-local-path pod: volcano.sh/job-name=test-job within 3m0s: context deadline exceeded
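Editor's note: the Events section above pins the failure down: the test pod requests a full CPU (`cpu: 1` in both Requests and Limits) on a single minikube node that was created with only 2 CPUs (`"NanoCpus": 2000000000` in the docker inspect output below), so once the many enabled addons consume the rest, the scheduler reports `Insufficient cpu`. A hedged sketch of how one might confirm this; the commands assume the live cluster from this run (profile `addons-029117`) is still reachable and cannot be replayed from the log alone:

```shell
# Sketch only - requires the live cluster from this run, not runnable from the log.

# Compare what the node can allocate against what is already requested:
kubectl --context addons-029117 describe node addons-029117 \
  | grep -A 10 'Allocated resources'

# Per-pod CPU requests across all namespaces, to see what is consuming the 2 CPUs:
kubectl --context addons-029117 get pods -A \
  -o custom-columns='NS:.metadata.namespace,NAME:.metadata.name,CPU_REQ:.spec.containers[*].resources.requests.cpu'

# A plausible local reproduction fix: give the node more CPUs than the default 2
# (the addon flags from the Audit table below would also be needed):
minikube start -p addons-029117 --cpus=4 --memory=4000
```

Whether raising `--cpus` is the intended fix or the test's CPU request should shrink is a judgment call for the test owners; the log only establishes that the request exceeded the node's remaining allocatable CPU.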
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestAddons/serial/Volcano]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect addons-029117
helpers_test.go:235: (dbg) docker inspect addons-029117:

-- stdout --
	[
	    {
	        "Id": "d0f2450569f3c05d9547834310619cb881977a89679b2c6459b3d0c1ffbccfaa",
	        "Created": "2024-09-17T17:42:24.937349132Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 300500,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2024-09-17T17:42:25.119310002Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:f8be4f9f9351784955e36c0e64d55ad19451839d9f6d0c057285eb8f9072963b",
	        "ResolvConfPath": "/var/lib/docker/containers/d0f2450569f3c05d9547834310619cb881977a89679b2c6459b3d0c1ffbccfaa/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/d0f2450569f3c05d9547834310619cb881977a89679b2c6459b3d0c1ffbccfaa/hostname",
	        "HostsPath": "/var/lib/docker/containers/d0f2450569f3c05d9547834310619cb881977a89679b2c6459b3d0c1ffbccfaa/hosts",
	        "LogPath": "/var/lib/docker/containers/d0f2450569f3c05d9547834310619cb881977a89679b2c6459b3d0c1ffbccfaa/d0f2450569f3c05d9547834310619cb881977a89679b2c6459b3d0c1ffbccfaa-json.log",
	        "Name": "/addons-029117",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "addons-029117:/var",
	                "/lib/modules:/lib/modules:ro"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "addons-029117",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4194304000,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8388608000,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/26dc12362fecc740e7c016e06d1b9f9b5ca9f1d488dbbcc1477e996dcb26109c-init/diff:/var/lib/docker/overlay2/1643aa9f55c7da3087f90f47f8b4956b1002c891378d0c9a7d45bff5eec3d7f3/diff",
	                "MergedDir": "/var/lib/docker/overlay2/26dc12362fecc740e7c016e06d1b9f9b5ca9f1d488dbbcc1477e996dcb26109c/merged",
	                "UpperDir": "/var/lib/docker/overlay2/26dc12362fecc740e7c016e06d1b9f9b5ca9f1d488dbbcc1477e996dcb26109c/diff",
	                "WorkDir": "/var/lib/docker/overlay2/26dc12362fecc740e7c016e06d1b9f9b5ca9f1d488dbbcc1477e996dcb26109c/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "addons-029117",
	                "Source": "/var/lib/docker/volumes/addons-029117/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "addons-029117",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "addons-029117",
	                "name.minikube.sigs.k8s.io": "addons-029117",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "196b9ba115e4644680541ac5bbbda56a59ce68261800396469dcfc180ecc6aba",
	            "SandboxKey": "/var/run/docker/netns/196b9ba115e4",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33138"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33139"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33142"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33140"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33141"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "addons-029117": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "DriverOpts": null,
	                    "NetworkID": "3e6002993b425f88d9c0848cd15f58c79e29d63abecd3308618a91e3498eb082",
	                    "EndpointID": "67c4089d1b548528552e2e98e1fddc5e57bcb791eb884daa6acc76c788ca53c5",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "addons-029117",
	                        "d0f2450569f3"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p addons-029117 -n addons-029117
helpers_test.go:244: <<< TestAddons/serial/Volcano FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestAddons/serial/Volcano]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 -p addons-029117 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-arm64 -p addons-029117 logs -n 25: (1.610757341s)
helpers_test.go:252: TestAddons/serial/Volcano logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	| Command |                 Args                 |        Profile         |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only              | download-only-114122   | jenkins | v1.34.0 | 17 Sep 24 17:41 UTC |                     |
	|         | -p download-only-114122              |                        |         |         |                     |                     |
	|         | --force --alsologtostderr            |                        |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0         |                        |         |         |                     |                     |
	|         | --container-runtime=containerd       |                        |         |         |                     |                     |
	|         | --driver=docker                      |                        |         |         |                     |                     |
	|         | --container-runtime=containerd       |                        |         |         |                     |                     |
	| delete  | --all                                | minikube               | jenkins | v1.34.0 | 17 Sep 24 17:41 UTC | 17 Sep 24 17:41 UTC |
	| delete  | -p download-only-114122              | download-only-114122   | jenkins | v1.34.0 | 17 Sep 24 17:41 UTC | 17 Sep 24 17:41 UTC |
	| start   | -o=json --download-only              | download-only-798377   | jenkins | v1.34.0 | 17 Sep 24 17:41 UTC |                     |
	|         | -p download-only-798377              |                        |         |         |                     |                     |
	|         | --force --alsologtostderr            |                        |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1         |                        |         |         |                     |                     |
	|         | --container-runtime=containerd       |                        |         |         |                     |                     |
	|         | --driver=docker                      |                        |         |         |                     |                     |
	|         | --container-runtime=containerd       |                        |         |         |                     |                     |
	| delete  | --all                                | minikube               | jenkins | v1.34.0 | 17 Sep 24 17:41 UTC | 17 Sep 24 17:41 UTC |
	| delete  | -p download-only-798377              | download-only-798377   | jenkins | v1.34.0 | 17 Sep 24 17:41 UTC | 17 Sep 24 17:41 UTC |
	| delete  | -p download-only-114122              | download-only-114122   | jenkins | v1.34.0 | 17 Sep 24 17:41 UTC | 17 Sep 24 17:41 UTC |
	| delete  | -p download-only-798377              | download-only-798377   | jenkins | v1.34.0 | 17 Sep 24 17:41 UTC | 17 Sep 24 17:41 UTC |
	| start   | --download-only -p                   | download-docker-569909 | jenkins | v1.34.0 | 17 Sep 24 17:41 UTC |                     |
	|         | download-docker-569909               |                        |         |         |                     |                     |
	|         | --alsologtostderr                    |                        |         |         |                     |                     |
	|         | --driver=docker                      |                        |         |         |                     |                     |
	|         | --container-runtime=containerd       |                        |         |         |                     |                     |
	| delete  | -p download-docker-569909            | download-docker-569909 | jenkins | v1.34.0 | 17 Sep 24 17:41 UTC | 17 Sep 24 17:41 UTC |
	| start   | --download-only -p                   | binary-mirror-188320   | jenkins | v1.34.0 | 17 Sep 24 17:41 UTC |                     |
	|         | binary-mirror-188320                 |                        |         |         |                     |                     |
	|         | --alsologtostderr                    |                        |         |         |                     |                     |
	|         | --binary-mirror                      |                        |         |         |                     |                     |
	|         | http://127.0.0.1:34225               |                        |         |         |                     |                     |
	|         | --driver=docker                      |                        |         |         |                     |                     |
	|         | --container-runtime=containerd       |                        |         |         |                     |                     |
	| delete  | -p binary-mirror-188320              | binary-mirror-188320   | jenkins | v1.34.0 | 17 Sep 24 17:41 UTC | 17 Sep 24 17:42 UTC |
	| addons  | enable dashboard -p                  | addons-029117          | jenkins | v1.34.0 | 17 Sep 24 17:42 UTC |                     |
	|         | addons-029117                        |                        |         |         |                     |                     |
	| addons  | disable dashboard -p                 | addons-029117          | jenkins | v1.34.0 | 17 Sep 24 17:42 UTC |                     |
	|         | addons-029117                        |                        |         |         |                     |                     |
	| start   | -p addons-029117 --wait=true         | addons-029117          | jenkins | v1.34.0 | 17 Sep 24 17:42 UTC | 17 Sep 24 17:45 UTC |
	|         | --memory=4000 --alsologtostderr      |                        |         |         |                     |                     |
	|         | --addons=registry                    |                        |         |         |                     |                     |
	|         | --addons=metrics-server              |                        |         |         |                     |                     |
	|         | --addons=volumesnapshots             |                        |         |         |                     |                     |
	|         | --addons=csi-hostpath-driver         |                        |         |         |                     |                     |
	|         | --addons=gcp-auth                    |                        |         |         |                     |                     |
	|         | --addons=cloud-spanner               |                        |         |         |                     |                     |
	|         | --addons=inspektor-gadget            |                        |         |         |                     |                     |
	|         | --addons=storage-provisioner-rancher |                        |         |         |                     |                     |
	|         | --addons=nvidia-device-plugin        |                        |         |         |                     |                     |
	|         | --addons=yakd --addons=volcano       |                        |         |         |                     |                     |
	|         | --driver=docker                      |                        |         |         |                     |                     |
	|         | --container-runtime=containerd       |                        |         |         |                     |                     |
	|         | --addons=ingress                     |                        |         |         |                     |                     |
	|         | --addons=ingress-dns                 |                        |         |         |                     |                     |
	|---------|--------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/17 17:42:00
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.23.0 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0917 17:42:00.542664  300014 out.go:345] Setting OutFile to fd 1 ...
	I0917 17:42:00.542920  300014 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0917 17:42:00.542935  300014 out.go:358] Setting ErrFile to fd 2...
	I0917 17:42:00.542942  300014 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0917 17:42:00.543245  300014 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19662-293874/.minikube/bin
	I0917 17:42:00.544271  300014 out.go:352] Setting JSON to false
	I0917 17:42:00.545439  300014 start.go:129] hostinfo: {"hostname":"ip-172-31-31-251","uptime":5066,"bootTime":1726589854,"procs":167,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1070-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I0917 17:42:00.545527  300014 start.go:139] virtualization:  
	I0917 17:42:00.548992  300014 out.go:177] * [addons-029117] minikube v1.34.0 on Ubuntu 20.04 (arm64)
	I0917 17:42:00.552269  300014 out.go:177]   - MINIKUBE_LOCATION=19662
	I0917 17:42:00.552317  300014 notify.go:220] Checking for updates...
	I0917 17:42:00.556908  300014 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0917 17:42:00.559338  300014 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19662-293874/kubeconfig
	I0917 17:42:00.561641  300014 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19662-293874/.minikube
	I0917 17:42:00.564001  300014 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0917 17:42:00.566222  300014 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0917 17:42:00.569131  300014 driver.go:394] Setting default libvirt URI to qemu:///system
	I0917 17:42:00.599752  300014 docker.go:123] docker version: linux-27.2.1:Docker Engine - Community
	I0917 17:42:00.599879  300014 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0917 17:42:00.663761  300014 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:29 OomKillDisable:true NGoroutines:44 SystemTime:2024-09-17 17:42:00.654161583 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1070-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:27.2.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.2]] Warnings:<nil>}}
	I0917 17:42:00.663887  300014 docker.go:318] overlay module found
	I0917 17:42:00.666150  300014 out.go:177] * Using the docker driver based on user configuration
	I0917 17:42:00.667719  300014 start.go:297] selected driver: docker
	I0917 17:42:00.667743  300014 start.go:901] validating driver "docker" against <nil>
	I0917 17:42:00.667760  300014 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0917 17:42:00.668400  300014 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0917 17:42:00.720731  300014 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:29 OomKillDisable:true NGoroutines:44 SystemTime:2024-09-17 17:42:00.711603382 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1070-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:27.2.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.2]] Warnings:<nil>}}
	I0917 17:42:00.720965  300014 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0917 17:42:00.721197  300014 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0917 17:42:00.723213  300014 out.go:177] * Using Docker driver with root privileges
	I0917 17:42:00.724940  300014 cni.go:84] Creating CNI manager for ""
	I0917 17:42:00.725023  300014 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I0917 17:42:00.725040  300014 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0917 17:42:00.725150  300014 start.go:340] cluster config:
	{Name:addons-029117 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-029117 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0917 17:42:00.728269  300014 out.go:177] * Starting "addons-029117" primary control-plane node in "addons-029117" cluster
	I0917 17:42:00.730156  300014 cache.go:121] Beginning downloading kic base image for docker with containerd
	I0917 17:42:00.731969  300014 out.go:177] * Pulling base image v0.0.45-1726589491-19662 ...
	I0917 17:42:00.733669  300014 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime containerd
	I0917 17:42:00.733733  300014 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19662-293874/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-containerd-overlay2-arm64.tar.lz4
	I0917 17:42:00.733746  300014 cache.go:56] Caching tarball of preloaded images
	I0917 17:42:00.733755  300014 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 in local docker daemon
	I0917 17:42:00.733826  300014 preload.go:172] Found /home/jenkins/minikube-integration/19662-293874/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-containerd-overlay2-arm64.tar.lz4 in cache, skipping download
	I0917 17:42:00.733836  300014 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on containerd
	I0917 17:42:00.734199  300014 profile.go:143] Saving config to /home/jenkins/minikube-integration/19662-293874/.minikube/profiles/addons-029117/config.json ...
	I0917 17:42:00.734222  300014 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19662-293874/.minikube/profiles/addons-029117/config.json: {Name:mk49d163f4326c5c858f309e4fe329382e8297b5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0917 17:42:00.749070  300014 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 to local cache
	I0917 17:42:00.749195  300014 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 in local cache directory
	I0917 17:42:00.749219  300014 image.go:66] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 in local cache directory, skipping pull
	I0917 17:42:00.749225  300014 image.go:135] gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 exists in cache, skipping pull
	I0917 17:42:00.749233  300014 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 as a tarball
	I0917 17:42:00.749241  300014 cache.go:162] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 from local cache
	I0917 17:42:17.991735  300014 cache.go:164] successfully loaded and using gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 from cached tarball
	I0917 17:42:17.991780  300014 cache.go:194] Successfully downloaded all kic artifacts
	I0917 17:42:17.991822  300014 start.go:360] acquireMachinesLock for addons-029117: {Name:mkbcd49a5935eb7e7ddfc3126fc2fdcea2d6a16d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0917 17:42:17.991955  300014 start.go:364] duration metric: took 109.407µs to acquireMachinesLock for "addons-029117"
	I0917 17:42:17.991985  300014 start.go:93] Provisioning new machine with config: &{Name:addons-029117 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-029117 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0917 17:42:17.992066  300014 start.go:125] createHost starting for "" (driver="docker")
	I0917 17:42:17.994677  300014 out.go:235] * Creating docker container (CPUs=2, Memory=4000MB) ...
	I0917 17:42:17.994943  300014 start.go:159] libmachine.API.Create for "addons-029117" (driver="docker")
	I0917 17:42:17.994978  300014 client.go:168] LocalClient.Create starting
	I0917 17:42:17.995117  300014 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/19662-293874/.minikube/certs/ca.pem
	I0917 17:42:18.154164  300014 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/19662-293874/.minikube/certs/cert.pem
	I0917 17:42:18.504891  300014 cli_runner.go:164] Run: docker network inspect addons-029117 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0917 17:42:18.521367  300014 cli_runner.go:211] docker network inspect addons-029117 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0917 17:42:18.521481  300014 network_create.go:284] running [docker network inspect addons-029117] to gather additional debugging logs...
	I0917 17:42:18.521503  300014 cli_runner.go:164] Run: docker network inspect addons-029117
	W0917 17:42:18.537218  300014 cli_runner.go:211] docker network inspect addons-029117 returned with exit code 1
	I0917 17:42:18.537253  300014 network_create.go:287] error running [docker network inspect addons-029117]: docker network inspect addons-029117: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network addons-029117 not found
	I0917 17:42:18.537268  300014 network_create.go:289] output of [docker network inspect addons-029117]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network addons-029117 not found
	
	** /stderr **
	I0917 17:42:18.537388  300014 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0917 17:42:18.552846  300014 network.go:206] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x40019ca520}
	I0917 17:42:18.552889  300014 network_create.go:124] attempt to create docker network addons-029117 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I0917 17:42:18.552947  300014 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=addons-029117 addons-029117
	I0917 17:42:18.628961  300014 network_create.go:108] docker network addons-029117 192.168.49.0/24 created
	I0917 17:42:18.629002  300014 kic.go:121] calculated static IP "192.168.49.2" for the "addons-029117" container
	I0917 17:42:18.629099  300014 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0917 17:42:18.649614  300014 cli_runner.go:164] Run: docker volume create addons-029117 --label name.minikube.sigs.k8s.io=addons-029117 --label created_by.minikube.sigs.k8s.io=true
	I0917 17:42:18.666065  300014 oci.go:103] Successfully created a docker volume addons-029117
	I0917 17:42:18.666166  300014 cli_runner.go:164] Run: docker run --rm --name addons-029117-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-029117 --entrypoint /usr/bin/test -v addons-029117:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 -d /var/lib
	I0917 17:42:20.754897  300014 cli_runner.go:217] Completed: docker run --rm --name addons-029117-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-029117 --entrypoint /usr/bin/test -v addons-029117:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 -d /var/lib: (2.088682346s)
	I0917 17:42:20.754938  300014 oci.go:107] Successfully prepared a docker volume addons-029117
	I0917 17:42:20.754959  300014 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime containerd
	I0917 17:42:20.754978  300014 kic.go:194] Starting extracting preloaded images to volume ...
	I0917 17:42:20.755062  300014 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/19662-293874/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-containerd-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v addons-029117:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 -I lz4 -xf /preloaded.tar -C /extractDir
	I0917 17:42:24.870586  300014 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/19662-293874/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-containerd-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v addons-029117:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 -I lz4 -xf /preloaded.tar -C /extractDir: (4.11547791s)
	I0917 17:42:24.870624  300014 kic.go:203] duration metric: took 4.115642792s to extract preloaded images to volume ...
	W0917 17:42:24.870768  300014 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I0917 17:42:24.870891  300014 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0917 17:42:24.922576  300014 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname addons-029117 --name addons-029117 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-029117 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=addons-029117 --network addons-029117 --ip 192.168.49.2 --volume addons-029117:/var --security-opt apparmor=unconfined --memory=4000mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4
	I0917 17:42:25.273388  300014 cli_runner.go:164] Run: docker container inspect addons-029117 --format={{.State.Running}}
	I0917 17:42:25.291578  300014 cli_runner.go:164] Run: docker container inspect addons-029117 --format={{.State.Status}}
	I0917 17:42:25.317979  300014 cli_runner.go:164] Run: docker exec addons-029117 stat /var/lib/dpkg/alternatives/iptables
	I0917 17:42:25.391489  300014 oci.go:144] the created container "addons-029117" has a running status.
	I0917 17:42:25.391517  300014 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/19662-293874/.minikube/machines/addons-029117/id_rsa...
	I0917 17:42:25.647242  300014 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/19662-293874/.minikube/machines/addons-029117/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0917 17:42:25.675235  300014 cli_runner.go:164] Run: docker container inspect addons-029117 --format={{.State.Status}}
	I0917 17:42:25.700159  300014 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0917 17:42:25.700177  300014 kic_runner.go:114] Args: [docker exec --privileged addons-029117 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0917 17:42:25.792780  300014 cli_runner.go:164] Run: docker container inspect addons-029117 --format={{.State.Status}}
	I0917 17:42:25.831316  300014 machine.go:93] provisionDockerMachine start ...
	I0917 17:42:25.831415  300014 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-029117
	I0917 17:42:25.853705  300014 main.go:141] libmachine: Using SSH client type: native
	I0917 17:42:25.853996  300014 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x41abe0] 0x41d420 <nil>  [] 0s} 127.0.0.1 33138 <nil> <nil>}
	I0917 17:42:25.854013  300014 main.go:141] libmachine: About to run SSH command:
	hostname
	I0917 17:42:25.854627  300014 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:40816->127.0.0.1:33138: read: connection reset by peer
	I0917 17:42:28.999014  300014 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-029117
	
	I0917 17:42:28.999039  300014 ubuntu.go:169] provisioning hostname "addons-029117"
	I0917 17:42:28.999106  300014 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-029117
	I0917 17:42:29.020449  300014 main.go:141] libmachine: Using SSH client type: native
	I0917 17:42:29.020700  300014 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x41abe0] 0x41d420 <nil>  [] 0s} 127.0.0.1 33138 <nil> <nil>}
	I0917 17:42:29.020711  300014 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-029117 && echo "addons-029117" | sudo tee /etc/hostname
	I0917 17:42:29.175793  300014 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-029117
	
	I0917 17:42:29.175874  300014 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-029117
	I0917 17:42:29.192853  300014 main.go:141] libmachine: Using SSH client type: native
	I0917 17:42:29.193100  300014 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x41abe0] 0x41d420 <nil>  [] 0s} 127.0.0.1 33138 <nil> <nil>}
	I0917 17:42:29.193123  300014 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-029117' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-029117/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-029117' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0917 17:42:29.335666  300014 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0917 17:42:29.335758  300014 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/19662-293874/.minikube CaCertPath:/home/jenkins/minikube-integration/19662-293874/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19662-293874/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19662-293874/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19662-293874/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19662-293874/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19662-293874/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19662-293874/.minikube}
	I0917 17:42:29.335812  300014 ubuntu.go:177] setting up certificates
	I0917 17:42:29.335846  300014 provision.go:84] configureAuth start
	I0917 17:42:29.335952  300014 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-029117
	I0917 17:42:29.352262  300014 provision.go:143] copyHostCerts
	I0917 17:42:29.352342  300014 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19662-293874/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19662-293874/.minikube/cert.pem (1123 bytes)
	I0917 17:42:29.352464  300014 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19662-293874/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19662-293874/.minikube/key.pem (1679 bytes)
	I0917 17:42:29.352525  300014 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19662-293874/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19662-293874/.minikube/ca.pem (1082 bytes)
	I0917 17:42:29.352587  300014 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19662-293874/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19662-293874/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19662-293874/.minikube/certs/ca-key.pem org=jenkins.addons-029117 san=[127.0.0.1 192.168.49.2 addons-029117 localhost minikube]
	I0917 17:42:30.044539  300014 provision.go:177] copyRemoteCerts
	I0917 17:42:30.044620  300014 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0917 17:42:30.044670  300014 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-029117
	I0917 17:42:30.069015  300014 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/19662-293874/.minikube/machines/addons-029117/id_rsa Username:docker}
	I0917 17:42:30.174288  300014 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19662-293874/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0917 17:42:30.201295  300014 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19662-293874/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0917 17:42:30.228724  300014 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19662-293874/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0917 17:42:30.253807  300014 provision.go:87] duration metric: took 917.930829ms to configureAuth
	I0917 17:42:30.253839  300014 ubuntu.go:193] setting minikube options for container-runtime
	I0917 17:42:30.254075  300014 config.go:182] Loaded profile config "addons-029117": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
	I0917 17:42:30.254090  300014 machine.go:96] duration metric: took 4.422751467s to provisionDockerMachine
	I0917 17:42:30.254097  300014 client.go:171] duration metric: took 12.2591098s to LocalClient.Create
	I0917 17:42:30.254118  300014 start.go:167] duration metric: took 12.259177565s to libmachine.API.Create "addons-029117"
	I0917 17:42:30.254130  300014 start.go:293] postStartSetup for "addons-029117" (driver="docker")
	I0917 17:42:30.254140  300014 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0917 17:42:30.254205  300014 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0917 17:42:30.254250  300014 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-029117
	I0917 17:42:30.271766  300014 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/19662-293874/.minikube/machines/addons-029117/id_rsa Username:docker}
	I0917 17:42:30.372698  300014 ssh_runner.go:195] Run: cat /etc/os-release
	I0917 17:42:30.375807  300014 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0917 17:42:30.375845  300014 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0917 17:42:30.375857  300014 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0917 17:42:30.375866  300014 info.go:137] Remote host: Ubuntu 22.04.5 LTS
	I0917 17:42:30.375880  300014 filesync.go:126] Scanning /home/jenkins/minikube-integration/19662-293874/.minikube/addons for local assets ...
	I0917 17:42:30.375951  300014 filesync.go:126] Scanning /home/jenkins/minikube-integration/19662-293874/.minikube/files for local assets ...
	I0917 17:42:30.375983  300014 start.go:296] duration metric: took 121.846886ms for postStartSetup
	I0917 17:42:30.376325  300014 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-029117
	I0917 17:42:30.392142  300014 profile.go:143] Saving config to /home/jenkins/minikube-integration/19662-293874/.minikube/profiles/addons-029117/config.json ...
	I0917 17:42:30.392457  300014 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0917 17:42:30.392515  300014 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-029117
	I0917 17:42:30.408154  300014 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/19662-293874/.minikube/machines/addons-029117/id_rsa Username:docker}
	I0917 17:42:30.504750  300014 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0917 17:42:30.509368  300014 start.go:128] duration metric: took 12.517284891s to createHost
	I0917 17:42:30.509410  300014 start.go:83] releasing machines lock for "addons-029117", held for 12.51743707s
	I0917 17:42:30.509484  300014 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-029117
	I0917 17:42:30.526601  300014 ssh_runner.go:195] Run: cat /version.json
	I0917 17:42:30.526637  300014 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0917 17:42:30.526660  300014 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-029117
	I0917 17:42:30.526718  300014 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-029117
	I0917 17:42:30.542693  300014 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/19662-293874/.minikube/machines/addons-029117/id_rsa Username:docker}
	I0917 17:42:30.546185  300014 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/19662-293874/.minikube/machines/addons-029117/id_rsa Username:docker}
	I0917 17:42:30.762726  300014 ssh_runner.go:195] Run: systemctl --version
	I0917 17:42:30.766894  300014 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0917 17:42:30.771101  300014 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
	I0917 17:42:30.798238  300014 cni.go:230] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
	I0917 17:42:30.798327  300014 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0917 17:42:30.827960  300014 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
	I0917 17:42:30.827986  300014 start.go:495] detecting cgroup driver to use...
	I0917 17:42:30.828050  300014 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I0917 17:42:30.828116  300014 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0917 17:42:30.841116  300014 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0917 17:42:30.853059  300014 docker.go:217] disabling cri-docker service (if available) ...
	I0917 17:42:30.853129  300014 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0917 17:42:30.867544  300014 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0917 17:42:30.882434  300014 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0917 17:42:30.968796  300014 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0917 17:42:31.057096  300014 docker.go:233] disabling docker service ...
	I0917 17:42:31.057203  300014 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0917 17:42:31.076963  300014 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0917 17:42:31.092938  300014 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0917 17:42:31.189443  300014 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0917 17:42:31.280427  300014 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0917 17:42:31.292176  300014 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0917 17:42:31.309019  300014 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
	I0917 17:42:31.318631  300014 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0917 17:42:31.328967  300014 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0917 17:42:31.329035  300014 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0917 17:42:31.338811  300014 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0917 17:42:31.348946  300014 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0917 17:42:31.358996  300014 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0917 17:42:31.369155  300014 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0917 17:42:31.378855  300014 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0917 17:42:31.389151  300014 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0917 17:42:31.398651  300014 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0917 17:42:31.408720  300014 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0917 17:42:31.417488  300014 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0917 17:42:31.426252  300014 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0917 17:42:31.515738  300014 ssh_runner.go:195] Run: sudo systemctl restart containerd
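The sed commands above rewrite /etc/containerd/config.toml in place, most notably flipping `SystemdCgroup` to `false` so containerd uses the "cgroupfs" driver. A minimal Python sketch of that same substitution (the sample TOML fragment is illustrative, not the full file minikube edits):

```python
import re

# Hypothetical fragment of /etc/containerd/config.toml; the real file is larger.
config = """\
[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
    SystemdCgroup = true
"""

# The same pattern the log's sed command uses:
#   s|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g
# i.e. preserve indentation, force the runc shim onto cgroupfs.
patched = re.sub(r"^( *)SystemdCgroup = .*$",
                 r"\1SystemdCgroup = false",
                 config, flags=re.MULTILINE)

print("SystemdCgroup = false" in patched)  # → True
```

The capture group keeps the original indentation, mirroring sed's `\1` backreference.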
	I0917 17:42:31.640292  300014 start.go:542] Will wait 60s for socket path /run/containerd/containerd.sock
	I0917 17:42:31.640456  300014 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I0917 17:42:31.644261  300014 start.go:563] Will wait 60s for crictl version
	I0917 17:42:31.644330  300014 ssh_runner.go:195] Run: which crictl
	I0917 17:42:31.647712  300014 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0917 17:42:31.687820  300014 start.go:579] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  1.7.22
	RuntimeApiVersion:  v1
	I0917 17:42:31.687949  300014 ssh_runner.go:195] Run: containerd --version
	I0917 17:42:31.710167  300014 ssh_runner.go:195] Run: containerd --version
	I0917 17:42:31.735775  300014 out.go:177] * Preparing Kubernetes v1.31.1 on containerd 1.7.22 ...
	I0917 17:42:31.737990  300014 cli_runner.go:164] Run: docker network inspect addons-029117 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0917 17:42:31.753928  300014 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I0917 17:42:31.757460  300014 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
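The `{ grep -v ...; echo ...; } > /tmp/h.$$; sudo cp` pipeline above is an idempotent upsert of a hosts entry: drop any stale line for the name, then append a fresh mapping. A sketch of the same logic in Python (the function name and in-memory approach are illustrative, not minikube's implementation):

```python
# Drop any line already mapping <name>, then append "<ip>\t<name>",
# matching the log's `grep -v $'\t<name>$'` + echo pattern.
def upsert_host(hosts_text: str, ip: str, name: str) -> str:
    kept = [ln for ln in hosts_text.splitlines()
            if not ln.endswith("\t" + name)]   # grep -v $'\t<name>$'
    kept.append(f"{ip}\t{name}")               # echo "<ip>\t<name>"
    return "\n".join(kept) + "\n"

hosts = "127.0.0.1\tlocalhost\n10.0.0.5\thost.minikube.internal\n"
print(upsert_host(hosts, "192.168.49.1", "host.minikube.internal"))
```

Running the update twice yields the same file, which is why the runner can execute it unconditionally on every start.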
	I0917 17:42:31.768901  300014 kubeadm.go:883] updating cluster {Name:addons-029117 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-029117 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNa
mes:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false Cus
tomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0917 17:42:31.769028  300014 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime containerd
	I0917 17:42:31.769103  300014 ssh_runner.go:195] Run: sudo crictl images --output json
	I0917 17:42:31.805542  300014 containerd.go:627] all images are preloaded for containerd runtime.
	I0917 17:42:31.805569  300014 containerd.go:534] Images already preloaded, skipping extraction
	I0917 17:42:31.805632  300014 ssh_runner.go:195] Run: sudo crictl images --output json
	I0917 17:42:31.841677  300014 containerd.go:627] all images are preloaded for containerd runtime.
	I0917 17:42:31.841755  300014 cache_images.go:84] Images are preloaded, skipping loading
	I0917 17:42:31.841768  300014 kubeadm.go:934] updating node { 192.168.49.2 8443 v1.31.1 containerd true true} ...
	I0917 17:42:31.841863  300014 kubeadm.go:946] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=addons-029117 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:addons-029117 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0917 17:42:31.841931  300014 ssh_runner.go:195] Run: sudo crictl info
	I0917 17:42:31.886395  300014 cni.go:84] Creating CNI manager for ""
	I0917 17:42:31.886424  300014 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I0917 17:42:31.886436  300014 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0917 17:42:31.886459  300014 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-029117 NodeName:addons-029117 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc
/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0917 17:42:31.886597  300014 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "addons-029117"
	  kubeletExtraArgs:
	    node-ip: 192.168.49.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0917 17:42:31.886674  300014 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0917 17:42:31.895543  300014 binaries.go:44] Found k8s binaries, skipping transfer
	I0917 17:42:31.895611  300014 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0917 17:42:31.904375  300014 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (317 bytes)
	I0917 17:42:31.924092  300014 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0917 17:42:31.943232  300014 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2167 bytes)
	I0917 17:42:31.961869  300014 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I0917 17:42:31.965368  300014 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0917 17:42:31.976310  300014 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0917 17:42:32.065755  300014 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0917 17:42:32.081605  300014 certs.go:68] Setting up /home/jenkins/minikube-integration/19662-293874/.minikube/profiles/addons-029117 for IP: 192.168.49.2
	I0917 17:42:32.081686  300014 certs.go:194] generating shared ca certs ...
	I0917 17:42:32.081716  300014 certs.go:226] acquiring lock for ca certs: {Name:mk42058f2fd2e854333a5653ef45f3026c6c2b7d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0917 17:42:32.081922  300014 certs.go:240] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/19662-293874/.minikube/ca.key
	I0917 17:42:33.422920  300014 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19662-293874/.minikube/ca.crt ...
	I0917 17:42:33.422957  300014 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19662-293874/.minikube/ca.crt: {Name:mk5a8c9e59ad3209c3218b36b1761a3483997dc2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0917 17:42:33.423153  300014 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19662-293874/.minikube/ca.key ...
	I0917 17:42:33.423160  300014 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19662-293874/.minikube/ca.key: {Name:mk427c82ea08ae5a7ec3c49f320b6ffc8babbb4b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0917 17:42:33.423234  300014 certs.go:240] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19662-293874/.minikube/proxy-client-ca.key
	I0917 17:42:33.657170  300014 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19662-293874/.minikube/proxy-client-ca.crt ...
	I0917 17:42:33.657199  300014 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19662-293874/.minikube/proxy-client-ca.crt: {Name:mk1c6ff45f7050ea5bba431421eb67eaba375ee8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0917 17:42:33.657952  300014 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19662-293874/.minikube/proxy-client-ca.key ...
	I0917 17:42:33.657973  300014 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19662-293874/.minikube/proxy-client-ca.key: {Name:mk7239cd9b3b2a788827a197d64d49df78f157ba Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0917 17:42:33.658074  300014 certs.go:256] generating profile certs ...
	I0917 17:42:33.658137  300014 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19662-293874/.minikube/profiles/addons-029117/client.key
	I0917 17:42:33.658156  300014 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19662-293874/.minikube/profiles/addons-029117/client.crt with IP's: []
	I0917 17:42:34.234113  300014 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19662-293874/.minikube/profiles/addons-029117/client.crt ...
	I0917 17:42:34.234152  300014 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19662-293874/.minikube/profiles/addons-029117/client.crt: {Name:mk9fb5dbcc459fd48aa296f3f4574d65a27a0555 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0917 17:42:34.234340  300014 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19662-293874/.minikube/profiles/addons-029117/client.key ...
	I0917 17:42:34.234353  300014 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19662-293874/.minikube/profiles/addons-029117/client.key: {Name:mkfd060e524d51bd8fce3b2232f196d84c2ec62f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0917 17:42:34.234439  300014 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19662-293874/.minikube/profiles/addons-029117/apiserver.key.3c936c2c
	I0917 17:42:34.234458  300014 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19662-293874/.minikube/profiles/addons-029117/apiserver.crt.3c936c2c with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2]
	I0917 17:42:34.961025  300014 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19662-293874/.minikube/profiles/addons-029117/apiserver.crt.3c936c2c ...
	I0917 17:42:34.961062  300014 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19662-293874/.minikube/profiles/addons-029117/apiserver.crt.3c936c2c: {Name:mk814e0392f3188f7d1b8f0d6837ed7688690e21 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0917 17:42:34.961249  300014 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19662-293874/.minikube/profiles/addons-029117/apiserver.key.3c936c2c ...
	I0917 17:42:34.961263  300014 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19662-293874/.minikube/profiles/addons-029117/apiserver.key.3c936c2c: {Name:mk65731fea78097b0bb84fc99a76d2f6c72d1105 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0917 17:42:34.961352  300014 certs.go:381] copying /home/jenkins/minikube-integration/19662-293874/.minikube/profiles/addons-029117/apiserver.crt.3c936c2c -> /home/jenkins/minikube-integration/19662-293874/.minikube/profiles/addons-029117/apiserver.crt
	I0917 17:42:34.961432  300014 certs.go:385] copying /home/jenkins/minikube-integration/19662-293874/.minikube/profiles/addons-029117/apiserver.key.3c936c2c -> /home/jenkins/minikube-integration/19662-293874/.minikube/profiles/addons-029117/apiserver.key
	I0917 17:42:34.961486  300014 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19662-293874/.minikube/profiles/addons-029117/proxy-client.key
	I0917 17:42:34.961506  300014 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19662-293874/.minikube/profiles/addons-029117/proxy-client.crt with IP's: []
	I0917 17:42:35.963314  300014 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19662-293874/.minikube/profiles/addons-029117/proxy-client.crt ...
	I0917 17:42:35.963349  300014 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19662-293874/.minikube/profiles/addons-029117/proxy-client.crt: {Name:mk7d7e4588cf380fdca2a7bbd9fa1dd04d2de4ae Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0917 17:42:35.963545  300014 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19662-293874/.minikube/profiles/addons-029117/proxy-client.key ...
	I0917 17:42:35.963562  300014 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19662-293874/.minikube/profiles/addons-029117/proxy-client.key: {Name:mk2112e01a27f0cf531c212a41768ffe14df74cb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0917 17:42:35.963772  300014 certs.go:484] found cert: /home/jenkins/minikube-integration/19662-293874/.minikube/certs/ca-key.pem (1675 bytes)
	I0917 17:42:35.963815  300014 certs.go:484] found cert: /home/jenkins/minikube-integration/19662-293874/.minikube/certs/ca.pem (1082 bytes)
	I0917 17:42:35.963841  300014 certs.go:484] found cert: /home/jenkins/minikube-integration/19662-293874/.minikube/certs/cert.pem (1123 bytes)
	I0917 17:42:35.963865  300014 certs.go:484] found cert: /home/jenkins/minikube-integration/19662-293874/.minikube/certs/key.pem (1679 bytes)
	I0917 17:42:35.964474  300014 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19662-293874/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0917 17:42:35.996301  300014 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19662-293874/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0917 17:42:36.035976  300014 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19662-293874/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0917 17:42:36.068036  300014 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19662-293874/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0917 17:42:36.094807  300014 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19662-293874/.minikube/profiles/addons-029117/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0917 17:42:36.121205  300014 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19662-293874/.minikube/profiles/addons-029117/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0917 17:42:36.145848  300014 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19662-293874/.minikube/profiles/addons-029117/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0917 17:42:36.171104  300014 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19662-293874/.minikube/profiles/addons-029117/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0917 17:42:36.195800  300014 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19662-293874/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0917 17:42:36.219455  300014 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0917 17:42:36.237716  300014 ssh_runner.go:195] Run: openssl version
	I0917 17:42:36.243037  300014 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0917 17:42:36.252471  300014 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0917 17:42:36.256092  300014 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 17 17:42 /usr/share/ca-certificates/minikubeCA.pem
	I0917 17:42:36.256197  300014 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0917 17:42:36.263221  300014 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0917 17:42:36.272425  300014 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0917 17:42:36.275777  300014 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0917 17:42:36.275843  300014 kubeadm.go:392] StartCluster: {Name:addons-029117 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-029117 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames
:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false Custom
QemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0917 17:42:36.275939  300014 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I0917 17:42:36.276009  300014 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0917 17:42:36.311695  300014 cri.go:89] found id: ""
	I0917 17:42:36.311772  300014 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0917 17:42:36.320652  300014 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0917 17:42:36.329434  300014 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I0917 17:42:36.329499  300014 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0917 17:42:36.337971  300014 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0917 17:42:36.337996  300014 kubeadm.go:157] found existing configuration files:
	
	I0917 17:42:36.338056  300014 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0917 17:42:36.346566  300014 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0917 17:42:36.346630  300014 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0917 17:42:36.354912  300014 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0917 17:42:36.363429  300014 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0917 17:42:36.363492  300014 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0917 17:42:36.371666  300014 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0917 17:42:36.380219  300014 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0917 17:42:36.380325  300014 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0917 17:42:36.388354  300014 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0917 17:42:36.397043  300014 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0917 17:42:36.397119  300014 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0917 17:42:36.405359  300014 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0917 17:42:36.447631  300014 kubeadm.go:310] [init] Using Kubernetes version: v1.31.1
	I0917 17:42:36.447882  300014 kubeadm.go:310] [preflight] Running pre-flight checks
	I0917 17:42:36.466312  300014 kubeadm.go:310] [preflight] The system verification failed. Printing the output from the verification:
	I0917 17:42:36.466429  300014 kubeadm.go:310] KERNEL_VERSION: 5.15.0-1070-aws
	I0917 17:42:36.466499  300014 kubeadm.go:310] OS: Linux
	I0917 17:42:36.466562  300014 kubeadm.go:310] CGROUPS_CPU: enabled
	I0917 17:42:36.466629  300014 kubeadm.go:310] CGROUPS_CPUACCT: enabled
	I0917 17:42:36.466696  300014 kubeadm.go:310] CGROUPS_CPUSET: enabled
	I0917 17:42:36.466762  300014 kubeadm.go:310] CGROUPS_DEVICES: enabled
	I0917 17:42:36.466828  300014 kubeadm.go:310] CGROUPS_FREEZER: enabled
	I0917 17:42:36.466897  300014 kubeadm.go:310] CGROUPS_MEMORY: enabled
	I0917 17:42:36.466960  300014 kubeadm.go:310] CGROUPS_PIDS: enabled
	I0917 17:42:36.467023  300014 kubeadm.go:310] CGROUPS_HUGETLB: enabled
	I0917 17:42:36.467094  300014 kubeadm.go:310] CGROUPS_BLKIO: enabled
	I0917 17:42:36.523597  300014 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0917 17:42:36.523776  300014 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0917 17:42:36.523893  300014 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0917 17:42:36.529838  300014 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0917 17:42:36.532811  300014 out.go:235]   - Generating certificates and keys ...
	I0917 17:42:36.532968  300014 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0917 17:42:36.533103  300014 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0917 17:42:36.886057  300014 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0917 17:42:38.107782  300014 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0917 17:42:38.431553  300014 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0917 17:42:38.686472  300014 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0917 17:42:39.384556  300014 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0917 17:42:39.384841  300014 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [addons-029117 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0917 17:42:40.067950  300014 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0917 17:42:40.068230  300014 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [addons-029117 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0917 17:42:40.887912  300014 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0917 17:42:41.327165  300014 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0917 17:42:41.646560  300014 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0917 17:42:41.646899  300014 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0917 17:42:42.428624  300014 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0917 17:42:42.612608  300014 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0917 17:42:43.434350  300014 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0917 17:42:44.221005  300014 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0917 17:42:44.684988  300014 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0917 17:42:44.685776  300014 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0917 17:42:44.688751  300014 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0917 17:42:44.691096  300014 out.go:235]   - Booting up control plane ...
	I0917 17:42:44.691196  300014 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0917 17:42:44.691276  300014 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0917 17:42:44.691963  300014 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0917 17:42:44.702952  300014 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0917 17:42:44.709425  300014 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0917 17:42:44.709482  300014 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0917 17:42:44.805641  300014 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0917 17:42:44.805793  300014 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0917 17:42:46.307170  300014 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.501575653s
	I0917 17:42:46.307279  300014 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0917 17:42:53.309431  300014 kubeadm.go:310] [api-check] The API server is healthy after 7.001894258s
	I0917 17:42:53.329309  300014 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0917 17:42:53.342140  300014 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0917 17:42:53.365593  300014 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0917 17:42:53.366422  300014 kubeadm.go:310] [mark-control-plane] Marking the node addons-029117 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0917 17:42:53.377545  300014 kubeadm.go:310] [bootstrap-token] Using token: 7j065k.1vkfhnrnu5prsqtr
	I0917 17:42:53.379579  300014 out.go:235]   - Configuring RBAC rules ...
	I0917 17:42:53.379720  300014 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0917 17:42:53.384280  300014 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0917 17:42:53.393456  300014 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0917 17:42:53.396957  300014 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0917 17:42:53.401060  300014 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0917 17:42:53.404609  300014 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0917 17:42:53.718373  300014 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0917 17:42:54.148427  300014 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0917 17:42:54.716761  300014 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0917 17:42:54.721710  300014 kubeadm.go:310] 
	I0917 17:42:54.721800  300014 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0917 17:42:54.721814  300014 kubeadm.go:310] 
	I0917 17:42:54.721891  300014 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0917 17:42:54.721900  300014 kubeadm.go:310] 
	I0917 17:42:54.721925  300014 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0917 17:42:54.721987  300014 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0917 17:42:54.722047  300014 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0917 17:42:54.722056  300014 kubeadm.go:310] 
	I0917 17:42:54.722112  300014 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0917 17:42:54.722121  300014 kubeadm.go:310] 
	I0917 17:42:54.722168  300014 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0917 17:42:54.722183  300014 kubeadm.go:310] 
	I0917 17:42:54.722234  300014 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0917 17:42:54.722312  300014 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0917 17:42:54.722384  300014 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0917 17:42:54.722392  300014 kubeadm.go:310] 
	I0917 17:42:54.722475  300014 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0917 17:42:54.722555  300014 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0917 17:42:54.722563  300014 kubeadm.go:310] 
	I0917 17:42:54.722645  300014 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token 7j065k.1vkfhnrnu5prsqtr \
	I0917 17:42:54.722750  300014 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:1a3aaa098b14bc56a3a1280b3c664b297fe0b1f80cc32bacf7896dc212d2b2a8 \
	I0917 17:42:54.722774  300014 kubeadm.go:310] 	--control-plane 
	I0917 17:42:54.722779  300014 kubeadm.go:310] 
	I0917 17:42:54.722862  300014 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0917 17:42:54.722870  300014 kubeadm.go:310] 
	I0917 17:42:54.722951  300014 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token 7j065k.1vkfhnrnu5prsqtr \
	I0917 17:42:54.723055  300014 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:1a3aaa098b14bc56a3a1280b3c664b297fe0b1f80cc32bacf7896dc212d2b2a8 
	I0917 17:42:54.727111  300014 kubeadm.go:310] W0917 17:42:36.444600    1038 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0917 17:42:54.727417  300014 kubeadm.go:310] W0917 17:42:36.445413    1038 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0917 17:42:54.727632  300014 kubeadm.go:310] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1070-aws\n", err: exit status 1
	I0917 17:42:54.727766  300014 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0917 17:42:54.727787  300014 cni.go:84] Creating CNI manager for ""
	I0917 17:42:54.727800  300014 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I0917 17:42:54.730014  300014 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0917 17:42:54.732226  300014 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0917 17:42:54.736085  300014 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.31.1/kubectl ...
	I0917 17:42:54.736106  300014 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I0917 17:42:54.755033  300014 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0917 17:42:55.046920  300014 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0917 17:42:55.047074  300014 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0917 17:42:55.047177  300014 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-029117 minikube.k8s.io/updated_at=2024_09_17T17_42_55_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=825de77780746e57a7948604e1eea9da920a46ce minikube.k8s.io/name=addons-029117 minikube.k8s.io/primary=true
	I0917 17:42:55.061233  300014 ops.go:34] apiserver oom_adj: -16
	I0917 17:42:55.256780  300014 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0917 17:42:55.757314  300014 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0917 17:42:56.257198  300014 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0917 17:42:56.757700  300014 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0917 17:42:57.256836  300014 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0917 17:42:57.757095  300014 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0917 17:42:58.256925  300014 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0917 17:42:58.757542  300014 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0917 17:42:58.859450  300014 kubeadm.go:1113] duration metric: took 3.812431385s to wait for elevateKubeSystemPrivileges
	I0917 17:42:58.859485  300014 kubeadm.go:394] duration metric: took 22.583649376s to StartCluster
	I0917 17:42:58.859504  300014 settings.go:142] acquiring lock: {Name:mk090161dee408228fecc4c15998216facf5ac36 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0917 17:42:58.859634  300014 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19662-293874/kubeconfig
	I0917 17:42:58.860044  300014 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19662-293874/kubeconfig: {Name:mk8304d00697f30c3bf0d088840f6baa2220981b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0917 17:42:58.860250  300014 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I0917 17:42:58.860412  300014 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0917 17:42:58.860683  300014 config.go:182] Loaded profile config "addons-029117": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
	I0917 17:42:58.860655  300014 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
	I0917 17:42:58.860770  300014 addons.go:69] Setting yakd=true in profile "addons-029117"
	I0917 17:42:58.860776  300014 addons.go:69] Setting inspektor-gadget=true in profile "addons-029117"
	I0917 17:42:58.860786  300014 addons.go:234] Setting addon yakd=true in "addons-029117"
	I0917 17:42:58.860791  300014 addons.go:234] Setting addon inspektor-gadget=true in "addons-029117"
	I0917 17:42:58.860814  300014 host.go:66] Checking if "addons-029117" exists ...
	I0917 17:42:58.860822  300014 addons.go:69] Setting metrics-server=true in profile "addons-029117"
	I0917 17:42:58.860832  300014 addons.go:234] Setting addon metrics-server=true in "addons-029117"
	I0917 17:42:58.860846  300014 host.go:66] Checking if "addons-029117" exists ...
	I0917 17:42:58.861345  300014 cli_runner.go:164] Run: docker container inspect addons-029117 --format={{.State.Status}}
	I0917 17:42:58.860814  300014 host.go:66] Checking if "addons-029117" exists ...
	I0917 17:42:58.861712  300014 cli_runner.go:164] Run: docker container inspect addons-029117 --format={{.State.Status}}
	I0917 17:42:58.862292  300014 addons.go:69] Setting cloud-spanner=true in profile "addons-029117"
	I0917 17:42:58.862319  300014 addons.go:234] Setting addon cloud-spanner=true in "addons-029117"
	I0917 17:42:58.862350  300014 host.go:66] Checking if "addons-029117" exists ...
	I0917 17:42:58.862785  300014 cli_runner.go:164] Run: docker container inspect addons-029117 --format={{.State.Status}}
	I0917 17:42:58.864128  300014 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-029117"
	I0917 17:42:58.864260  300014 addons.go:234] Setting addon nvidia-device-plugin=true in "addons-029117"
	I0917 17:42:58.864423  300014 host.go:66] Checking if "addons-029117" exists ...
	I0917 17:42:58.865787  300014 cli_runner.go:164] Run: docker container inspect addons-029117 --format={{.State.Status}}
	I0917 17:42:58.871401  300014 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-029117"
	I0917 17:42:58.871476  300014 addons.go:234] Setting addon csi-hostpath-driver=true in "addons-029117"
	I0917 17:42:58.871509  300014 host.go:66] Checking if "addons-029117" exists ...
	I0917 17:42:58.872003  300014 cli_runner.go:164] Run: docker container inspect addons-029117 --format={{.State.Status}}
	I0917 17:42:58.861347  300014 cli_runner.go:164] Run: docker container inspect addons-029117 --format={{.State.Status}}
	I0917 17:42:58.868100  300014 addons.go:69] Setting registry=true in profile "addons-029117"
	I0917 17:42:58.881241  300014 addons.go:234] Setting addon registry=true in "addons-029117"
	I0917 17:42:58.881291  300014 host.go:66] Checking if "addons-029117" exists ...
	I0917 17:42:58.881805  300014 cli_runner.go:164] Run: docker container inspect addons-029117 --format={{.State.Status}}
	I0917 17:42:58.882331  300014 addons.go:69] Setting default-storageclass=true in profile "addons-029117"
	I0917 17:42:58.882352  300014 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-029117"
	I0917 17:42:58.882637  300014 cli_runner.go:164] Run: docker container inspect addons-029117 --format={{.State.Status}}
	I0917 17:42:58.868135  300014 addons.go:69] Setting storage-provisioner=true in profile "addons-029117"
	I0917 17:42:58.890491  300014 addons.go:234] Setting addon storage-provisioner=true in "addons-029117"
	I0917 17:42:58.890538  300014 host.go:66] Checking if "addons-029117" exists ...
	I0917 17:42:58.891039  300014 cli_runner.go:164] Run: docker container inspect addons-029117 --format={{.State.Status}}
	I0917 17:42:58.895358  300014 addons.go:69] Setting gcp-auth=true in profile "addons-029117"
	I0917 17:42:58.895399  300014 mustload.go:65] Loading cluster: addons-029117
	I0917 17:42:58.895597  300014 config.go:182] Loaded profile config "addons-029117": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
	I0917 17:42:58.895890  300014 cli_runner.go:164] Run: docker container inspect addons-029117 --format={{.State.Status}}
	I0917 17:42:58.868143  300014 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-029117"
	I0917 17:42:58.903792  300014 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-029117"
	I0917 17:42:58.904161  300014 cli_runner.go:164] Run: docker container inspect addons-029117 --format={{.State.Status}}
	I0917 17:42:58.922766  300014 addons.go:69] Setting ingress=true in profile "addons-029117"
	I0917 17:42:58.922797  300014 addons.go:234] Setting addon ingress=true in "addons-029117"
	I0917 17:42:58.922843  300014 host.go:66] Checking if "addons-029117" exists ...
	I0917 17:42:58.923351  300014 cli_runner.go:164] Run: docker container inspect addons-029117 --format={{.State.Status}}
	I0917 17:42:58.868150  300014 addons.go:69] Setting volcano=true in profile "addons-029117"
	I0917 17:42:58.925224  300014 addons.go:234] Setting addon volcano=true in "addons-029117"
	I0917 17:42:58.925266  300014 host.go:66] Checking if "addons-029117" exists ...
	I0917 17:42:58.925749  300014 cli_runner.go:164] Run: docker container inspect addons-029117 --format={{.State.Status}}
	I0917 17:42:58.949344  300014 addons.go:69] Setting ingress-dns=true in profile "addons-029117"
	I0917 17:42:58.949374  300014 addons.go:234] Setting addon ingress-dns=true in "addons-029117"
	I0917 17:42:58.949426  300014 host.go:66] Checking if "addons-029117" exists ...
	I0917 17:42:58.949917  300014 cli_runner.go:164] Run: docker container inspect addons-029117 --format={{.State.Status}}
	I0917 17:42:58.868157  300014 addons.go:69] Setting volumesnapshots=true in profile "addons-029117"
	I0917 17:42:58.951861  300014 addons.go:234] Setting addon volumesnapshots=true in "addons-029117"
	I0917 17:42:58.951907  300014 host.go:66] Checking if "addons-029117" exists ...
	I0917 17:42:58.952375  300014 cli_runner.go:164] Run: docker container inspect addons-029117 --format={{.State.Status}}
	I0917 17:42:58.868209  300014 out.go:177] * Verifying Kubernetes components...
	I0917 17:42:59.011270  300014 out.go:177]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.7.2
	I0917 17:42:59.088082  300014 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0917 17:42:59.089455  300014 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0917 17:42:59.089471  300014 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0917 17:42:59.089553  300014 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-029117
	I0917 17:42:59.118868  300014 out.go:177]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.16.2
	I0917 17:42:59.123006  300014 addons.go:431] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0917 17:42:59.123038  300014 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I0917 17:42:59.123124  300014 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-029117
	I0917 17:42:59.133288  300014 out.go:177]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.32.0
	I0917 17:42:59.136302  300014 out.go:177]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.23
	I0917 17:42:59.136811  300014 addons.go:431] installing /etc/kubernetes/addons/ig-namespace.yaml
	I0917 17:42:59.136848  300014 ssh_runner.go:362] scp inspektor-gadget/ig-namespace.yaml --> /etc/kubernetes/addons/ig-namespace.yaml (55 bytes)
	I0917 17:42:59.136928  300014 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-029117
	I0917 17:42:59.146446  300014 addons.go:234] Setting addon default-storageclass=true in "addons-029117"
	I0917 17:42:59.151839  300014 host.go:66] Checking if "addons-029117" exists ...
	I0917 17:42:59.152433  300014 cli_runner.go:164] Run: docker container inspect addons-029117 --format={{.State.Status}}
	I0917 17:42:59.147158  300014 out.go:177]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.6
	I0917 17:42:59.147168  300014 out.go:177]   - Using image docker.io/marcnuri/yakd:0.0.5
	I0917 17:42:59.177965  300014 addons.go:431] installing /etc/kubernetes/addons/deployment.yaml
	I0917 17:42:59.177999  300014 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I0917 17:42:59.178097  300014 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-029117
	I0917 17:42:59.181201  300014 out.go:177]   - Using image docker.io/volcanosh/vc-webhook-manager:v1.9.0
	I0917 17:42:59.207399  300014 addons.go:431] installing /etc/kubernetes/addons/yakd-ns.yaml
	I0917 17:42:59.207463  300014 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I0917 17:42:59.207545  300014 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-029117
	I0917 17:42:59.212758  300014 out.go:177]   - Using image docker.io/volcanosh/vc-controller-manager:v1.9.0
	I0917 17:42:59.214427  300014 out.go:177]   - Using image docker.io/volcanosh/vc-scheduler:v1.9.0
	I0917 17:42:59.217232  300014 addons.go:431] installing /etc/kubernetes/addons/volcano-deployment.yaml
	I0917 17:42:59.217292  300014 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volcano-deployment.yaml (434001 bytes)
	I0917 17:42:59.217376  300014 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-029117
	I0917 17:42:59.229428  300014 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I0917 17:42:59.230677  300014 addons.go:234] Setting addon storage-provisioner-rancher=true in "addons-029117"
	I0917 17:42:59.230715  300014 host.go:66] Checking if "addons-029117" exists ...
	I0917 17:42:59.231139  300014 cli_runner.go:164] Run: docker container inspect addons-029117 --format={{.State.Status}}
	I0917 17:42:59.207191  300014 host.go:66] Checking if "addons-029117" exists ...
	I0917 17:42:59.240029  300014 out.go:177]   - Using image docker.io/registry:2.8.3
	I0917 17:42:59.242111  300014 addons.go:431] installing /etc/kubernetes/addons/registry-rc.yaml
	I0917 17:42:59.242172  300014 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I0917 17:42:59.242268  300014 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-029117
	I0917 17:42:59.258686  300014 out.go:177]   - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.3
	I0917 17:42:59.267528  300014 addons.go:431] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0917 17:42:59.267559  300014 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2442 bytes)
	I0917 17:42:59.267625  300014 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-029117
	I0917 17:42:59.284870  300014 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I0917 17:42:59.291828  300014 out.go:177]   - Using image registry.k8s.io/ingress-nginx/controller:v1.11.2
	I0917 17:42:59.293955  300014 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I0917 17:42:59.294067  300014 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/19662-293874/.minikube/machines/addons-029117/id_rsa Username:docker}
	I0917 17:42:59.295290  300014 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/19662-293874/.minikube/machines/addons-029117/id_rsa Username:docker}
	I0917 17:42:59.298512  300014 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I0917 17:42:59.298538  300014 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I0917 17:42:59.298603  300014 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-029117
	I0917 17:42:59.307937  300014 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0917 17:42:59.308143  300014 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I0917 17:42:59.308626  300014 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/19662-293874/.minikube/machines/addons-029117/id_rsa Username:docker}
	I0917 17:42:59.309103  300014 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I0917 17:42:59.313275  300014 addons.go:431] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I0917 17:42:59.313300  300014 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I0917 17:42:59.313368  300014 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-029117
	I0917 17:42:59.325189  300014 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I0917 17:42:59.330333  300014 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I0917 17:42:59.333645  300014 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I0917 17:42:59.339577  300014 out.go:177]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I0917 17:42:59.341181  300014 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0917 17:42:59.341205  300014 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0917 17:42:59.341274  300014 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-029117
	I0917 17:42:59.354148  300014 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/19662-293874/.minikube/machines/addons-029117/id_rsa Username:docker}
	I0917 17:42:59.363884  300014 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0917 17:42:59.363906  300014 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0917 17:42:59.363976  300014 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-029117
	I0917 17:42:59.372482  300014 out.go:177]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I0917 17:42:59.372636  300014 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/19662-293874/.minikube/machines/addons-029117/id_rsa Username:docker}
	I0917 17:42:59.374732  300014 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/19662-293874/.minikube/machines/addons-029117/id_rsa Username:docker}
	I0917 17:42:59.380265  300014 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I0917 17:42:59.384236  300014 addons.go:431] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I0917 17:42:59.384271  300014 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I0917 17:42:59.384343  300014 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-029117
	I0917 17:42:59.386193  300014 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/19662-293874/.minikube/machines/addons-029117/id_rsa Username:docker}
	I0917 17:42:59.461588  300014 out.go:177]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I0917 17:42:59.463613  300014 out.go:177]   - Using image docker.io/busybox:stable
	I0917 17:42:59.465539  300014 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0917 17:42:59.465561  300014 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I0917 17:42:59.465625  300014 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-029117
	I0917 17:42:59.471282  300014 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/19662-293874/.minikube/machines/addons-029117/id_rsa Username:docker}
	I0917 17:42:59.480455  300014 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/19662-293874/.minikube/machines/addons-029117/id_rsa Username:docker}
	I0917 17:42:59.513337  300014 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/19662-293874/.minikube/machines/addons-029117/id_rsa Username:docker}
	I0917 17:42:59.515714  300014 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/19662-293874/.minikube/machines/addons-029117/id_rsa Username:docker}
	I0917 17:42:59.522788  300014 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/19662-293874/.minikube/machines/addons-029117/id_rsa Username:docker}
	I0917 17:42:59.525845  300014 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/19662-293874/.minikube/machines/addons-029117/id_rsa Username:docker}
	I0917 17:42:59.536201  300014 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/19662-293874/.minikube/machines/addons-029117/id_rsa Username:docker}
	W0917 17:42:59.537508  300014 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I0917 17:42:59.537534  300014 retry.go:31] will retry after 182.180507ms: ssh: handshake failed: EOF
	W0917 17:42:59.543538  300014 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I0917 17:42:59.543566  300014 retry.go:31] will retry after 264.524185ms: ssh: handshake failed: EOF
	I0917 17:42:59.780100  300014 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0917 17:42:59.780278  300014 ssh_runner.go:195] Run: sudo systemctl start kubelet
	W0917 17:42:59.809765  300014 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I0917 17:42:59.809844  300014 retry.go:31] will retry after 209.673331ms: ssh: handshake failed: EOF
	W0917 17:43:00.031235  300014 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I0917 17:43:00.031271  300014 retry.go:31] will retry after 338.368677ms: ssh: handshake failed: EOF
	I0917 17:43:00.278197  300014 addons.go:431] installing /etc/kubernetes/addons/ig-serviceaccount.yaml
	I0917 17:43:00.278224  300014 ssh_runner.go:362] scp inspektor-gadget/ig-serviceaccount.yaml --> /etc/kubernetes/addons/ig-serviceaccount.yaml (80 bytes)
	I0917 17:43:00.487298  300014 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0917 17:43:00.500676  300014 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0917 17:43:00.500755  300014 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I0917 17:43:00.509359  300014 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I0917 17:43:00.509448  300014 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I0917 17:43:00.521608  300014 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I0917 17:43:00.528642  300014 addons.go:431] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I0917 17:43:00.528732  300014 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I0917 17:43:00.567504  300014 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/volcano-deployment.yaml
	I0917 17:43:00.603694  300014 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I0917 17:43:00.610754  300014 addons.go:431] installing /etc/kubernetes/addons/ig-role.yaml
	I0917 17:43:00.610783  300014 ssh_runner.go:362] scp inspektor-gadget/ig-role.yaml --> /etc/kubernetes/addons/ig-role.yaml (210 bytes)
	I0917 17:43:00.702202  300014 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0917 17:43:00.709761  300014 addons.go:431] installing /etc/kubernetes/addons/registry-svc.yaml
	I0917 17:43:00.709789  300014 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I0917 17:43:00.713228  300014 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0917 17:43:00.770625  300014 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0917 17:43:00.770654  300014 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0917 17:43:00.778180  300014 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0917 17:43:00.790327  300014 addons.go:431] installing /etc/kubernetes/addons/yakd-sa.yaml
	I0917 17:43:00.790355  300014 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I0917 17:43:00.816517  300014 addons.go:431] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I0917 17:43:00.816556  300014 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I0917 17:43:00.851128  300014 addons.go:431] installing /etc/kubernetes/addons/ig-rolebinding.yaml
	I0917 17:43:00.851151  300014 ssh_runner.go:362] scp inspektor-gadget/ig-rolebinding.yaml --> /etc/kubernetes/addons/ig-rolebinding.yaml (244 bytes)
	I0917 17:43:00.896732  300014 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I0917 17:43:00.896760  300014 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I0917 17:43:00.981547  300014 addons.go:431] installing /etc/kubernetes/addons/registry-proxy.yaml
	I0917 17:43:00.981574  300014 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I0917 17:43:01.018288  300014 addons.go:431] installing /etc/kubernetes/addons/yakd-crb.yaml
	I0917 17:43:01.018317  300014 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I0917 17:43:01.038337  300014 addons.go:431] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I0917 17:43:01.038371  300014 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I0917 17:43:01.039299  300014 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0917 17:43:01.044649  300014 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0917 17:43:01.044677  300014 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0917 17:43:01.061122  300014 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I0917 17:43:01.061151  300014 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I0917 17:43:01.067051  300014 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrole.yaml
	I0917 17:43:01.067078  300014 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrole.yaml --> /etc/kubernetes/addons/ig-clusterrole.yaml (1485 bytes)
	I0917 17:43:01.262269  300014 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I0917 17:43:01.340947  300014 addons.go:431] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I0917 17:43:01.340976  300014 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I0917 17:43:01.385637  300014 addons.go:431] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I0917 17:43:01.385676  300014 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I0917 17:43:01.390979  300014 addons.go:431] installing /etc/kubernetes/addons/yakd-svc.yaml
	I0917 17:43:01.391001  300014 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I0917 17:43:01.413285  300014 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0917 17:43:01.467109  300014 addons.go:431] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I0917 17:43:01.467139  300014 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I0917 17:43:01.496209  300014 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrolebinding.yaml
	I0917 17:43:01.496237  300014 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrolebinding.yaml --> /etc/kubernetes/addons/ig-clusterrolebinding.yaml (274 bytes)
	I0917 17:43:01.582135  300014 addons.go:431] installing /etc/kubernetes/addons/yakd-dp.yaml
	I0917 17:43:01.582160  300014 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I0917 17:43:01.608375  300014 addons.go:431] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0917 17:43:01.608401  300014 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I0917 17:43:01.739154  300014 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I0917 17:43:01.739179  300014 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I0917 17:43:01.900604  300014 addons.go:431] installing /etc/kubernetes/addons/ig-crd.yaml
	I0917 17:43:01.900633  300014 ssh_runner.go:362] scp inspektor-gadget/ig-crd.yaml --> /etc/kubernetes/addons/ig-crd.yaml (5216 bytes)
	I0917 17:43:01.903060  300014 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I0917 17:43:01.960282  300014 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0917 17:43:02.091203  300014 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I0917 17:43:02.091232  300014 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I0917 17:43:02.216392  300014 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (2.436069518s)
	I0917 17:43:02.217197  300014 node_ready.go:35] waiting up to 6m0s for node "addons-029117" to be "Ready" ...
	I0917 17:43:02.217461  300014 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (1.730083263s)
	I0917 17:43:02.217516  300014 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (2.437349994s)
	I0917 17:43:02.217529  300014 start.go:971] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS's ConfigMap
	I0917 17:43:02.228471  300014 node_ready.go:49] node "addons-029117" has status "Ready":"True"
	I0917 17:43:02.228501  300014 node_ready.go:38] duration metric: took 11.276532ms for node "addons-029117" to be "Ready" ...
	I0917 17:43:02.228513  300014 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0917 17:43:02.233571  300014 addons.go:431] installing /etc/kubernetes/addons/ig-daemonset.yaml
	I0917 17:43:02.233597  300014 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-daemonset.yaml (7735 bytes)
	I0917 17:43:02.247992  300014 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-l9wjv" in "kube-system" namespace to be "Ready" ...
	I0917 17:43:02.461269  300014 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml
	I0917 17:43:02.472474  300014 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I0917 17:43:02.472503  300014 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I0917 17:43:02.735628  300014 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-029117" context rescaled to 1 replicas
	I0917 17:43:02.747862  300014 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I0917 17:43:02.747893  300014 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I0917 17:43:03.034080  300014 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0917 17:43:03.034106  300014 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I0917 17:43:03.339575  300014 pod_ready.go:98] pod "coredns-7c65d6cfc9-l9wjv" in "kube-system" namespace has status phase "Failed" (skipping!): {Phase:Failed Conditions:[{Type:PodReadyToStartContainers Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-09-17 17:42:59 +0000 UTC Reason: Message:} {Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-09-17 17:42:59 +0000 UTC Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-09-17 17:42:59 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [coredns]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-09-17 17:42:59 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [coredns]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-09-17 17:42:59 +0000 UTC Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:192.168.49.2 HostIPs:[{IP:192.168.49.2}] PodIP: PodIPs:[] StartTime:2024-09-17 17:42:59 +0000 UTC InitContainerStatuses:[] ContainerStatuses:[{Name:coredns State:{Waiting:nil Running:nil Terminated:&ContainerStateTerminated{ExitCode:137,Signal:0,Reason:ContainerStatusUnknown,Message:The container could not be located when the pod was terminated,StartedAt:0001-01-01 00:00:00 +0000 UTC,FinishedAt:0001-01-01 00:00:00 +0000 UTC,ContainerID:,}} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:registry.k8s.io/coredns/coredns:v1.11.3 ImageID: ContainerID: Started:0x4001e630ca AllocatedResources:map[] Resources:nil VolumeMounts:[{Name:config-volume MountPath:/etc/coredns ReadOnly:true RecursiveReadOnly:0x4001eecc80} {Name:kube-api-access-qzjw9 MountPath:/var/run/secrets/kubernetes.io/serviceaccount ReadOnly:true RecursiveReadOnly:0x4001eecc90}] User:nil AllocatedResourcesStatus:[]}] QOSClass:Burstable EphemeralContainerStatuses:[] Resize: ResourceClaimStatuses:[]}
	I0917 17:43:03.339609  300014 pod_ready.go:82] duration metric: took 1.091581587s for pod "coredns-7c65d6cfc9-l9wjv" in "kube-system" namespace to be "Ready" ...
	E0917 17:43:03.339622  300014 pod_ready.go:67] WaitExtra: waitPodCondition: pod "coredns-7c65d6cfc9-l9wjv" in "kube-system" namespace has status phase "Failed" (skipping!): {Phase:Failed Conditions:[{Type:PodReadyToStartContainers Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-09-17 17:42:59 +0000 UTC Reason: Message:} {Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-09-17 17:42:59 +0000 UTC Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-09-17 17:42:59 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [coredns]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-09-17 17:42:59 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [coredns]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-09-17 17:42:59 +0000 UTC Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:192.168.49.2 HostIPs:[{IP:192.168.49.2}] PodIP: PodIPs:[] StartTime:2024-09-17 17:42:59 +0000 UTC InitContainerStatuses:[] ContainerStatuses:[{Name:coredns State:{Waiting:nil Running:nil Terminated:&ContainerStateTerminated{ExitCode:137,Signal:0,Reason:ContainerStatusUnknown,Message:The container could not be located when the pod was terminated,StartedAt:0001-01-01 00:00:00 +0000 UTC,FinishedAt:0001-01-01 00:00:00 +0000 UTC,ContainerID:,}} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:registry.k8s.io/coredns/coredns:v1.11.3 ImageID: ContainerID: Started:0x4001e630ca AllocatedResources:map[] Resources:nil VolumeMounts:[{Name:config-volume MountPath:/etc/coredns ReadOnly:true RecursiveReadOnly:0x4001eecc80} {Name:kube-api-access-qzjw9 MountPath:/var/run/secrets/kubernetes.io/serviceaccount ReadOnly:true RecursiveReadOnly:0x4001eecc90}] User:nil AllocatedResourcesStatus:[]}] QOSClass:Burstable EphemeralContainerStatuses:[] Resize: ResourceClaimStatuses:[]}
	I0917 17:43:03.339635  300014 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-mnv4g" in "kube-system" namespace to be "Ready" ...
	I0917 17:43:03.401965  300014 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0917 17:43:05.349859  300014 pod_ready.go:103] pod "coredns-7c65d6cfc9-mnv4g" in "kube-system" namespace has status "Ready":"False"
	I0917 17:43:06.566497  300014 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I0917 17:43:06.566586  300014 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-029117
	I0917 17:43:06.599768  300014 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/19662-293874/.minikube/machines/addons-029117/id_rsa Username:docker}
	I0917 17:43:07.225929  300014 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I0917 17:43:07.382989  300014 pod_ready.go:103] pod "coredns-7c65d6cfc9-mnv4g" in "kube-system" namespace has status "Ready":"False"
	I0917 17:43:07.459879  300014 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (6.938185332s)
	I0917 17:43:07.459932  300014 addons.go:475] Verifying addon ingress=true in "addons-029117"
	I0917 17:43:07.464317  300014 out.go:177] * Verifying ingress addon...
	I0917 17:43:07.467004  300014 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I0917 17:43:07.471064  300014 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I0917 17:43:07.471089  300014 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 17:43:07.496148  300014 addons.go:234] Setting addon gcp-auth=true in "addons-029117"
	I0917 17:43:07.496201  300014 host.go:66] Checking if "addons-029117" exists ...
	I0917 17:43:07.496715  300014 cli_runner.go:164] Run: docker container inspect addons-029117 --format={{.State.Status}}
	I0917 17:43:07.518563  300014 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I0917 17:43:07.518619  300014 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-029117
	I0917 17:43:07.545650  300014 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33138 SSHKeyPath:/home/jenkins/minikube-integration/19662-293874/.minikube/machines/addons-029117/id_rsa Username:docker}
	I0917 17:43:07.972836  300014 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 17:43:08.498674  300014 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 17:43:08.976455  300014 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 17:43:09.532033  300014 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 17:43:09.604779  300014 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/volcano-deployment.yaml: (9.037227843s)
	I0917 17:43:09.604852  300014 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (9.001135632s)
	I0917 17:43:09.604887  300014 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (8.902660648s)
	I0917 17:43:09.605100  300014 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (8.891846218s)
	I0917 17:43:09.605169  300014 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (8.826965361s)
	I0917 17:43:09.605222  300014 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (8.565903113s)
	I0917 17:43:09.605332  300014 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (8.343034358s)
	I0917 17:43:09.605351  300014 addons.go:475] Verifying addon registry=true in "addons-029117"
	I0917 17:43:09.605545  300014 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (8.192230921s)
	I0917 17:43:09.605565  300014 addons.go:475] Verifying addon metrics-server=true in "addons-029117"
	I0917 17:43:09.605673  300014 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (7.702586908s)
	I0917 17:43:09.605914  300014 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (7.645597443s)
	W0917 17:43:09.605974  300014 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0917 17:43:09.605999  300014 retry.go:31] will retry after 203.91565ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0917 17:43:09.606090  300014 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml: (7.144786886s)
	I0917 17:43:09.609688  300014 out.go:177] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-029117 service yakd-dashboard -n yakd-dashboard
	
	I0917 17:43:09.609692  300014 out.go:177] * Verifying registry addon...
	I0917 17:43:09.613309  300014 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I0917 17:43:09.669777  300014 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I0917 17:43:09.669858  300014 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	W0917 17:43:09.686292  300014 out.go:270] ! Enabling 'storage-provisioner-rancher' returned an error: running callbacks: [Error making local-path the default storage class: Error while marking storage class local-path as default: Operation cannot be fulfilled on storageclasses.storage.k8s.io "local-path": the object has been modified; please apply your changes to the latest version and try again]
	I0917 17:43:09.810900  300014 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0917 17:43:09.893838  300014 pod_ready.go:103] pod "coredns-7c65d6cfc9-mnv4g" in "kube-system" namespace has status "Ready":"False"
	I0917 17:43:09.985503  300014 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 17:43:10.124216  300014 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 17:43:10.366372  300014 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (6.964354593s)
	I0917 17:43:10.366548  300014 addons.go:475] Verifying addon csi-hostpath-driver=true in "addons-029117"
	I0917 17:43:10.366504  300014 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (2.847915726s)
	I0917 17:43:10.368764  300014 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.2
	I0917 17:43:10.368768  300014 out.go:177] * Verifying csi-hostpath-driver addon...
	I0917 17:43:10.370909  300014 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I0917 17:43:10.371762  300014 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I0917 17:43:10.374345  300014 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I0917 17:43:10.374371  300014 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I0917 17:43:10.388538  300014 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0917 17:43:10.388617  300014 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 17:43:10.450389  300014 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I0917 17:43:10.450413  300014 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I0917 17:43:10.483675  300014 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 17:43:10.549840  300014 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0917 17:43:10.549959  300014 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I0917 17:43:10.583307  300014 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0917 17:43:10.622243  300014 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 17:43:10.877179  300014 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 17:43:10.971352  300014 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 17:43:11.117385  300014 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 17:43:11.377443  300014 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 17:43:11.472683  300014 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 17:43:11.644916  300014 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 17:43:11.684477  300014 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (1.873488518s)
	I0917 17:43:11.684626  300014 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml: (1.101248483s)
	I0917 17:43:11.688010  300014 addons.go:475] Verifying addon gcp-auth=true in "addons-029117"
	I0917 17:43:11.689919  300014 out.go:177] * Verifying gcp-auth addon...
	I0917 17:43:11.693365  300014 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I0917 17:43:11.741789  300014 kapi.go:86] Found 0 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0917 17:43:11.876989  300014 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 17:43:11.971329  300014 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 17:43:12.117657  300014 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 17:43:12.346738  300014 pod_ready.go:103] pod "coredns-7c65d6cfc9-mnv4g" in "kube-system" namespace has status "Ready":"False"
	I0917 17:43:12.376298  300014 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 17:43:12.471490  300014 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 17:43:12.623190  300014 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 17:43:12.878418  300014 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 17:43:12.972168  300014 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 17:43:13.119638  300014 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 17:43:13.378325  300014 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 17:43:13.474015  300014 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 17:43:13.620656  300014 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 17:43:13.877914  300014 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 17:43:13.975200  300014 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 17:43:14.118608  300014 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 17:43:14.347843  300014 pod_ready.go:103] pod "coredns-7c65d6cfc9-mnv4g" in "kube-system" namespace has status "Ready":"False"
	I0917 17:43:14.377347  300014 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 17:43:14.472442  300014 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 17:43:14.626391  300014 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 17:43:14.877182  300014 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 17:43:14.971712  300014 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 17:43:15.118844  300014 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 17:43:15.377178  300014 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 17:43:15.481312  300014 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 17:43:15.619006  300014 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 17:43:15.877414  300014 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 17:43:15.971906  300014 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 17:43:16.118259  300014 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 17:43:16.377543  300014 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 17:43:16.479104  300014 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 17:43:16.628687  300014 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 17:43:16.845504  300014 pod_ready.go:93] pod "coredns-7c65d6cfc9-mnv4g" in "kube-system" namespace has status "Ready":"True"
	I0917 17:43:16.845532  300014 pod_ready.go:82] duration metric: took 13.50588894s for pod "coredns-7c65d6cfc9-mnv4g" in "kube-system" namespace to be "Ready" ...
	I0917 17:43:16.845544  300014 pod_ready.go:79] waiting up to 6m0s for pod "etcd-addons-029117" in "kube-system" namespace to be "Ready" ...
	I0917 17:43:16.850866  300014 pod_ready.go:93] pod "etcd-addons-029117" in "kube-system" namespace has status "Ready":"True"
	I0917 17:43:16.850894  300014 pod_ready.go:82] duration metric: took 5.342001ms for pod "etcd-addons-029117" in "kube-system" namespace to be "Ready" ...
	I0917 17:43:16.850910  300014 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-addons-029117" in "kube-system" namespace to be "Ready" ...
	I0917 17:43:16.855774  300014 pod_ready.go:93] pod "kube-apiserver-addons-029117" in "kube-system" namespace has status "Ready":"True"
	I0917 17:43:16.855799  300014 pod_ready.go:82] duration metric: took 4.881712ms for pod "kube-apiserver-addons-029117" in "kube-system" namespace to be "Ready" ...
	I0917 17:43:16.855813  300014 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-addons-029117" in "kube-system" namespace to be "Ready" ...
	I0917 17:43:16.861298  300014 pod_ready.go:93] pod "kube-controller-manager-addons-029117" in "kube-system" namespace has status "Ready":"True"
	I0917 17:43:16.861323  300014 pod_ready.go:82] duration metric: took 5.501754ms for pod "kube-controller-manager-addons-029117" in "kube-system" namespace to be "Ready" ...
	I0917 17:43:16.861336  300014 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-9kqt4" in "kube-system" namespace to be "Ready" ...
	I0917 17:43:16.866671  300014 pod_ready.go:93] pod "kube-proxy-9kqt4" in "kube-system" namespace has status "Ready":"True"
	I0917 17:43:16.866697  300014 pod_ready.go:82] duration metric: took 5.353398ms for pod "kube-proxy-9kqt4" in "kube-system" namespace to be "Ready" ...
	I0917 17:43:16.866708  300014 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-addons-029117" in "kube-system" namespace to be "Ready" ...
	I0917 17:43:16.876794  300014 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 17:43:16.972101  300014 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 17:43:17.117677  300014 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 17:43:17.243926  300014 pod_ready.go:93] pod "kube-scheduler-addons-029117" in "kube-system" namespace has status "Ready":"True"
	I0917 17:43:17.243952  300014 pod_ready.go:82] duration metric: took 377.235973ms for pod "kube-scheduler-addons-029117" in "kube-system" namespace to be "Ready" ...
	I0917 17:43:17.243967  300014 pod_ready.go:79] waiting up to 6m0s for pod "nvidia-device-plugin-daemonset-892xz" in "kube-system" namespace to be "Ready" ...
	I0917 17:43:17.376223  300014 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 17:43:17.475089  300014 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 17:43:17.619505  300014 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 17:43:17.889012  300014 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 17:43:17.972092  300014 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 17:43:18.117890  300014 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 17:43:18.379882  300014 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 17:43:18.482351  300014 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 17:43:18.620703  300014 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 17:43:18.876922  300014 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 17:43:18.970977  300014 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 17:43:19.118261  300014 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 17:43:19.250362  300014 pod_ready.go:103] pod "nvidia-device-plugin-daemonset-892xz" in "kube-system" namespace has status "Ready":"False"
	I0917 17:43:19.376825  300014 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 17:43:19.471787  300014 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 17:43:19.620464  300014 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 17:43:19.878289  300014 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 17:43:19.977968  300014 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 17:43:20.117965  300014 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 17:43:20.376678  300014 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 17:43:20.475786  300014 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 17:43:20.621924  300014 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 17:43:20.876745  300014 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 17:43:20.971412  300014 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 17:43:21.118246  300014 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 17:43:21.252141  300014 pod_ready.go:103] pod "nvidia-device-plugin-daemonset-892xz" in "kube-system" namespace has status "Ready":"False"
	I0917 17:43:21.377000  300014 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 17:43:21.471803  300014 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 17:43:21.620348  300014 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 17:43:21.876541  300014 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 17:43:21.972201  300014 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 17:43:22.117866  300014 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 17:43:22.377145  300014 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 17:43:22.477984  300014 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 17:43:22.623710  300014 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 17:43:22.877495  300014 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 17:43:22.972208  300014 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 17:43:23.119515  300014 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 17:43:23.377871  300014 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 17:43:23.471928  300014 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 17:43:23.620685  300014 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 17:43:23.751135  300014 pod_ready.go:103] pod "nvidia-device-plugin-daemonset-892xz" in "kube-system" namespace has status "Ready":"False"
	I0917 17:43:23.876965  300014 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 17:43:23.971690  300014 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 17:43:24.118296  300014 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 17:43:24.378218  300014 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 17:43:24.471491  300014 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 17:43:24.620273  300014 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 17:43:24.877302  300014 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 17:43:24.977699  300014 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 17:43:25.117523  300014 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 17:43:25.376948  300014 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 17:43:25.472368  300014 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 17:43:25.624805  300014 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 17:43:25.751218  300014 pod_ready.go:103] pod "nvidia-device-plugin-daemonset-892xz" in "kube-system" namespace has status "Ready":"False"
	I0917 17:43:25.876634  300014 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 17:43:25.973304  300014 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 17:43:26.117136  300014 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 17:43:26.377472  300014 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 17:43:26.471591  300014 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 17:43:26.619968  300014 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 17:43:26.750628  300014 pod_ready.go:93] pod "nvidia-device-plugin-daemonset-892xz" in "kube-system" namespace has status "Ready":"True"
	I0917 17:43:26.750656  300014 pod_ready.go:82] duration metric: took 9.506681023s for pod "nvidia-device-plugin-daemonset-892xz" in "kube-system" namespace to be "Ready" ...
	I0917 17:43:26.750667  300014 pod_ready.go:39] duration metric: took 24.522141511s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0917 17:43:26.750682  300014 api_server.go:52] waiting for apiserver process to appear ...
	I0917 17:43:26.750794  300014 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 17:43:26.765937  300014 api_server.go:72] duration metric: took 27.905648645s to wait for apiserver process to appear ...
	I0917 17:43:26.765965  300014 api_server.go:88] waiting for apiserver healthz status ...
	I0917 17:43:26.765992  300014 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0917 17:43:26.774630  300014 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I0917 17:43:26.775623  300014 api_server.go:141] control plane version: v1.31.1
	I0917 17:43:26.775677  300014 api_server.go:131] duration metric: took 9.675688ms to wait for apiserver health ...
	I0917 17:43:26.775689  300014 system_pods.go:43] waiting for kube-system pods to appear ...
	I0917 17:43:26.784863  300014 system_pods.go:59] 18 kube-system pods found
	I0917 17:43:26.784901  300014 system_pods.go:61] "coredns-7c65d6cfc9-mnv4g" [1a2eedf7-2d0b-4a23-a146-28410ac1b975] Running
	I0917 17:43:26.784914  300014 system_pods.go:61] "csi-hostpath-attacher-0" [be07333a-a124-4998-be3a-578964572f8f] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0917 17:43:26.784920  300014 system_pods.go:61] "csi-hostpath-resizer-0" [0fec3e59-8705-4dea-88f4-b962dc85b8fe] Running
	I0917 17:43:26.784931  300014 system_pods.go:61] "csi-hostpathplugin-ssw2p" [a3a8d190-8c0d-48b6-a58f-91ec87b42a7a] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0917 17:43:26.784937  300014 system_pods.go:61] "etcd-addons-029117" [9ff8de15-d11f-4d82-8a9f-607487281111] Running
	I0917 17:43:26.784946  300014 system_pods.go:61] "kindnet-fcdrs" [372e4f34-6e4e-4f25-bcc8-8728e7171480] Running
	I0917 17:43:26.784951  300014 system_pods.go:61] "kube-apiserver-addons-029117" [73e29076-414b-4141-9632-03fdaac3b82e] Running
	I0917 17:43:26.784958  300014 system_pods.go:61] "kube-controller-manager-addons-029117" [e8966db3-9fbc-4d86-8641-b7cef3e8b828] Running
	I0917 17:43:26.784964  300014 system_pods.go:61] "kube-ingress-dns-minikube" [18eafc56-faef-4505-b3b2-c277aba4b96d] Running
	I0917 17:43:26.784972  300014 system_pods.go:61] "kube-proxy-9kqt4" [03fc2d44-d7d9-415c-81ca-228ccc5d2a8c] Running
	I0917 17:43:26.784977  300014 system_pods.go:61] "kube-scheduler-addons-029117" [94df3155-c579-495c-97e8-593b8b414815] Running
	I0917 17:43:26.784991  300014 system_pods.go:61] "metrics-server-84c5f94fbc-smwfl" [1bd3cf63-4339-4e32-943d-b6ae1e132815] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0917 17:43:26.784996  300014 system_pods.go:61] "nvidia-device-plugin-daemonset-892xz" [3b344a6c-9dad-48ec-a4e2-e27d6e6e7441] Running
	I0917 17:43:26.785004  300014 system_pods.go:61] "registry-66c9cd494c-v75rj" [a00533f6-30fb-4a16-80c5-b954680ad8a1] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0917 17:43:26.785013  300014 system_pods.go:61] "registry-proxy-h97gf" [b67fdfc4-4458-42f7-b829-1e941d38171a] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0917 17:43:26.785022  300014 system_pods.go:61] "snapshot-controller-56fcc65765-hlt8k" [faf1d026-26ca-4371-bab9-806ef812c9c9] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0917 17:43:26.785029  300014 system_pods.go:61] "snapshot-controller-56fcc65765-k6pgl" [96410136-65ce-4221-9f59-44008c66cbf9] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0917 17:43:26.785036  300014 system_pods.go:61] "storage-provisioner" [05c86738-7723-437a-8b6a-5f14420bb3a2] Running
	I0917 17:43:26.785042  300014 system_pods.go:74] duration metric: took 9.347821ms to wait for pod list to return data ...
	I0917 17:43:26.785052  300014 default_sa.go:34] waiting for default service account to be created ...
	I0917 17:43:26.787953  300014 default_sa.go:45] found service account: "default"
	I0917 17:43:26.787981  300014 default_sa.go:55] duration metric: took 2.921255ms for default service account to be created ...
	I0917 17:43:26.787992  300014 system_pods.go:116] waiting for k8s-apps to be running ...
	I0917 17:43:26.797551  300014 system_pods.go:86] 18 kube-system pods found
	I0917 17:43:26.797591  300014 system_pods.go:89] "coredns-7c65d6cfc9-mnv4g" [1a2eedf7-2d0b-4a23-a146-28410ac1b975] Running
	I0917 17:43:26.797603  300014 system_pods.go:89] "csi-hostpath-attacher-0" [be07333a-a124-4998-be3a-578964572f8f] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0917 17:43:26.797610  300014 system_pods.go:89] "csi-hostpath-resizer-0" [0fec3e59-8705-4dea-88f4-b962dc85b8fe] Running
	I0917 17:43:26.797619  300014 system_pods.go:89] "csi-hostpathplugin-ssw2p" [a3a8d190-8c0d-48b6-a58f-91ec87b42a7a] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0917 17:43:26.797623  300014 system_pods.go:89] "etcd-addons-029117" [9ff8de15-d11f-4d82-8a9f-607487281111] Running
	I0917 17:43:26.797637  300014 system_pods.go:89] "kindnet-fcdrs" [372e4f34-6e4e-4f25-bcc8-8728e7171480] Running
	I0917 17:43:26.797642  300014 system_pods.go:89] "kube-apiserver-addons-029117" [73e29076-414b-4141-9632-03fdaac3b82e] Running
	I0917 17:43:26.797655  300014 system_pods.go:89] "kube-controller-manager-addons-029117" [e8966db3-9fbc-4d86-8641-b7cef3e8b828] Running
	I0917 17:43:26.797659  300014 system_pods.go:89] "kube-ingress-dns-minikube" [18eafc56-faef-4505-b3b2-c277aba4b96d] Running
	I0917 17:43:26.797665  300014 system_pods.go:89] "kube-proxy-9kqt4" [03fc2d44-d7d9-415c-81ca-228ccc5d2a8c] Running
	I0917 17:43:26.797672  300014 system_pods.go:89] "kube-scheduler-addons-029117" [94df3155-c579-495c-97e8-593b8b414815] Running
	I0917 17:43:26.797678  300014 system_pods.go:89] "metrics-server-84c5f94fbc-smwfl" [1bd3cf63-4339-4e32-943d-b6ae1e132815] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0917 17:43:26.797683  300014 system_pods.go:89] "nvidia-device-plugin-daemonset-892xz" [3b344a6c-9dad-48ec-a4e2-e27d6e6e7441] Running
	I0917 17:43:26.797694  300014 system_pods.go:89] "registry-66c9cd494c-v75rj" [a00533f6-30fb-4a16-80c5-b954680ad8a1] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0917 17:43:26.797700  300014 system_pods.go:89] "registry-proxy-h97gf" [b67fdfc4-4458-42f7-b829-1e941d38171a] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0917 17:43:26.797711  300014 system_pods.go:89] "snapshot-controller-56fcc65765-hlt8k" [faf1d026-26ca-4371-bab9-806ef812c9c9] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0917 17:43:26.797718  300014 system_pods.go:89] "snapshot-controller-56fcc65765-k6pgl" [96410136-65ce-4221-9f59-44008c66cbf9] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0917 17:43:26.797729  300014 system_pods.go:89] "storage-provisioner" [05c86738-7723-437a-8b6a-5f14420bb3a2] Running
	I0917 17:43:26.797737  300014 system_pods.go:126] duration metric: took 9.739458ms to wait for k8s-apps to be running ...
	I0917 17:43:26.797749  300014 system_svc.go:44] waiting for kubelet service to be running ....
	I0917 17:43:26.797807  300014 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0917 17:43:26.810153  300014 system_svc.go:56] duration metric: took 12.379607ms WaitForService to wait for kubelet
	I0917 17:43:26.810230  300014 kubeadm.go:582] duration metric: took 27.949946033s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0917 17:43:26.810269  300014 node_conditions.go:102] verifying NodePressure condition ...
	I0917 17:43:26.813596  300014 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I0917 17:43:26.813631  300014 node_conditions.go:123] node cpu capacity is 2
	I0917 17:43:26.813646  300014 node_conditions.go:105] duration metric: took 3.345959ms to run NodePressure ...
	I0917 17:43:26.813658  300014 start.go:241] waiting for startup goroutines ...
	I0917 17:43:26.876998  300014 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 17:43:26.972037  300014 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 17:43:27.117736  300014 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 17:43:27.376207  300014 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 17:43:27.471988  300014 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 17:43:27.623819  300014 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 17:43:27.876757  300014 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 17:43:27.971798  300014 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 17:43:28.118397  300014 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 17:43:28.388233  300014 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 17:43:28.474653  300014 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 17:43:28.622222  300014 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 17:43:28.889397  300014 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 17:43:28.971328  300014 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 17:43:29.117729  300014 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 17:43:29.377266  300014 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 17:43:29.471838  300014 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 17:43:29.620848  300014 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 17:43:29.877007  300014 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 17:43:29.971889  300014 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 17:43:30.118471  300014 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 17:43:30.376750  300014 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 17:43:30.472256  300014 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 17:43:30.620000  300014 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 17:43:30.878836  300014 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 17:43:30.972761  300014 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 17:43:31.117572  300014 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 17:43:31.377208  300014 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 17:43:31.472364  300014 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 17:43:31.621621  300014 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 17:43:31.877956  300014 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 17:43:31.971536  300014 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 17:43:32.117480  300014 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 17:43:32.376590  300014 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 17:43:32.471680  300014 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 17:43:32.623774  300014 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 17:43:32.876226  300014 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 17:43:32.972372  300014 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 17:43:33.117757  300014 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 17:43:33.377417  300014 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 17:43:33.478940  300014 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 17:43:33.621148  300014 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 17:43:33.877709  300014 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 17:43:33.971494  300014 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 17:43:34.116969  300014 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 17:43:34.376405  300014 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 17:43:34.471488  300014 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 17:43:34.620664  300014 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 17:43:34.877356  300014 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 17:43:34.972494  300014 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 17:43:35.118767  300014 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 17:43:35.379003  300014 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 17:43:35.471259  300014 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 17:43:35.619507  300014 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 17:43:35.876939  300014 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 17:43:35.971940  300014 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 17:43:36.118027  300014 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 17:43:36.377651  300014 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 17:43:36.472444  300014 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 17:43:36.619011  300014 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 17:43:36.876762  300014 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 17:43:36.972643  300014 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 17:43:37.117216  300014 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 17:43:37.376476  300014 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 17:43:37.472053  300014 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 17:43:37.619889  300014 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 17:43:37.876891  300014 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 17:43:37.971883  300014 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 17:43:38.118590  300014 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0917 17:43:38.376109  300014 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 17:43:38.472157  300014 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 17:43:38.620028  300014 kapi.go:107] duration metric: took 29.006717216s to wait for kubernetes.io/minikube-addons=registry ...
	I0917 17:43:38.876430  300014 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 17:43:38.971529  300014 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 17:43:39.377258  300014 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 17:43:39.475779  300014 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 17:43:39.877842  300014 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 17:43:39.972727  300014 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 17:43:40.376998  300014 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 17:43:40.481017  300014 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 17:43:40.877438  300014 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 17:43:40.972234  300014 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 17:43:41.377547  300014 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 17:43:41.472004  300014 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 17:43:41.877548  300014 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 17:43:41.974630  300014 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 17:43:42.385545  300014 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 17:43:42.471941  300014 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 17:43:42.880457  300014 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 17:43:42.972492  300014 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 17:43:43.377026  300014 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 17:43:43.471946  300014 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 17:43:43.877274  300014 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 17:43:43.972165  300014 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 17:43:44.377310  300014 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 17:43:44.472527  300014 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 17:43:44.877207  300014 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 17:43:44.971907  300014 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 17:43:45.377164  300014 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 17:43:45.471938  300014 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 17:43:45.876874  300014 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 17:43:45.971984  300014 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 17:43:46.377647  300014 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 17:43:46.474212  300014 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 17:43:46.885832  300014 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 17:43:46.971683  300014 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 17:43:47.376915  300014 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 17:43:47.472499  300014 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 17:43:47.880171  300014 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 17:43:47.975170  300014 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 17:43:48.378095  300014 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 17:43:48.472722  300014 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 17:43:48.879068  300014 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 17:43:48.972897  300014 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 17:43:49.377747  300014 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 17:43:49.472661  300014 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 17:43:49.884207  300014 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 17:43:49.984677  300014 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 17:43:50.377697  300014 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 17:43:50.478219  300014 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 17:43:50.877089  300014 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 17:43:50.971699  300014 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 17:43:51.376913  300014 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 17:43:51.471513  300014 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 17:43:51.880534  300014 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 17:43:51.984478  300014 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 17:43:52.377478  300014 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 17:43:52.472213  300014 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 17:43:52.877453  300014 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 17:43:52.972544  300014 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 17:43:53.377809  300014 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 17:43:53.471718  300014 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 17:43:53.877952  300014 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 17:43:53.977939  300014 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 17:43:54.380570  300014 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 17:43:54.473204  300014 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 17:43:54.877205  300014 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 17:43:54.973131  300014 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 17:43:55.377616  300014 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 17:43:55.478547  300014 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 17:43:55.876990  300014 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 17:43:55.977499  300014 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 17:43:56.377256  300014 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 17:43:56.471843  300014 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 17:43:56.877320  300014 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 17:43:56.972597  300014 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 17:43:57.380312  300014 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 17:43:57.472515  300014 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 17:43:57.878118  300014 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 17:43:57.972339  300014 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 17:43:58.377474  300014 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 17:43:58.472959  300014 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 17:43:58.879054  300014 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 17:43:58.977338  300014 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 17:43:59.377090  300014 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 17:43:59.473455  300014 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 17:43:59.877527  300014 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 17:43:59.971976  300014 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 17:44:00.386697  300014 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 17:44:00.503700  300014 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 17:44:00.877591  300014 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 17:44:00.971434  300014 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 17:44:01.377809  300014 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 17:44:01.471071  300014 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 17:44:01.876917  300014 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 17:44:01.971439  300014 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 17:44:02.376503  300014 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0917 17:44:02.471829  300014 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 17:44:02.877755  300014 kapi.go:107] duration metric: took 52.505991574s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I0917 17:44:02.972201  300014 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 17:44:03.472119  300014 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 17:44:03.974455  300014 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 17:44:04.471725  300014 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 17:44:04.972792  300014 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 17:44:05.471058  300014 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 17:44:05.971896  300014 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 17:44:06.471691  300014 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 17:44:06.971692  300014 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 17:44:07.471689  300014 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 17:44:07.972049  300014 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 17:44:08.470971  300014 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 17:44:08.971428  300014 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 17:44:09.471901  300014 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 17:44:09.972486  300014 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 17:44:10.472412  300014 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 17:44:10.972156  300014 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 17:44:11.471875  300014 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 17:44:11.972367  300014 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 17:44:12.472176  300014 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 17:44:12.974322  300014 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 17:44:13.472825  300014 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 17:44:13.972467  300014 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 17:44:14.472011  300014 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 17:44:14.975829  300014 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 17:44:15.472572  300014 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 17:44:15.972268  300014 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 17:44:16.472170  300014 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 17:44:16.972988  300014 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 17:44:17.472079  300014 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0917 17:44:17.971569  300014 kapi.go:107] duration metric: took 1m10.504562171s to wait for app.kubernetes.io/name=ingress-nginx ...
	I0917 17:44:34.712791  300014 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0917 17:44:34.712826  300014 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 17:44:35.198656  300014 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 17:44:35.698480  300014 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 17:44:36.198027  300014 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 17:44:36.696917  300014 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 17:44:37.196344  300014 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 17:44:37.696631  300014 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 17:44:38.197542  300014 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 17:44:38.697527  300014 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 17:44:39.197409  300014 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 17:44:39.696561  300014 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 17:44:40.197926  300014 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 17:44:40.696599  300014 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 17:44:41.198092  300014 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 17:44:41.696873  300014 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 17:44:42.198303  300014 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 17:44:42.696807  300014 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 17:44:43.197367  300014 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 17:44:43.697308  300014 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 17:44:44.198018  300014 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 17:44:44.697395  300014 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 17:44:45.214262  300014 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 17:44:45.697061  300014 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 17:44:46.196908  300014 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 17:44:46.697227  300014 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 17:44:47.197126  300014 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 17:44:47.697429  300014 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 17:44:48.197672  300014 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 17:44:48.697843  300014 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 17:44:49.197493  300014 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 17:44:49.697096  300014 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 17:44:50.197231  300014 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 17:44:50.697720  300014 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	... (identical "waiting for pod" poll entries, repeated every ~500ms from 17:44:51 through 17:45:42, elided) ...
	I0917 17:45:42.698085  300014 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0917 17:45:43.197108  300014 kapi.go:107] duration metric: took 2m31.503741708s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I0917 17:45:43.198858  300014 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-029117 cluster.
	I0917 17:45:43.200509  300014 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I0917 17:45:43.202731  300014 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I0917 17:45:43.204567  300014 out.go:177] * Enabled addons: nvidia-device-plugin, volcano, cloud-spanner, storage-provisioner, ingress-dns, metrics-server, inspektor-gadget, yakd, default-storageclass, volumesnapshots, registry, csi-hostpath-driver, ingress, gcp-auth
	I0917 17:45:43.206217  300014 addons.go:510] duration metric: took 2m44.345570834s for enable addons: enabled=[nvidia-device-plugin volcano cloud-spanner storage-provisioner ingress-dns metrics-server inspektor-gadget yakd default-storageclass volumesnapshots registry csi-hostpath-driver ingress gcp-auth]
	I0917 17:45:43.206282  300014 start.go:246] waiting for cluster config update ...
	I0917 17:45:43.206305  300014 start.go:255] writing updated cluster config ...
	I0917 17:45:43.206614  300014 ssh_runner.go:195] Run: rm -f paused
	I0917 17:45:43.550413  300014 start.go:600] kubectl: 1.31.1, cluster: 1.31.1 (minor skew: 0)
	I0917 17:45:43.552658  300014 out.go:177] * Done! kubectl is now configured to use "addons-029117" cluster and "default" namespace by default
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                                     ATTEMPT             POD ID              POD
	3a94bd857bc8c       4f725bf50aaa5       2 minutes ago       Exited              gadget                                   5                   787354c6406f7       gadget-vfsbg
	adfa9e9cc540e       6ef582f3ec844       3 minutes ago       Running             gcp-auth                                 0                   51781d55539e1       gcp-auth-89d5ffd79-lcsmd
	579495eff9c40       8b46b1cd48760       4 minutes ago       Running             admission                                0                   fffeb909eb2e1       volcano-admission-77d7d48b68-k4gxk
	3b423340cc8d7       289a818c8d9c5       4 minutes ago       Running             controller                               0                   4f414cf5dc9d5       ingress-nginx-controller-bc57996ff-g8jz6
	f1a38ff84b427       ee6d597e62dc8       5 minutes ago       Running             csi-snapshotter                          0                   349e5fca99a95       csi-hostpathplugin-ssw2p
	6c2acaf9eb6cd       642ded511e141       5 minutes ago       Running             csi-provisioner                          0                   349e5fca99a95       csi-hostpathplugin-ssw2p
	3e9322730cff7       922312104da8a       5 minutes ago       Running             liveness-probe                           0                   349e5fca99a95       csi-hostpathplugin-ssw2p
	db9b19cd7da02       08f6b2990811a       5 minutes ago       Running             hostpath                                 0                   349e5fca99a95       csi-hostpathplugin-ssw2p
	2c507983ece3a       9a80d518f102c       5 minutes ago       Running             csi-attacher                             0                   5448ad7fa15f6       csi-hostpath-attacher-0
	8352581002cd8       1505f556b3a7b       5 minutes ago       Running             volcano-controllers                      0                   82c0be089eaf6       volcano-controllers-56675bb4d5-df26f
	63a432cfe19f9       4d1e5c3e97420       5 minutes ago       Running             volume-snapshot-controller               0                   3f52ad9a7756b       snapshot-controller-56fcc65765-hlt8k
	862e2fe1b523e       420193b27261a       5 minutes ago       Exited              patch                                    0                   edb4df1e06e60       ingress-nginx-admission-patch-8msz9
	21ab990e7f50a       4d1e5c3e97420       5 minutes ago       Running             volume-snapshot-controller               0                   a985a6da65a12       snapshot-controller-56fcc65765-k6pgl
	787ddad56ea0b       d9c7ad4c226bf       5 minutes ago       Running             volcano-scheduler                        0                   78f243afd1328       volcano-scheduler-576bc46687-j2xtl
	ee13b5aec237e       5548a49bb60ba       5 minutes ago       Running             metrics-server                           0                   6429e979e8082       metrics-server-84c5f94fbc-smwfl
	92efee938b6ef       420193b27261a       5 minutes ago       Exited              create                                   0                   5cbf7916e53e6       ingress-nginx-admission-create-qhhqd
	a1485b9059ad0       77bdba588b953       5 minutes ago       Running             yakd                                     0                   021b74db155db       yakd-dashboard-67d98fc6b-b54cz
	53ea060bea327       3410e1561990a       5 minutes ago       Running             registry-proxy                           0                   9fc729f68e4e7       registry-proxy-h97gf
	63e1d8f1c7a06       0107d56dbc0be       5 minutes ago       Running             node-driver-registrar                    0                   349e5fca99a95       csi-hostpathplugin-ssw2p
	393cbb19566f8       c9cf76bb104e1       5 minutes ago       Running             registry                                 0                   b41e3ee387c50       registry-66c9cd494c-v75rj
	bcb350b448fa7       7ce2150c8929b       5 minutes ago       Running             local-path-provisioner                   0                   af1469b54f1b8       local-path-provisioner-86d989889c-qjjzh
	60b60f207a2cb       8be4bcf8ec607       5 minutes ago       Running             cloud-spanner-emulator                   0                   0f032daa38071       cloud-spanner-emulator-769b77f747-hcxxd
	c42a90285bb82       a9bac31a5be8d       5 minutes ago       Running             nvidia-device-plugin-ctr                 0                   7eba7dbf4adc1       nvidia-device-plugin-daemonset-892xz
	2929c90b1e0ba       487fa743e1e22       5 minutes ago       Running             csi-resizer                              0                   2b2b01469db22       csi-hostpath-resizer-0
	849eb706da5ae       1461903ec4fe9       5 minutes ago       Running             csi-external-health-monitor-controller   0                   349e5fca99a95       csi-hostpathplugin-ssw2p
	204a6aab85fbe       2f6c962e7b831       5 minutes ago       Running             coredns                                  0                   15b5fe7a157db       coredns-7c65d6cfc9-mnv4g
	e6b6925a39750       35508c2f890c4       5 minutes ago       Running             minikube-ingress-dns                     0                   de163ee504256       kube-ingress-dns-minikube
	50cca15c73aa4       ba04bb24b9575       5 minutes ago       Running             storage-provisioner                      0                   2fe864b0e74cb       storage-provisioner
	11d29ae5f0ac2       6a23fa8fd2b78       6 minutes ago       Running             kindnet-cni                              0                   f2f8edad51d2a       kindnet-fcdrs
	fcf7d8d2a61ea       24a140c548c07       6 minutes ago       Running             kube-proxy                               0                   8969bbb33adca       kube-proxy-9kqt4
	adcc4e765c3b2       279f381cb3736       6 minutes ago       Running             kube-controller-manager                  0                   5d968b291db39       kube-controller-manager-addons-029117
	e14581239b4f8       d3f53a98c0a9d       6 minutes ago       Running             kube-apiserver                           0                   7f0e2d7c1b43d       kube-apiserver-addons-029117
	45b13edcad25f       7f8aa378bb47d       6 minutes ago       Running             kube-scheduler                           0                   34aefca3c454a       kube-scheduler-addons-029117
	f937a6401089a       27e3830e14027       6 minutes ago       Running             etcd                                     0                   9b1fa35887054       etcd-addons-029117
	
	
	==> containerd <==
	Sep 17 17:45:54 addons-029117 containerd[817]: time="2024-09-17T17:45:54.164752207Z" level=info msg="RemovePodSandbox \"52e18f575c1a35ae9e5df093b9aad9b6b25a57f1a52403293ecc610bf6e441b1\" returns successfully"
	Sep 17 17:46:37 addons-029117 containerd[817]: time="2024-09-17T17:46:37.049569849Z" level=info msg="PullImage \"ghcr.io/inspektor-gadget/inspektor-gadget:v0.32.0@sha256:03e677e1cf9d2c9bea454e3dbcbcef20b3022e987534a2874eb1abc5bc3e73ec\""
	Sep 17 17:46:37 addons-029117 containerd[817]: time="2024-09-17T17:46:37.171211137Z" level=info msg="ImageUpdate event name:\"ghcr.io/inspektor-gadget/inspektor-gadget@sha256:03e677e1cf9d2c9bea454e3dbcbcef20b3022e987534a2874eb1abc5bc3e73ec\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
	Sep 17 17:46:37 addons-029117 containerd[817]: time="2024-09-17T17:46:37.172631831Z" level=info msg="stop pulling image ghcr.io/inspektor-gadget/inspektor-gadget@sha256:03e677e1cf9d2c9bea454e3dbcbcef20b3022e987534a2874eb1abc5bc3e73ec: active requests=0, bytes read=89"
	Sep 17 17:46:37 addons-029117 containerd[817]: time="2024-09-17T17:46:37.176365047Z" level=info msg="Pulled image \"ghcr.io/inspektor-gadget/inspektor-gadget:v0.32.0@sha256:03e677e1cf9d2c9bea454e3dbcbcef20b3022e987534a2874eb1abc5bc3e73ec\" with image id \"sha256:4f725bf50aaa5c697fbb84c107e9c7a3766f0f85f514ffce712d03ee5f62e8dd\", repo tag \"\", repo digest \"ghcr.io/inspektor-gadget/inspektor-gadget@sha256:03e677e1cf9d2c9bea454e3dbcbcef20b3022e987534a2874eb1abc5bc3e73ec\", size \"72524105\" in 126.735334ms"
	Sep 17 17:46:37 addons-029117 containerd[817]: time="2024-09-17T17:46:37.176411980Z" level=info msg="PullImage \"ghcr.io/inspektor-gadget/inspektor-gadget:v0.32.0@sha256:03e677e1cf9d2c9bea454e3dbcbcef20b3022e987534a2874eb1abc5bc3e73ec\" returns image reference \"sha256:4f725bf50aaa5c697fbb84c107e9c7a3766f0f85f514ffce712d03ee5f62e8dd\""
	Sep 17 17:46:37 addons-029117 containerd[817]: time="2024-09-17T17:46:37.178894683Z" level=info msg="CreateContainer within sandbox \"787354c6406f7b5e161032b73893869893083ab81f9f79cf360d0669af0bdad0\" for container &ContainerMetadata{Name:gadget,Attempt:5,}"
	Sep 17 17:46:37 addons-029117 containerd[817]: time="2024-09-17T17:46:37.201697284Z" level=info msg="CreateContainer within sandbox \"787354c6406f7b5e161032b73893869893083ab81f9f79cf360d0669af0bdad0\" for &ContainerMetadata{Name:gadget,Attempt:5,} returns container id \"3a94bd857bc8c889e1c337eb7c1ac50947841dffaa7794aa56506f6500a2edd0\""
	Sep 17 17:46:37 addons-029117 containerd[817]: time="2024-09-17T17:46:37.202497666Z" level=info msg="StartContainer for \"3a94bd857bc8c889e1c337eb7c1ac50947841dffaa7794aa56506f6500a2edd0\""
	Sep 17 17:46:37 addons-029117 containerd[817]: time="2024-09-17T17:46:37.279360735Z" level=info msg="StartContainer for \"3a94bd857bc8c889e1c337eb7c1ac50947841dffaa7794aa56506f6500a2edd0\" returns successfully"
	Sep 17 17:46:38 addons-029117 containerd[817]: time="2024-09-17T17:46:38.875158173Z" level=info msg="shim disconnected" id=3a94bd857bc8c889e1c337eb7c1ac50947841dffaa7794aa56506f6500a2edd0 namespace=k8s.io
	Sep 17 17:46:38 addons-029117 containerd[817]: time="2024-09-17T17:46:38.875786001Z" level=warning msg="cleaning up after shim disconnected" id=3a94bd857bc8c889e1c337eb7c1ac50947841dffaa7794aa56506f6500a2edd0 namespace=k8s.io
	Sep 17 17:46:38 addons-029117 containerd[817]: time="2024-09-17T17:46:38.875884372Z" level=info msg="cleaning up dead shim" namespace=k8s.io
	Sep 17 17:46:39 addons-029117 containerd[817]: time="2024-09-17T17:46:39.230036878Z" level=info msg="RemoveContainer for \"540fbec4bba4fd9e8a552e4c4f19cc053f92a7f4d225164b67d980dfe82f4231\""
	Sep 17 17:46:39 addons-029117 containerd[817]: time="2024-09-17T17:46:39.246814139Z" level=info msg="RemoveContainer for \"540fbec4bba4fd9e8a552e4c4f19cc053f92a7f4d225164b67d980dfe82f4231\" returns successfully"
	Sep 17 17:46:54 addons-029117 containerd[817]: time="2024-09-17T17:46:54.169121911Z" level=info msg="RemoveContainer for \"a850b298628fcdf6a5ffb64fbb1de83acce1e5d92539ede2bdb36ac56fcba0b1\""
	Sep 17 17:46:54 addons-029117 containerd[817]: time="2024-09-17T17:46:54.175821488Z" level=info msg="RemoveContainer for \"a850b298628fcdf6a5ffb64fbb1de83acce1e5d92539ede2bdb36ac56fcba0b1\" returns successfully"
	Sep 17 17:46:54 addons-029117 containerd[817]: time="2024-09-17T17:46:54.177966823Z" level=info msg="StopPodSandbox for \"f330a8cd0150dd5a62595e2234da187d8162fe92d6f490ba3f84ab910a14bc13\""
	Sep 17 17:46:54 addons-029117 containerd[817]: time="2024-09-17T17:46:54.186744715Z" level=info msg="TearDown network for sandbox \"f330a8cd0150dd5a62595e2234da187d8162fe92d6f490ba3f84ab910a14bc13\" successfully"
	Sep 17 17:46:54 addons-029117 containerd[817]: time="2024-09-17T17:46:54.186787250Z" level=info msg="StopPodSandbox for \"f330a8cd0150dd5a62595e2234da187d8162fe92d6f490ba3f84ab910a14bc13\" returns successfully"
	Sep 17 17:46:54 addons-029117 containerd[817]: time="2024-09-17T17:46:54.187371370Z" level=info msg="RemovePodSandbox for \"f330a8cd0150dd5a62595e2234da187d8162fe92d6f490ba3f84ab910a14bc13\""
	Sep 17 17:46:54 addons-029117 containerd[817]: time="2024-09-17T17:46:54.187598996Z" level=info msg="Forcibly stopping sandbox \"f330a8cd0150dd5a62595e2234da187d8162fe92d6f490ba3f84ab910a14bc13\""
	Sep 17 17:46:54 addons-029117 containerd[817]: time="2024-09-17T17:46:54.195566693Z" level=info msg="TearDown network for sandbox \"f330a8cd0150dd5a62595e2234da187d8162fe92d6f490ba3f84ab910a14bc13\" successfully"
	Sep 17 17:46:54 addons-029117 containerd[817]: time="2024-09-17T17:46:54.202505990Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"f330a8cd0150dd5a62595e2234da187d8162fe92d6f490ba3f84ab910a14bc13\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
	Sep 17 17:46:54 addons-029117 containerd[817]: time="2024-09-17T17:46:54.202636114Z" level=info msg="RemovePodSandbox \"f330a8cd0150dd5a62595e2234da187d8162fe92d6f490ba3f84ab910a14bc13\" returns successfully"
	
	
	==> coredns [204a6aab85fbed4ef8fdfba80109d64800667f0513540d391d213fbfa13c9037] <==
	[INFO] 10.244.0.8:35216 - 54643 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000084324s
	[INFO] 10.244.0.8:56803 - 42536 "A IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 83 false 512" NXDOMAIN qr,rd,ra 83 0.002335724s
	[INFO] 10.244.0.8:56803 - 1322 "AAAA IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 83 false 512" NXDOMAIN qr,rd,ra 83 0.002209456s
	[INFO] 10.244.0.8:55078 - 48329 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000244339s
	[INFO] 10.244.0.8:55078 - 29141 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000265582s
	[INFO] 10.244.0.8:60576 - 14140 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000111565s
	[INFO] 10.244.0.8:60576 - 45624 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.00021371s
	[INFO] 10.244.0.8:47663 - 34300 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000078359s
	[INFO] 10.244.0.8:47663 - 58363 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000055466s
	[INFO] 10.244.0.8:60556 - 56914 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000056902s
	[INFO] 10.244.0.8:60556 - 6736 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000039343s
	[INFO] 10.244.0.8:39831 - 12020 "A IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 83 false 512" NXDOMAIN qr,rd,ra 83 0.010582907s
	[INFO] 10.244.0.8:39831 - 55434 "AAAA IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 83 false 512" NXDOMAIN qr,rd,ra 83 0.010879473s
	[INFO] 10.244.0.8:56276 - 54768 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000057953s
	[INFO] 10.244.0.8:56276 - 49907 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000064139s
	[INFO] 10.244.0.24:60139 - 62737 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000192056s
	[INFO] 10.244.0.24:34083 - 45351 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000090798s
	[INFO] 10.244.0.24:48742 - 23534 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000144393s
	[INFO] 10.244.0.24:59418 - 59235 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000109095s
	[INFO] 10.244.0.24:37699 - 30077 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000100209s
	[INFO] 10.244.0.24:46797 - 63755 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000120336s
	[INFO] 10.244.0.24:35396 - 55778 "A IN storage.googleapis.com.us-east-2.compute.internal. udp 78 false 1232" NXDOMAIN qr,rd,ra 67 0.002663945s
	[INFO] 10.244.0.24:52768 - 57486 "AAAA IN storage.googleapis.com.us-east-2.compute.internal. udp 78 false 1232" NXDOMAIN qr,rd,ra 67 0.002592881s
	[INFO] 10.244.0.24:51779 - 22266 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 534 0.002606485s
	[INFO] 10.244.0.24:38127 - 21634 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.002602949s
	
	
	==> describe nodes <==
	Name:               addons-029117
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=addons-029117
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=825de77780746e57a7948604e1eea9da920a46ce
	                    minikube.k8s.io/name=addons-029117
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_09_17T17_42_55_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-029117
	Annotations:        csi.volume.kubernetes.io/nodeid: {"hostpath.csi.k8s.io":"addons-029117"}
	                    kubeadm.alpha.kubernetes.io/cri-socket: unix:///run/containerd/containerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 17 Sep 2024 17:42:51 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-029117
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 17 Sep 2024 17:49:01 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 17 Sep 2024 17:45:58 +0000   Tue, 17 Sep 2024 17:42:47 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 17 Sep 2024 17:45:58 +0000   Tue, 17 Sep 2024 17:42:47 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 17 Sep 2024 17:45:58 +0000   Tue, 17 Sep 2024 17:42:47 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 17 Sep 2024 17:45:58 +0000   Tue, 17 Sep 2024 17:42:51 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    addons-029117
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022304Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022304Ki
	  pods:               110
	System Info:
	  Machine ID:                 d85182ab5dce4fd3b8aa785d7ef956fb
	  System UUID:                bb422dcb-6e2d-4323-b3f9-414297855990
	  Boot ID:                    01456781-933e-4f0b-87af-e69768f8a661
	  Kernel Version:             5.15.0-1070-aws
	  OS Image:                   Ubuntu 22.04.5 LTS
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  containerd://1.7.22
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (27 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  default                     cloud-spanner-emulator-769b77f747-hcxxd     0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m
	  gadget                      gadget-vfsbg                                0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m56s
	  gcp-auth                    gcp-auth-89d5ffd79-lcsmd                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m28s
	  ingress-nginx               ingress-nginx-controller-bc57996ff-g8jz6    100m (5%)     0 (0%)      90Mi (1%)        0 (0%)         5m55s
	  kube-system                 coredns-7c65d6cfc9-mnv4g                    100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     6m3s
	  kube-system                 csi-hostpath-attacher-0                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m52s
	  kube-system                 csi-hostpath-resizer-0                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m52s
	  kube-system                 csi-hostpathplugin-ssw2p                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m52s
	  kube-system                 etcd-addons-029117                          100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         6m8s
	  kube-system                 kindnet-fcdrs                               100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      6m4s
	  kube-system                 kube-apiserver-addons-029117                250m (12%)    0 (0%)      0 (0%)           0 (0%)         6m8s
	  kube-system                 kube-controller-manager-addons-029117       200m (10%)    0 (0%)      0 (0%)           0 (0%)         6m8s
	  kube-system                 kube-ingress-dns-minikube                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m
	  kube-system                 kube-proxy-9kqt4                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m4s
	  kube-system                 kube-scheduler-addons-029117                100m (5%)     0 (0%)      0 (0%)           0 (0%)         6m9s
	  kube-system                 metrics-server-84c5f94fbc-smwfl             100m (5%)     0 (0%)      200Mi (2%)       0 (0%)         5m58s
	  kube-system                 nvidia-device-plugin-daemonset-892xz        0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m1s
	  kube-system                 registry-66c9cd494c-v75rj                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m59s
	  kube-system                 registry-proxy-h97gf                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m58s
	  kube-system                 snapshot-controller-56fcc65765-hlt8k        0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m56s
	  kube-system                 snapshot-controller-56fcc65765-k6pgl        0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m56s
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m58s
	  local-path-storage          local-path-provisioner-86d989889c-qjjzh     0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m58s
	  volcano-system              volcano-admission-77d7d48b68-k4gxk          0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m54s
	  volcano-system              volcano-controllers-56675bb4d5-df26f        0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m54s
	  volcano-system              volcano-scheduler-576bc46687-j2xtl          0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m53s
	  yakd-dashboard              yakd-dashboard-67d98fc6b-b54cz              0 (0%)        0 (0%)      128Mi (1%)       256Mi (3%)     5m57s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                1050m (52%)  100m (5%)
	  memory             638Mi (8%)   476Mi (6%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-1Gi      0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	  hugepages-32Mi     0 (0%)       0 (0%)
	  hugepages-64Ki     0 (0%)       0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 6m2s                   kube-proxy       
	  Normal   NodeHasSufficientMemory  6m16s (x8 over 6m16s)  kubelet          Node addons-029117 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    6m16s (x7 over 6m16s)  kubelet          Node addons-029117 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     6m16s (x7 over 6m16s)  kubelet          Node addons-029117 status is now: NodeHasSufficientPID
	  Normal   NodeAllocatableEnforced  6m16s                  kubelet          Updated Node Allocatable limit across pods
	  Normal   Starting                 6m8s                   kubelet          Starting kubelet.
	  Warning  CgroupV1                 6m8s                   kubelet          Cgroup v1 support is in maintenance mode, please migrate to Cgroup v2.
	  Normal   NodeAllocatableEnforced  6m8s                   kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory  6m8s                   kubelet          Node addons-029117 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    6m8s                   kubelet          Node addons-029117 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     6m8s                   kubelet          Node addons-029117 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           6m4s                   node-controller  Node addons-029117 event: Registered Node addons-029117 in Controller
	
	
	==> dmesg <==
	[Sep17 16:17] ACPI: SRAT not present
	[  +0.000000] ACPI: SRAT not present
	[  +0.000000] SPI driver altr_a10sr has no spi_device_id for altr,a10sr
	[  +0.014777] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.422846] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.775531] ena 0000:00:05.0: LLQ is not supported Fallback to host mode policy.
	[  +5.965603] kauditd_printk_skb: 36 callbacks suppressed
	[Sep17 16:44] hrtimer: interrupt took 17899979 ns
	[Sep17 17:10] kmem.limit_in_bytes is deprecated and will be removed. Please report your usecase to linux-mm@kvack.org if you depend on this functionality.
	[Sep17 17:33] overlayfs: upperdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
	[  +0.000009] overlayfs: workdir is in-use as upperdir/workdir of another mount, accessing files from both mounts will result in undefined behavior.
	
	
	==> etcd [f937a6401089a7f0e44ad37397cb13398c591c660798965f2a7627a81ec58215] <==
	{"level":"info","ts":"2024-09-17T17:42:46.907567Z","caller":"embed/etcd.go:728","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-09-17T17:42:46.907793Z","caller":"embed/etcd.go:599","msg":"serving peer traffic","address":"192.168.49.2:2380"}
	{"level":"info","ts":"2024-09-17T17:42:46.907980Z","caller":"embed/etcd.go:571","msg":"cmux::serve","address":"192.168.49.2:2380"}
	{"level":"info","ts":"2024-09-17T17:42:46.911343Z","caller":"embed/etcd.go:279","msg":"now serving peer/client/metrics","local-member-id":"aec36adc501070cc","initial-advertise-peer-urls":["https://192.168.49.2:2380"],"listen-peer-urls":["https://192.168.49.2:2380"],"advertise-client-urls":["https://192.168.49.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.49.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-09-17T17:42:46.911599Z","caller":"embed/etcd.go:870","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-09-17T17:42:47.375701Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc is starting a new election at term 1"}
	{"level":"info","ts":"2024-09-17T17:42:47.375927Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became pre-candidate at term 1"}
	{"level":"info","ts":"2024-09-17T17:42:47.376067Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc received MsgPreVoteResp from aec36adc501070cc at term 1"}
	{"level":"info","ts":"2024-09-17T17:42:47.376179Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became candidate at term 2"}
	{"level":"info","ts":"2024-09-17T17:42:47.376255Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc received MsgVoteResp from aec36adc501070cc at term 2"}
	{"level":"info","ts":"2024-09-17T17:42:47.376348Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became leader at term 2"}
	{"level":"info","ts":"2024-09-17T17:42:47.376417Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: aec36adc501070cc elected leader aec36adc501070cc at term 2"}
	{"level":"info","ts":"2024-09-17T17:42:47.379849Z","caller":"etcdserver/server.go:2118","msg":"published local member to cluster through raft","local-member-id":"aec36adc501070cc","local-member-attributes":"{Name:addons-029117 ClientURLs:[https://192.168.49.2:2379]}","request-path":"/0/members/aec36adc501070cc/attributes","cluster-id":"fa54960ea34d58be","publish-timeout":"7s"}
	{"level":"info","ts":"2024-09-17T17:42:47.380023Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-17T17:42:47.380419Z","caller":"etcdserver/server.go:2629","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-17T17:42:47.380626Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-09-17T17:42:47.381699Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-17T17:42:47.386638Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"fa54960ea34d58be","local-member-id":"aec36adc501070cc","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-17T17:42:47.389239Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-09-17T17:42:47.389394Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-17T17:42:47.389433Z","caller":"etcdserver/server.go:2653","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-09-17T17:42:47.387389Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-09-17T17:42:47.389464Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-09-17T17:42:47.388133Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-09-17T17:42:47.397537Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.49.2:2379"}
	
	
	==> gcp-auth [adfa9e9cc540e5693f592eb539ad875248282732f61747b369fdafa04c5b72bb] <==
	2024/09/17 17:45:42 GCP Auth Webhook started!
	2024/09/17 17:46:00 Ready to marshal response ...
	2024/09/17 17:46:00 Ready to write response ...
	2024/09/17 17:46:01 Ready to marshal response ...
	2024/09/17 17:46:01 Ready to write response ...
	
	
	==> kernel <==
	 17:49:02 up  1:31,  0 users,  load average: 0.11, 1.19, 2.11
	Linux addons-029117 5.15.0-1070-aws #76~20.04.1-Ubuntu SMP Mon Sep 2 12:20:48 UTC 2024 aarch64 aarch64 aarch64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.5 LTS"
	
	
	==> kindnet [11d29ae5f0ac2b518d822e4997eac9f38c14cd7b4640d44da3f431f630057bd8] <==
	I0917 17:47:00.697312       1 main.go:299] handling current node
	I0917 17:47:10.697296       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0917 17:47:10.697383       1 main.go:299] handling current node
	I0917 17:47:20.705949       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0917 17:47:20.705983       1 main.go:299] handling current node
	I0917 17:47:30.703813       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0917 17:47:30.703856       1 main.go:299] handling current node
	I0917 17:47:40.704754       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0917 17:47:40.704791       1 main.go:299] handling current node
	I0917 17:47:50.703726       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0917 17:47:50.703763       1 main.go:299] handling current node
	I0917 17:48:00.697064       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0917 17:48:00.697180       1 main.go:299] handling current node
	I0917 17:48:10.703541       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0917 17:48:10.703629       1 main.go:299] handling current node
	I0917 17:48:20.704018       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0917 17:48:20.704065       1 main.go:299] handling current node
	I0917 17:48:30.703417       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0917 17:48:30.703453       1 main.go:299] handling current node
	I0917 17:48:40.704272       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0917 17:48:40.704306       1 main.go:299] handling current node
	I0917 17:48:50.705796       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0917 17:48:50.705830       1 main.go:299] handling current node
	I0917 17:49:00.696482       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0917 17:49:00.696518       1 main.go:299] handling current node
	
	
	==> kube-apiserver [e14581239b4f8ec3d5d5d224b92702cfa0d54209f6a8d9e5efe845c39ccdace7] <==
	W0917 17:44:13.452186       1 dispatcher.go:225] Failed calling webhook, failing closed mutatequeue.volcano.sh: failed calling webhook "mutatequeue.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/queues/mutate?timeout=10s": dial tcp 10.105.252.29:443: connect: connection refused
	W0917 17:44:14.491898       1 dispatcher.go:225] Failed calling webhook, failing closed mutatequeue.volcano.sh: failed calling webhook "mutatequeue.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/queues/mutate?timeout=10s": dial tcp 10.105.252.29:443: connect: connection refused
	W0917 17:44:14.647276       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.96.93.130:443: connect: connection refused
	E0917 17:44:14.647316       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.96.93.130:443: connect: connection refused" logger="UnhandledError"
	W0917 17:44:14.648945       1 dispatcher.go:225] Failed calling webhook, failing closed mutatepod.volcano.sh: failed calling webhook "mutatepod.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/pods/mutate?timeout=10s": dial tcp 10.105.252.29:443: connect: connection refused
	W0917 17:44:14.716015       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.96.93.130:443: connect: connection refused
	E0917 17:44:14.716055       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.96.93.130:443: connect: connection refused" logger="UnhandledError"
	W0917 17:44:14.717667       1 dispatcher.go:225] Failed calling webhook, failing closed mutatepod.volcano.sh: failed calling webhook "mutatepod.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/pods/mutate?timeout=10s": dial tcp 10.105.252.29:443: connect: connection refused
	W0917 17:44:15.586203       1 dispatcher.go:225] Failed calling webhook, failing closed mutatequeue.volcano.sh: failed calling webhook "mutatequeue.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/queues/mutate?timeout=10s": dial tcp 10.105.252.29:443: connect: connection refused
	W0917 17:44:16.606219       1 dispatcher.go:225] Failed calling webhook, failing closed mutatequeue.volcano.sh: failed calling webhook "mutatequeue.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/queues/mutate?timeout=10s": dial tcp 10.105.252.29:443: connect: connection refused
	W0917 17:44:17.700981       1 dispatcher.go:225] Failed calling webhook, failing closed mutatequeue.volcano.sh: failed calling webhook "mutatequeue.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/queues/mutate?timeout=10s": dial tcp 10.105.252.29:443: connect: connection refused
	W0917 17:44:18.744619       1 dispatcher.go:225] Failed calling webhook, failing closed mutatequeue.volcano.sh: failed calling webhook "mutatequeue.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/queues/mutate?timeout=10s": dial tcp 10.105.252.29:443: connect: connection refused
	W0917 17:44:19.752621       1 dispatcher.go:225] Failed calling webhook, failing closed mutatequeue.volcano.sh: failed calling webhook "mutatequeue.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/queues/mutate?timeout=10s": dial tcp 10.105.252.29:443: connect: connection refused
	W0917 17:44:20.841712       1 dispatcher.go:225] Failed calling webhook, failing closed mutatequeue.volcano.sh: failed calling webhook "mutatequeue.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/queues/mutate?timeout=10s": dial tcp 10.105.252.29:443: connect: connection refused
	W0917 17:44:21.867575       1 dispatcher.go:225] Failed calling webhook, failing closed mutatequeue.volcano.sh: failed calling webhook "mutatequeue.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/queues/mutate?timeout=10s": dial tcp 10.105.252.29:443: connect: connection refused
	W0917 17:44:22.881454       1 dispatcher.go:225] Failed calling webhook, failing closed mutatequeue.volcano.sh: failed calling webhook "mutatequeue.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/queues/mutate?timeout=10s": dial tcp 10.105.252.29:443: connect: connection refused
	W0917 17:44:23.959378       1 dispatcher.go:225] Failed calling webhook, failing closed mutatequeue.volcano.sh: failed calling webhook "mutatequeue.volcano.sh": failed to call webhook: Post "https://volcano-admission-service.volcano-system.svc:443/queues/mutate?timeout=10s": dial tcp 10.105.252.29:443: connect: connection refused
	W0917 17:44:34.649038       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.96.93.130:443: connect: connection refused
	E0917 17:44:34.649078       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.96.93.130:443: connect: connection refused" logger="UnhandledError"
	W0917 17:45:14.657936       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.96.93.130:443: connect: connection refused
	E0917 17:45:14.658038       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.96.93.130:443: connect: connection refused" logger="UnhandledError"
	W0917 17:45:14.723777       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.96.93.130:443: connect: connection refused
	E0917 17:45:14.723818       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.96.93.130:443: connect: connection refused" logger="UnhandledError"
	I0917 17:46:00.344528       1 controller.go:615] quota admission added evaluator for: jobs.batch.volcano.sh
	I0917 17:46:00.414422       1 controller.go:615] quota admission added evaluator for: podgroups.scheduling.volcano.sh
	
	
	==> kube-controller-manager [adcc4e765c3b23e534e146f0de383da9dd0eee5da448d49a98cf0866c6bd69a9] <==
	I0917 17:45:14.677917       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-create" delay="1s"
	I0917 17:45:14.683928       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-create" delay="1s"
	I0917 17:45:14.696144       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-create" delay="1s"
	I0917 17:45:14.733071       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-patch" delay="1s"
	I0917 17:45:14.750040       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-patch" delay="1s"
	I0917 17:45:14.750178       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-patch" delay="1s"
	I0917 17:45:14.761349       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-patch" delay="1s"
	I0917 17:45:15.971329       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-create" delay="1s"
	I0917 17:45:15.985093       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-patch" delay="1s"
	I0917 17:45:17.101664       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-patch" delay="1s"
	I0917 17:45:17.129823       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-create" delay="1s"
	I0917 17:45:18.109640       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-patch" delay="1s"
	I0917 17:45:18.120331       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-patch" delay="1s"
	I0917 17:45:18.126276       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-patch" delay="1s"
	I0917 17:45:18.136879       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-create" delay="1s"
	I0917 17:45:18.144917       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-create" delay="1s"
	I0917 17:45:18.151398       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-create" delay="1s"
	I0917 17:45:43.083516       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="gcp-auth/gcp-auth-89d5ffd79" duration="12.757817ms"
	I0917 17:45:43.083905       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="gcp-auth/gcp-auth-89d5ffd79" duration="49.329µs"
	I0917 17:45:48.059637       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-create" delay="0s"
	I0917 17:45:48.063426       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-patch" delay="0s"
	I0917 17:45:48.112794       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-patch" delay="0s"
	I0917 17:45:48.114734       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-create" delay="0s"
	I0917 17:45:58.138252       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="addons-029117"
	I0917 17:45:59.824098       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="volcano-system/volcano-admission-init" delay="0s"
	
	
	==> kube-proxy [fcf7d8d2a61ead752be43f4689e195f3e0bfa9ce3d636872cb686ed958f943af] <==
	I0917 17:43:00.079845       1 server_linux.go:66] "Using iptables proxy"
	I0917 17:43:00.275098       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.49.2"]
	E0917 17:43:00.275207       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0917 17:43:00.384688       1 server.go:243] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0917 17:43:00.384762       1 server_linux.go:169] "Using iptables Proxier"
	I0917 17:43:00.396923       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0917 17:43:00.406327       1 server.go:483] "Version info" version="v1.31.1"
	I0917 17:43:00.406366       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0917 17:43:00.408857       1 config.go:199] "Starting service config controller"
	I0917 17:43:00.408891       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0917 17:43:00.408922       1 config.go:105] "Starting endpoint slice config controller"
	I0917 17:43:00.408929       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0917 17:43:00.409555       1 config.go:328] "Starting node config controller"
	I0917 17:43:00.409568       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0917 17:43:00.510017       1 shared_informer.go:320] Caches are synced for node config
	I0917 17:43:00.510053       1 shared_informer.go:320] Caches are synced for service config
	I0917 17:43:00.510059       1 shared_informer.go:320] Caches are synced for endpoint slice config
	
	
	==> kube-scheduler [45b13edcad25f15717435584c20bd1bbd419a9d363d48e62efba1f0dfb221b4e] <==
	W0917 17:42:51.459240       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0917 17:42:51.459262       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0917 17:42:51.459761       1 reflector.go:561] runtime/asm_arm64.s:1222: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0917 17:42:51.459789       1 reflector.go:158] "Unhandled Error" err="runtime/asm_arm64.s:1222: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	W0917 17:42:51.460018       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0917 17:42:51.460041       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0917 17:42:52.280605       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0917 17:42:52.280647       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0917 17:42:52.287360       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0917 17:42:52.287403       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0917 17:42:52.305242       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0917 17:42:52.305284       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0917 17:42:52.351232       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0917 17:42:52.351286       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0917 17:42:52.365233       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0917 17:42:52.365410       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0917 17:42:52.451957       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0917 17:42:52.452216       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0917 17:42:52.513594       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0917 17:42:52.513811       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0917 17:42:52.585458       1 reflector.go:561] runtime/asm_arm64.s:1222: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0917 17:42:52.585682       1 reflector.go:158] "Unhandled Error" err="runtime/asm_arm64.s:1222: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	W0917 17:42:52.679179       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0917 17:42:52.679222       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
	I0917 17:42:55.746599       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Sep 17 17:47:09 addons-029117 kubelet[1514]: I0917 17:47:09.048514    1514 scope.go:117] "RemoveContainer" containerID="3a94bd857bc8c889e1c337eb7c1ac50947841dffaa7794aa56506f6500a2edd0"
	Sep 17 17:47:09 addons-029117 kubelet[1514]: E0917 17:47:09.048735    1514 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"gadget\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=gadget pod=gadget-vfsbg_gadget(738585f5-cf0b-4c58-b7d8-2fdab3e54f78)\"" pod="gadget/gadget-vfsbg" podUID="738585f5-cf0b-4c58-b7d8-2fdab3e54f78"
	Sep 17 17:47:23 addons-029117 kubelet[1514]: I0917 17:47:23.049008    1514 scope.go:117] "RemoveContainer" containerID="3a94bd857bc8c889e1c337eb7c1ac50947841dffaa7794aa56506f6500a2edd0"
	Sep 17 17:47:23 addons-029117 kubelet[1514]: E0917 17:47:23.049243    1514 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"gadget\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=gadget pod=gadget-vfsbg_gadget(738585f5-cf0b-4c58-b7d8-2fdab3e54f78)\"" pod="gadget/gadget-vfsbg" podUID="738585f5-cf0b-4c58-b7d8-2fdab3e54f78"
	Sep 17 17:47:31 addons-029117 kubelet[1514]: I0917 17:47:31.048359    1514 kubelet_pods.go:1007] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-66c9cd494c-v75rj" secret="" err="secret \"gcp-auth\" not found"
	Sep 17 17:47:34 addons-029117 kubelet[1514]: I0917 17:47:34.055515    1514 scope.go:117] "RemoveContainer" containerID="3a94bd857bc8c889e1c337eb7c1ac50947841dffaa7794aa56506f6500a2edd0"
	Sep 17 17:47:34 addons-029117 kubelet[1514]: E0917 17:47:34.056188    1514 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"gadget\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=gadget pod=gadget-vfsbg_gadget(738585f5-cf0b-4c58-b7d8-2fdab3e54f78)\"" pod="gadget/gadget-vfsbg" podUID="738585f5-cf0b-4c58-b7d8-2fdab3e54f78"
	Sep 17 17:47:43 addons-029117 kubelet[1514]: I0917 17:47:43.048021    1514 kubelet_pods.go:1007] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-proxy-h97gf" secret="" err="secret \"gcp-auth\" not found"
	Sep 17 17:47:47 addons-029117 kubelet[1514]: I0917 17:47:47.048182    1514 scope.go:117] "RemoveContainer" containerID="3a94bd857bc8c889e1c337eb7c1ac50947841dffaa7794aa56506f6500a2edd0"
	Sep 17 17:47:47 addons-029117 kubelet[1514]: E0917 17:47:47.048397    1514 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"gadget\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=gadget pod=gadget-vfsbg_gadget(738585f5-cf0b-4c58-b7d8-2fdab3e54f78)\"" pod="gadget/gadget-vfsbg" podUID="738585f5-cf0b-4c58-b7d8-2fdab3e54f78"
	Sep 17 17:47:59 addons-029117 kubelet[1514]: I0917 17:47:59.048206    1514 scope.go:117] "RemoveContainer" containerID="3a94bd857bc8c889e1c337eb7c1ac50947841dffaa7794aa56506f6500a2edd0"
	Sep 17 17:47:59 addons-029117 kubelet[1514]: E0917 17:47:59.048410    1514 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"gadget\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=gadget pod=gadget-vfsbg_gadget(738585f5-cf0b-4c58-b7d8-2fdab3e54f78)\"" pod="gadget/gadget-vfsbg" podUID="738585f5-cf0b-4c58-b7d8-2fdab3e54f78"
	Sep 17 17:48:10 addons-029117 kubelet[1514]: I0917 17:48:10.048709    1514 scope.go:117] "RemoveContainer" containerID="3a94bd857bc8c889e1c337eb7c1ac50947841dffaa7794aa56506f6500a2edd0"
	Sep 17 17:48:10 addons-029117 kubelet[1514]: E0917 17:48:10.049464    1514 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"gadget\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=gadget pod=gadget-vfsbg_gadget(738585f5-cf0b-4c58-b7d8-2fdab3e54f78)\"" pod="gadget/gadget-vfsbg" podUID="738585f5-cf0b-4c58-b7d8-2fdab3e54f78"
	Sep 17 17:48:22 addons-029117 kubelet[1514]: I0917 17:48:22.048555    1514 scope.go:117] "RemoveContainer" containerID="3a94bd857bc8c889e1c337eb7c1ac50947841dffaa7794aa56506f6500a2edd0"
	Sep 17 17:48:22 addons-029117 kubelet[1514]: E0917 17:48:22.049226    1514 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"gadget\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=gadget pod=gadget-vfsbg_gadget(738585f5-cf0b-4c58-b7d8-2fdab3e54f78)\"" pod="gadget/gadget-vfsbg" podUID="738585f5-cf0b-4c58-b7d8-2fdab3e54f78"
	Sep 17 17:48:33 addons-029117 kubelet[1514]: I0917 17:48:33.048275    1514 scope.go:117] "RemoveContainer" containerID="3a94bd857bc8c889e1c337eb7c1ac50947841dffaa7794aa56506f6500a2edd0"
	Sep 17 17:48:33 addons-029117 kubelet[1514]: E0917 17:48:33.048539    1514 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"gadget\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=gadget pod=gadget-vfsbg_gadget(738585f5-cf0b-4c58-b7d8-2fdab3e54f78)\"" pod="gadget/gadget-vfsbg" podUID="738585f5-cf0b-4c58-b7d8-2fdab3e54f78"
	Sep 17 17:48:36 addons-029117 kubelet[1514]: I0917 17:48:36.048949    1514 kubelet_pods.go:1007] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/nvidia-device-plugin-daemonset-892xz" secret="" err="secret \"gcp-auth\" not found"
	Sep 17 17:48:47 addons-029117 kubelet[1514]: I0917 17:48:47.048773    1514 scope.go:117] "RemoveContainer" containerID="3a94bd857bc8c889e1c337eb7c1ac50947841dffaa7794aa56506f6500a2edd0"
	Sep 17 17:48:47 addons-029117 kubelet[1514]: E0917 17:48:47.049049    1514 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"gadget\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=gadget pod=gadget-vfsbg_gadget(738585f5-cf0b-4c58-b7d8-2fdab3e54f78)\"" pod="gadget/gadget-vfsbg" podUID="738585f5-cf0b-4c58-b7d8-2fdab3e54f78"
	Sep 17 17:48:52 addons-029117 kubelet[1514]: I0917 17:48:52.049408    1514 kubelet_pods.go:1007] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-proxy-h97gf" secret="" err="secret \"gcp-auth\" not found"
	Sep 17 17:49:00 addons-029117 kubelet[1514]: I0917 17:49:00.049844    1514 kubelet_pods.go:1007] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-66c9cd494c-v75rj" secret="" err="secret \"gcp-auth\" not found"
	Sep 17 17:49:02 addons-029117 kubelet[1514]: I0917 17:49:02.049614    1514 scope.go:117] "RemoveContainer" containerID="3a94bd857bc8c889e1c337eb7c1ac50947841dffaa7794aa56506f6500a2edd0"
	Sep 17 17:49:02 addons-029117 kubelet[1514]: E0917 17:49:02.049795    1514 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"gadget\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=gadget pod=gadget-vfsbg_gadget(738585f5-cf0b-4c58-b7d8-2fdab3e54f78)\"" pod="gadget/gadget-vfsbg" podUID="738585f5-cf0b-4c58-b7d8-2fdab3e54f78"
	
	
	==> storage-provisioner [50cca15c73aa440b948ff293c8d4d3822c1341bdaa2be1df294f0b3d95347a31] <==
	I0917 17:43:05.269672       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0917 17:43:05.282471       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0917 17:43:05.282544       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0917 17:43:05.294849       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0917 17:43:05.295957       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_addons-029117_3a72d13f-8aca-4d2e-977d-2dea14e450b1!
	I0917 17:43:05.297754       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"f25d191a-6bec-4627-b036-b55134516c40", APIVersion:"v1", ResourceVersion:"590", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' addons-029117_3a72d13f-8aca-4d2e-977d-2dea14e450b1 became leader
	I0917 17:43:05.396090       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_addons-029117_3a72d13f-8aca-4d2e-977d-2dea14e450b1!
	

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p addons-029117 -n addons-029117
helpers_test.go:261: (dbg) Run:  kubectl --context addons-029117 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: ingress-nginx-admission-create-qhhqd ingress-nginx-admission-patch-8msz9 test-job-nginx-0
helpers_test.go:274: ======> post-mortem[TestAddons/serial/Volcano]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context addons-029117 describe pod ingress-nginx-admission-create-qhhqd ingress-nginx-admission-patch-8msz9 test-job-nginx-0
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context addons-029117 describe pod ingress-nginx-admission-create-qhhqd ingress-nginx-admission-patch-8msz9 test-job-nginx-0: exit status 1 (122.294338ms)

** stderr ** 
	Error from server (NotFound): pods "ingress-nginx-admission-create-qhhqd" not found
	Error from server (NotFound): pods "ingress-nginx-admission-patch-8msz9" not found
	Error from server (NotFound): pods "test-job-nginx-0" not found

** /stderr **
helpers_test.go:279: kubectl --context addons-029117 describe pod ingress-nginx-admission-create-qhhqd ingress-nginx-admission-patch-8msz9 test-job-nginx-0: exit status 1
--- FAIL: TestAddons/serial/Volcano (200.24s)


Test pass (299/328)

Order passed test Duration
3 TestDownloadOnly/v1.20.0/json-events 13.36
4 TestDownloadOnly/v1.20.0/preload-exists 0
8 TestDownloadOnly/v1.20.0/LogsDuration 0.07
9 TestDownloadOnly/v1.20.0/DeleteAll 0.21
10 TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds 0.13
12 TestDownloadOnly/v1.31.1/json-events 6.23
13 TestDownloadOnly/v1.31.1/preload-exists 0
17 TestDownloadOnly/v1.31.1/LogsDuration 0.07
18 TestDownloadOnly/v1.31.1/DeleteAll 0.2
19 TestDownloadOnly/v1.31.1/DeleteAlwaysSucceeds 0.14
21 TestBinaryMirror 0.77
25 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.17
26 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.19
27 TestAddons/Setup 223.07
31 TestAddons/serial/GCPAuth/Namespaces 0.17
33 TestAddons/parallel/Registry 15
34 TestAddons/parallel/Ingress 19.86
35 TestAddons/parallel/InspektorGadget 10.97
36 TestAddons/parallel/MetricsServer 6.83
39 TestAddons/parallel/CSI 58.6
40 TestAddons/parallel/Headlamp 16.05
41 TestAddons/parallel/CloudSpanner 6.74
42 TestAddons/parallel/LocalPath 8.82
43 TestAddons/parallel/NvidiaDevicePlugin 6.68
44 TestAddons/parallel/Yakd 11.91
45 TestAddons/StoppedEnableDisable 12.31
46 TestCertOptions 35.73
47 TestCertExpiration 227.12
49 TestForceSystemdFlag 33.22
50 TestForceSystemdEnv 40.37
51 TestDockerEnvContainerd 43.88
56 TestErrorSpam/setup 28.07
57 TestErrorSpam/start 0.77
58 TestErrorSpam/status 1.16
59 TestErrorSpam/pause 1.84
60 TestErrorSpam/unpause 2.1
61 TestErrorSpam/stop 1.44
64 TestFunctional/serial/CopySyncFile 0
65 TestFunctional/serial/StartWithProxy 54.3
66 TestFunctional/serial/AuditLog 0
67 TestFunctional/serial/SoftStart 6.43
68 TestFunctional/serial/KubeContext 0.07
69 TestFunctional/serial/KubectlGetPods 0.12
72 TestFunctional/serial/CacheCmd/cache/add_remote 4.25
73 TestFunctional/serial/CacheCmd/cache/add_local 1.34
74 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.06
75 TestFunctional/serial/CacheCmd/cache/list 0.05
76 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.32
77 TestFunctional/serial/CacheCmd/cache/cache_reload 2.18
78 TestFunctional/serial/CacheCmd/cache/delete 0.12
79 TestFunctional/serial/MinikubeKubectlCmd 0.14
80 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.15
81 TestFunctional/serial/ExtraConfig 42.26
82 TestFunctional/serial/ComponentHealth 0.11
83 TestFunctional/serial/LogsCmd 1.71
84 TestFunctional/serial/LogsFileCmd 1.86
85 TestFunctional/serial/InvalidService 4.38
87 TestFunctional/parallel/ConfigCmd 0.45
88 TestFunctional/parallel/DashboardCmd 8.42
89 TestFunctional/parallel/DryRun 0.47
90 TestFunctional/parallel/InternationalLanguage 0.24
91 TestFunctional/parallel/StatusCmd 1.06
95 TestFunctional/parallel/ServiceCmdConnect 7.72
96 TestFunctional/parallel/AddonsCmd 0.17
97 TestFunctional/parallel/PersistentVolumeClaim 25.51
99 TestFunctional/parallel/SSHCmd 0.55
100 TestFunctional/parallel/CpCmd 2.06
102 TestFunctional/parallel/FileSync 0.36
103 TestFunctional/parallel/CertSync 2.25
107 TestFunctional/parallel/NodeLabels 0.13
109 TestFunctional/parallel/NonActiveRuntimeDisabled 0.74
111 TestFunctional/parallel/License 0.31
112 TestFunctional/parallel/Version/short 0.09
113 TestFunctional/parallel/Version/components 1.25
114 TestFunctional/parallel/ImageCommands/ImageListShort 0.35
115 TestFunctional/parallel/ImageCommands/ImageListTable 0.29
116 TestFunctional/parallel/ImageCommands/ImageListJson 0.28
117 TestFunctional/parallel/ImageCommands/ImageListYaml 0.27
118 TestFunctional/parallel/ImageCommands/ImageBuild 4.27
119 TestFunctional/parallel/ImageCommands/Setup 0.75
120 TestFunctional/parallel/UpdateContextCmd/no_changes 0.23
121 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.25
122 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.22
123 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 1.5
124 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 1.48
125 TestFunctional/parallel/ServiceCmd/DeployApp 10.28
126 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 1.7
127 TestFunctional/parallel/ImageCommands/ImageSaveToFile 0.45
128 TestFunctional/parallel/ImageCommands/ImageRemove 0.62
129 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 0.68
130 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 0.63
132 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.48
133 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0
135 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 8.33
136 TestFunctional/parallel/ServiceCmd/List 0.33
137 TestFunctional/parallel/ServiceCmd/JSONOutput 0.35
138 TestFunctional/parallel/ServiceCmd/HTTPS 0.42
139 TestFunctional/parallel/ServiceCmd/Format 0.37
140 TestFunctional/parallel/ServiceCmd/URL 0.4
141 TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP 0.08
142 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0
146 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.11
147 TestFunctional/parallel/ProfileCmd/profile_not_create 0.44
148 TestFunctional/parallel/ProfileCmd/profile_list 0.39
149 TestFunctional/parallel/ProfileCmd/profile_json_output 0.41
150 TestFunctional/parallel/MountCmd/any-port 9.12
151 TestFunctional/parallel/MountCmd/specific-port 1.81
152 TestFunctional/parallel/MountCmd/VerifyCleanup 1.8
153 TestFunctional/delete_echo-server_images 0.07
154 TestFunctional/delete_my-image_image 0.04
155 TestFunctional/delete_minikube_cached_images 0.02
159 TestMultiControlPlane/serial/StartCluster 138.67
160 TestMultiControlPlane/serial/DeployApp 30.45
161 TestMultiControlPlane/serial/PingHostFromPods 1.62
162 TestMultiControlPlane/serial/AddWorkerNode 21.91
163 TestMultiControlPlane/serial/NodeLabels 0.11
164 TestMultiControlPlane/serial/HAppyAfterClusterStart 0.78
165 TestMultiControlPlane/serial/CopyFile 20.14
166 TestMultiControlPlane/serial/StopSecondaryNode 12.87
167 TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop 0.57
168 TestMultiControlPlane/serial/RestartSecondaryNode 20.4
169 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart 0.8
170 TestMultiControlPlane/serial/RestartClusterKeepsNodes 122.73
171 TestMultiControlPlane/serial/DeleteSecondaryNode 10.64
172 TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete 0.53
173 TestMultiControlPlane/serial/StopCluster 35.99
174 TestMultiControlPlane/serial/RestartCluster 79.88
175 TestMultiControlPlane/serial/DegradedAfterClusterRestart 0.55
176 TestMultiControlPlane/serial/AddSecondaryNode 42.47
177 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd 0.83
181 TestJSONOutput/start/Command 80.36
182 TestJSONOutput/start/Audit 0
184 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
185 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
187 TestJSONOutput/pause/Command 0.99
188 TestJSONOutput/pause/Audit 0
190 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
191 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
193 TestJSONOutput/unpause/Command 0.65
194 TestJSONOutput/unpause/Audit 0
196 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
197 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
199 TestJSONOutput/stop/Command 5.79
200 TestJSONOutput/stop/Audit 0
202 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
203 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
204 TestErrorJSONOutput 0.22
206 TestKicCustomNetwork/create_custom_network 40.53
207 TestKicCustomNetwork/use_default_bridge_network 33.43
208 TestKicExistingNetwork 32.25
209 TestKicCustomSubnet 34.3
210 TestKicStaticIP 32.26
211 TestMainNoArgs 0.06
212 TestMinikubeProfile 71.27
215 TestMountStart/serial/StartWithMountFirst 7.19
216 TestMountStart/serial/VerifyMountFirst 0.28
217 TestMountStart/serial/StartWithMountSecond 6.05
218 TestMountStart/serial/VerifyMountSecond 0.28
219 TestMountStart/serial/DeleteFirst 1.62
220 TestMountStart/serial/VerifyMountPostDelete 0.28
221 TestMountStart/serial/Stop 1.2
222 TestMountStart/serial/RestartStopped 7.86
223 TestMountStart/serial/VerifyMountPostStop 0.27
226 TestMultiNode/serial/FreshStart2Nodes 108.45
227 TestMultiNode/serial/DeployApp2Nodes 19.52
228 TestMultiNode/serial/PingHostFrom2Pods 1
229 TestMultiNode/serial/AddNode 16.85
230 TestMultiNode/serial/MultiNodeLabels 0.1
231 TestMultiNode/serial/ProfileList 0.34
232 TestMultiNode/serial/CopyFile 10.37
233 TestMultiNode/serial/StopNode 2.28
234 TestMultiNode/serial/StartAfterStop 9.94
235 TestMultiNode/serial/RestartKeepsNodes 102.13
236 TestMultiNode/serial/DeleteNode 5.73
237 TestMultiNode/serial/StopMultiNode 24.01
238 TestMultiNode/serial/RestartMultiNode 49.54
239 TestMultiNode/serial/ValidateNameConflict 34.68
244 TestPreload 121.29
246 TestScheduledStopUnix 108.51
249 TestInsufficientStorage 10.4
250 TestRunningBinaryUpgrade 81.98
252 TestKubernetesUpgrade 352.86
253 TestMissingContainerUpgrade 177.22
255 TestNoKubernetes/serial/StartNoK8sWithVersion 0.09
256 TestNoKubernetes/serial/StartWithK8s 38.9
257 TestNoKubernetes/serial/StartWithStopK8s 17.95
258 TestNoKubernetes/serial/Start 8.1
259 TestNoKubernetes/serial/VerifyK8sNotRunning 0.27
260 TestNoKubernetes/serial/ProfileList 0.89
261 TestNoKubernetes/serial/Stop 1.22
262 TestNoKubernetes/serial/StartNoArgs 7.77
263 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.34
264 TestStoppedBinaryUpgrade/Setup 0.73
265 TestStoppedBinaryUpgrade/Upgrade 121.23
266 TestStoppedBinaryUpgrade/MinikubeLogs 1.15
275 TestPause/serial/Start 92.73
283 TestNetworkPlugins/group/false 3.86
287 TestPause/serial/SecondStartNoReconfiguration 7.71
288 TestPause/serial/Pause 0.98
289 TestPause/serial/VerifyStatus 0.45
290 TestPause/serial/Unpause 0.91
291 TestPause/serial/PauseAgain 1.16
292 TestPause/serial/DeletePaused 2.75
293 TestPause/serial/VerifyDeletedResources 0.47
295 TestStartStop/group/old-k8s-version/serial/FirstStart 147.89
296 TestStartStop/group/old-k8s-version/serial/DeployApp 9.68
297 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 1.15
298 TestStartStop/group/old-k8s-version/serial/Stop 12.76
300 TestStartStop/group/no-preload/serial/FirstStart 76.5
301 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.34
302 TestStartStop/group/old-k8s-version/serial/SecondStart 308.95
303 TestStartStop/group/no-preload/serial/DeployApp 9.36
304 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 1.24
305 TestStartStop/group/no-preload/serial/Stop 12.17
306 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.2
307 TestStartStop/group/no-preload/serial/SecondStart 268.35
308 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 6.01
309 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 5.11
310 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.25
311 TestStartStop/group/old-k8s-version/serial/Pause 3.08
313 TestStartStop/group/embed-certs/serial/FirstStart 93.32
314 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 6
315 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 5.1
316 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.25
317 TestStartStop/group/no-preload/serial/Pause 3.19
319 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 82.27
320 TestStartStop/group/embed-certs/serial/DeployApp 9.36
321 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 1.22
322 TestStartStop/group/embed-certs/serial/Stop 12.09
323 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.19
324 TestStartStop/group/embed-certs/serial/SecondStart 281.26
325 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 10.4
326 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 1.12
327 TestStartStop/group/default-k8s-diff-port/serial/Stop 12.12
328 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.21
329 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 269.95
330 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 6.01
331 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 5.13
332 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.26
333 TestStartStop/group/embed-certs/serial/Pause 3.2
335 TestStartStop/group/newest-cni/serial/FirstStart 37.51
336 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 6.01
337 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 5.12
338 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.26
339 TestStartStop/group/default-k8s-diff-port/serial/Pause 4.02
340 TestNetworkPlugins/group/auto/Start 99.27
341 TestStartStop/group/newest-cni/serial/DeployApp 0
342 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 1.74
343 TestStartStop/group/newest-cni/serial/Stop 3.37
344 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.25
345 TestStartStop/group/newest-cni/serial/SecondStart 24.21
346 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
347 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
348 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.36
349 TestStartStop/group/newest-cni/serial/Pause 3.95
350 TestNetworkPlugins/group/kindnet/Start 53.86
351 TestNetworkPlugins/group/kindnet/ControllerPod 6.01
352 TestNetworkPlugins/group/kindnet/KubeletFlags 0.31
353 TestNetworkPlugins/group/kindnet/NetCatPod 9.29
354 TestNetworkPlugins/group/auto/KubeletFlags 0.31
355 TestNetworkPlugins/group/auto/NetCatPod 8.37
356 TestNetworkPlugins/group/kindnet/DNS 0.64
357 TestNetworkPlugins/group/auto/DNS 0.59
358 TestNetworkPlugins/group/kindnet/Localhost 0.22
359 TestNetworkPlugins/group/auto/Localhost 0.22
360 TestNetworkPlugins/group/kindnet/HairPin 0.24
361 TestNetworkPlugins/group/auto/HairPin 0.21
362 TestNetworkPlugins/group/calico/Start 70.92
363 TestNetworkPlugins/group/custom-flannel/Start 57.66
364 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.4
365 TestNetworkPlugins/group/custom-flannel/NetCatPod 11.39
366 TestNetworkPlugins/group/custom-flannel/DNS 0.22
367 TestNetworkPlugins/group/custom-flannel/Localhost 0.15
368 TestNetworkPlugins/group/custom-flannel/HairPin 0.2
369 TestNetworkPlugins/group/calico/ControllerPod 6.01
370 TestNetworkPlugins/group/calico/KubeletFlags 0.41
371 TestNetworkPlugins/group/calico/NetCatPod 10.42
372 TestNetworkPlugins/group/calico/DNS 0.27
373 TestNetworkPlugins/group/calico/Localhost 0.22
374 TestNetworkPlugins/group/calico/HairPin 0.22
375 TestNetworkPlugins/group/enable-default-cni/Start 53.57
376 TestNetworkPlugins/group/flannel/Start 54.51
377 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.37
378 TestNetworkPlugins/group/enable-default-cni/NetCatPod 10.34
379 TestNetworkPlugins/group/enable-default-cni/DNS 0.22
380 TestNetworkPlugins/group/enable-default-cni/Localhost 0.2
381 TestNetworkPlugins/group/enable-default-cni/HairPin 0.19
382 TestNetworkPlugins/group/flannel/ControllerPod 6.01
383 TestNetworkPlugins/group/flannel/KubeletFlags 0.36
384 TestNetworkPlugins/group/flannel/NetCatPod 10.36
385 TestNetworkPlugins/group/bridge/Start 77.62
386 TestNetworkPlugins/group/flannel/DNS 0.23
387 TestNetworkPlugins/group/flannel/Localhost 0.17
388 TestNetworkPlugins/group/flannel/HairPin 0.21
389 TestNetworkPlugins/group/bridge/KubeletFlags 0.28
390 TestNetworkPlugins/group/bridge/NetCatPod 10.27
391 TestNetworkPlugins/group/bridge/DNS 0.17
392 TestNetworkPlugins/group/bridge/Localhost 0.15
393 TestNetworkPlugins/group/bridge/HairPin 0.14
TestDownloadOnly/v1.20.0/json-events (13.36s)

=== RUN   TestDownloadOnly/v1.20.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-arm64 start -o=json --download-only -p download-only-114122 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=containerd --driver=docker  --container-runtime=containerd
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-arm64 start -o=json --download-only -p download-only-114122 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=containerd --driver=docker  --container-runtime=containerd: (13.359141392s)
--- PASS: TestDownloadOnly/v1.20.0/json-events (13.36s)

TestDownloadOnly/v1.20.0/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.20.0/preload-exists
--- PASS: TestDownloadOnly/v1.20.0/preload-exists (0.00s)

TestDownloadOnly/v1.20.0/LogsDuration (0.07s)

=== RUN   TestDownloadOnly/v1.20.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-arm64 logs -p download-only-114122
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-arm64 logs -p download-only-114122: exit status 85 (65.273399ms)

-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-114122 | jenkins | v1.34.0 | 17 Sep 24 17:41 UTC |          |
	|         | -p download-only-114122        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |          |
	|         | --container-runtime=containerd |                      |         |         |                     |          |
	|         | --driver=docker                |                      |         |         |                     |          |
	|         | --container-runtime=containerd |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/17 17:41:38
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.23.0 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0917 17:41:38.374153  299261 out.go:345] Setting OutFile to fd 1 ...
	I0917 17:41:38.374508  299261 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0917 17:41:38.374520  299261 out.go:358] Setting ErrFile to fd 2...
	I0917 17:41:38.374525  299261 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0917 17:41:38.374800  299261 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19662-293874/.minikube/bin
	W0917 17:41:38.374949  299261 root.go:314] Error reading config file at /home/jenkins/minikube-integration/19662-293874/.minikube/config/config.json: open /home/jenkins/minikube-integration/19662-293874/.minikube/config/config.json: no such file or directory
	I0917 17:41:38.375354  299261 out.go:352] Setting JSON to true
	I0917 17:41:38.376200  299261 start.go:129] hostinfo: {"hostname":"ip-172-31-31-251","uptime":5044,"bootTime":1726589854,"procs":170,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1070-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I0917 17:41:38.376271  299261 start.go:139] virtualization:  
	I0917 17:41:38.379419  299261 out.go:97] [download-only-114122] minikube v1.34.0 on Ubuntu 20.04 (arm64)
	W0917 17:41:38.379640  299261 preload.go:293] Failed to list preload files: open /home/jenkins/minikube-integration/19662-293874/.minikube/cache/preloaded-tarball: no such file or directory
	I0917 17:41:38.379699  299261 notify.go:220] Checking for updates...
	I0917 17:41:38.381223  299261 out.go:169] MINIKUBE_LOCATION=19662
	I0917 17:41:38.383239  299261 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0917 17:41:38.384917  299261 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/19662-293874/kubeconfig
	I0917 17:41:38.386846  299261 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/19662-293874/.minikube
	I0917 17:41:38.388658  299261 out.go:169] MINIKUBE_BIN=out/minikube-linux-arm64
	W0917 17:41:38.392187  299261 out.go:321] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0917 17:41:38.392487  299261 driver.go:394] Setting default libvirt URI to qemu:///system
	I0917 17:41:38.421522  299261 docker.go:123] docker version: linux-27.2.1:Docker Engine - Community
	I0917 17:41:38.421629  299261 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0917 17:41:38.483049  299261 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:29 OomKillDisable:true NGoroutines:52 SystemTime:2024-09-17 17:41:38.473597782 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1070-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:27.2.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.2]] Warnings:<nil>}}
	I0917 17:41:38.483168  299261 docker.go:318] overlay module found
	I0917 17:41:38.485470  299261 out.go:97] Using the docker driver based on user configuration
	I0917 17:41:38.485502  299261 start.go:297] selected driver: docker
	I0917 17:41:38.485510  299261 start.go:901] validating driver "docker" against <nil>
	I0917 17:41:38.485628  299261 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0917 17:41:38.536200  299261 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:29 OomKillDisable:true NGoroutines:52 SystemTime:2024-09-17 17:41:38.526336244 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1070-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:27.2.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.2]] Warnings:<nil>}}
	I0917 17:41:38.536437  299261 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0917 17:41:38.536734  299261 start_flags.go:393] Using suggested 2200MB memory alloc based on sys=7834MB, container=7834MB
	I0917 17:41:38.536885  299261 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0917 17:41:38.539154  299261 out.go:169] Using Docker driver with root privileges
	I0917 17:41:38.540795  299261 cni.go:84] Creating CNI manager for ""
	I0917 17:41:38.540864  299261 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I0917 17:41:38.540880  299261 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0917 17:41:38.540952  299261 start.go:340] cluster config:
	{Name:download-only-114122 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:download-only-114122 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0917 17:41:38.542795  299261 out.go:97] Starting "download-only-114122" primary control-plane node in "download-only-114122" cluster
	I0917 17:41:38.542828  299261 cache.go:121] Beginning downloading kic base image for docker with containerd
	I0917 17:41:38.544872  299261 out.go:97] Pulling base image v0.0.45-1726589491-19662 ...
	I0917 17:41:38.544902  299261 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime containerd
	I0917 17:41:38.545088  299261 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 in local docker daemon
	I0917 17:41:38.560752  299261 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 to local cache
	I0917 17:41:38.560951  299261 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 in local cache directory
	I0917 17:41:38.561054  299261 image.go:148] Writing gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 to local cache
	I0917 17:41:38.605702  299261 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-containerd-overlay2-arm64.tar.lz4
	I0917 17:41:38.605737  299261 cache.go:56] Caching tarball of preloaded images
	I0917 17:41:38.605895  299261 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime containerd
	I0917 17:41:38.608084  299261 out.go:97] Downloading Kubernetes v1.20.0 preload ...
	I0917 17:41:38.608112  299261 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.20.0-containerd-overlay2-arm64.tar.lz4 ...
	I0917 17:41:38.697135  299261 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-containerd-overlay2-arm64.tar.lz4?checksum=md5:7e3d48ccb9f143791669d02e14ce1643 -> /home/jenkins/minikube-integration/19662-293874/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-containerd-overlay2-arm64.tar.lz4
	I0917 17:41:43.314665  299261 preload.go:247] saving checksum for preloaded-images-k8s-v18-v1.20.0-containerd-overlay2-arm64.tar.lz4 ...
	I0917 17:41:43.314767  299261 preload.go:254] verifying checksum of /home/jenkins/minikube-integration/19662-293874/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-containerd-overlay2-arm64.tar.lz4 ...
	I0917 17:41:44.445534  299261 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on containerd
	I0917 17:41:44.446021  299261 profile.go:143] Saving config to /home/jenkins/minikube-integration/19662-293874/.minikube/profiles/download-only-114122/config.json ...
	I0917 17:41:44.446058  299261 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19662-293874/.minikube/profiles/download-only-114122/config.json: {Name:mk5e5d4d052ae20503a221504f59d92e3eacaf52 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0917 17:41:44.446320  299261 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime containerd
	I0917 17:41:44.446540  299261 download.go:107] Downloading: https://dl.k8s.io/release/v1.20.0/bin/linux/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/linux/arm64/kubectl.sha256 -> /home/jenkins/minikube-integration/19662-293874/.minikube/cache/linux/arm64/v1.20.0/kubectl
	
	
	* The control-plane node download-only-114122 host does not exist
	  To start a cluster, run: "minikube start -p download-only-114122"

-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.20.0/LogsDuration (0.07s)

TestDownloadOnly/v1.20.0/DeleteAll (0.21s)

=== RUN   TestDownloadOnly/v1.20.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-arm64 delete --all
--- PASS: TestDownloadOnly/v1.20.0/DeleteAll (0.21s)

TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.13s)

=== RUN   TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-arm64 delete -p download-only-114122
--- PASS: TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.13s)

TestDownloadOnly/v1.31.1/json-events (6.23s)

=== RUN   TestDownloadOnly/v1.31.1/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-arm64 start -o=json --download-only -p download-only-798377 --force --alsologtostderr --kubernetes-version=v1.31.1 --container-runtime=containerd --driver=docker  --container-runtime=containerd
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-arm64 start -o=json --download-only -p download-only-798377 --force --alsologtostderr --kubernetes-version=v1.31.1 --container-runtime=containerd --driver=docker  --container-runtime=containerd: (6.230223182s)
--- PASS: TestDownloadOnly/v1.31.1/json-events (6.23s)

TestDownloadOnly/v1.31.1/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.31.1/preload-exists
--- PASS: TestDownloadOnly/v1.31.1/preload-exists (0.00s)

TestDownloadOnly/v1.31.1/LogsDuration (0.07s)

=== RUN   TestDownloadOnly/v1.31.1/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-arm64 logs -p download-only-798377
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-arm64 logs -p download-only-798377: exit status 85 (71.151922ms)

-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only        | download-only-114122 | jenkins | v1.34.0 | 17 Sep 24 17:41 UTC |                     |
	|         | -p download-only-114122        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |                     |
	|         | --container-runtime=containerd |                      |         |         |                     |                     |
	|         | --driver=docker                |                      |         |         |                     |                     |
	|         | --container-runtime=containerd |                      |         |         |                     |                     |
	| delete  | --all                          | minikube             | jenkins | v1.34.0 | 17 Sep 24 17:41 UTC | 17 Sep 24 17:41 UTC |
	| delete  | -p download-only-114122        | download-only-114122 | jenkins | v1.34.0 | 17 Sep 24 17:41 UTC | 17 Sep 24 17:41 UTC |
	| start   | -o=json --download-only        | download-only-798377 | jenkins | v1.34.0 | 17 Sep 24 17:41 UTC |                     |
	|         | -p download-only-798377        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1   |                      |         |         |                     |                     |
	|         | --container-runtime=containerd |                      |         |         |                     |                     |
	|         | --driver=docker                |                      |         |         |                     |                     |
	|         | --container-runtime=containerd |                      |         |         |                     |                     |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/17 17:41:52
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.23.0 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0917 17:41:52.142853  299461 out.go:345] Setting OutFile to fd 1 ...
	I0917 17:41:52.142968  299461 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0917 17:41:52.142991  299461 out.go:358] Setting ErrFile to fd 2...
	I0917 17:41:52.142997  299461 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0917 17:41:52.143248  299461 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19662-293874/.minikube/bin
	I0917 17:41:52.143643  299461 out.go:352] Setting JSON to true
	I0917 17:41:52.144521  299461 start.go:129] hostinfo: {"hostname":"ip-172-31-31-251","uptime":5058,"bootTime":1726589854,"procs":167,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1070-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I0917 17:41:52.144603  299461 start.go:139] virtualization:  
	I0917 17:41:52.147141  299461 out.go:97] [download-only-798377] minikube v1.34.0 on Ubuntu 20.04 (arm64)
	I0917 17:41:52.147400  299461 notify.go:220] Checking for updates...
	I0917 17:41:52.149454  299461 out.go:169] MINIKUBE_LOCATION=19662
	I0917 17:41:52.151770  299461 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0917 17:41:52.153799  299461 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/19662-293874/kubeconfig
	I0917 17:41:52.155355  299461 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/19662-293874/.minikube
	I0917 17:41:52.157141  299461 out.go:169] MINIKUBE_BIN=out/minikube-linux-arm64
	W0917 17:41:52.160955  299461 out.go:321] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0917 17:41:52.161206  299461 driver.go:394] Setting default libvirt URI to qemu:///system
	I0917 17:41:52.189285  299461 docker.go:123] docker version: linux-27.2.1:Docker Engine - Community
	I0917 17:41:52.189417  299461 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0917 17:41:52.256772  299461 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:29 OomKillDisable:true NGoroutines:44 SystemTime:2024-09-17 17:41:52.247323293 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1070-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:27.2.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.2]] Warnings:<nil>}}
	I0917 17:41:52.256883  299461 docker.go:318] overlay module found
	I0917 17:41:52.258705  299461 out.go:97] Using the docker driver based on user configuration
	I0917 17:41:52.258736  299461 start.go:297] selected driver: docker
	I0917 17:41:52.258744  299461 start.go:901] validating driver "docker" against <nil>
	I0917 17:41:52.258856  299461 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0917 17:41:52.313382  299461 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:29 OomKillDisable:true NGoroutines:44 SystemTime:2024-09-17 17:41:52.304283846 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1070-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:27.2.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.2]] Warnings:<nil>}}
	I0917 17:41:52.313599  299461 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0917 17:41:52.313886  299461 start_flags.go:393] Using suggested 2200MB memory alloc based on sys=7834MB, container=7834MB
	I0917 17:41:52.314042  299461 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0917 17:41:52.316244  299461 out.go:169] Using Docker driver with root privileges
	I0917 17:41:52.319237  299461 cni.go:84] Creating CNI manager for ""
	I0917 17:41:52.319301  299461 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I0917 17:41:52.319317  299461 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0917 17:41:52.319399  299461 start.go:340] cluster config:
	{Name:download-only-798377 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:download-only-798377 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0917 17:41:52.321371  299461 out.go:97] Starting "download-only-798377" primary control-plane node in "download-only-798377" cluster
	I0917 17:41:52.321389  299461 cache.go:121] Beginning downloading kic base image for docker with containerd
	I0917 17:41:52.323774  299461 out.go:97] Pulling base image v0.0.45-1726589491-19662 ...
	I0917 17:41:52.323799  299461 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime containerd
	I0917 17:41:52.323959  299461 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 in local docker daemon
	I0917 17:41:52.338891  299461 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 to local cache
	I0917 17:41:52.338997  299461 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 in local cache directory
	I0917 17:41:52.339020  299461 image.go:66] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 in local cache directory, skipping pull
	I0917 17:41:52.339029  299461 image.go:135] gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 exists in cache, skipping pull
	I0917 17:41:52.339037  299461 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 as a tarball
	I0917 17:41:52.382865  299461 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.31.1/preloaded-images-k8s-v18-v1.31.1-containerd-overlay2-arm64.tar.lz4
	I0917 17:41:52.382890  299461 cache.go:56] Caching tarball of preloaded images
	I0917 17:41:52.383053  299461 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime containerd
	I0917 17:41:52.385126  299461 out.go:97] Downloading Kubernetes v1.31.1 preload ...
	I0917 17:41:52.385150  299461 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.31.1-containerd-overlay2-arm64.tar.lz4 ...
	I0917 17:41:52.483804  299461 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.31.1/preloaded-images-k8s-v18-v1.31.1-containerd-overlay2-arm64.tar.lz4?checksum=md5:b0cdb5ac9449e6e1388c2153988f76f5 -> /home/jenkins/minikube-integration/19662-293874/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-containerd-overlay2-arm64.tar.lz4
	I0917 17:41:56.681077  299461 preload.go:247] saving checksum for preloaded-images-k8s-v18-v1.31.1-containerd-overlay2-arm64.tar.lz4 ...
	I0917 17:41:56.681195  299461 preload.go:254] verifying checksum of /home/jenkins/minikube-integration/19662-293874/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-containerd-overlay2-arm64.tar.lz4 ...
	
	
	* The control-plane node download-only-798377 host does not exist
	  To start a cluster, run: "minikube start -p download-only-798377"

-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.31.1/LogsDuration (0.07s)

TestDownloadOnly/v1.31.1/DeleteAll (0.2s)

=== RUN   TestDownloadOnly/v1.31.1/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-arm64 delete --all
--- PASS: TestDownloadOnly/v1.31.1/DeleteAll (0.20s)

TestDownloadOnly/v1.31.1/DeleteAlwaysSucceeds (0.14s)

=== RUN   TestDownloadOnly/v1.31.1/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-arm64 delete -p download-only-798377
--- PASS: TestDownloadOnly/v1.31.1/DeleteAlwaysSucceeds (0.14s)

TestBinaryMirror (0.77s)

=== RUN   TestBinaryMirror
aaa_download_only_test.go:314: (dbg) Run:  out/minikube-linux-arm64 start --download-only -p binary-mirror-188320 --alsologtostderr --binary-mirror http://127.0.0.1:34225 --driver=docker  --container-runtime=containerd
helpers_test.go:175: Cleaning up "binary-mirror-188320" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p binary-mirror-188320
--- PASS: TestBinaryMirror (0.77s)

TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.17s)

=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:1037: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p addons-029117
addons_test.go:1037: (dbg) Non-zero exit: out/minikube-linux-arm64 addons enable dashboard -p addons-029117: exit status 85 (170.523415ms)

-- stdout --
	* Profile "addons-029117" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-029117"

-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.17s)

TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.19s)

=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:1048: (dbg) Run:  out/minikube-linux-arm64 addons disable dashboard -p addons-029117
addons_test.go:1048: (dbg) Non-zero exit: out/minikube-linux-arm64 addons disable dashboard -p addons-029117: exit status 85 (185.499812ms)

-- stdout --
	* Profile "addons-029117" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-029117"

-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.19s)

TestAddons/Setup (223.07s)

=== RUN   TestAddons/Setup
addons_test.go:110: (dbg) Run:  out/minikube-linux-arm64 start -p addons-029117 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --addons=volcano --driver=docker  --container-runtime=containerd --addons=ingress --addons=ingress-dns
addons_test.go:110: (dbg) Done: out/minikube-linux-arm64 start -p addons-029117 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --addons=volcano --driver=docker  --container-runtime=containerd --addons=ingress --addons=ingress-dns: (3m43.072015024s)
--- PASS: TestAddons/Setup (223.07s)

TestAddons/serial/GCPAuth/Namespaces (0.17s)

=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:656: (dbg) Run:  kubectl --context addons-029117 create ns new-namespace
addons_test.go:670: (dbg) Run:  kubectl --context addons-029117 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.17s)

TestAddons/parallel/Registry (15s)

=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

=== CONT  TestAddons/parallel/Registry
addons_test.go:332: registry stabilized in 4.466719ms
addons_test.go:334: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-66c9cd494c-v75rj" [a00533f6-30fb-4a16-80c5-b954680ad8a1] Running
addons_test.go:334: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 5.003046472s
addons_test.go:337: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-proxy-h97gf" [b67fdfc4-4458-42f7-b829-1e941d38171a] Running
addons_test.go:337: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.00352402s
addons_test.go:342: (dbg) Run:  kubectl --context addons-029117 delete po -l run=registry-test --now
addons_test.go:347: (dbg) Run:  kubectl --context addons-029117 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:347: (dbg) Done: kubectl --context addons-029117 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (3.998262231s)
addons_test.go:361: (dbg) Run:  out/minikube-linux-arm64 -p addons-029117 ip
2024/09/17 17:49:38 [DEBUG] GET http://192.168.49.2:5000
addons_test.go:390: (dbg) Run:  out/minikube-linux-arm64 -p addons-029117 addons disable registry --alsologtostderr -v=1
--- PASS: TestAddons/parallel/Registry (15.00s)

TestAddons/parallel/Ingress (19.86s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

=== CONT  TestAddons/parallel/Ingress
addons_test.go:209: (dbg) Run:  kubectl --context addons-029117 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:234: (dbg) Run:  kubectl --context addons-029117 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:247: (dbg) Run:  kubectl --context addons-029117 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [9225fa7b-81bd-4dea-aee2-4745c9120c2f] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [9225fa7b-81bd-4dea-aee2-4745c9120c2f] Running
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 9.00403117s
addons_test.go:264: (dbg) Run:  out/minikube-linux-arm64 -p addons-029117 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:288: (dbg) Run:  kubectl --context addons-029117 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:293: (dbg) Run:  out/minikube-linux-arm64 -p addons-029117 ip
addons_test.go:299: (dbg) Run:  nslookup hello-john.test 192.168.49.2
addons_test.go:308: (dbg) Run:  out/minikube-linux-arm64 -p addons-029117 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:308: (dbg) Done: out/minikube-linux-arm64 -p addons-029117 addons disable ingress-dns --alsologtostderr -v=1: (1.09638676s)
addons_test.go:313: (dbg) Run:  out/minikube-linux-arm64 -p addons-029117 addons disable ingress --alsologtostderr -v=1
addons_test.go:313: (dbg) Done: out/minikube-linux-arm64 -p addons-029117 addons disable ingress --alsologtostderr -v=1: (7.907752939s)
--- PASS: TestAddons/parallel/Ingress (19.86s)

TestAddons/parallel/InspektorGadget (10.97s)

=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:848: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:344: "gadget-vfsbg" [738585f5-cf0b-4c58-b7d8-2fdab3e54f78] Running / Ready:ContainersNotReady (containers with unready status: [gadget]) / ContainersReady:ContainersNotReady (containers with unready status: [gadget])
addons_test.go:848: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 5.004446078s
addons_test.go:851: (dbg) Run:  out/minikube-linux-arm64 addons disable inspektor-gadget -p addons-029117
addons_test.go:851: (dbg) Done: out/minikube-linux-arm64 addons disable inspektor-gadget -p addons-029117: (5.964466116s)
--- PASS: TestAddons/parallel/InspektorGadget (10.97s)

TestAddons/parallel/MetricsServer (6.83s)

=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:409: metrics-server stabilized in 2.613896ms
addons_test.go:411: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:344: "metrics-server-84c5f94fbc-smwfl" [1bd3cf63-4339-4e32-943d-b6ae1e132815] Running
addons_test.go:411: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 6.003609528s
addons_test.go:417: (dbg) Run:  kubectl --context addons-029117 top pods -n kube-system
addons_test.go:434: (dbg) Run:  out/minikube-linux-arm64 -p addons-029117 addons disable metrics-server --alsologtostderr -v=1
--- PASS: TestAddons/parallel/MetricsServer (6.83s)

TestAddons/parallel/CSI (58.6s)

=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

=== CONT  TestAddons/parallel/CSI
addons_test.go:567: csi-hostpath-driver pods stabilized in 9.233556ms
addons_test.go:570: (dbg) Run:  kubectl --context addons-029117 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:575: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-029117 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-029117 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-029117 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-029117 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-029117 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-029117 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-029117 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-029117 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-029117 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-029117 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-029117 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-029117 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-029117 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-029117 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-029117 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-029117 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-029117 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:580: (dbg) Run:  kubectl --context addons-029117 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:585: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:344: "task-pv-pod" [2c6d8eeb-a4cf-4836-85a4-3fc82fe816bc] Pending
helpers_test.go:344: "task-pv-pod" [2c6d8eeb-a4cf-4836-85a4-3fc82fe816bc] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod" [2c6d8eeb-a4cf-4836-85a4-3fc82fe816bc] Running
addons_test.go:585: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 10.012288522s
addons_test.go:590: (dbg) Run:  kubectl --context addons-029117 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:595: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:419: (dbg) Run:  kubectl --context addons-029117 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:427: TestAddons/parallel/CSI: WARNING: volume snapshot get for "default" "new-snapshot-demo" returned: 
helpers_test.go:419: (dbg) Run:  kubectl --context addons-029117 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:600: (dbg) Run:  kubectl --context addons-029117 delete pod task-pv-pod
addons_test.go:600: (dbg) Done: kubectl --context addons-029117 delete pod task-pv-pod: (1.486907871s)
addons_test.go:606: (dbg) Run:  kubectl --context addons-029117 delete pvc hpvc
addons_test.go:612: (dbg) Run:  kubectl --context addons-029117 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:617: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-029117 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-029117 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-029117 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-029117 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-029117 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-029117 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-029117 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-029117 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-029117 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-029117 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-029117 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-029117 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:622: (dbg) Run:  kubectl --context addons-029117 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:627: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:344: "task-pv-pod-restore" [0d7c8807-2bcb-49c8-b170-d37f55f87faf] Pending
helpers_test.go:344: "task-pv-pod-restore" [0d7c8807-2bcb-49c8-b170-d37f55f87faf] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod-restore" [0d7c8807-2bcb-49c8-b170-d37f55f87faf] Running
addons_test.go:627: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 8.00435185s
addons_test.go:632: (dbg) Run:  kubectl --context addons-029117 delete pod task-pv-pod-restore
addons_test.go:632: (dbg) Done: kubectl --context addons-029117 delete pod task-pv-pod-restore: (1.420277374s)
addons_test.go:636: (dbg) Run:  kubectl --context addons-029117 delete pvc hpvc-restore
addons_test.go:640: (dbg) Run:  kubectl --context addons-029117 delete volumesnapshot new-snapshot-demo
addons_test.go:644: (dbg) Run:  out/minikube-linux-arm64 -p addons-029117 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:644: (dbg) Done: out/minikube-linux-arm64 -p addons-029117 addons disable csi-hostpath-driver --alsologtostderr -v=1: (6.775421936s)
addons_test.go:648: (dbg) Run:  out/minikube-linux-arm64 -p addons-029117 addons disable volumesnapshots --alsologtostderr -v=1
addons_test.go:648: (dbg) Done: out/minikube-linux-arm64 -p addons-029117 addons disable volumesnapshots --alsologtostderr -v=1: (1.043273187s)
--- PASS: TestAddons/parallel/CSI (58.60s)

TestAddons/parallel/Headlamp (16.05s)

=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

=== CONT  TestAddons/parallel/Headlamp
addons_test.go:830: (dbg) Run:  out/minikube-linux-arm64 addons enable headlamp -p addons-029117 --alsologtostderr -v=1
addons_test.go:830: (dbg) Done: out/minikube-linux-arm64 addons enable headlamp -p addons-029117 --alsologtostderr -v=1: (1.118532602s)
addons_test.go:835: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:344: "headlamp-7b5c95b59d-9njlr" [45f48976-88ca-4166-9223-7e2f831f8cf0] Pending
helpers_test.go:344: "headlamp-7b5c95b59d-9njlr" [45f48976-88ca-4166-9223-7e2f831f8cf0] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:344: "headlamp-7b5c95b59d-9njlr" [45f48976-88ca-4166-9223-7e2f831f8cf0] Running
addons_test.go:835: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 9.005076708s
addons_test.go:839: (dbg) Run:  out/minikube-linux-arm64 -p addons-029117 addons disable headlamp --alsologtostderr -v=1
addons_test.go:839: (dbg) Done: out/minikube-linux-arm64 -p addons-029117 addons disable headlamp --alsologtostderr -v=1: (5.923143479s)
--- PASS: TestAddons/parallel/Headlamp (16.05s)

TestAddons/parallel/CloudSpanner (6.74s)

=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:867: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:344: "cloud-spanner-emulator-769b77f747-hcxxd" [7d803cb8-8902-4f58-85b4-3c88a1303630] Running
addons_test.go:867: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 6.022887275s
addons_test.go:870: (dbg) Run:  out/minikube-linux-arm64 addons disable cloud-spanner -p addons-029117
--- PASS: TestAddons/parallel/CloudSpanner (6.74s)

TestAddons/parallel/LocalPath (8.82s)

=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath

=== CONT  TestAddons/parallel/LocalPath
addons_test.go:982: (dbg) Run:  kubectl --context addons-029117 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:988: (dbg) Run:  kubectl --context addons-029117 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:992: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-029117 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-029117 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-029117 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-029117 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-029117 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:995: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:344: "test-local-path" [9c667418-4318-4fc7-9332-68868b53849d] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "test-local-path" [9c667418-4318-4fc7-9332-68868b53849d] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "test-local-path" [9c667418-4318-4fc7-9332-68868b53849d] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:995: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 3.003786591s
addons_test.go:1000: (dbg) Run:  kubectl --context addons-029117 get pvc test-pvc -o=json
addons_test.go:1009: (dbg) Run:  out/minikube-linux-arm64 -p addons-029117 ssh "cat /opt/local-path-provisioner/pvc-0fa75f07-3057-4ee8-9c8d-e4bb4f6d3662_default_test-pvc/file1"
addons_test.go:1021: (dbg) Run:  kubectl --context addons-029117 delete pod test-local-path
addons_test.go:1025: (dbg) Run:  kubectl --context addons-029117 delete pvc test-pvc
addons_test.go:1029: (dbg) Run:  out/minikube-linux-arm64 -p addons-029117 addons disable storage-provisioner-rancher --alsologtostderr -v=1
--- PASS: TestAddons/parallel/LocalPath (8.82s)

TestAddons/parallel/NvidiaDevicePlugin (6.68s)

=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin

=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:1061: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:344: "nvidia-device-plugin-daemonset-892xz" [3b344a6c-9dad-48ec-a4e2-e27d6e6e7441] Running
addons_test.go:1061: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 6.004795529s
addons_test.go:1064: (dbg) Run:  out/minikube-linux-arm64 addons disable nvidia-device-plugin -p addons-029117
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (6.68s)

TestAddons/parallel/Yakd (11.91s)

=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd

=== CONT  TestAddons/parallel/Yakd
addons_test.go:1072: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:344: "yakd-dashboard-67d98fc6b-b54cz" [2fd4671c-c58d-418a-b3b6-5fc09d1eceb4] Running
addons_test.go:1072: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 6.004330112s
addons_test.go:1076: (dbg) Run:  out/minikube-linux-arm64 -p addons-029117 addons disable yakd --alsologtostderr -v=1
addons_test.go:1076: (dbg) Done: out/minikube-linux-arm64 -p addons-029117 addons disable yakd --alsologtostderr -v=1: (5.908475974s)
--- PASS: TestAddons/parallel/Yakd (11.91s)

TestAddons/StoppedEnableDisable (12.31s)
=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:174: (dbg) Run:  out/minikube-linux-arm64 stop -p addons-029117
addons_test.go:174: (dbg) Done: out/minikube-linux-arm64 stop -p addons-029117: (12.037925078s)
addons_test.go:178: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p addons-029117
addons_test.go:182: (dbg) Run:  out/minikube-linux-arm64 addons disable dashboard -p addons-029117
addons_test.go:187: (dbg) Run:  out/minikube-linux-arm64 addons disable gvisor -p addons-029117
--- PASS: TestAddons/StoppedEnableDisable (12.31s)

TestCertOptions (35.73s)
=== RUN   TestCertOptions
=== PAUSE TestCertOptions

=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-arm64 start -p cert-options-926867 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=containerd
cert_options_test.go:49: (dbg) Done: out/minikube-linux-arm64 start -p cert-options-926867 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=containerd: (33.091002672s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-arm64 -p cert-options-926867 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-926867 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-arm64 ssh -p cert-options-926867 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:175: Cleaning up "cert-options-926867" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p cert-options-926867
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p cert-options-926867: (1.957354944s)
--- PASS: TestCertOptions (35.73s)
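The cert-options run above passes extra `--apiserver-ips`/`--apiserver-names` and then inspects the generated apiserver certificate over ssh. The same openssl inspection can be sketched locally against a throwaway self-signed certificate carrying the SANs from the command above — the `/tmp` paths and CN are illustrative, not minikube's:

```shell
# Throwaway self-signed cert with the SANs the test requests via
# --apiserver-ips / --apiserver-names (needs OpenSSL 1.1.1+ for -addext).
openssl req -x509 -newkey rsa:2048 -nodes -days 1 \
  -keyout /tmp/apiserver-demo.key -out /tmp/apiserver-demo.crt \
  -subj "/CN=minikube-demo" \
  -addext "subjectAltName=IP:127.0.0.1,IP:192.168.15.15,DNS:localhost,DNS:www.google.com"

# Same inspection the test runs on /var/lib/minikube/certs/apiserver.crt:
openssl x509 -text -noout -in /tmp/apiserver-demo.crt | grep -A1 "Subject Alternative Name"
```

The test asserts that every requested IP and DNS name shows up in that SAN list.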

TestCertExpiration (227.12s)
=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-arm64 start -p cert-expiration-701681 --memory=2048 --cert-expiration=3m --driver=docker  --container-runtime=containerd
E0917 18:29:41.175015  299255 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19662-293874/.minikube/profiles/functional-566937/client.crt: no such file or directory" logger="UnhandledError"
cert_options_test.go:123: (dbg) Done: out/minikube-linux-arm64 start -p cert-expiration-701681 --memory=2048 --cert-expiration=3m --driver=docker  --container-runtime=containerd: (37.518816428s)
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-arm64 start -p cert-expiration-701681 --memory=2048 --cert-expiration=8760h --driver=docker  --container-runtime=containerd
cert_options_test.go:131: (dbg) Done: out/minikube-linux-arm64 start -p cert-expiration-701681 --memory=2048 --cert-expiration=8760h --driver=docker  --container-runtime=containerd: (7.645998919s)
helpers_test.go:175: Cleaning up "cert-expiration-701681" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p cert-expiration-701681
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p cert-expiration-701681: (1.955157978s)
--- PASS: TestCertExpiration (227.12s)
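The expiration run above first issues cluster certificates valid for only 3m, then restarts with `--cert-expiration=8760h` (one year) so they are regenerated. The expiry check itself can be sketched locally with openssl's `-enddate`/`-checkend` against a throwaway certificate (paths illustrative):

```shell
# Throwaway cert valid for one day, standing in for a regenerated cluster cert.
openssl req -x509 -newkey rsa:2048 -nodes -days 1 \
  -keyout /tmp/expiry-demo.key -out /tmp/expiry-demo.crt -subj "/CN=expiry-demo"

# Print the notAfter date, then assert the cert stays valid for at least 60 more seconds.
openssl x509 -noout -enddate -in /tmp/expiry-demo.crt
openssl x509 -noout -checkend 60 -in /tmp/expiry-demo.crt
```

`-checkend` exits non-zero when the certificate would expire within the given window, which is how a script can tell a 3m short-lived cert apart from a renewed one.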

TestForceSystemdFlag (33.22s)
=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-arm64 start -p force-systemd-flag-098523 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd
E0917 18:28:46.664718  299255 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19662-293874/.minikube/profiles/addons-029117/client.crt: no such file or directory" logger="UnhandledError"
docker_test.go:91: (dbg) Done: out/minikube-linux-arm64 start -p force-systemd-flag-098523 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd: (30.972650187s)
docker_test.go:121: (dbg) Run:  out/minikube-linux-arm64 -p force-systemd-flag-098523 ssh "cat /etc/containerd/config.toml"
helpers_test.go:175: Cleaning up "force-systemd-flag-098523" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p force-systemd-flag-098523
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p force-systemd-flag-098523: (1.935363469s)
--- PASS: TestForceSystemdFlag (33.22s)

TestForceSystemdEnv (40.37s)
=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-arm64 start -p force-systemd-env-488659 --memory=2048 --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd
docker_test.go:155: (dbg) Done: out/minikube-linux-arm64 start -p force-systemd-env-488659 --memory=2048 --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd: (37.828114818s)
docker_test.go:121: (dbg) Run:  out/minikube-linux-arm64 -p force-systemd-env-488659 ssh "cat /etc/containerd/config.toml"
helpers_test.go:175: Cleaning up "force-systemd-env-488659" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p force-systemd-env-488659
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p force-systemd-env-488659: (2.117426075s)
--- PASS: TestForceSystemdEnv (40.37s)

TestDockerEnvContainerd (43.88s)
=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with containerd true linux arm64
docker_test.go:181: (dbg) Run:  out/minikube-linux-arm64 start -p dockerenv-692209 --driver=docker  --container-runtime=containerd
docker_test.go:181: (dbg) Done: out/minikube-linux-arm64 start -p dockerenv-692209 --driver=docker  --container-runtime=containerd: (28.221995918s)
docker_test.go:189: (dbg) Run:  /bin/bash -c "out/minikube-linux-arm64 docker-env --ssh-host --ssh-add -p dockerenv-692209"
docker_test.go:220: (dbg) Run:  /bin/bash -c "SSH_AUTH_SOCK="/tmp/ssh-FEKfDCrIn7ZB/agent.318003" SSH_AGENT_PID="318004" DOCKER_HOST=ssh://docker@127.0.0.1:33143 docker version"
docker_test.go:243: (dbg) Run:  /bin/bash -c "SSH_AUTH_SOCK="/tmp/ssh-FEKfDCrIn7ZB/agent.318003" SSH_AGENT_PID="318004" DOCKER_HOST=ssh://docker@127.0.0.1:33143 DOCKER_BUILDKIT=0 docker build -t local/minikube-dockerenv-containerd-test:latest testdata/docker-env"
docker_test.go:243: (dbg) Done: /bin/bash -c "SSH_AUTH_SOCK="/tmp/ssh-FEKfDCrIn7ZB/agent.318003" SSH_AGENT_PID="318004" DOCKER_HOST=ssh://docker@127.0.0.1:33143 DOCKER_BUILDKIT=0 docker build -t local/minikube-dockerenv-containerd-test:latest testdata/docker-env": (1.228958088s)
docker_test.go:250: (dbg) Run:  /bin/bash -c "SSH_AUTH_SOCK="/tmp/ssh-FEKfDCrIn7ZB/agent.318003" SSH_AGENT_PID="318004" DOCKER_HOST=ssh://docker@127.0.0.1:33143 docker image ls"
helpers_test.go:175: Cleaning up "dockerenv-692209" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p dockerenv-692209
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p dockerenv-692209: (1.967758402s)
--- PASS: TestDockerEnvContainerd (43.88s)
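The docker-env run above relies on `minikube docker-env --ssh-host --ssh-add`, which starts an ssh-agent, loads the node's key into it, and points `DOCKER_HOST` at `ssh://docker@…` — the `SSH_AUTH_SOCK`/`SSH_AGENT_PID` values visible in the docker invocations above come from that agent. The agent half of the plumbing can be sketched locally with a throwaway key (the `/tmp` key path is illustrative; minikube loads the profile's node key instead):

```shell
# Start a private ssh-agent; the eval exports SSH_AUTH_SOCK and SSH_AGENT_PID.
eval "$(ssh-agent -s)"

# Throwaway key standing in for the minikube node key that --ssh-add loads.
rm -f /tmp/demo_id /tmp/demo_id.pub
ssh-keygen -q -t ed25519 -N "" -f /tmp/demo_id
ssh-add /tmp/demo_id

# The agent now answers for the key; docker would use it to open the
# DOCKER_HOST=ssh://... tunnel. List what the agent holds, then shut it down.
ssh-add -l
ssh-agent -k
```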

TestErrorSpam/setup (28.07s)
=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-arm64 start -p nospam-991024 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-991024 --driver=docker  --container-runtime=containerd
error_spam_test.go:81: (dbg) Done: out/minikube-linux-arm64 start -p nospam-991024 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-991024 --driver=docker  --container-runtime=containerd: (28.067713896s)
--- PASS: TestErrorSpam/setup (28.07s)

TestErrorSpam/start (0.77s)
=== RUN   TestErrorSpam/start
error_spam_test.go:216: Cleaning up 1 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-991024 --log_dir /tmp/nospam-991024 start --dry-run
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-991024 --log_dir /tmp/nospam-991024 start --dry-run
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-991024 --log_dir /tmp/nospam-991024 start --dry-run
--- PASS: TestErrorSpam/start (0.77s)

TestErrorSpam/status (1.16s)
=== RUN   TestErrorSpam/status
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-991024 --log_dir /tmp/nospam-991024 status
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-991024 --log_dir /tmp/nospam-991024 status
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-991024 --log_dir /tmp/nospam-991024 status
--- PASS: TestErrorSpam/status (1.16s)

TestErrorSpam/pause (1.84s)
=== RUN   TestErrorSpam/pause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-991024 --log_dir /tmp/nospam-991024 pause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-991024 --log_dir /tmp/nospam-991024 pause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-991024 --log_dir /tmp/nospam-991024 pause
--- PASS: TestErrorSpam/pause (1.84s)

TestErrorSpam/unpause (2.1s)
=== RUN   TestErrorSpam/unpause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-991024 --log_dir /tmp/nospam-991024 unpause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-991024 --log_dir /tmp/nospam-991024 unpause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-991024 --log_dir /tmp/nospam-991024 unpause
--- PASS: TestErrorSpam/unpause (2.10s)

TestErrorSpam/stop (1.44s)
=== RUN   TestErrorSpam/stop
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-991024 --log_dir /tmp/nospam-991024 stop
error_spam_test.go:159: (dbg) Done: out/minikube-linux-arm64 -p nospam-991024 --log_dir /tmp/nospam-991024 stop: (1.242003223s)
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-991024 --log_dir /tmp/nospam-991024 stop
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-991024 --log_dir /tmp/nospam-991024 stop
--- PASS: TestErrorSpam/stop (1.44s)

TestFunctional/serial/CopySyncFile (0s)
=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1855: local sync path: /home/jenkins/minikube-integration/19662-293874/.minikube/files/etc/test/nested/copy/299255/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

TestFunctional/serial/StartWithProxy (54.3s)
=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2234: (dbg) Run:  out/minikube-linux-arm64 start -p functional-566937 --memory=4000 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=containerd
functional_test.go:2234: (dbg) Done: out/minikube-linux-arm64 start -p functional-566937 --memory=4000 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=containerd: (54.294840552s)
--- PASS: TestFunctional/serial/StartWithProxy (54.30s)

TestFunctional/serial/AuditLog (0s)
=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

TestFunctional/serial/SoftStart (6.43s)
=== RUN   TestFunctional/serial/SoftStart
functional_test.go:659: (dbg) Run:  out/minikube-linux-arm64 start -p functional-566937 --alsologtostderr -v=8
functional_test.go:659: (dbg) Done: out/minikube-linux-arm64 start -p functional-566937 --alsologtostderr -v=8: (6.420436192s)
functional_test.go:663: soft start took 6.428073655s for "functional-566937" cluster.
--- PASS: TestFunctional/serial/SoftStart (6.43s)

TestFunctional/serial/KubeContext (0.07s)
=== RUN   TestFunctional/serial/KubeContext
functional_test.go:681: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.07s)

TestFunctional/serial/KubectlGetPods (0.12s)
=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:696: (dbg) Run:  kubectl --context functional-566937 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.12s)

TestFunctional/serial/CacheCmd/cache/add_remote (4.25s)
=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1049: (dbg) Run:  out/minikube-linux-arm64 -p functional-566937 cache add registry.k8s.io/pause:3.1
functional_test.go:1049: (dbg) Done: out/minikube-linux-arm64 -p functional-566937 cache add registry.k8s.io/pause:3.1: (1.571551809s)
functional_test.go:1049: (dbg) Run:  out/minikube-linux-arm64 -p functional-566937 cache add registry.k8s.io/pause:3.3
functional_test.go:1049: (dbg) Done: out/minikube-linux-arm64 -p functional-566937 cache add registry.k8s.io/pause:3.3: (1.442393825s)
functional_test.go:1049: (dbg) Run:  out/minikube-linux-arm64 -p functional-566937 cache add registry.k8s.io/pause:latest
functional_test.go:1049: (dbg) Done: out/minikube-linux-arm64 -p functional-566937 cache add registry.k8s.io/pause:latest: (1.235227336s)
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (4.25s)

TestFunctional/serial/CacheCmd/cache/add_local (1.34s)
=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1077: (dbg) Run:  docker build -t minikube-local-cache-test:functional-566937 /tmp/TestFunctionalserialCacheCmdcacheadd_local594087461/001
functional_test.go:1089: (dbg) Run:  out/minikube-linux-arm64 -p functional-566937 cache add minikube-local-cache-test:functional-566937
functional_test.go:1094: (dbg) Run:  out/minikube-linux-arm64 -p functional-566937 cache delete minikube-local-cache-test:functional-566937
functional_test.go:1083: (dbg) Run:  docker rmi minikube-local-cache-test:functional-566937
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (1.34s)

TestFunctional/serial/CacheCmd/cache/CacheDelete (0.06s)
=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1102: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.06s)

TestFunctional/serial/CacheCmd/cache/list (0.05s)
=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1110: (dbg) Run:  out/minikube-linux-arm64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.05s)

TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.32s)
=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1124: (dbg) Run:  out/minikube-linux-arm64 -p functional-566937 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.32s)

TestFunctional/serial/CacheCmd/cache/cache_reload (2.18s)
=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1147: (dbg) Run:  out/minikube-linux-arm64 -p functional-566937 ssh sudo crictl rmi registry.k8s.io/pause:latest
functional_test.go:1153: (dbg) Run:  out/minikube-linux-arm64 -p functional-566937 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1153: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-566937 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (289.060533ms)

-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:1158: (dbg) Run:  out/minikube-linux-arm64 -p functional-566937 cache reload
functional_test.go:1158: (dbg) Done: out/minikube-linux-arm64 -p functional-566937 cache reload: (1.087848307s)
functional_test.go:1163: (dbg) Run:  out/minikube-linux-arm64 -p functional-566937 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (2.18s)

TestFunctional/serial/CacheCmd/cache/delete (0.12s)
=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1172: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1172: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.12s)

TestFunctional/serial/MinikubeKubectlCmd (0.14s)
=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:716: (dbg) Run:  out/minikube-linux-arm64 -p functional-566937 kubectl -- --context functional-566937 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.14s)

TestFunctional/serial/MinikubeKubectlCmdDirectly (0.15s)
=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:741: (dbg) Run:  out/kubectl --context functional-566937 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.15s)

TestFunctional/serial/ExtraConfig (42.26s)
=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:757: (dbg) Run:  out/minikube-linux-arm64 start -p functional-566937 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
functional_test.go:757: (dbg) Done: out/minikube-linux-arm64 start -p functional-566937 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (42.263364578s)
functional_test.go:761: restart took 42.263479975s for "functional-566937" cluster.
--- PASS: TestFunctional/serial/ExtraConfig (42.26s)

TestFunctional/serial/ComponentHealth (0.11s)
=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:810: (dbg) Run:  kubectl --context functional-566937 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:825: etcd phase: Running
functional_test.go:835: etcd status: Ready
functional_test.go:825: kube-apiserver phase: Running
functional_test.go:835: kube-apiserver status: Ready
functional_test.go:825: kube-controller-manager phase: Running
functional_test.go:835: kube-controller-manager status: Ready
functional_test.go:825: kube-scheduler phase: Running
functional_test.go:835: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.11s)

TestFunctional/serial/LogsCmd (1.71s)
=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1236: (dbg) Run:  out/minikube-linux-arm64 -p functional-566937 logs
functional_test.go:1236: (dbg) Done: out/minikube-linux-arm64 -p functional-566937 logs: (1.710338118s)
--- PASS: TestFunctional/serial/LogsCmd (1.71s)

TestFunctional/serial/LogsFileCmd (1.86s)
=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1250: (dbg) Run:  out/minikube-linux-arm64 -p functional-566937 logs --file /tmp/TestFunctionalserialLogsFileCmd1193376908/001/logs.txt
functional_test.go:1250: (dbg) Done: out/minikube-linux-arm64 -p functional-566937 logs --file /tmp/TestFunctionalserialLogsFileCmd1193376908/001/logs.txt: (1.858151218s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.86s)

TestFunctional/serial/InvalidService (4.38s)
=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2321: (dbg) Run:  kubectl --context functional-566937 apply -f testdata/invalidsvc.yaml
functional_test.go:2335: (dbg) Run:  out/minikube-linux-arm64 service invalid-svc -p functional-566937
functional_test.go:2335: (dbg) Non-zero exit: out/minikube-linux-arm64 service invalid-svc -p functional-566937: exit status 115 (463.329674ms)

-- stdout --
	|-----------|-------------|-------------|---------------------------|
	| NAMESPACE |    NAME     | TARGET PORT |            URL            |
	|-----------|-------------|-------------|---------------------------|
	| default   | invalid-svc |          80 | http://192.168.49.2:30391 |
	|-----------|-------------|-------------|---------------------------|
	
	

-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
functional_test.go:2327: (dbg) Run:  kubectl --context functional-566937 delete -f testdata/invalidsvc.yaml
--- PASS: TestFunctional/serial/InvalidService (4.38s)

TestFunctional/parallel/ConfigCmd (0.45s)
=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1199: (dbg) Run:  out/minikube-linux-arm64 -p functional-566937 config unset cpus
functional_test.go:1199: (dbg) Run:  out/minikube-linux-arm64 -p functional-566937 config get cpus
functional_test.go:1199: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-566937 config get cpus: exit status 14 (70.596463ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
functional_test.go:1199: (dbg) Run:  out/minikube-linux-arm64 -p functional-566937 config set cpus 2
functional_test.go:1199: (dbg) Run:  out/minikube-linux-arm64 -p functional-566937 config get cpus
functional_test.go:1199: (dbg) Run:  out/minikube-linux-arm64 -p functional-566937 config unset cpus
functional_test.go:1199: (dbg) Run:  out/minikube-linux-arm64 -p functional-566937 config get cpus
functional_test.go:1199: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-566937 config get cpus: exit status 14 (72.405784ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.45s)

TestFunctional/parallel/DashboardCmd (8.42s)
=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:905: (dbg) daemon: [out/minikube-linux-arm64 dashboard --url --port 36195 -p functional-566937 --alsologtostderr -v=1]
functional_test.go:910: (dbg) stopping [out/minikube-linux-arm64 dashboard --url --port 36195 -p functional-566937 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to kill pid 335324: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (8.42s)

TestFunctional/parallel/DryRun (0.47s)

=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

=== CONT  TestFunctional/parallel/DryRun
functional_test.go:974: (dbg) Run:  out/minikube-linux-arm64 start -p functional-566937 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=containerd
functional_test.go:974: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p functional-566937 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=containerd: exit status 23 (205.088567ms)

-- stdout --
	* [functional-566937] minikube v1.34.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=19662
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19662-293874/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19662-293874/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	
	

-- /stdout --
** stderr ** 
	I0917 17:55:17.934664  334766 out.go:345] Setting OutFile to fd 1 ...
	I0917 17:55:17.934803  334766 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0917 17:55:17.934816  334766 out.go:358] Setting ErrFile to fd 2...
	I0917 17:55:17.934822  334766 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0917 17:55:17.935071  334766 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19662-293874/.minikube/bin
	I0917 17:55:17.935430  334766 out.go:352] Setting JSON to false
	I0917 17:55:17.936499  334766 start.go:129] hostinfo: {"hostname":"ip-172-31-31-251","uptime":5864,"bootTime":1726589854,"procs":211,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1070-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I0917 17:55:17.936671  334766 start.go:139] virtualization:  
	I0917 17:55:17.941568  334766 out.go:177] * [functional-566937] minikube v1.34.0 on Ubuntu 20.04 (arm64)
	I0917 17:55:17.944160  334766 out.go:177]   - MINIKUBE_LOCATION=19662
	I0917 17:55:17.944275  334766 notify.go:220] Checking for updates...
	I0917 17:55:17.949094  334766 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0917 17:55:17.951961  334766 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19662-293874/kubeconfig
	I0917 17:55:17.954155  334766 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19662-293874/.minikube
	I0917 17:55:17.956821  334766 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0917 17:55:17.959826  334766 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0917 17:55:17.962779  334766 config.go:182] Loaded profile config "functional-566937": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
	I0917 17:55:17.963299  334766 driver.go:394] Setting default libvirt URI to qemu:///system
	I0917 17:55:17.988817  334766 docker.go:123] docker version: linux-27.2.1:Docker Engine - Community
	I0917 17:55:17.988946  334766 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0917 17:55:18.060355  334766 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:32 OomKillDisable:true NGoroutines:52 SystemTime:2024-09-17 17:55:18.04923691 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1070-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarc
h64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:27.2.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerError
s:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.2]] Warnings:<nil>}}
	I0917 17:55:18.060484  334766 docker.go:318] overlay module found
	I0917 17:55:18.064623  334766 out.go:177] * Using the docker driver based on existing profile
	I0917 17:55:18.067762  334766 start.go:297] selected driver: docker
	I0917 17:55:18.067785  334766 start.go:901] validating driver "docker" against &{Name:functional-566937 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:functional-566937 Namespace:default APIServerHAVIP: APIServerNa
me:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.31.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:do
cker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0917 17:55:18.069199  334766 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0917 17:55:18.073394  334766 out.go:201] 
	W0917 17:55:18.075465  334766 out.go:270] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I0917 17:55:18.077741  334766 out.go:201] 

** /stderr **
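The exit status 23 above is the intended outcome of the dry run: minikube rejects the requested 250MB because it is below the usable minimum of 1800MB. The gate can be sketched as a simple floor check; the constant and function names here are illustrative, not the identifiers used in minikube's start path.

```python
# Sketch of the memory floor check behind RSRC_INSUFFICIENT_REQ_MEMORY
# (illustrative names; not minikube's real code).
from typing import Optional

USABLE_MINIMUM_MB = 1800  # the minimum quoted in the error above

def validate_requested_memory(requested_mb: int) -> Optional[str]:
    """Return an error message when the request is below the floor, else None."""
    if requested_mb < USABLE_MINIMUM_MB:
        return ("Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory "
                f"allocation {requested_mb}MiB is less than the usable minimum "
                f"of {USABLE_MINIMUM_MB}MB")
    return None

assert validate_requested_memory(250) is not None   # the failing --memory 250MB run
assert validate_requested_memory(4000) is None      # the profile's 4000MB passes
```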
functional_test.go:991: (dbg) Run:  out/minikube-linux-arm64 start -p functional-566937 --dry-run --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
--- PASS: TestFunctional/parallel/DryRun (0.47s)

TestFunctional/parallel/InternationalLanguage (0.24s)

=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1020: (dbg) Run:  out/minikube-linux-arm64 start -p functional-566937 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=containerd
functional_test.go:1020: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p functional-566937 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=containerd: exit status 23 (240.847566ms)

-- stdout --
	* [functional-566937] minikube v1.34.0 sur Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=19662
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19662-293874/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19662-293874/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote docker basé sur le profil existant
	
	

-- /stdout --
** stderr ** 
	I0917 17:55:18.412256  334880 out.go:345] Setting OutFile to fd 1 ...
	I0917 17:55:18.412429  334880 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0917 17:55:18.412462  334880 out.go:358] Setting ErrFile to fd 2...
	I0917 17:55:18.412493  334880 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0917 17:55:18.413958  334880 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19662-293874/.minikube/bin
	I0917 17:55:18.414394  334880 out.go:352] Setting JSON to false
	I0917 17:55:18.415411  334880 start.go:129] hostinfo: {"hostname":"ip-172-31-31-251","uptime":5864,"bootTime":1726589854,"procs":212,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1070-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I0917 17:55:18.415522  334880 start.go:139] virtualization:  
	I0917 17:55:18.418153  334880 out.go:177] * [functional-566937] minikube v1.34.0 sur Ubuntu 20.04 (arm64)
	I0917 17:55:18.420745  334880 out.go:177]   - MINIKUBE_LOCATION=19662
	I0917 17:55:18.420837  334880 notify.go:220] Checking for updates...
	I0917 17:55:18.425752  334880 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0917 17:55:18.427789  334880 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19662-293874/kubeconfig
	I0917 17:55:18.433804  334880 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19662-293874/.minikube
	I0917 17:55:18.435983  334880 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0917 17:55:18.438023  334880 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0917 17:55:18.448260  334880 config.go:182] Loaded profile config "functional-566937": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
	I0917 17:55:18.448874  334880 driver.go:394] Setting default libvirt URI to qemu:///system
	I0917 17:55:18.481429  334880 docker.go:123] docker version: linux-27.2.1:Docker Engine - Community
	I0917 17:55:18.481558  334880 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0917 17:55:18.570624  334880 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:32 OomKillDisable:true NGoroutines:52 SystemTime:2024-09-17 17:55:18.558635402 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1070-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aar
ch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:27.2.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErro
rs:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.2]] Warnings:<nil>}}
	I0917 17:55:18.570737  334880 docker.go:318] overlay module found
	I0917 17:55:18.573462  334880 out.go:177] * Utilisation du pilote docker basé sur le profil existant
	I0917 17:55:18.575491  334880 start.go:297] selected driver: docker
	I0917 17:55:18.575511  334880 start.go:901] validating driver "docker" against &{Name:functional-566937 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726589491-19662@sha256:6370b9fec173944088c2d87d44b01819c0ec611a83d9e2f38d36352dff8121a4 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:functional-566937 Namespace:default APIServerHAVIP: APIServerNa
me:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.31.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:do
cker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0917 17:55:18.575632  334880 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0917 17:55:18.578456  334880 out.go:201] 
	W0917 17:55:18.580158  334880 out.go:270] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I0917 17:55:18.582093  334880 out.go:201] 

** /stderr **
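This run passes because the same RSRC_INSUFFICIENT_REQ_MEMORY failure is emitted in French when the locale calls for it. minikube ships translated message catalogs; the structure below is only an illustrative sketch of a per-locale template lookup with an English fallback, not minikube's actual translation mechanism.

```python
# Illustrative message catalog: one error ID, per-locale templates
# (hypothetical structure; not minikube's real translation code).
CATALOG = {
    "RSRC_INSUFFICIENT_REQ_MEMORY": {
        "en": ("Requested memory allocation {req}MiB is less than the usable "
               "minimum of {min}MB"),
        "fr": ("L'allocation de mémoire demandée {req} Mio est inférieure au "
               "minimum utilisable de {min} Mo"),
    }
}

def render(error_id, locale, **fields):
    templates = CATALOG[error_id]
    # Fall back to English when no translation exists for the locale.
    template = templates.get(locale, templates["en"])
    return template.format(**fields)

assert "inférieure" in render("RSRC_INSUFFICIENT_REQ_MEMORY", "fr", req=250, min=1800)
assert "less than" in render("RSRC_INSUFFICIENT_REQ_MEMORY", "de", req=250, min=1800)
```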
--- PASS: TestFunctional/parallel/InternationalLanguage (0.24s)

TestFunctional/parallel/StatusCmd (1.06s)

=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:854: (dbg) Run:  out/minikube-linux-arm64 -p functional-566937 status
functional_test.go:860: (dbg) Run:  out/minikube-linux-arm64 -p functional-566937 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:872: (dbg) Run:  out/minikube-linux-arm64 -p functional-566937 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (1.06s)

TestFunctional/parallel/ServiceCmdConnect (7.72s)

=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1627: (dbg) Run:  kubectl --context functional-566937 create deployment hello-node-connect --image=registry.k8s.io/echoserver-arm:1.8
functional_test.go:1635: (dbg) Run:  kubectl --context functional-566937 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1640: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:344: "hello-node-connect-65d86f57f4-69g4k" [f8be7dce-5a9a-4db2-ab48-b623c30384e2] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver-arm]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver-arm])
helpers_test.go:344: "hello-node-connect-65d86f57f4-69g4k" [f8be7dce-5a9a-4db2-ab48-b623c30384e2] Running
functional_test.go:1640: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 7.004970038s
functional_test.go:1649: (dbg) Run:  out/minikube-linux-arm64 -p functional-566937 service hello-node-connect --url
functional_test.go:1655: found endpoint for hello-node-connect: http://192.168.49.2:31885
functional_test.go:1675: http://192.168.49.2:31885: success! body:

Hostname: hello-node-connect-65d86f57f4-69g4k

Pod Information:
	-no pod information available-

Server values:
	server_version=nginx: 1.13.3 - lua: 10008

Request Information:
	client_address=10.244.0.1
	method=GET
	real path=/
	query=
	request_version=1.1
	request_uri=http://192.168.49.2:8080/

Request Headers:
	accept-encoding=gzip
	host=192.168.49.2:31885
	user-agent=Go-http-client/1.1

Request Body:
	-no body in request-

--- PASS: TestFunctional/parallel/ServiceCmdConnect (7.72s)
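The `kubectl expose deployment hello-node-connect --type=NodePort --port=8080` step in this test creates a Service roughly equivalent to the manifest below. This is an illustrative sketch: the selector matches the `app=hello-node-connect` label that `kubectl create deployment` applies, and the nodePort (31885 in this run) is normally auto-assigned from the 30000-32767 range rather than written in the manifest.

```yaml
apiVersion: v1
kind: Service
metadata:
  name: hello-node-connect
spec:
  type: NodePort
  selector:
    app: hello-node-connect   # label set by `kubectl create deployment`
  ports:
    - port: 8080              # service port, as passed to --port
      targetPort: 8080        # container port on the echoserver pod
      # nodePort is auto-assigned (30000-32767); this run got 31885
```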

TestFunctional/parallel/AddonsCmd (0.17s)

=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1690: (dbg) Run:  out/minikube-linux-arm64 -p functional-566937 addons list
functional_test.go:1702: (dbg) Run:  out/minikube-linux-arm64 -p functional-566937 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.17s)

TestFunctional/parallel/PersistentVolumeClaim (25.51s)

=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:344: "storage-provisioner" [96570481-dc9e-4860-8759-f66aa672780c] Running
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 5.004529457s
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context functional-566937 get storageclass -o=json
functional_test_pvc_test.go:69: (dbg) Run:  kubectl --context functional-566937 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-566937 get pvc myclaim -o=json
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-566937 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [01ded282-d8d9-472c-9d13-5981dea59549] Pending
helpers_test.go:344: "sp-pod" [01ded282-d8d9-472c-9d13-5981dea59549] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [01ded282-d8d9-472c-9d13-5981dea59549] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 11.00652772s
functional_test_pvc_test.go:100: (dbg) Run:  kubectl --context functional-566937 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-566937 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:106: (dbg) Done: kubectl --context functional-566937 delete -f testdata/storage-provisioner/pod.yaml: (1.375536176s)
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-566937 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [4e0f6920-100a-4fe3-88c5-25794b460a4f] Pending
helpers_test.go:344: "sp-pod" [4e0f6920-100a-4fe3-88c5-25794b460a4f] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 7.003966902s
functional_test_pvc_test.go:114: (dbg) Run:  kubectl --context functional-566937 exec sp-pod -- ls /tmp/mount
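The persistence check above (write `/tmp/mount/foo` in one pod, delete the pod, read the file back from a fresh pod) depends on a PersistentVolumeClaim surviving across pods. A sketch of the shape `testdata/storage-provisioner/pvc.yaml` presumably takes is below; the name `myclaim` is confirmed by the `kubectl get pvc myclaim` step, while the size and comments are illustrative.

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: myclaim              # the test runs `kubectl get pvc myclaim`
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 500Mi         # illustrative size
  # storageClassName omitted: the default class (from `get storageclass`
  # above) dynamically provisions the backing volume
```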
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (25.51s)

TestFunctional/parallel/SSHCmd (0.55s)

=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1725: (dbg) Run:  out/minikube-linux-arm64 -p functional-566937 ssh "echo hello"
functional_test.go:1742: (dbg) Run:  out/minikube-linux-arm64 -p functional-566937 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.55s)

TestFunctional/parallel/CpCmd (2.06s)

=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p functional-566937 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p functional-566937 ssh -n functional-566937 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p functional-566937 cp functional-566937:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd3474846286/001/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p functional-566937 ssh -n functional-566937 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p functional-566937 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p functional-566937 ssh -n functional-566937 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (2.06s)

TestFunctional/parallel/FileSync (0.36s)

=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1929: Checking for existence of /etc/test/nested/copy/299255/hosts within VM
functional_test.go:1931: (dbg) Run:  out/minikube-linux-arm64 -p functional-566937 ssh "sudo cat /etc/test/nested/copy/299255/hosts"
functional_test.go:1936: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.36s)

TestFunctional/parallel/CertSync (2.25s)

=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1972: Checking for existence of /etc/ssl/certs/299255.pem within VM
functional_test.go:1973: (dbg) Run:  out/minikube-linux-arm64 -p functional-566937 ssh "sudo cat /etc/ssl/certs/299255.pem"
functional_test.go:1972: Checking for existence of /usr/share/ca-certificates/299255.pem within VM
functional_test.go:1973: (dbg) Run:  out/minikube-linux-arm64 -p functional-566937 ssh "sudo cat /usr/share/ca-certificates/299255.pem"
functional_test.go:1972: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1973: (dbg) Run:  out/minikube-linux-arm64 -p functional-566937 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:1999: Checking for existence of /etc/ssl/certs/2992552.pem within VM
functional_test.go:2000: (dbg) Run:  out/minikube-linux-arm64 -p functional-566937 ssh "sudo cat /etc/ssl/certs/2992552.pem"
functional_test.go:1999: Checking for existence of /usr/share/ca-certificates/2992552.pem within VM
functional_test.go:2000: (dbg) Run:  out/minikube-linux-arm64 -p functional-566937 ssh "sudo cat /usr/share/ca-certificates/2992552.pem"
functional_test.go:1999: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:2000: (dbg) Run:  out/minikube-linux-arm64 -p functional-566937 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (2.25s)
TestFunctional/parallel/NodeLabels (0.13s)
=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels
=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:219: (dbg) Run:  kubectl --context functional-566937 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.13s)
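The go-template above ranges over `(index .items 0).metadata.labels` and emits each label key followed by a space. A rough shell sketch of that key extraction, with the label set stubbed in (the real test reads it from the live node via `kubectl get nodes`):

```shell
# Stand-in label set; in the real test these key=value pairs come from
# the node object returned by `kubectl get nodes` on the running cluster.
labels="kubernetes.io/arch=arm64 kubernetes.io/os=linux minikube.k8s.io/name=functional-566937"

# Mirror the template's `{{range $k, $v := ...}}{{$k}} {{end}}`:
# print each key, space-separated.
for kv in $labels; do
  printf '%s ' "${kv%%=*}"
done
printf '\n'
```

The test then only checks that expected minikube-specific keys appear in that space-separated list.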
TestFunctional/parallel/NonActiveRuntimeDisabled (0.74s)
=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled
=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2027: (dbg) Run:  out/minikube-linux-arm64 -p functional-566937 ssh "sudo systemctl is-active docker"
functional_test.go:2027: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-566937 ssh "sudo systemctl is-active docker": exit status 1 (374.924716ms)
-- stdout --
	inactive
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3
** /stderr **
functional_test.go:2027: (dbg) Run:  out/minikube-linux-arm64 -p functional-566937 ssh "sudo systemctl is-active crio"
functional_test.go:2027: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-566937 ssh "sudo systemctl is-active crio": exit status 1 (362.832166ms)
-- stdout --
	inactive
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3
** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.74s)
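The non-zero exits above are the expected outcome, not a failure: `systemctl is-active` exits 0 only when the unit is active and, per systemd/LSB convention, exits 3 for an inactive unit, which `minikube ssh` surfaces as "Process exited with status 3". A minimal sketch of that expectation, with the systemctl call stubbed so it runs without systemd:

```shell
# Stub standing in for `systemctl is-active <unit>` on a node where the
# unit exists but is stopped: prints "inactive" and exits with status 3.
is_active_stub() {
  echo "inactive"
  return 3
}

out=$(is_active_stub)
status=$?   # status of the command substitution: 3

# The test passes when the non-selected runtime is NOT active:
# stdout says "inactive" and the exit status is non-zero.
if [ "$out" = "inactive" ] && [ "$status" -ne 0 ]; then
  echo "runtime correctly disabled"
fi
```

With containerd selected as the runtime, both `docker` and `crio` are expected to land in this branch.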
TestFunctional/parallel/License (0.31s)
=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License
=== CONT  TestFunctional/parallel/License
functional_test.go:2288: (dbg) Run:  out/minikube-linux-arm64 license
--- PASS: TestFunctional/parallel/License (0.31s)
TestFunctional/parallel/Version/short (0.09s)
=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short
=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2256: (dbg) Run:  out/minikube-linux-arm64 -p functional-566937 version --short
--- PASS: TestFunctional/parallel/Version/short (0.09s)
TestFunctional/parallel/Version/components (1.25s)
=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components
=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2270: (dbg) Run:  out/minikube-linux-arm64 -p functional-566937 version -o=json --components
functional_test.go:2270: (dbg) Done: out/minikube-linux-arm64 -p functional-566937 version -o=json --components: (1.252875358s)
--- PASS: TestFunctional/parallel/Version/components (1.25s)
TestFunctional/parallel/ImageCommands/ImageListShort (0.35s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort
=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:261: (dbg) Run:  out/minikube-linux-arm64 -p functional-566937 image ls --format short --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-arm64 -p functional-566937 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.10
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.31.1
registry.k8s.io/kube-proxy:v1.31.1
registry.k8s.io/kube-controller-manager:v1.31.1
registry.k8s.io/kube-apiserver:v1.31.1
registry.k8s.io/etcd:3.5.15-0
registry.k8s.io/echoserver-arm:1.8
registry.k8s.io/coredns/coredns:v1.11.3
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
docker.io/library/nginx:latest
docker.io/library/nginx:alpine
docker.io/library/minikube-local-cache-test:functional-566937
docker.io/kindest/kindnetd:v20240813-c6f155d6
docker.io/kicbase/echo-server:functional-566937
functional_test.go:269: (dbg) Stderr: out/minikube-linux-arm64 -p functional-566937 image ls --format short --alsologtostderr:
I0917 17:55:20.717917  335298 out.go:345] Setting OutFile to fd 1 ...
I0917 17:55:20.718075  335298 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0917 17:55:20.718083  335298 out.go:358] Setting ErrFile to fd 2...
I0917 17:55:20.718089  335298 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0917 17:55:20.718442  335298 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19662-293874/.minikube/bin
I0917 17:55:20.719369  335298 config.go:182] Loaded profile config "functional-566937": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
I0917 17:55:20.719519  335298 config.go:182] Loaded profile config "functional-566937": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
I0917 17:55:20.720039  335298 cli_runner.go:164] Run: docker container inspect functional-566937 --format={{.State.Status}}
I0917 17:55:20.757298  335298 ssh_runner.go:195] Run: systemctl --version
I0917 17:55:20.757375  335298 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-566937
I0917 17:55:20.801505  335298 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33153 SSHKeyPath:/home/jenkins/minikube-integration/19662-293874/.minikube/machines/functional-566937/id_rsa Username:docker}
I0917 17:55:20.918185  335298 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.35s)
TestFunctional/parallel/ImageCommands/ImageListTable (0.29s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable
=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:261: (dbg) Run:  out/minikube-linux-arm64 -p functional-566937 image ls --format table --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-arm64 -p functional-566937 image ls --format table --alsologtostderr:
|---------------------------------------------|--------------------|---------------|--------|
|                    Image                    |        Tag         |   Image ID    |  Size  |
|---------------------------------------------|--------------------|---------------|--------|
| docker.io/kicbase/echo-server               | functional-566937  | sha256:ce2d2c | 2.17MB |
| gcr.io/k8s-minikube/busybox                 | 1.28.4-glibc       | sha256:1611cd | 1.94MB |
| registry.k8s.io/kube-proxy                  | v1.31.1            | sha256:24a140 | 26.8MB |
| localhost/my-image                          | functional-566937  | sha256:1df484 | 831kB  |
| registry.k8s.io/kube-apiserver              | v1.31.1            | sha256:d3f53a | 25.7MB |
| registry.k8s.io/pause                       | latest             | sha256:8cb209 | 71.3kB |
| docker.io/kindest/kindnetd                  | v20240813-c6f155d6 | sha256:6a23fa | 33.3MB |
| docker.io/library/minikube-local-cache-test | functional-566937  | sha256:d50c9c | 992B   |
| gcr.io/k8s-minikube/storage-provisioner     | v5                 | sha256:ba04bb | 8.03MB |
| registry.k8s.io/pause                       | 3.1                | sha256:8057e0 | 262kB  |
| docker.io/library/nginx                     | latest             | sha256:195245 | 67.7MB |
| registry.k8s.io/echoserver-arm              | 1.8                | sha256:72565b | 45.3MB |
| registry.k8s.io/kube-scheduler              | v1.31.1            | sha256:7f8aa3 | 18.5MB |
| registry.k8s.io/etcd                        | 3.5.15-0           | sha256:27e383 | 66.5MB |
| registry.k8s.io/kube-controller-manager     | v1.31.1            | sha256:279f38 | 23.9MB |
| registry.k8s.io/pause                       | 3.10               | sha256:afb617 | 268kB  |
| registry.k8s.io/pause                       | 3.3                | sha256:3d1873 | 249kB  |
| docker.io/library/nginx                     | alpine             | sha256:b887ac | 19.6MB |
| registry.k8s.io/coredns/coredns             | v1.11.3            | sha256:2f6c96 | 16.9MB |
|---------------------------------------------|--------------------|---------------|--------|
functional_test.go:269: (dbg) Stderr: out/minikube-linux-arm64 -p functional-566937 image ls --format table --alsologtostderr:
I0917 17:55:25.869193  335804 out.go:345] Setting OutFile to fd 1 ...
I0917 17:55:25.869374  335804 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0917 17:55:25.869387  335804 out.go:358] Setting ErrFile to fd 2...
I0917 17:55:25.869393  335804 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0917 17:55:25.869637  335804 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19662-293874/.minikube/bin
I0917 17:55:25.870330  335804 config.go:182] Loaded profile config "functional-566937": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
I0917 17:55:25.870492  335804 config.go:182] Loaded profile config "functional-566937": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
I0917 17:55:25.871010  335804 cli_runner.go:164] Run: docker container inspect functional-566937 --format={{.State.Status}}
I0917 17:55:25.893679  335804 ssh_runner.go:195] Run: systemctl --version
I0917 17:55:25.893743  335804 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-566937
I0917 17:55:25.917313  335804 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33153 SSHKeyPath:/home/jenkins/minikube-integration/19662-293874/.minikube/machines/functional-566937/id_rsa Username:docker}
I0917 17:55:26.017114  335804 ssh_runner.go:195] Run: sudo crictl images --output json
2024/09/17 17:55:26 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.29s)
TestFunctional/parallel/ImageCommands/ImageListJson (0.28s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson
=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:261: (dbg) Run:  out/minikube-linux-arm64 -p functional-566937 image ls --format json --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-arm64 -p functional-566937 image ls --format json --alsologtostderr:
[{"id":"sha256:6a23fa8fd2b78ab58e42ba273808edc936a9c53d8ac4a919f6337be094843a51","repoDigests":["docker.io/kindest/kindnetd@sha256:e59a687ca28ae274a2fc92f1e2f5f1c739f353178a43a23aafc71adb802ed166"],"repoTags":["docker.io/kindest/kindnetd:v20240813-c6f155d6"],"size":"33309097"},{"id":"sha256:1df48446083559753cd2ad42c510cf7ba5a9d1e33e45d6e05b7f4c9f16bdb37a","repoDigests":[],"repoTags":["localhost/my-image:functional-566937"],"size":"830618"},{"id":"sha256:27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da","repoDigests":["registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a"],"repoTags":["registry.k8s.io/etcd:3.5.15-0"],"size":"66535646"},{"id":"sha256:279f381cb37365bbbcd133c9531fba9c2beb0f38dbbe6ddfcd0b1b1643d3450e","repoDigests":["registry.k8s.io/kube-controller-manager@sha256:9f9da5b27e03f89599cc40ba89150aebf3b4cff001e6db6d998674b34181e1a1"],"repoTags":["registry.k8s.io/kube-controller-manager:v1.31.1"],"size":"23948670"},{"id":"sha256:24a140c548c075e487e45d0ee73b1aa89f8bfb40c08a57e05975559728822b1d","repoDigests":["registry.k8s.io/kube-proxy@sha256:4ee50b00484d7f39a90fc4cda92251177ef5ad8fdf2f2a0c768f9e634b4c6d44"],"repoTags":["registry.k8s.io/kube-proxy:v1.31.1"],"size":"26756812"},{"id":"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8","repoDigests":["registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a"],"repoTags":["registry.k8s.io/pause:3.10"],"size":"267933"},{"id":"sha256:20b332c9a70d8516d849d1ac23eff5800cbb2f263d379f0ec11ee908db6b25a8","repoDigests":["docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93"],"repoTags":[],"size":"74084559"},{"id":"sha256:195245f0c79279e8b8e012efa02c91dad4cf7d0e44c0f4382fea68cd93088e6c","repoDigests":["docker.io/library/nginx@sha256:04ba374043ccd2fc5c593885c0eacddebabd5ca375f9323666f28dfd5a9710e3"],"repoTags":["docker.io/library/nginx:latest"],"size":"67695038"},{"id":"sha256:1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e"],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"1935750"},{"id":"sha256:ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6","repoDigests":["gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944"],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"8034419"},{"id":"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4","repoDigests":["registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e"],"repoTags":["registry.k8s.io/coredns/coredns:v1.11.3"],"size":"16948420"},{"id":"sha256:7f8aa378bb47dffcf430f3a601abe39137e88aee0238e23ed8530fdd18dab82d","repoDigests":["registry.k8s.io/kube-scheduler@sha256:969a7e96340f3a927b3d652582edec2d6d82a083871d81ef5064b7edaab430d0"],"repoTags":["registry.k8s.io/kube-scheduler:v1.31.1"],"size":"18507674"},{"id":"sha256:ce2d2cda2d858fdaea84129deb86d18e5dbf1c548f230b79fdca74cc91729d17","repoDigests":[],"repoTags":["docker.io/kicbase/echo-server:functional-566937"],"size":"2173567"},{"id":"sha256:b887aca7aed6134b029401507d27ac9c8fbfc5a6cf510d254bdf4ac841cf1552","repoDigests":["docker.io/library/nginx@sha256:a5127daff3d6f4606be3100a252419bfa84fd6ee5cd74d0feaca1a5068f97dcf"],"repoTags":["docker.io/library/nginx:alpine"],"size":"19621732"},{"id":"sha256:8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.1"],"size":"262191"},{"id":"sha256:3d18732f8686cc3c878055d99a05fa80289502fa496b36b6a0fe0f77206a7300","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.3"],"size":"249461"},{"id":"sha256:8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a","repoDigests":[],"repoTags":["registry.k8s.io/pause:latest"],"size":"71300"},{"id":"sha256:d50c9c0144864715ecb63cb97318e8aa6633aad9dbe015bfe6099a8b9322364c","repoDigests":[],"repoTags":["docker.io/library/minikube-local-cache-test:functional-566937"],"size":"992"},{"id":"sha256:72565bf5bbedfb62e9d21afa2b1221b2c7a5e05b746dae33430bc550d3f87beb","repoDigests":["registry.k8s.io/echoserver-arm@sha256:b33d4cdf6ed097f4e9b77b135d83a596ab73c6268b0342648818eb85f5edfdb5"],"repoTags":["registry.k8s.io/echoserver-arm:1.8"],"size":"45324675"},{"id":"sha256:d3f53a98c0a9d9163c4848bcf34b2d2f5e1e3691b79f3d1dd6d0206809e02853","repoDigests":["registry.k8s.io/kube-apiserver@sha256:2409c23dbb5a2b7a81adbb184d3eac43ac653e9b97a7c0ee121b89bb3ef61fdb"],"repoTags":["registry.k8s.io/kube-apiserver:v1.31.1"],"size":"25687130"}]
functional_test.go:269: (dbg) Stderr: out/minikube-linux-arm64 -p functional-566937 image ls --format json --alsologtostderr:
I0917 17:55:25.576420  335738 out.go:345] Setting OutFile to fd 1 ...
I0917 17:55:25.576584  335738 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0917 17:55:25.576614  335738 out.go:358] Setting ErrFile to fd 2...
I0917 17:55:25.576636  335738 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0917 17:55:25.577038  335738 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19662-293874/.minikube/bin
I0917 17:55:25.578478  335738 config.go:182] Loaded profile config "functional-566937": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
I0917 17:55:25.578663  335738 config.go:182] Loaded profile config "functional-566937": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
I0917 17:55:25.579447  335738 cli_runner.go:164] Run: docker container inspect functional-566937 --format={{.State.Status}}
I0917 17:55:25.604382  335738 ssh_runner.go:195] Run: systemctl --version
I0917 17:55:25.604467  335738 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-566937
I0917 17:55:25.626287  335738 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33153 SSHKeyPath:/home/jenkins/minikube-integration/19662-293874/.minikube/machines/functional-566937/id_rsa Username:docker}
I0917 17:55:25.737431  335738 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.28s)
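All four `image ls` formats (short, table, json, yaml) are rendered client-side from the same `sudo crictl images --output json` call visible in each stderr trace. A rough sketch of how the short format falls out of that JSON, using an assumed two-image sample in crictl's shape (the real code decodes the JSON properly in Go; the `tr`/`sed` split below only works because each sample entry has a single tag):

```shell
# Assumed sample blob, NOT real crictl output; the real data comes from
# `sudo crictl images --output json` executed over SSH in the node.
sample='{"images":[{"repoTags":["registry.k8s.io/pause:3.10"]},{"repoTags":["docker.io/library/nginx:alpine"]}]}'

# The "short" format is just every repoTag, one per line:
printf '%s\n' "$sample" \
  | tr ',' '\n' \
  | sed -n 's/.*"repoTags":\["\([^"]*\)".*/\1/p'
```

The table, json, and yaml variants reshape the same decoded entries (id, repoDigests, repoTags, size) rather than querying the runtime again.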
TestFunctional/parallel/ImageCommands/ImageListYaml (0.27s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml
=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:261: (dbg) Run:  out/minikube-linux-arm64 -p functional-566937 image ls --format yaml --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-arm64 -p functional-566937 image ls --format yaml --alsologtostderr:
- id: sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4
repoDigests:
- registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e
repoTags:
- registry.k8s.io/coredns/coredns:v1.11.3
size: "16948420"
- id: sha256:279f381cb37365bbbcd133c9531fba9c2beb0f38dbbe6ddfcd0b1b1643d3450e
repoDigests:
- registry.k8s.io/kube-controller-manager@sha256:9f9da5b27e03f89599cc40ba89150aebf3b4cff001e6db6d998674b34181e1a1
repoTags:
- registry.k8s.io/kube-controller-manager:v1.31.1
size: "23948670"
- id: sha256:ce2d2cda2d858fdaea84129deb86d18e5dbf1c548f230b79fdca74cc91729d17
repoDigests: []
repoTags:
- docker.io/kicbase/echo-server:functional-566937
size: "2173567"
- id: sha256:195245f0c79279e8b8e012efa02c91dad4cf7d0e44c0f4382fea68cd93088e6c
repoDigests:
- docker.io/library/nginx@sha256:04ba374043ccd2fc5c593885c0eacddebabd5ca375f9323666f28dfd5a9710e3
repoTags:
- docker.io/library/nginx:latest
size: "67695038"
- id: sha256:72565bf5bbedfb62e9d21afa2b1221b2c7a5e05b746dae33430bc550d3f87beb
repoDigests:
- registry.k8s.io/echoserver-arm@sha256:b33d4cdf6ed097f4e9b77b135d83a596ab73c6268b0342648818eb85f5edfdb5
repoTags:
- registry.k8s.io/echoserver-arm:1.8
size: "45324675"
- id: sha256:8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.1
size: "262191"
- id: sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8
repoDigests:
- registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a
repoTags:
- registry.k8s.io/pause:3.10
size: "267933"
- id: sha256:3d18732f8686cc3c878055d99a05fa80289502fa496b36b6a0fe0f77206a7300
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.3
size: "249461"
- id: sha256:8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a
repoDigests: []
repoTags:
- registry.k8s.io/pause:latest
size: "71300"
- id: sha256:1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c
repoDigests:
- gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "1935750"
- id: sha256:d3f53a98c0a9d9163c4848bcf34b2d2f5e1e3691b79f3d1dd6d0206809e02853
repoDigests:
- registry.k8s.io/kube-apiserver@sha256:2409c23dbb5a2b7a81adbb184d3eac43ac653e9b97a7c0ee121b89bb3ef61fdb
repoTags:
- registry.k8s.io/kube-apiserver:v1.31.1
size: "25687130"
- id: sha256:24a140c548c075e487e45d0ee73b1aa89f8bfb40c08a57e05975559728822b1d
repoDigests:
- registry.k8s.io/kube-proxy@sha256:4ee50b00484d7f39a90fc4cda92251177ef5ad8fdf2f2a0c768f9e634b4c6d44
repoTags:
- registry.k8s.io/kube-proxy:v1.31.1
size: "26756812"
- id: sha256:6a23fa8fd2b78ab58e42ba273808edc936a9c53d8ac4a919f6337be094843a51
repoDigests:
- docker.io/kindest/kindnetd@sha256:e59a687ca28ae274a2fc92f1e2f5f1c739f353178a43a23aafc71adb802ed166
repoTags:
- docker.io/kindest/kindnetd:v20240813-c6f155d6
size: "33309097"
- id: sha256:d50c9c0144864715ecb63cb97318e8aa6633aad9dbe015bfe6099a8b9322364c
repoDigests: []
repoTags:
- docker.io/library/minikube-local-cache-test:functional-566937
size: "992"
- id: sha256:b887aca7aed6134b029401507d27ac9c8fbfc5a6cf510d254bdf4ac841cf1552
repoDigests:
- docker.io/library/nginx@sha256:a5127daff3d6f4606be3100a252419bfa84fd6ee5cd74d0feaca1a5068f97dcf
repoTags:
- docker.io/library/nginx:alpine
size: "19621732"
- id: sha256:ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6
repoDigests:
- gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "8034419"
- id: sha256:27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da
repoDigests:
- registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a
repoTags:
- registry.k8s.io/etcd:3.5.15-0
size: "66535646"
- id: sha256:7f8aa378bb47dffcf430f3a601abe39137e88aee0238e23ed8530fdd18dab82d
repoDigests:
- registry.k8s.io/kube-scheduler@sha256:969a7e96340f3a927b3d652582edec2d6d82a083871d81ef5064b7edaab430d0
repoTags:
- registry.k8s.io/kube-scheduler:v1.31.1
size: "18507674"
functional_test.go:269: (dbg) Stderr: out/minikube-linux-arm64 -p functional-566937 image ls --format yaml --alsologtostderr:
I0917 17:55:21.045178  335454 out.go:345] Setting OutFile to fd 1 ...
I0917 17:55:21.045518  335454 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0917 17:55:21.045544  335454 out.go:358] Setting ErrFile to fd 2...
I0917 17:55:21.045564  335454 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0917 17:55:21.045880  335454 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19662-293874/.minikube/bin
I0917 17:55:21.046649  335454 config.go:182] Loaded profile config "functional-566937": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
I0917 17:55:21.046841  335454 config.go:182] Loaded profile config "functional-566937": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
I0917 17:55:21.048268  335454 cli_runner.go:164] Run: docker container inspect functional-566937 --format={{.State.Status}}
I0917 17:55:21.071743  335454 ssh_runner.go:195] Run: systemctl --version
I0917 17:55:21.071797  335454 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-566937
I0917 17:55:21.093188  335454 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33153 SSHKeyPath:/home/jenkins/minikube-integration/19662-293874/.minikube/machines/functional-566937/id_rsa Username:docker}
I0917 17:55:21.197397  335454 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.27s)
TestFunctional/parallel/ImageCommands/ImageBuild (4.27s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild
=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:308: (dbg) Run:  out/minikube-linux-arm64 -p functional-566937 ssh pgrep buildkitd
functional_test.go:308: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-566937 ssh pgrep buildkitd: exit status 1 (317.10132ms)
** stderr ** 
	ssh: Process exited with status 1
** /stderr **
functional_test.go:315: (dbg) Run:  out/minikube-linux-arm64 -p functional-566937 image build -t localhost/my-image:functional-566937 testdata/build --alsologtostderr
functional_test.go:315: (dbg) Done: out/minikube-linux-arm64 -p functional-566937 image build -t localhost/my-image:functional-566937 testdata/build --alsologtostderr: (3.678161436s)
functional_test.go:323: (dbg) Stderr: out/minikube-linux-arm64 -p functional-566937 image build -t localhost/my-image:functional-566937 testdata/build --alsologtostderr:
I0917 17:55:21.633405  335542 out.go:345] Setting OutFile to fd 1 ...
I0917 17:55:21.634029  335542 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0917 17:55:21.634043  335542 out.go:358] Setting ErrFile to fd 2...
I0917 17:55:21.634050  335542 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0917 17:55:21.634312  335542 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19662-293874/.minikube/bin
I0917 17:55:21.635003  335542 config.go:182] Loaded profile config "functional-566937": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
I0917 17:55:21.636251  335542 config.go:182] Loaded profile config "functional-566937": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
I0917 17:55:21.636783  335542 cli_runner.go:164] Run: docker container inspect functional-566937 --format={{.State.Status}}
I0917 17:55:21.661815  335542 ssh_runner.go:195] Run: systemctl --version
I0917 17:55:21.661869  335542 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-566937
I0917 17:55:21.681654  335542 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33153 SSHKeyPath:/home/jenkins/minikube-integration/19662-293874/.minikube/machines/functional-566937/id_rsa Username:docker}
I0917 17:55:21.784847  335542 build_images.go:161] Building image from path: /tmp/build.1350854195.tar
I0917 17:55:21.784934  335542 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I0917 17:55:21.796400  335542 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.1350854195.tar
I0917 17:55:21.800828  335542 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.1350854195.tar: stat -c "%s %y" /var/lib/minikube/build/build.1350854195.tar: Process exited with status 1
stdout:
stderr:
stat: cannot statx '/var/lib/minikube/build/build.1350854195.tar': No such file or directory
I0917 17:55:21.800858  335542 ssh_runner.go:362] scp /tmp/build.1350854195.tar --> /var/lib/minikube/build/build.1350854195.tar (3072 bytes)
I0917 17:55:21.838366  335542 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.1350854195
I0917 17:55:21.850082  335542 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.1350854195 -xf /var/lib/minikube/build/build.1350854195.tar
I0917 17:55:21.864394  335542 containerd.go:394] Building image: /var/lib/minikube/build/build.1350854195
I0917 17:55:21.864471  335542 ssh_runner.go:195] Run: sudo buildctl build --frontend dockerfile.v0 --local context=/var/lib/minikube/build/build.1350854195 --local dockerfile=/var/lib/minikube/build/build.1350854195 --output type=image,name=localhost/my-image:functional-566937
#1 [internal] load build definition from Dockerfile
#1 transferring dockerfile: 97B done
#1 DONE 0.0s

#2 [internal] load metadata for gcr.io/k8s-minikube/busybox:latest
#2 DONE 1.6s

#3 [internal] load .dockerignore
#3 transferring context: 2B done
#3 DONE 0.0s

#4 [internal] load build context
#4 transferring context: 62B done
#4 DONE 0.0s

#5 [1/3] FROM gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b
#5 resolve gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b 0.0s done
#5 sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34 0B / 828.50kB 0.2s
#5 sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34 828.50kB / 828.50kB 0.4s done
#5 extracting sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34
#5 extracting sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34 0.1s done
#5 DONE 0.6s

#6 [2/3] RUN true
#6 DONE 0.6s

#7 [3/3] ADD content.txt /
#7 DONE 0.1s

#8 exporting to image
#8 exporting layers
#8 exporting layers 0.2s done
#8 exporting manifest sha256:d50776983aa88a7c897bf2549cb5e71d45fa1b3c0a8dd37982bb3f60f0eea6a4
#8 exporting manifest sha256:d50776983aa88a7c897bf2549cb5e71d45fa1b3c0a8dd37982bb3f60f0eea6a4 0.0s done
#8 exporting config sha256:1df48446083559753cd2ad42c510cf7ba5a9d1e33e45d6e05b7f4c9f16bdb37a 0.0s done
#8 naming to localhost/my-image:functional-566937 done
#8 DONE 0.2s
I0917 17:55:25.220901  335542 ssh_runner.go:235] Completed: sudo buildctl build --frontend dockerfile.v0 --local context=/var/lib/minikube/build/build.1350854195 --local dockerfile=/var/lib/minikube/build/build.1350854195 --output type=image,name=localhost/my-image:functional-566937: (3.356400979s)
I0917 17:55:25.220985  335542 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.1350854195
I0917 17:55:25.237164  335542 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.1350854195.tar
I0917 17:55:25.247685  335542 build_images.go:217] Built localhost/my-image:functional-566937 from /tmp/build.1350854195.tar
I0917 17:55:25.247720  335542 build_images.go:133] succeeded building to: functional-566937
I0917 17:55:25.247725  335542 build_images.go:134] failed building to: 
functional_test.go:451: (dbg) Run:  out/minikube-linux-arm64 -p functional-566937 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (4.27s)

TestFunctional/parallel/ImageCommands/Setup (0.75s)

=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:342: (dbg) Run:  docker pull kicbase/echo-server:1.0
functional_test.go:347: (dbg) Run:  docker tag kicbase/echo-server:1.0 kicbase/echo-server:functional-566937
--- PASS: TestFunctional/parallel/ImageCommands/Setup (0.75s)

TestFunctional/parallel/UpdateContextCmd/no_changes (0.23s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2119: (dbg) Run:  out/minikube-linux-arm64 -p functional-566937 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.23s)

TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.25s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2119: (dbg) Run:  out/minikube-linux-arm64 -p functional-566937 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.25s)

TestFunctional/parallel/UpdateContextCmd/no_clusters (0.22s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2119: (dbg) Run:  out/minikube-linux-arm64 -p functional-566937 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.22s)

TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.5s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:355: (dbg) Run:  out/minikube-linux-arm64 -p functional-566937 image load --daemon kicbase/echo-server:functional-566937 --alsologtostderr
functional_test.go:355: (dbg) Done: out/minikube-linux-arm64 -p functional-566937 image load --daemon kicbase/echo-server:functional-566937 --alsologtostderr: (1.213131159s)
functional_test.go:451: (dbg) Run:  out/minikube-linux-arm64 -p functional-566937 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.50s)

TestFunctional/parallel/ImageCommands/ImageReloadDaemon (1.48s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:365: (dbg) Run:  out/minikube-linux-arm64 -p functional-566937 image load --daemon kicbase/echo-server:functional-566937 --alsologtostderr
functional_test.go:365: (dbg) Done: out/minikube-linux-arm64 -p functional-566937 image load --daemon kicbase/echo-server:functional-566937 --alsologtostderr: (1.152809819s)
functional_test.go:451: (dbg) Run:  out/minikube-linux-arm64 -p functional-566937 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (1.48s)

TestFunctional/parallel/ServiceCmd/DeployApp (10.28s)

=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1437: (dbg) Run:  kubectl --context functional-566937 create deployment hello-node --image=registry.k8s.io/echoserver-arm:1.8
functional_test.go:1445: (dbg) Run:  kubectl --context functional-566937 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1450: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:344: "hello-node-64b4f8f9ff-sgmnj" [8b3528e6-0be0-420c-9dd6-5430e832ee26] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver-arm]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver-arm])
helpers_test.go:344: "hello-node-64b4f8f9ff-sgmnj" [8b3528e6-0be0-420c-9dd6-5430e832ee26] Running
functional_test.go:1450: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 10.003635719s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (10.28s)

TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.7s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:235: (dbg) Run:  docker pull kicbase/echo-server:latest
functional_test.go:240: (dbg) Run:  docker tag kicbase/echo-server:latest kicbase/echo-server:functional-566937
functional_test.go:245: (dbg) Run:  out/minikube-linux-arm64 -p functional-566937 image load --daemon kicbase/echo-server:functional-566937 --alsologtostderr
functional_test.go:245: (dbg) Done: out/minikube-linux-arm64 -p functional-566937 image load --daemon kicbase/echo-server:functional-566937 --alsologtostderr: (1.159191035s)
functional_test.go:451: (dbg) Run:  out/minikube-linux-arm64 -p functional-566937 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.70s)

TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.45s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:380: (dbg) Run:  out/minikube-linux-arm64 -p functional-566937 image save kicbase/echo-server:functional-566937 /home/jenkins/workspace/Docker_Linux_containerd_arm64/echo-server-save.tar --alsologtostderr
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.45s)

TestFunctional/parallel/ImageCommands/ImageRemove (0.62s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:392: (dbg) Run:  out/minikube-linux-arm64 -p functional-566937 image rm kicbase/echo-server:functional-566937 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-linux-arm64 -p functional-566937 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.62s)

TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.68s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:409: (dbg) Run:  out/minikube-linux-arm64 -p functional-566937 image load /home/jenkins/workspace/Docker_Linux_containerd_arm64/echo-server-save.tar --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-linux-arm64 -p functional-566937 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.68s)

TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.63s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:419: (dbg) Run:  docker rmi kicbase/echo-server:functional-566937
functional_test.go:424: (dbg) Run:  out/minikube-linux-arm64 -p functional-566937 image save --daemon kicbase/echo-server:functional-566937 --alsologtostderr
functional_test.go:432: (dbg) Run:  docker image inspect kicbase/echo-server:functional-566937
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.63s)

TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.48s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-arm64 -p functional-566937 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-arm64 -p functional-566937 tunnel --alsologtostderr]
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-arm64 -p functional-566937 tunnel --alsologtostderr] ...
helpers_test.go:508: unable to kill pid 331619: os: process already finished
helpers_test.go:502: unable to terminate pid 331506: os: process already finished
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-arm64 -p functional-566937 tunnel --alsologtostderr] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.48s)

TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-linux-arm64 -p functional-566937 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.00s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (8.33s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:212: (dbg) Run:  kubectl --context functional-566937 apply -f testdata/testsvc.yaml
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: waiting 4m0s for pods matching "run=nginx-svc" in namespace "default" ...
helpers_test.go:344: "nginx-svc" [e0a6cb30-6b90-4ea8-a700-628f901d46cc] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx-svc" [e0a6cb30-6b90-4ea8-a700-628f901d46cc] Running
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: run=nginx-svc healthy within 8.003816637s
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (8.33s)

TestFunctional/parallel/ServiceCmd/List (0.33s)

=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1459: (dbg) Run:  out/minikube-linux-arm64 -p functional-566937 service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.33s)

TestFunctional/parallel/ServiceCmd/JSONOutput (0.35s)

=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1489: (dbg) Run:  out/minikube-linux-arm64 -p functional-566937 service list -o json
functional_test.go:1494: Took "345.433831ms" to run "out/minikube-linux-arm64 -p functional-566937 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.35s)

TestFunctional/parallel/ServiceCmd/HTTPS (0.42s)

=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1509: (dbg) Run:  out/minikube-linux-arm64 -p functional-566937 service --namespace=default --https --url hello-node
functional_test.go:1522: found endpoint: https://192.168.49.2:30168
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.42s)

TestFunctional/parallel/ServiceCmd/Format (0.37s)

=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1540: (dbg) Run:  out/minikube-linux-arm64 -p functional-566937 service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.37s)

TestFunctional/parallel/ServiceCmd/URL (0.4s)

=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1559: (dbg) Run:  out/minikube-linux-arm64 -p functional-566937 service hello-node --url
functional_test.go:1565: found endpoint for hello-node: http://192.168.49.2:30168
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.40s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.08s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:234: (dbg) Run:  kubectl --context functional-566937 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.08s)

TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:299: tunnel at http://10.101.10.217 is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-linux-arm64 -p functional-566937 tunnel --alsologtostderr] ...
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

TestFunctional/parallel/ProfileCmd/profile_not_create (0.44s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1270: (dbg) Run:  out/minikube-linux-arm64 profile lis
functional_test.go:1275: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.44s)

TestFunctional/parallel/ProfileCmd/profile_list (0.39s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1310: (dbg) Run:  out/minikube-linux-arm64 profile list
functional_test.go:1315: Took "331.382631ms" to run "out/minikube-linux-arm64 profile list"
functional_test.go:1324: (dbg) Run:  out/minikube-linux-arm64 profile list -l
functional_test.go:1329: Took "54.44424ms" to run "out/minikube-linux-arm64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.39s)

TestFunctional/parallel/ProfileCmd/profile_json_output (0.41s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1361: (dbg) Run:  out/minikube-linux-arm64 profile list -o json
functional_test.go:1366: Took "343.129672ms" to run "out/minikube-linux-arm64 profile list -o json"
functional_test.go:1374: (dbg) Run:  out/minikube-linux-arm64 profile list -o json --light
functional_test.go:1379: Took "65.738779ms" to run "out/minikube-linux-arm64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.41s)

TestFunctional/parallel/MountCmd/any-port (9.12s)

=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-566937 /tmp/TestFunctionalparallelMountCmdany-port1667123318/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1726595705153266518" to /tmp/TestFunctionalparallelMountCmdany-port1667123318/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1726595705153266518" to /tmp/TestFunctionalparallelMountCmdany-port1667123318/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1726595705153266518" to /tmp/TestFunctionalparallelMountCmdany-port1667123318/001/test-1726595705153266518
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-arm64 -p functional-566937 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-566937 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (331.595947ms)

** stderr **
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-arm64 -p functional-566937 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-arm64 -p functional-566937 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Sep 17 17:55 created-by-test
-rw-r--r-- 1 docker docker 24 Sep 17 17:55 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Sep 17 17:55 test-1726595705153266518
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-arm64 -p functional-566937 ssh cat /mount-9p/test-1726595705153266518
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-566937 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:344: "busybox-mount" [0a22c436-61df-4844-ae93-81e48771c706] Pending
helpers_test.go:344: "busybox-mount" [0a22c436-61df-4844-ae93-81e48771c706] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:344: "busybox-mount" [0a22c436-61df-4844-ae93-81e48771c706] Running
helpers_test.go:344: "busybox-mount" [0a22c436-61df-4844-ae93-81e48771c706] Running / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "busybox-mount" [0a22c436-61df-4844-ae93-81e48771c706] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 6.003838807s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-566937 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p functional-566937 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p functional-566937 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-arm64 -p functional-566937 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-566937 /tmp/TestFunctionalparallelMountCmdany-port1667123318/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (9.12s)

TestFunctional/parallel/MountCmd/specific-port (1.81s)

=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-566937 /tmp/TestFunctionalparallelMountCmdspecific-port1074182185/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-arm64 -p functional-566937 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-566937 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (350.803085ms)

** stderr **
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-arm64 -p functional-566937 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-arm64 -p functional-566937 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-566937 /tmp/TestFunctionalparallelMountCmdspecific-port1074182185/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-arm64 -p functional-566937 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-566937 ssh "sudo umount -f /mount-9p": exit status 1 (279.883389ms)

-- stdout --
	umount: /mount-9p: not mounted.

-- /stdout --
** stderr **
	ssh: Process exited with status 32

** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-arm64 -p functional-566937 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-566937 /tmp/TestFunctionalparallelMountCmdspecific-port1074182185/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (1.81s)

TestFunctional/parallel/MountCmd/VerifyCleanup (1.8s)

=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-566937 /tmp/TestFunctionalparallelMountCmdVerifyCleanup264290829/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-566937 /tmp/TestFunctionalparallelMountCmdVerifyCleanup264290829/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-566937 /tmp/TestFunctionalparallelMountCmdVerifyCleanup264290829/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-566937 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-566937 ssh "findmnt -T" /mount1: exit status 1 (538.331148ms)

** stderr **
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-566937 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-566937 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-566937 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-arm64 mount -p functional-566937 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-566937 /tmp/TestFunctionalparallelMountCmdVerifyCleanup264290829/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-566937 /tmp/TestFunctionalparallelMountCmdVerifyCleanup264290829/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-566937 /tmp/TestFunctionalparallelMountCmdVerifyCleanup264290829/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (1.80s)

TestFunctional/delete_echo-server_images (0.07s)

=== RUN   TestFunctional/delete_echo-server_images
functional_test.go:190: (dbg) Run:  docker rmi -f kicbase/echo-server:1.0
functional_test.go:190: (dbg) Run:  docker rmi -f kicbase/echo-server:functional-566937
--- PASS: TestFunctional/delete_echo-server_images (0.07s)

                                                
                                    
TestFunctional/delete_my-image_image (0.04s)

=== RUN   TestFunctional/delete_my-image_image
functional_test.go:198: (dbg) Run:  docker rmi -f localhost/my-image:functional-566937
--- PASS: TestFunctional/delete_my-image_image (0.04s)

                                                
                                    
TestFunctional/delete_minikube_cached_images (0.02s)

=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:206: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-566937
--- PASS: TestFunctional/delete_minikube_cached_images (0.02s)

                                                
                                    
TestMultiControlPlane/serial/StartCluster (138.67s)

=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-linux-arm64 start -p ha-974981 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=docker  --container-runtime=containerd
E0917 17:55:43.599465  299255 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19662-293874/.minikube/profiles/addons-029117/client.crt: no such file or directory" logger="UnhandledError"
E0917 17:55:43.606578  299255 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19662-293874/.minikube/profiles/addons-029117/client.crt: no such file or directory" logger="UnhandledError"
E0917 17:55:43.618020  299255 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19662-293874/.minikube/profiles/addons-029117/client.crt: no such file or directory" logger="UnhandledError"
E0917 17:55:43.639806  299255 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19662-293874/.minikube/profiles/addons-029117/client.crt: no such file or directory" logger="UnhandledError"
E0917 17:55:43.681259  299255 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19662-293874/.minikube/profiles/addons-029117/client.crt: no such file or directory" logger="UnhandledError"
E0917 17:55:43.762620  299255 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19662-293874/.minikube/profiles/addons-029117/client.crt: no such file or directory" logger="UnhandledError"
E0917 17:55:43.924025  299255 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19662-293874/.minikube/profiles/addons-029117/client.crt: no such file or directory" logger="UnhandledError"
E0917 17:55:44.245283  299255 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19662-293874/.minikube/profiles/addons-029117/client.crt: no such file or directory" logger="UnhandledError"
E0917 17:55:44.886931  299255 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19662-293874/.minikube/profiles/addons-029117/client.crt: no such file or directory" logger="UnhandledError"
E0917 17:55:46.168189  299255 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19662-293874/.minikube/profiles/addons-029117/client.crt: no such file or directory" logger="UnhandledError"
E0917 17:55:48.729704  299255 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19662-293874/.minikube/profiles/addons-029117/client.crt: no such file or directory" logger="UnhandledError"
E0917 17:55:53.851472  299255 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19662-293874/.minikube/profiles/addons-029117/client.crt: no such file or directory" logger="UnhandledError"
E0917 17:56:04.093686  299255 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19662-293874/.minikube/profiles/addons-029117/client.crt: no such file or directory" logger="UnhandledError"
E0917 17:56:24.575531  299255 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19662-293874/.minikube/profiles/addons-029117/client.crt: no such file or directory" logger="UnhandledError"
E0917 17:57:05.536970  299255 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19662-293874/.minikube/profiles/addons-029117/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:101: (dbg) Done: out/minikube-linux-arm64 start -p ha-974981 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=docker  --container-runtime=containerd: (2m17.788661683s)
ha_test.go:107: (dbg) Run:  out/minikube-linux-arm64 -p ha-974981 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/StartCluster (138.67s)
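Note: the repeated cert_rotation errors above appear to be leftovers from the earlier addons-029117 profile, whose files were removed before this test started; client-go keeps retrying the missing client cert. A quick, illustrative existence check (the path is copied verbatim from the log lines above):

```shell
# Check whether the client cert that client-go keeps retrying still exists.
crt=/home/jenkins/minikube-integration/19662-293874/.minikube/profiles/addons-029117/client.crt
if [ -f "$crt" ]; then
  result="client cert present"
else
  result="client cert missing"
fi
echo "$result"
```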

                                                
                                    
TestMultiControlPlane/serial/DeployApp (30.45s)

=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-974981 -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:133: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-974981 -- rollout status deployment/busybox
ha_test.go:133: (dbg) Done: out/minikube-linux-arm64 kubectl -p ha-974981 -- rollout status deployment/busybox: (27.358612642s)
ha_test.go:140: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-974981 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:163: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-974981 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:171: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-974981 -- exec busybox-7dff88458-cvwbb -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-974981 -- exec busybox-7dff88458-nzjgp -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-974981 -- exec busybox-7dff88458-t9gl9 -- nslookup kubernetes.io
ha_test.go:181: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-974981 -- exec busybox-7dff88458-cvwbb -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-974981 -- exec busybox-7dff88458-nzjgp -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-974981 -- exec busybox-7dff88458-t9gl9 -- nslookup kubernetes.default
ha_test.go:189: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-974981 -- exec busybox-7dff88458-cvwbb -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-974981 -- exec busybox-7dff88458-nzjgp -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-974981 -- exec busybox-7dff88458-t9gl9 -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiControlPlane/serial/DeployApp (30.45s)

                                                
                                    
TestMultiControlPlane/serial/PingHostFromPods (1.62s)

=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-974981 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:207: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-974981 -- exec busybox-7dff88458-cvwbb -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-974981 -- exec busybox-7dff88458-cvwbb -- sh -c "ping -c 1 192.168.49.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-974981 -- exec busybox-7dff88458-nzjgp -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-974981 -- exec busybox-7dff88458-nzjgp -- sh -c "ping -c 1 192.168.49.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-974981 -- exec busybox-7dff88458-t9gl9 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-974981 -- exec busybox-7dff88458-t9gl9 -- sh -c "ping -c 1 192.168.49.1"
--- PASS: TestMultiControlPlane/serial/PingHostFromPods (1.62s)
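For reference, the IP-extraction pipeline that ha_test.go:207 runs inside each busybox pod (`nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3`) can be exercised locally against canned busybox-style nslookup output; the addresses below are illustrative, not taken from this run:

```shell
# Canned output in the old busybox nslookup layout, where line 5 reads
# "Address 1: <host ip>"; awk selects line 5, cut takes the third
# space-separated field.
nslookup_out='Server:    10.96.0.10
Address 1: 10.96.0.10 kube-dns.kube-system.svc.cluster.local

Name:      host.minikube.internal
Address 1: 192.168.49.1'
ip=$(printf '%s\n' "$nslookup_out" | awk 'NR==5' | cut -d' ' -f3)
echo "$ip"   # 192.168.49.1
```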

                                                
                                    
TestMultiControlPlane/serial/AddWorkerNode (21.91s)

=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-linux-arm64 node add -p ha-974981 -v=7 --alsologtostderr
E0917 17:58:27.458342  299255 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19662-293874/.minikube/profiles/addons-029117/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:228: (dbg) Done: out/minikube-linux-arm64 node add -p ha-974981 -v=7 --alsologtostderr: (20.806142961s)
ha_test.go:234: (dbg) Run:  out/minikube-linux-arm64 -p ha-974981 status -v=7 --alsologtostderr
ha_test.go:234: (dbg) Done: out/minikube-linux-arm64 -p ha-974981 status -v=7 --alsologtostderr: (1.108079593s)
--- PASS: TestMultiControlPlane/serial/AddWorkerNode (21.91s)

                                                
                                    
TestMultiControlPlane/serial/NodeLabels (0.11s)

=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-974981 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiControlPlane/serial/NodeLabels (0.11s)

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterClusterStart (0.78s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterClusterStart (0.78s)

                                                
                                    
TestMultiControlPlane/serial/CopyFile (20.14s)

=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:326: (dbg) Run:  out/minikube-linux-arm64 -p ha-974981 status --output json -v=7 --alsologtostderr
ha_test.go:326: (dbg) Done: out/minikube-linux-arm64 -p ha-974981 status --output json -v=7 --alsologtostderr: (1.039922396s)
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-974981 cp testdata/cp-test.txt ha-974981:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-974981 ssh -n ha-974981 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-974981 cp ha-974981:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile2299101602/001/cp-test_ha-974981.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-974981 ssh -n ha-974981 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-974981 cp ha-974981:/home/docker/cp-test.txt ha-974981-m02:/home/docker/cp-test_ha-974981_ha-974981-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-974981 ssh -n ha-974981 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-974981 ssh -n ha-974981-m02 "sudo cat /home/docker/cp-test_ha-974981_ha-974981-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-974981 cp ha-974981:/home/docker/cp-test.txt ha-974981-m03:/home/docker/cp-test_ha-974981_ha-974981-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-974981 ssh -n ha-974981 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-974981 ssh -n ha-974981-m03 "sudo cat /home/docker/cp-test_ha-974981_ha-974981-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-974981 cp ha-974981:/home/docker/cp-test.txt ha-974981-m04:/home/docker/cp-test_ha-974981_ha-974981-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-974981 ssh -n ha-974981 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-974981 ssh -n ha-974981-m04 "sudo cat /home/docker/cp-test_ha-974981_ha-974981-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-974981 cp testdata/cp-test.txt ha-974981-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-974981 ssh -n ha-974981-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-974981 cp ha-974981-m02:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile2299101602/001/cp-test_ha-974981-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-974981 ssh -n ha-974981-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-974981 cp ha-974981-m02:/home/docker/cp-test.txt ha-974981:/home/docker/cp-test_ha-974981-m02_ha-974981.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-974981 ssh -n ha-974981-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-974981 ssh -n ha-974981 "sudo cat /home/docker/cp-test_ha-974981-m02_ha-974981.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-974981 cp ha-974981-m02:/home/docker/cp-test.txt ha-974981-m03:/home/docker/cp-test_ha-974981-m02_ha-974981-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-974981 ssh -n ha-974981-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-974981 ssh -n ha-974981-m03 "sudo cat /home/docker/cp-test_ha-974981-m02_ha-974981-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-974981 cp ha-974981-m02:/home/docker/cp-test.txt ha-974981-m04:/home/docker/cp-test_ha-974981-m02_ha-974981-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-974981 ssh -n ha-974981-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-974981 ssh -n ha-974981-m04 "sudo cat /home/docker/cp-test_ha-974981-m02_ha-974981-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-974981 cp testdata/cp-test.txt ha-974981-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-974981 ssh -n ha-974981-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-974981 cp ha-974981-m03:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile2299101602/001/cp-test_ha-974981-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-974981 ssh -n ha-974981-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-974981 cp ha-974981-m03:/home/docker/cp-test.txt ha-974981:/home/docker/cp-test_ha-974981-m03_ha-974981.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-974981 ssh -n ha-974981-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-974981 ssh -n ha-974981 "sudo cat /home/docker/cp-test_ha-974981-m03_ha-974981.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-974981 cp ha-974981-m03:/home/docker/cp-test.txt ha-974981-m02:/home/docker/cp-test_ha-974981-m03_ha-974981-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-974981 ssh -n ha-974981-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-974981 ssh -n ha-974981-m02 "sudo cat /home/docker/cp-test_ha-974981-m03_ha-974981-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-974981 cp ha-974981-m03:/home/docker/cp-test.txt ha-974981-m04:/home/docker/cp-test_ha-974981-m03_ha-974981-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-974981 ssh -n ha-974981-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-974981 ssh -n ha-974981-m04 "sudo cat /home/docker/cp-test_ha-974981-m03_ha-974981-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-974981 cp testdata/cp-test.txt ha-974981-m04:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-974981 ssh -n ha-974981-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-974981 cp ha-974981-m04:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile2299101602/001/cp-test_ha-974981-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-974981 ssh -n ha-974981-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-974981 cp ha-974981-m04:/home/docker/cp-test.txt ha-974981:/home/docker/cp-test_ha-974981-m04_ha-974981.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-974981 ssh -n ha-974981-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-974981 ssh -n ha-974981 "sudo cat /home/docker/cp-test_ha-974981-m04_ha-974981.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-974981 cp ha-974981-m04:/home/docker/cp-test.txt ha-974981-m02:/home/docker/cp-test_ha-974981-m04_ha-974981-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-974981 ssh -n ha-974981-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-974981 ssh -n ha-974981-m02 "sudo cat /home/docker/cp-test_ha-974981-m04_ha-974981-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-974981 cp ha-974981-m04:/home/docker/cp-test.txt ha-974981-m03:/home/docker/cp-test_ha-974981-m04_ha-974981-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-974981 ssh -n ha-974981-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-974981 ssh -n ha-974981-m03 "sudo cat /home/docker/cp-test_ha-974981-m04_ha-974981-m03.txt"
--- PASS: TestMultiControlPlane/serial/CopyFile (20.14s)
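The CopyFile sequence above round-trips testdata/cp-test.txt through every node with `minikube cp` and verifies each hop with `ssh -n <node> "sudo cat ..."`. The same verification idea, sketched locally with plain cp/cat standing in for the minikube commands (all paths here are temp files, not from this run):

```shell
# src -> "node" -> round-trip copy, then byte-compare against the source,
# mirroring cp followed by ssh "sudo cat" in the test.
src=$(mktemp); nodedir=$(mktemp -d); back=$(mktemp)
echo 'Test file for minikube cp' > "$src"
cp "$src" "$nodedir/cp-test.txt"       # stands in for: minikube cp ... node:/home/docker/cp-test.txt
cat "$nodedir/cp-test.txt" > "$back"   # stands in for: minikube ssh -n node "sudo cat ..."
if cmp -s "$src" "$back"; then verdict=ok; else verdict=mismatch; fi
echo "$verdict"   # ok
rm -rf "$src" "$nodedir" "$back"
```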

                                                
                                    
TestMultiControlPlane/serial/StopSecondaryNode (12.87s)

=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:363: (dbg) Run:  out/minikube-linux-arm64 -p ha-974981 node stop m02 -v=7 --alsologtostderr
ha_test.go:363: (dbg) Done: out/minikube-linux-arm64 -p ha-974981 node stop m02 -v=7 --alsologtostderr: (12.109112186s)
ha_test.go:369: (dbg) Run:  out/minikube-linux-arm64 -p ha-974981 status -v=7 --alsologtostderr
ha_test.go:369: (dbg) Non-zero exit: out/minikube-linux-arm64 -p ha-974981 status -v=7 --alsologtostderr: exit status 7 (757.33108ms)

                                                
                                                
-- stdout --
	ha-974981
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-974981-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-974981-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-974981-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0917 17:59:15.783775  352273 out.go:345] Setting OutFile to fd 1 ...
	I0917 17:59:15.784001  352273 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0917 17:59:15.784028  352273 out.go:358] Setting ErrFile to fd 2...
	I0917 17:59:15.784112  352273 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0917 17:59:15.784416  352273 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19662-293874/.minikube/bin
	I0917 17:59:15.784637  352273 out.go:352] Setting JSON to false
	I0917 17:59:15.784695  352273 mustload.go:65] Loading cluster: ha-974981
	I0917 17:59:15.784797  352273 notify.go:220] Checking for updates...
	I0917 17:59:15.785322  352273 config.go:182] Loaded profile config "ha-974981": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
	I0917 17:59:15.785363  352273 status.go:255] checking status of ha-974981 ...
	I0917 17:59:15.786039  352273 cli_runner.go:164] Run: docker container inspect ha-974981 --format={{.State.Status}}
	I0917 17:59:15.810293  352273 status.go:330] ha-974981 host status = "Running" (err=<nil>)
	I0917 17:59:15.810321  352273 host.go:66] Checking if "ha-974981" exists ...
	I0917 17:59:15.810654  352273 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-974981
	I0917 17:59:15.842176  352273 host.go:66] Checking if "ha-974981" exists ...
	I0917 17:59:15.842554  352273 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0917 17:59:15.842616  352273 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-974981
	I0917 17:59:15.863991  352273 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33158 SSHKeyPath:/home/jenkins/minikube-integration/19662-293874/.minikube/machines/ha-974981/id_rsa Username:docker}
	I0917 17:59:15.961018  352273 ssh_runner.go:195] Run: systemctl --version
	I0917 17:59:15.965459  352273 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0917 17:59:15.977659  352273 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0917 17:59:16.051377  352273 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:55 OomKillDisable:true NGoroutines:71 SystemTime:2024-09-17 17:59:16.033364244 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1070-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aar
ch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:27.2.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErro
rs:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.2]] Warnings:<nil>}}
	I0917 17:59:16.052030  352273 kubeconfig.go:125] found "ha-974981" server: "https://192.168.49.254:8443"
	I0917 17:59:16.052072  352273 api_server.go:166] Checking apiserver status ...
	I0917 17:59:16.052127  352273 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 17:59:16.064777  352273 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1462/cgroup
	I0917 17:59:16.074983  352273 api_server.go:182] apiserver freezer: "6:freezer:/docker/aaef47ba4fff01d4f43b35e37aa886816b6bd5710293837f47230f8e1d904cd0/kubepods/burstable/pod4a3edd4d3997498ea742cbad9df19757/17ab610f3e30117fbbfca040cb6800b4139a8c4ae11c2fb49f67f7b92b079f23"
	I0917 17:59:16.075060  352273 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/aaef47ba4fff01d4f43b35e37aa886816b6bd5710293837f47230f8e1d904cd0/kubepods/burstable/pod4a3edd4d3997498ea742cbad9df19757/17ab610f3e30117fbbfca040cb6800b4139a8c4ae11c2fb49f67f7b92b079f23/freezer.state
	I0917 17:59:16.084355  352273 api_server.go:204] freezer state: "THAWED"
	I0917 17:59:16.084387  352273 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I0917 17:59:16.092349  352273 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I0917 17:59:16.092378  352273 status.go:422] ha-974981 apiserver status = Running (err=<nil>)
	I0917 17:59:16.092390  352273 status.go:257] ha-974981 status: &{Name:ha-974981 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0917 17:59:16.092437  352273 status.go:255] checking status of ha-974981-m02 ...
	I0917 17:59:16.092814  352273 cli_runner.go:164] Run: docker container inspect ha-974981-m02 --format={{.State.Status}}
	I0917 17:59:16.109654  352273 status.go:330] ha-974981-m02 host status = "Stopped" (err=<nil>)
	I0917 17:59:16.109677  352273 status.go:343] host is not running, skipping remaining checks
	I0917 17:59:16.109684  352273 status.go:257] ha-974981-m02 status: &{Name:ha-974981-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0917 17:59:16.109705  352273 status.go:255] checking status of ha-974981-m03 ...
	I0917 17:59:16.110032  352273 cli_runner.go:164] Run: docker container inspect ha-974981-m03 --format={{.State.Status}}
	I0917 17:59:16.126784  352273 status.go:330] ha-974981-m03 host status = "Running" (err=<nil>)
	I0917 17:59:16.126810  352273 host.go:66] Checking if "ha-974981-m03" exists ...
	I0917 17:59:16.127126  352273 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-974981-m03
	I0917 17:59:16.145254  352273 host.go:66] Checking if "ha-974981-m03" exists ...
	I0917 17:59:16.145563  352273 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0917 17:59:16.145606  352273 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-974981-m03
	I0917 17:59:16.164051  352273 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33168 SSHKeyPath:/home/jenkins/minikube-integration/19662-293874/.minikube/machines/ha-974981-m03/id_rsa Username:docker}
	I0917 17:59:16.266173  352273 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0917 17:59:16.279755  352273 kubeconfig.go:125] found "ha-974981" server: "https://192.168.49.254:8443"
	I0917 17:59:16.279786  352273 api_server.go:166] Checking apiserver status ...
	I0917 17:59:16.279828  352273 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 17:59:16.290929  352273 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1304/cgroup
	I0917 17:59:16.300425  352273 api_server.go:182] apiserver freezer: "6:freezer:/docker/c39b4cfdeb06432b25f9549776e7621515263c0684482b147fab965a9be02cb0/kubepods/burstable/podb43bc0449080ea69e767b83814646488/0e9a3b2718a0e26b09f0750c70a860239cc54661d9b75aad0e7203da31affd2d"
	I0917 17:59:16.300524  352273 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/c39b4cfdeb06432b25f9549776e7621515263c0684482b147fab965a9be02cb0/kubepods/burstable/podb43bc0449080ea69e767b83814646488/0e9a3b2718a0e26b09f0750c70a860239cc54661d9b75aad0e7203da31affd2d/freezer.state
	I0917 17:59:16.309812  352273 api_server.go:204] freezer state: "THAWED"
	I0917 17:59:16.309891  352273 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I0917 17:59:16.317899  352273 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I0917 17:59:16.317977  352273 status.go:422] ha-974981-m03 apiserver status = Running (err=<nil>)
	I0917 17:59:16.318002  352273 status.go:257] ha-974981-m03 status: &{Name:ha-974981-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0917 17:59:16.318024  352273 status.go:255] checking status of ha-974981-m04 ...
	I0917 17:59:16.318347  352273 cli_runner.go:164] Run: docker container inspect ha-974981-m04 --format={{.State.Status}}
	I0917 17:59:16.335296  352273 status.go:330] ha-974981-m04 host status = "Running" (err=<nil>)
	I0917 17:59:16.335324  352273 host.go:66] Checking if "ha-974981-m04" exists ...
	I0917 17:59:16.335628  352273 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-974981-m04
	I0917 17:59:16.353425  352273 host.go:66] Checking if "ha-974981-m04" exists ...
	I0917 17:59:16.353744  352273 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0917 17:59:16.353797  352273 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-974981-m04
	I0917 17:59:16.370833  352273 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33173 SSHKeyPath:/home/jenkins/minikube-integration/19662-293874/.minikube/machines/ha-974981-m04/id_rsa Username:docker}
	I0917 17:59:16.473408  352273 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0917 17:59:16.485628  352273 status.go:257] ha-974981-m04 status: &{Name:ha-974981-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiControlPlane/serial/StopSecondaryNode (12.87s)
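The status command's stderr above shows how each control-plane apiserver is probed: find the kube-apiserver PID, map it to its freezer cgroup via /proc/&lt;pid&gt;/cgroup, read freezer.state (THAWED means the processes are not frozen), then hit /healthz. A minimal local sketch of the freezer-state step, with a temp file standing in for the real cgroup path:

```shell
# Simulate reading freezer.state; on a real node this would be
# /sys/fs/cgroup/freezer/<container path>/freezer.state as in the log above.
state_file=$(mktemp)
echo 'THAWED' > "$state_file"
state=$(cat "$state_file")
if [ "$state" = "THAWED" ]; then
  echo "apiserver container not frozen"
fi
rm -f "$state_file"
```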

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.57s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:390: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.57s)

TestMultiControlPlane/serial/RestartSecondaryNode (20.4s)

=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:420: (dbg) Run:  out/minikube-linux-arm64 -p ha-974981 node start m02 -v=7 --alsologtostderr
ha_test.go:420: (dbg) Done: out/minikube-linux-arm64 -p ha-974981 node start m02 -v=7 --alsologtostderr: (19.237507613s)
ha_test.go:428: (dbg) Run:  out/minikube-linux-arm64 -p ha-974981 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Done: out/minikube-linux-arm64 -p ha-974981 status -v=7 --alsologtostderr: (1.057770445s)
ha_test.go:448: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiControlPlane/serial/RestartSecondaryNode (20.40s)

TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.8s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.80s)

TestMultiControlPlane/serial/RestartClusterKeepsNodes (122.73s)

=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:456: (dbg) Run:  out/minikube-linux-arm64 node list -p ha-974981 -v=7 --alsologtostderr
ha_test.go:462: (dbg) Run:  out/minikube-linux-arm64 stop -p ha-974981 -v=7 --alsologtostderr
E0917 17:59:41.175081  299255 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19662-293874/.minikube/profiles/functional-566937/client.crt: no such file or directory" logger="UnhandledError"
E0917 17:59:41.181377  299255 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19662-293874/.minikube/profiles/functional-566937/client.crt: no such file or directory" logger="UnhandledError"
E0917 17:59:41.192707  299255 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19662-293874/.minikube/profiles/functional-566937/client.crt: no such file or directory" logger="UnhandledError"
E0917 17:59:41.213986  299255 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19662-293874/.minikube/profiles/functional-566937/client.crt: no such file or directory" logger="UnhandledError"
E0917 17:59:41.255546  299255 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19662-293874/.minikube/profiles/functional-566937/client.crt: no such file or directory" logger="UnhandledError"
E0917 17:59:41.336960  299255 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19662-293874/.minikube/profiles/functional-566937/client.crt: no such file or directory" logger="UnhandledError"
E0917 17:59:41.498323  299255 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19662-293874/.minikube/profiles/functional-566937/client.crt: no such file or directory" logger="UnhandledError"
E0917 17:59:41.820206  299255 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19662-293874/.minikube/profiles/functional-566937/client.crt: no such file or directory" logger="UnhandledError"
E0917 17:59:42.462361  299255 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19662-293874/.minikube/profiles/functional-566937/client.crt: no such file or directory" logger="UnhandledError"
E0917 17:59:43.744338  299255 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19662-293874/.minikube/profiles/functional-566937/client.crt: no such file or directory" logger="UnhandledError"
E0917 17:59:46.305709  299255 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19662-293874/.minikube/profiles/functional-566937/client.crt: no such file or directory" logger="UnhandledError"
E0917 17:59:51.427753  299255 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19662-293874/.minikube/profiles/functional-566937/client.crt: no such file or directory" logger="UnhandledError"
E0917 18:00:01.669471  299255 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19662-293874/.minikube/profiles/functional-566937/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:462: (dbg) Done: out/minikube-linux-arm64 stop -p ha-974981 -v=7 --alsologtostderr: (26.439082987s)
ha_test.go:467: (dbg) Run:  out/minikube-linux-arm64 start -p ha-974981 --wait=true -v=7 --alsologtostderr
E0917 18:00:22.151187  299255 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19662-293874/.minikube/profiles/functional-566937/client.crt: no such file or directory" logger="UnhandledError"
E0917 18:00:43.598147  299255 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19662-293874/.minikube/profiles/addons-029117/client.crt: no such file or directory" logger="UnhandledError"
E0917 18:01:03.112648  299255 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19662-293874/.minikube/profiles/functional-566937/client.crt: no such file or directory" logger="UnhandledError"
E0917 18:01:11.299598  299255 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19662-293874/.minikube/profiles/addons-029117/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:467: (dbg) Done: out/minikube-linux-arm64 start -p ha-974981 --wait=true -v=7 --alsologtostderr: (1m36.134898248s)
ha_test.go:472: (dbg) Run:  out/minikube-linux-arm64 node list -p ha-974981
--- PASS: TestMultiControlPlane/serial/RestartClusterKeepsNodes (122.73s)

TestMultiControlPlane/serial/DeleteSecondaryNode (10.64s)

=== RUN   TestMultiControlPlane/serial/DeleteSecondaryNode
ha_test.go:487: (dbg) Run:  out/minikube-linux-arm64 -p ha-974981 node delete m03 -v=7 --alsologtostderr
ha_test.go:487: (dbg) Done: out/minikube-linux-arm64 -p ha-974981 node delete m03 -v=7 --alsologtostderr: (9.707147042s)
ha_test.go:493: (dbg) Run:  out/minikube-linux-arm64 -p ha-974981 status -v=7 --alsologtostderr
ha_test.go:511: (dbg) Run:  kubectl get nodes
ha_test.go:519: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/DeleteSecondaryNode (10.64s)

TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.53s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:390: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.53s)

TestMultiControlPlane/serial/StopCluster (35.99s)

=== RUN   TestMultiControlPlane/serial/StopCluster
ha_test.go:531: (dbg) Run:  out/minikube-linux-arm64 -p ha-974981 stop -v=7 --alsologtostderr
E0917 18:02:25.034138  299255 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19662-293874/.minikube/profiles/functional-566937/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:531: (dbg) Done: out/minikube-linux-arm64 -p ha-974981 stop -v=7 --alsologtostderr: (35.875381223s)
ha_test.go:537: (dbg) Run:  out/minikube-linux-arm64 -p ha-974981 status -v=7 --alsologtostderr
ha_test.go:537: (dbg) Non-zero exit: out/minikube-linux-arm64 -p ha-974981 status -v=7 --alsologtostderr: exit status 7 (116.677017ms)

-- stdout --
	ha-974981
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-974981-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-974981-m04
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I0917 18:02:28.091805  365951 out.go:345] Setting OutFile to fd 1 ...
	I0917 18:02:28.092096  365951 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0917 18:02:28.092130  365951 out.go:358] Setting ErrFile to fd 2...
	I0917 18:02:28.092153  365951 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0917 18:02:28.092456  365951 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19662-293874/.minikube/bin
	I0917 18:02:28.092747  365951 out.go:352] Setting JSON to false
	I0917 18:02:28.092824  365951 mustload.go:65] Loading cluster: ha-974981
	I0917 18:02:28.092904  365951 notify.go:220] Checking for updates...
	I0917 18:02:28.093931  365951 config.go:182] Loaded profile config "ha-974981": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
	I0917 18:02:28.093982  365951 status.go:255] checking status of ha-974981 ...
	I0917 18:02:28.094640  365951 cli_runner.go:164] Run: docker container inspect ha-974981 --format={{.State.Status}}
	I0917 18:02:28.111559  365951 status.go:330] ha-974981 host status = "Stopped" (err=<nil>)
	I0917 18:02:28.111582  365951 status.go:343] host is not running, skipping remaining checks
	I0917 18:02:28.111589  365951 status.go:257] ha-974981 status: &{Name:ha-974981 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0917 18:02:28.111622  365951 status.go:255] checking status of ha-974981-m02 ...
	I0917 18:02:28.112035  365951 cli_runner.go:164] Run: docker container inspect ha-974981-m02 --format={{.State.Status}}
	I0917 18:02:28.133253  365951 status.go:330] ha-974981-m02 host status = "Stopped" (err=<nil>)
	I0917 18:02:28.133272  365951 status.go:343] host is not running, skipping remaining checks
	I0917 18:02:28.133279  365951 status.go:257] ha-974981-m02 status: &{Name:ha-974981-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0917 18:02:28.133298  365951 status.go:255] checking status of ha-974981-m04 ...
	I0917 18:02:28.133590  365951 cli_runner.go:164] Run: docker container inspect ha-974981-m04 --format={{.State.Status}}
	I0917 18:02:28.151600  365951 status.go:330] ha-974981-m04 host status = "Stopped" (err=<nil>)
	I0917 18:02:28.151620  365951 status.go:343] host is not running, skipping remaining checks
	I0917 18:02:28.151628  365951 status.go:257] ha-974981-m04 status: &{Name:ha-974981-m04 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiControlPlane/serial/StopCluster (35.99s)

TestMultiControlPlane/serial/RestartCluster (79.88s)

=== RUN   TestMultiControlPlane/serial/RestartCluster
ha_test.go:560: (dbg) Run:  out/minikube-linux-arm64 start -p ha-974981 --wait=true -v=7 --alsologtostderr --driver=docker  --container-runtime=containerd
ha_test.go:560: (dbg) Done: out/minikube-linux-arm64 start -p ha-974981 --wait=true -v=7 --alsologtostderr --driver=docker  --container-runtime=containerd: (1m18.952960404s)
ha_test.go:566: (dbg) Run:  out/minikube-linux-arm64 -p ha-974981 status -v=7 --alsologtostderr
ha_test.go:584: (dbg) Run:  kubectl get nodes
ha_test.go:592: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/RestartCluster (79.88s)

TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.55s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterClusterRestart
ha_test.go:390: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.55s)

TestMultiControlPlane/serial/AddSecondaryNode (42.47s)

=== RUN   TestMultiControlPlane/serial/AddSecondaryNode
ha_test.go:605: (dbg) Run:  out/minikube-linux-arm64 node add -p ha-974981 --control-plane -v=7 --alsologtostderr
ha_test.go:605: (dbg) Done: out/minikube-linux-arm64 node add -p ha-974981 --control-plane -v=7 --alsologtostderr: (41.374345802s)
ha_test.go:611: (dbg) Run:  out/minikube-linux-arm64 -p ha-974981 status -v=7 --alsologtostderr
ha_test.go:611: (dbg) Done: out/minikube-linux-arm64 -p ha-974981 status -v=7 --alsologtostderr: (1.092237332s)
--- PASS: TestMultiControlPlane/serial/AddSecondaryNode (42.47s)

TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.83s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd
ha_test.go:281: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.83s)

TestJSONOutput/start/Command (80.36s)

=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 start -p json-output-277724 --output=json --user=testUser --memory=2200 --wait=true --driver=docker  --container-runtime=containerd
E0917 18:04:41.175105  299255 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19662-293874/.minikube/profiles/functional-566937/client.crt: no such file or directory" logger="UnhandledError"
E0917 18:05:08.875943  299255 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19662-293874/.minikube/profiles/functional-566937/client.crt: no such file or directory" logger="UnhandledError"
E0917 18:05:43.597876  299255 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19662-293874/.minikube/profiles/addons-029117/client.crt: no such file or directory" logger="UnhandledError"
json_output_test.go:63: (dbg) Done: out/minikube-linux-arm64 start -p json-output-277724 --output=json --user=testUser --memory=2200 --wait=true --driver=docker  --container-runtime=containerd: (1m20.350447177s)
--- PASS: TestJSONOutput/start/Command (80.36s)

TestJSONOutput/start/Audit (0s)

=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/pause/Command (0.99s)

=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 pause -p json-output-277724 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.99s)

TestJSONOutput/pause/Audit (0s)

=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/unpause/Command (0.65s)

=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 unpause -p json-output-277724 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.65s)

TestJSONOutput/unpause/Audit (0s)

=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/stop/Command (5.79s)

=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 stop -p json-output-277724 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-linux-arm64 stop -p json-output-277724 --output=json --user=testUser: (5.793304243s)
--- PASS: TestJSONOutput/stop/Command (5.79s)

TestJSONOutput/stop/Audit (0s)

=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

TestErrorJSONOutput (0.22s)

=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-arm64 start -p json-output-error-414281 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p json-output-error-414281 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (76.534041ms)

-- stdout --
	{"specversion":"1.0","id":"02b831a7-4406-4a42-b878-f7555908e729","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-414281] minikube v1.34.0 on Ubuntu 20.04 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"aba3bfbf-bd29-401e-943c-d73343dd6845","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=19662"}}
	{"specversion":"1.0","id":"0007e756-f7e2-4c08-962a-6b09f43e02e2","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"40b8fbd6-bd5f-46aa-a6c8-fe782d144335","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/19662-293874/kubeconfig"}}
	{"specversion":"1.0","id":"b3c2dc9d-1cc9-4b51-8f1d-8b40883328b2","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/19662-293874/.minikube"}}
	{"specversion":"1.0","id":"a099894b-5e5b-449c-b921-c721eb955780","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-arm64"}}
	{"specversion":"1.0","id":"c2dec300-b42e-42cc-b419-43678282cfa4","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"3a34b2e3-df63-4461-b9c4-f90adcbfa3b7","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/arm64","name":"DRV_UNSUPPORTED_OS","url":""}}

-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-414281" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p json-output-error-414281
--- PASS: TestErrorJSONOutput (0.22s)

TestKicCustomNetwork/create_custom_network (40.53s)

=== RUN   TestKicCustomNetwork/create_custom_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-arm64 start -p docker-network-278380 --network=
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-arm64 start -p docker-network-278380 --network=: (38.443214312s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-278380" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p docker-network-278380
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p docker-network-278380: (2.067177206s)
--- PASS: TestKicCustomNetwork/create_custom_network (40.53s)

TestKicCustomNetwork/use_default_bridge_network (33.43s)

=== RUN   TestKicCustomNetwork/use_default_bridge_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-arm64 start -p docker-network-131487 --network=bridge
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-arm64 start -p docker-network-131487 --network=bridge: (31.533222703s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-131487" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p docker-network-131487
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p docker-network-131487: (1.878381169s)
--- PASS: TestKicCustomNetwork/use_default_bridge_network (33.43s)

TestKicExistingNetwork (32.25s)

=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
kic_custom_network_test.go:93: (dbg) Run:  out/minikube-linux-arm64 start -p existing-network-489690 --network=existing-network
kic_custom_network_test.go:93: (dbg) Done: out/minikube-linux-arm64 start -p existing-network-489690 --network=existing-network: (30.073072865s)
helpers_test.go:175: Cleaning up "existing-network-489690" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p existing-network-489690
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p existing-network-489690: (2.023386818s)
--- PASS: TestKicExistingNetwork (32.25s)

TestKicCustomSubnet (34.3s)

=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p custom-subnet-511512 --subnet=192.168.60.0/24
kic_custom_network_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p custom-subnet-511512 --subnet=192.168.60.0/24: (32.24883204s)
kic_custom_network_test.go:161: (dbg) Run:  docker network inspect custom-subnet-511512 --format "{{(index .IPAM.Config 0).Subnet}}"
helpers_test.go:175: Cleaning up "custom-subnet-511512" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p custom-subnet-511512
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p custom-subnet-511512: (2.027064088s)
--- PASS: TestKicCustomSubnet (34.30s)

TestKicStaticIP (32.26s)

=== RUN   TestKicStaticIP
kic_custom_network_test.go:132: (dbg) Run:  out/minikube-linux-arm64 start -p static-ip-899237 --static-ip=192.168.200.200
kic_custom_network_test.go:132: (dbg) Done: out/minikube-linux-arm64 start -p static-ip-899237 --static-ip=192.168.200.200: (30.031710086s)
kic_custom_network_test.go:138: (dbg) Run:  out/minikube-linux-arm64 -p static-ip-899237 ip
helpers_test.go:175: Cleaning up "static-ip-899237" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p static-ip-899237
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p static-ip-899237: (2.07335159s)
--- PASS: TestKicStaticIP (32.26s)

TestMainNoArgs (0.06s)

=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-linux-arm64
--- PASS: TestMainNoArgs (0.06s)

TestMinikubeProfile (71.27s)

=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-arm64 start -p first-674104 --driver=docker  --container-runtime=containerd
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-arm64 start -p first-674104 --driver=docker  --container-runtime=containerd: (31.566014144s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-arm64 start -p second-677271 --driver=docker  --container-runtime=containerd
E0917 18:09:41.176412  299255 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19662-293874/.minikube/profiles/functional-566937/client.crt: no such file or directory" logger="UnhandledError"
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-arm64 start -p second-677271 --driver=docker  --container-runtime=containerd: (34.376127619s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-arm64 profile first-674104
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-arm64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-arm64 profile second-677271
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-arm64 profile list -ojson
helpers_test.go:175: Cleaning up "second-677271" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p second-677271
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p second-677271: (2.035504481s)
helpers_test.go:175: Cleaning up "first-674104" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p first-674104
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p first-674104: (1.964908068s)
--- PASS: TestMinikubeProfile (71.27s)

TestMountStart/serial/StartWithMountFirst (7.19s)

=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-1-594751 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=containerd
mount_start_test.go:98: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-1-594751 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=containerd: (6.191043742s)
--- PASS: TestMountStart/serial/StartWithMountFirst (7.19s)

TestMountStart/serial/VerifyMountFirst (0.28s)

=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-1-594751 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountFirst (0.28s)

TestMountStart/serial/StartWithMountSecond (6.05s)

=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-2-596581 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=containerd
mount_start_test.go:98: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-2-596581 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=containerd: (5.051545836s)
--- PASS: TestMountStart/serial/StartWithMountSecond (6.05s)

TestMountStart/serial/VerifyMountSecond (0.28s)

=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-596581 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountSecond (0.28s)

TestMountStart/serial/DeleteFirst (1.62s)

=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-arm64 delete -p mount-start-1-594751 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-arm64 delete -p mount-start-1-594751 --alsologtostderr -v=5: (1.618494458s)
--- PASS: TestMountStart/serial/DeleteFirst (1.62s)

TestMountStart/serial/VerifyMountPostDelete (0.28s)

=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-596581 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.28s)

TestMountStart/serial/Stop (1.2s)

=== RUN   TestMountStart/serial/Stop
mount_start_test.go:155: (dbg) Run:  out/minikube-linux-arm64 stop -p mount-start-2-596581
mount_start_test.go:155: (dbg) Done: out/minikube-linux-arm64 stop -p mount-start-2-596581: (1.20257632s)
--- PASS: TestMountStart/serial/Stop (1.20s)

TestMountStart/serial/RestartStopped (7.86s)

=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:166: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-2-596581
mount_start_test.go:166: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-2-596581: (6.858829063s)
--- PASS: TestMountStart/serial/RestartStopped (7.86s)

TestMountStart/serial/VerifyMountPostStop (0.27s)

=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-596581 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.27s)

TestMultiNode/serial/FreshStart2Nodes (108.45s)

=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-472184 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker  --container-runtime=containerd
E0917 18:12:06.661376  299255 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19662-293874/.minikube/profiles/addons-029117/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:96: (dbg) Done: out/minikube-linux-arm64 start -p multinode-472184 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker  --container-runtime=containerd: (1m47.926363318s)
multinode_test.go:102: (dbg) Run:  out/minikube-linux-arm64 -p multinode-472184 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (108.45s)

TestMultiNode/serial/DeployApp2Nodes (19.52s)

=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-472184 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:498: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-472184 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Done: out/minikube-linux-arm64 kubectl -p multinode-472184 -- rollout status deployment/busybox: (17.416385935s)
multinode_test.go:505: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-472184 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:528: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-472184 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:536: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-472184 -- exec busybox-7dff88458-2cnbt -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-472184 -- exec busybox-7dff88458-6tths -- nslookup kubernetes.io
multinode_test.go:546: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-472184 -- exec busybox-7dff88458-2cnbt -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-472184 -- exec busybox-7dff88458-6tths -- nslookup kubernetes.default
multinode_test.go:554: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-472184 -- exec busybox-7dff88458-2cnbt -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-472184 -- exec busybox-7dff88458-6tths -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (19.52s)

TestMultiNode/serial/PingHostFrom2Pods (1s)

=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-472184 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:572: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-472184 -- exec busybox-7dff88458-2cnbt -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-472184 -- exec busybox-7dff88458-2cnbt -- sh -c "ping -c 1 192.168.67.1"
multinode_test.go:572: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-472184 -- exec busybox-7dff88458-6tths -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-472184 -- exec busybox-7dff88458-6tths -- sh -c "ping -c 1 192.168.67.1"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (1.00s)

TestMultiNode/serial/AddNode (16.85s)

=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-linux-arm64 node add -p multinode-472184 -v 3 --alsologtostderr
multinode_test.go:121: (dbg) Done: out/minikube-linux-arm64 node add -p multinode-472184 -v 3 --alsologtostderr: (16.162399768s)
multinode_test.go:127: (dbg) Run:  out/minikube-linux-arm64 -p multinode-472184 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (16.85s)

TestMultiNode/serial/MultiNodeLabels (0.1s)

=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-472184 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.10s)

TestMultiNode/serial/ProfileList (0.34s)

=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.34s)

TestMultiNode/serial/CopyFile (10.37s)

=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-linux-arm64 -p multinode-472184 status --output json --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-472184 cp testdata/cp-test.txt multinode-472184:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-472184 ssh -n multinode-472184 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-472184 cp multinode-472184:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile3918140836/001/cp-test_multinode-472184.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-472184 ssh -n multinode-472184 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-472184 cp multinode-472184:/home/docker/cp-test.txt multinode-472184-m02:/home/docker/cp-test_multinode-472184_multinode-472184-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-472184 ssh -n multinode-472184 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-472184 ssh -n multinode-472184-m02 "sudo cat /home/docker/cp-test_multinode-472184_multinode-472184-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-472184 cp multinode-472184:/home/docker/cp-test.txt multinode-472184-m03:/home/docker/cp-test_multinode-472184_multinode-472184-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-472184 ssh -n multinode-472184 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-472184 ssh -n multinode-472184-m03 "sudo cat /home/docker/cp-test_multinode-472184_multinode-472184-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-472184 cp testdata/cp-test.txt multinode-472184-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-472184 ssh -n multinode-472184-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-472184 cp multinode-472184-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile3918140836/001/cp-test_multinode-472184-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-472184 ssh -n multinode-472184-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-472184 cp multinode-472184-m02:/home/docker/cp-test.txt multinode-472184:/home/docker/cp-test_multinode-472184-m02_multinode-472184.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-472184 ssh -n multinode-472184-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-472184 ssh -n multinode-472184 "sudo cat /home/docker/cp-test_multinode-472184-m02_multinode-472184.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-472184 cp multinode-472184-m02:/home/docker/cp-test.txt multinode-472184-m03:/home/docker/cp-test_multinode-472184-m02_multinode-472184-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-472184 ssh -n multinode-472184-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-472184 ssh -n multinode-472184-m03 "sudo cat /home/docker/cp-test_multinode-472184-m02_multinode-472184-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-472184 cp testdata/cp-test.txt multinode-472184-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-472184 ssh -n multinode-472184-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-472184 cp multinode-472184-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile3918140836/001/cp-test_multinode-472184-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-472184 ssh -n multinode-472184-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-472184 cp multinode-472184-m03:/home/docker/cp-test.txt multinode-472184:/home/docker/cp-test_multinode-472184-m03_multinode-472184.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-472184 ssh -n multinode-472184-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-472184 ssh -n multinode-472184 "sudo cat /home/docker/cp-test_multinode-472184-m03_multinode-472184.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-472184 cp multinode-472184-m03:/home/docker/cp-test.txt multinode-472184-m02:/home/docker/cp-test_multinode-472184-m03_multinode-472184-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-472184 ssh -n multinode-472184-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-472184 ssh -n multinode-472184-m02 "sudo cat /home/docker/cp-test_multinode-472184-m03_multinode-472184-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (10.37s)

TestMultiNode/serial/StopNode (2.28s)

=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-linux-arm64 -p multinode-472184 node stop m03
multinode_test.go:248: (dbg) Done: out/minikube-linux-arm64 -p multinode-472184 node stop m03: (1.233636392s)
multinode_test.go:254: (dbg) Run:  out/minikube-linux-arm64 -p multinode-472184 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-472184 status: exit status 7 (526.435965ms)

-- stdout --
	multinode-472184
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-472184-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-472184-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-linux-arm64 -p multinode-472184 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-472184 status --alsologtostderr: exit status 7 (518.420995ms)

-- stdout --
	multinode-472184
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-472184-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-472184-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I0917 18:13:22.066781  419440 out.go:345] Setting OutFile to fd 1 ...
	I0917 18:13:22.066983  419440 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0917 18:13:22.066993  419440 out.go:358] Setting ErrFile to fd 2...
	I0917 18:13:22.066999  419440 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0917 18:13:22.067278  419440 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19662-293874/.minikube/bin
	I0917 18:13:22.067472  419440 out.go:352] Setting JSON to false
	I0917 18:13:22.067512  419440 mustload.go:65] Loading cluster: multinode-472184
	I0917 18:13:22.067587  419440 notify.go:220] Checking for updates...
	I0917 18:13:22.069007  419440 config.go:182] Loaded profile config "multinode-472184": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
	I0917 18:13:22.069035  419440 status.go:255] checking status of multinode-472184 ...
	I0917 18:13:22.069827  419440 cli_runner.go:164] Run: docker container inspect multinode-472184 --format={{.State.Status}}
	I0917 18:13:22.088807  419440 status.go:330] multinode-472184 host status = "Running" (err=<nil>)
	I0917 18:13:22.088835  419440 host.go:66] Checking if "multinode-472184" exists ...
	I0917 18:13:22.089166  419440 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-472184
	I0917 18:13:22.117775  419440 host.go:66] Checking if "multinode-472184" exists ...
	I0917 18:13:22.118142  419440 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0917 18:13:22.118211  419440 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-472184
	I0917 18:13:22.134717  419440 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33278 SSHKeyPath:/home/jenkins/minikube-integration/19662-293874/.minikube/machines/multinode-472184/id_rsa Username:docker}
	I0917 18:13:22.237572  419440 ssh_runner.go:195] Run: systemctl --version
	I0917 18:13:22.242018  419440 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0917 18:13:22.254291  419440 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0917 18:13:22.307449  419440 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:40 OomKillDisable:true NGoroutines:61 SystemTime:2024-09-17 18:13:22.297910238 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1070-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:27.2.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.2]] Warnings:<nil>}}
	I0917 18:13:22.308092  419440 kubeconfig.go:125] found "multinode-472184" server: "https://192.168.67.2:8443"
	I0917 18:13:22.308132  419440 api_server.go:166] Checking apiserver status ...
	I0917 18:13:22.308179  419440 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0917 18:13:22.319052  419440 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1445/cgroup
	I0917 18:13:22.328692  419440 api_server.go:182] apiserver freezer: "6:freezer:/docker/4a92603b4a2c8df1114be83675fce71ce4b57c363e263a5e2fe4a4c9110ad5cb/kubepods/burstable/pod9acfa0ee26df654dbf75d1f79219fd82/0eb8d034ecda58f91ac06cf7b45b43cd7adaa7d3fdd32473c27033a12368b0a1"
	I0917 18:13:22.328761  419440 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/4a92603b4a2c8df1114be83675fce71ce4b57c363e263a5e2fe4a4c9110ad5cb/kubepods/burstable/pod9acfa0ee26df654dbf75d1f79219fd82/0eb8d034ecda58f91ac06cf7b45b43cd7adaa7d3fdd32473c27033a12368b0a1/freezer.state
	I0917 18:13:22.337414  419440 api_server.go:204] freezer state: "THAWED"
	I0917 18:13:22.337447  419440 api_server.go:253] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
	I0917 18:13:22.345188  419440 api_server.go:279] https://192.168.67.2:8443/healthz returned 200:
	ok
	I0917 18:13:22.345222  419440 status.go:422] multinode-472184 apiserver status = Running (err=<nil>)
	I0917 18:13:22.345234  419440 status.go:257] multinode-472184 status: &{Name:multinode-472184 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0917 18:13:22.345252  419440 status.go:255] checking status of multinode-472184-m02 ...
	I0917 18:13:22.345566  419440 cli_runner.go:164] Run: docker container inspect multinode-472184-m02 --format={{.State.Status}}
	I0917 18:13:22.361640  419440 status.go:330] multinode-472184-m02 host status = "Running" (err=<nil>)
	I0917 18:13:22.361664  419440 host.go:66] Checking if "multinode-472184-m02" exists ...
	I0917 18:13:22.361999  419440 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-472184-m02
	I0917 18:13:22.378371  419440 host.go:66] Checking if "multinode-472184-m02" exists ...
	I0917 18:13:22.378695  419440 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0917 18:13:22.378743  419440 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-472184-m02
	I0917 18:13:22.395353  419440 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33283 SSHKeyPath:/home/jenkins/minikube-integration/19662-293874/.minikube/machines/multinode-472184-m02/id_rsa Username:docker}
	I0917 18:13:22.492685  419440 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0917 18:13:22.504262  419440 status.go:257] multinode-472184-m02 status: &{Name:multinode-472184-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I0917 18:13:22.504298  419440 status.go:255] checking status of multinode-472184-m03 ...
	I0917 18:13:22.504615  419440 cli_runner.go:164] Run: docker container inspect multinode-472184-m03 --format={{.State.Status}}
	I0917 18:13:22.522616  419440 status.go:330] multinode-472184-m03 host status = "Stopped" (err=<nil>)
	I0917 18:13:22.522642  419440 status.go:343] host is not running, skipping remaining checks
	I0917 18:13:22.522649  419440 status.go:257] multinode-472184-m03 status: &{Name:multinode-472184-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.28s)

TestMultiNode/serial/StartAfterStop (9.94s)

=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-linux-arm64 -p multinode-472184 node start m03 -v=7 --alsologtostderr
multinode_test.go:282: (dbg) Done: out/minikube-linux-arm64 -p multinode-472184 node start m03 -v=7 --alsologtostderr: (9.169353632s)
multinode_test.go:290: (dbg) Run:  out/minikube-linux-arm64 -p multinode-472184 status -v=7 --alsologtostderr
multinode_test.go:306: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (9.94s)

TestMultiNode/serial/RestartKeepsNodes (102.13s)

=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-472184
multinode_test.go:321: (dbg) Run:  out/minikube-linux-arm64 stop -p multinode-472184
multinode_test.go:321: (dbg) Done: out/minikube-linux-arm64 stop -p multinode-472184: (25.066794496s)
multinode_test.go:326: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-472184 --wait=true -v=8 --alsologtostderr
E0917 18:14:41.174624  299255 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19662-293874/.minikube/profiles/functional-566937/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:326: (dbg) Done: out/minikube-linux-arm64 start -p multinode-472184 --wait=true -v=8 --alsologtostderr: (1m16.944622771s)
multinode_test.go:331: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-472184
--- PASS: TestMultiNode/serial/RestartKeepsNodes (102.13s)

TestMultiNode/serial/DeleteNode (5.73s)

=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-linux-arm64 -p multinode-472184 node delete m03
multinode_test.go:416: (dbg) Done: out/minikube-linux-arm64 -p multinode-472184 node delete m03: (5.014038161s)
multinode_test.go:422: (dbg) Run:  out/minikube-linux-arm64 -p multinode-472184 status --alsologtostderr
multinode_test.go:436: (dbg) Run:  kubectl get nodes
multinode_test.go:444: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (5.73s)

TestMultiNode/serial/StopMultiNode (24.01s)

=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-linux-arm64 -p multinode-472184 stop
E0917 18:15:43.598321  299255 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19662-293874/.minikube/profiles/addons-029117/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:345: (dbg) Done: out/minikube-linux-arm64 -p multinode-472184 stop: (23.842970447s)
multinode_test.go:351: (dbg) Run:  out/minikube-linux-arm64 -p multinode-472184 status
multinode_test.go:351: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-472184 status: exit status 7 (83.150602ms)

-- stdout --
	multinode-472184
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-472184-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
multinode_test.go:358: (dbg) Run:  out/minikube-linux-arm64 -p multinode-472184 status --alsologtostderr
multinode_test.go:358: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-472184 status --alsologtostderr: exit status 7 (83.563343ms)

-- stdout --
	multinode-472184
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-472184-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I0917 18:15:44.303944  427929 out.go:345] Setting OutFile to fd 1 ...
	I0917 18:15:44.304139  427929 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0917 18:15:44.304165  427929 out.go:358] Setting ErrFile to fd 2...
	I0917 18:15:44.304185  427929 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0917 18:15:44.304463  427929 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19662-293874/.minikube/bin
	I0917 18:15:44.304692  427929 out.go:352] Setting JSON to false
	I0917 18:15:44.304753  427929 mustload.go:65] Loading cluster: multinode-472184
	I0917 18:15:44.304835  427929 notify.go:220] Checking for updates...
	I0917 18:15:44.305868  427929 config.go:182] Loaded profile config "multinode-472184": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
	I0917 18:15:44.305907  427929 status.go:255] checking status of multinode-472184 ...
	I0917 18:15:44.306544  427929 cli_runner.go:164] Run: docker container inspect multinode-472184 --format={{.State.Status}}
	I0917 18:15:44.323139  427929 status.go:330] multinode-472184 host status = "Stopped" (err=<nil>)
	I0917 18:15:44.323156  427929 status.go:343] host is not running, skipping remaining checks
	I0917 18:15:44.323163  427929 status.go:257] multinode-472184 status: &{Name:multinode-472184 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0917 18:15:44.323201  427929 status.go:255] checking status of multinode-472184-m02 ...
	I0917 18:15:44.323509  427929 cli_runner.go:164] Run: docker container inspect multinode-472184-m02 --format={{.State.Status}}
	I0917 18:15:44.343054  427929 status.go:330] multinode-472184-m02 host status = "Stopped" (err=<nil>)
	I0917 18:15:44.343072  427929 status.go:343] host is not running, skipping remaining checks
	I0917 18:15:44.343079  427929 status.go:257] multinode-472184-m02 status: &{Name:multinode-472184-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (24.01s)

TestMultiNode/serial/RestartMultiNode (49.54s)

=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-472184 --wait=true -v=8 --alsologtostderr --driver=docker  --container-runtime=containerd
E0917 18:16:04.237446  299255 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19662-293874/.minikube/profiles/functional-566937/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:376: (dbg) Done: out/minikube-linux-arm64 start -p multinode-472184 --wait=true -v=8 --alsologtostderr --driver=docker  --container-runtime=containerd: (48.871020357s)
multinode_test.go:382: (dbg) Run:  out/minikube-linux-arm64 -p multinode-472184 status --alsologtostderr
multinode_test.go:396: (dbg) Run:  kubectl get nodes
multinode_test.go:404: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (49.54s)

TestMultiNode/serial/ValidateNameConflict (34.68s)

=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-472184
multinode_test.go:464: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-472184-m02 --driver=docker  --container-runtime=containerd
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p multinode-472184-m02 --driver=docker  --container-runtime=containerd: exit status 14 (90.479767ms)

-- stdout --
	* [multinode-472184-m02] minikube v1.34.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=19662
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19662-293874/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19662-293874/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

-- /stdout --
** stderr ** 
	! Profile name 'multinode-472184-m02' is duplicated with machine name 'multinode-472184-m02' in profile 'multinode-472184'
	X Exiting due to MK_USAGE: Profile name should be unique

** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-472184-m03 --driver=docker  --container-runtime=containerd
multinode_test.go:472: (dbg) Done: out/minikube-linux-arm64 start -p multinode-472184-m03 --driver=docker  --container-runtime=containerd: (32.17480627s)
multinode_test.go:479: (dbg) Run:  out/minikube-linux-arm64 node add -p multinode-472184
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-linux-arm64 node add -p multinode-472184: exit status 80 (348.786129ms)

-- stdout --
	* Adding node m03 to cluster multinode-472184 as [worker]
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-472184-m03 already exists in multinode-472184-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
multinode_test.go:484: (dbg) Run:  out/minikube-linux-arm64 delete -p multinode-472184-m03
multinode_test.go:484: (dbg) Done: out/minikube-linux-arm64 delete -p multinode-472184-m03: (2.012125772s)
--- PASS: TestMultiNode/serial/ValidateNameConflict (34.68s)

TestPreload (121.29s)

=== RUN   TestPreload
preload_test.go:44: (dbg) Run:  out/minikube-linux-arm64 start -p test-preload-164999 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.24.4
preload_test.go:44: (dbg) Done: out/minikube-linux-arm64 start -p test-preload-164999 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.24.4: (1m23.807354503s)
preload_test.go:52: (dbg) Run:  out/minikube-linux-arm64 -p test-preload-164999 image pull gcr.io/k8s-minikube/busybox
preload_test.go:52: (dbg) Done: out/minikube-linux-arm64 -p test-preload-164999 image pull gcr.io/k8s-minikube/busybox: (2.019152532s)
preload_test.go:58: (dbg) Run:  out/minikube-linux-arm64 stop -p test-preload-164999
preload_test.go:58: (dbg) Done: out/minikube-linux-arm64 stop -p test-preload-164999: (12.053937756s)
preload_test.go:66: (dbg) Run:  out/minikube-linux-arm64 start -p test-preload-164999 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=containerd
preload_test.go:66: (dbg) Done: out/minikube-linux-arm64 start -p test-preload-164999 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=containerd: (20.474551851s)
preload_test.go:71: (dbg) Run:  out/minikube-linux-arm64 -p test-preload-164999 image list
helpers_test.go:175: Cleaning up "test-preload-164999" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p test-preload-164999
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p test-preload-164999: (2.585771171s)
--- PASS: TestPreload (121.29s)

TestScheduledStopUnix (108.51s)

=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-arm64 start -p scheduled-stop-814421 --memory=2048 --driver=docker  --container-runtime=containerd
E0917 18:19:41.175834  299255 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19662-293874/.minikube/profiles/functional-566937/client.crt: no such file or directory" logger="UnhandledError"
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-arm64 start -p scheduled-stop-814421 --memory=2048 --driver=docker  --container-runtime=containerd: (31.803005755s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-814421 --schedule 5m
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-linux-arm64 status --format={{.TimeToStop}} -p scheduled-stop-814421 -n scheduled-stop-814421
scheduled_stop_test.go:169: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-814421 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-814421 --cancel-scheduled
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-814421 -n scheduled-stop-814421
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-arm64 status -p scheduled-stop-814421
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-814421 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
E0917 18:20:43.598329  299255 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19662-293874/.minikube/profiles/addons-029117/client.crt: no such file or directory" logger="UnhandledError"
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-arm64 status -p scheduled-stop-814421
scheduled_stop_test.go:205: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p scheduled-stop-814421: exit status 7 (69.732652ms)

-- stdout --
	scheduled-stop-814421
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-814421 -n scheduled-stop-814421
scheduled_stop_test.go:176: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-814421 -n scheduled-stop-814421: exit status 7 (74.79501ms)

-- stdout --
	Stopped

-- /stdout --
scheduled_stop_test.go:176: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-814421" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p scheduled-stop-814421
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p scheduled-stop-814421: (5.111275087s)
--- PASS: TestScheduledStopUnix (108.51s)

TestInsufficientStorage (10.4s)

=== RUN   TestInsufficientStorage
status_test.go:50: (dbg) Run:  out/minikube-linux-arm64 start -p insufficient-storage-546476 --memory=2048 --output=json --wait=true --driver=docker  --container-runtime=containerd
status_test.go:50: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p insufficient-storage-546476 --memory=2048 --output=json --wait=true --driver=docker  --container-runtime=containerd: exit status 26 (7.926608557s)

-- stdout --
	{"specversion":"1.0","id":"2ae82c13-f830-4e01-a281-208349cd96ed","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[insufficient-storage-546476] minikube v1.34.0 on Ubuntu 20.04 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"4a5a7f1d-96ef-4ee6-98d8-a4b2c51cc01b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=19662"}}
	{"specversion":"1.0","id":"eb771c65-3613-4cb9-9fb0-a59034a8c366","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"ed0fea06-af92-4071-ab35-c657b4807a52","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/19662-293874/kubeconfig"}}
	{"specversion":"1.0","id":"d2f81469-fe92-45d3-8203-202e3f91d4d4","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/19662-293874/.minikube"}}
	{"specversion":"1.0","id":"e1d7c48a-e37e-4757-a834-ba8e0aede056","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-arm64"}}
	{"specversion":"1.0","id":"09f8209d-874b-4bab-a16a-83a874861dd5","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"8bdac604-4e44-4f57-8969-cdac2836d194","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_STORAGE_CAPACITY=100"}}
	{"specversion":"1.0","id":"ab850349-969e-4474-bd1d-f1f8ef846451","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_AVAILABLE_STORAGE=19"}}
	{"specversion":"1.0","id":"6f6f8061-520f-4713-9681-15ba1b3640bc","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the docker driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"b2fbbe14-4767-4edf-a69b-0ee41ef78569","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Using Docker driver with root privileges"}}
	{"specversion":"1.0","id":"917fd291-4f30-4cc8-ade0-7b4a7785204b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting \"insufficient-storage-546476\" primary control-plane node in \"insufficient-storage-546476\" cluster","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"b54ded4f-27a7-4dcc-ac2b-460f99f867d4","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"5","message":"Pulling base image v0.0.45-1726589491-19662 ...","name":"Pulling Base Image","totalsteps":"19"}}
	{"specversion":"1.0","id":"93ff032b-e571-4773-8334-79d2f202ea31","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"8","message":"Creating docker container (CPUs=2, Memory=2048MB) ...","name":"Creating Container","totalsteps":"19"}}
	{"specversion":"1.0","id":"91a4f6c9-bf2b-4538-a791-1436d5909735","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"Try one or more of the following to free up space on the device:\n\t\n\t\t\t1. Run \"docker system prune\" to remove unused Docker data (optionally with \"-a\")\n\t\t\t2. Increase the storage allocated to Docker for Desktop by clicking on:\n\t\t\t\tDocker icon \u003e Preferences \u003e Resources \u003e Disk Image Size\n\t\t\t3. Run \"minikube ssh -- docker system prune\" if using the Docker container runtime","exitcode":"26","issues":"https://github.com/kubernetes/minikube/issues/9024","message":"Docker is out of disk space! (/var is at 100% of capacity). You can pass '--force' to skip this check.","name":"RSRC_DOCKER_STORAGE","url":""}}

-- /stdout --
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p insufficient-storage-546476 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p insufficient-storage-546476 --output=json --layout=cluster: exit status 7 (291.050969ms)

-- stdout --
	{"Name":"insufficient-storage-546476","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","Step":"Creating Container","StepDetail":"Creating docker container (CPUs=2, Memory=2048MB) ...","BinaryVersion":"v1.34.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-546476","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

-- /stdout --
** stderr ** 
	E0917 18:21:10.562565  446593 status.go:417] kubeconfig endpoint: get endpoint: "insufficient-storage-546476" does not appear in /home/jenkins/minikube-integration/19662-293874/kubeconfig

** /stderr **
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p insufficient-storage-546476 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p insufficient-storage-546476 --output=json --layout=cluster: exit status 7 (304.146461ms)

-- stdout --
	{"Name":"insufficient-storage-546476","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","BinaryVersion":"v1.34.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-546476","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

-- /stdout --
** stderr ** 
	E0917 18:21:10.869687  446653 status.go:417] kubeconfig endpoint: get endpoint: "insufficient-storage-546476" does not appear in /home/jenkins/minikube-integration/19662-293874/kubeconfig
	E0917 18:21:10.880440  446653 status.go:560] unable to read event log: stat: stat /home/jenkins/minikube-integration/19662-293874/.minikube/profiles/insufficient-storage-546476/events.json: no such file or directory

** /stderr **
helpers_test.go:175: Cleaning up "insufficient-storage-546476" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p insufficient-storage-546476
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p insufficient-storage-546476: (1.874920328s)
--- PASS: TestInsufficientStorage (10.40s)

TestRunningBinaryUpgrade (81.98s)

=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /tmp/minikube-v1.26.0.844352753 start -p running-upgrade-856635 --memory=2200 --vm-driver=docker  --container-runtime=containerd
version_upgrade_test.go:120: (dbg) Done: /tmp/minikube-v1.26.0.844352753 start -p running-upgrade-856635 --memory=2200 --vm-driver=docker  --container-runtime=containerd: (46.039065357s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-linux-arm64 start -p running-upgrade-856635 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:130: (dbg) Done: out/minikube-linux-arm64 start -p running-upgrade-856635 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (32.746086522s)
helpers_test.go:175: Cleaning up "running-upgrade-856635" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p running-upgrade-856635
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p running-upgrade-856635: (2.127996633s)
--- PASS: TestRunningBinaryUpgrade (81.98s)

TestKubernetesUpgrade (352.86s)

=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-001490 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:222: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-001490 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (1m1.586774811s)
version_upgrade_test.go:227: (dbg) Run:  out/minikube-linux-arm64 stop -p kubernetes-upgrade-001490
version_upgrade_test.go:227: (dbg) Done: out/minikube-linux-arm64 stop -p kubernetes-upgrade-001490: (1.247761583s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-linux-arm64 -p kubernetes-upgrade-001490 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-linux-arm64 -p kubernetes-upgrade-001490 status --format={{.Host}}: exit status 7 (66.395921ms)

-- stdout --
	Stopped

-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
version_upgrade_test.go:243: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-001490 --memory=2200 --kubernetes-version=v1.31.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:243: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-001490 --memory=2200 --kubernetes-version=v1.31.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (4m39.886194221s)
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-001490 version --output=json
version_upgrade_test.go:267: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:269: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-001490 --memory=2200 --kubernetes-version=v1.20.0 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:269: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p kubernetes-upgrade-001490 --memory=2200 --kubernetes-version=v1.20.0 --driver=docker  --container-runtime=containerd: exit status 106 (113.849767ms)

-- stdout --
	* [kubernetes-upgrade-001490] minikube v1.34.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=19662
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19662-293874/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19662-293874/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.31.1 cluster to v1.20.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.20.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-001490
	    minikube start -p kubernetes-upgrade-001490 --kubernetes-version=v1.20.0
	    
	    2) Create a second cluster with Kubernetes 1.20.0, by running:
	    
	    minikube start -p kubernetes-upgrade-0014902 --kubernetes-version=v1.20.0
	    
	    3) Use the existing cluster at version Kubernetes 1.31.1, by running:
	    
	    minikube start -p kubernetes-upgrade-001490 --kubernetes-version=v1.31.1
	    

** /stderr **
version_upgrade_test.go:273: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:275: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-001490 --memory=2200 --kubernetes-version=v1.31.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:275: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-001490 --memory=2200 --kubernetes-version=v1.31.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (7.551681111s)
helpers_test.go:175: Cleaning up "kubernetes-upgrade-001490" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p kubernetes-upgrade-001490
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p kubernetes-upgrade-001490: (2.216139969s)
--- PASS: TestKubernetesUpgrade (352.86s)

TestMissingContainerUpgrade (177.22s)

=== RUN   TestMissingContainerUpgrade
=== PAUSE TestMissingContainerUpgrade

=== CONT  TestMissingContainerUpgrade
version_upgrade_test.go:309: (dbg) Run:  /tmp/minikube-v1.26.0.2772406843 start -p missing-upgrade-202000 --memory=2200 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:309: (dbg) Done: /tmp/minikube-v1.26.0.2772406843 start -p missing-upgrade-202000 --memory=2200 --driver=docker  --container-runtime=containerd: (1m36.336395168s)
version_upgrade_test.go:318: (dbg) Run:  docker stop missing-upgrade-202000
version_upgrade_test.go:323: (dbg) Run:  docker rm missing-upgrade-202000
version_upgrade_test.go:329: (dbg) Run:  out/minikube-linux-arm64 start -p missing-upgrade-202000 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:329: (dbg) Done: out/minikube-linux-arm64 start -p missing-upgrade-202000 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (1m17.083243049s)
helpers_test.go:175: Cleaning up "missing-upgrade-202000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p missing-upgrade-202000
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p missing-upgrade-202000: (2.302415771s)
--- PASS: TestMissingContainerUpgrade (177.22s)

TestNoKubernetes/serial/StartNoK8sWithVersion (0.09s)

=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-978390 --no-kubernetes --kubernetes-version=1.20 --driver=docker  --container-runtime=containerd
no_kubernetes_test.go:83: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p NoKubernetes-978390 --no-kubernetes --kubernetes-version=1.20 --driver=docker  --container-runtime=containerd: exit status 14 (92.185834ms)

-- stdout --
	* [NoKubernetes-978390] minikube v1.34.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=19662
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19662-293874/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19662-293874/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.09s)
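The MK_USAGE exit above (status 14) comes from minikube's flag validation: `--kubernetes-version` and `--no-kubernetes` are mutually exclusive. A minimal sketch of that mutual-exclusion check as a shell function (`check_flags` is a hypothetical name for illustration, not minikube source):

```shell
# Hypothetical pre-flight check mirroring the MK_USAGE validation above:
# minikube rejects --kubernetes-version combined with --no-kubernetes
# and exits with status 14 for usage errors.
check_flags() {
  no_k8s=false
  k8s_version=""
  for arg in "$@"; do
    case "$arg" in
      --no-kubernetes)        no_k8s=true ;;
      --kubernetes-version=*) k8s_version="${arg#*=}" ;;
    esac
  done
  if [ "$no_k8s" = true ] && [ -n "$k8s_version" ]; then
    echo "cannot specify --kubernetes-version with --no-kubernetes" >&2
    return 14
  fi
}

check_flags --no-kubernetes --kubernetes-version=1.20 || echo "usage error: exit $?"
```

Either flag alone passes the check; only the combination is rejected, matching the remedy in the log (`minikube config unset kubernetes-version`).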

TestNoKubernetes/serial/StartWithK8s (38.9s)

=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-978390 --driver=docker  --container-runtime=containerd
no_kubernetes_test.go:95: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-978390 --driver=docker  --container-runtime=containerd: (38.24366799s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-arm64 -p NoKubernetes-978390 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (38.90s)

TestNoKubernetes/serial/StartWithStopK8s (17.95s)

=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-978390 --no-kubernetes --driver=docker  --container-runtime=containerd
no_kubernetes_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-978390 --no-kubernetes --driver=docker  --container-runtime=containerd: (15.700836567s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-arm64 -p NoKubernetes-978390 status -o json
no_kubernetes_test.go:200: (dbg) Non-zero exit: out/minikube-linux-arm64 -p NoKubernetes-978390 status -o json: exit status 2 (287.78277ms)

-- stdout --
	{"Name":"NoKubernetes-978390","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

-- /stdout --
no_kubernetes_test.go:124: (dbg) Run:  out/minikube-linux-arm64 delete -p NoKubernetes-978390
no_kubernetes_test.go:124: (dbg) Done: out/minikube-linux-arm64 delete -p NoKubernetes-978390: (1.956262443s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (17.95s)
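The `status -o json` output above is how the test observes that the host container is still running while the Kubernetes components are stopped (hence the non-zero exit status 2). A sketch of pulling those fields out of the one-line JSON with `sed` (`json_field` is a hypothetical helper; a real script would likely use `jq`):

```shell
# Status JSON as reported in the log above (flat key/value shape assumed).
status='{"Name":"NoKubernetes-978390","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}'

# Extract the string value for a given key from flat, single-line JSON.
json_field() {  # json_field <json> <key>
  printf '%s\n' "$1" | sed -n "s/.*\"$2\":\"\([^\"]*\)\".*/\1/p"
}

echo "Host:    $(json_field "$status" Host)"
echo "Kubelet: $(json_field "$status" Kubelet)"
```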

TestNoKubernetes/serial/Start (8.1s)

=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:136: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-978390 --no-kubernetes --driver=docker  --container-runtime=containerd
no_kubernetes_test.go:136: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-978390 --no-kubernetes --driver=docker  --container-runtime=containerd: (8.101710755s)
--- PASS: TestNoKubernetes/serial/Start (8.10s)

TestNoKubernetes/serial/VerifyK8sNotRunning (0.27s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-arm64 ssh -p NoKubernetes-978390 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-arm64 ssh -p NoKubernetes-978390 "sudo systemctl is-active --quiet service kubelet": exit status 1 (270.02446ms)

** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.27s)
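`systemctl is-active --quiet` exits 0 only when the unit is active; an inactive unit typically yields exit status 3, which propagates back through `minikube ssh` as the "Process exited with status 3" seen above. A runnable sketch of the same pattern, with a stand-in function in place of systemctl so it executes anywhere:

```shell
# Stand-in for `sudo systemctl is-active --quiet kubelet` on a host where
# the kubelet unit is inactive (systemctl reports exit status 3).
fake_is_active() {
  return 3
}

rc=0
fake_is_active || rc=$?
if [ "$rc" -eq 0 ]; then
  echo "kubelet is running"
else
  echo "kubelet is not running (exit status $rc)"
fi
```

The test only cares that the exit status is non-zero, which is why it passes despite the stderr noise.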

TestNoKubernetes/serial/ProfileList (0.89s)

=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:169: (dbg) Run:  out/minikube-linux-arm64 profile list
no_kubernetes_test.go:179: (dbg) Run:  out/minikube-linux-arm64 profile list --output=json
--- PASS: TestNoKubernetes/serial/ProfileList (0.89s)

TestNoKubernetes/serial/Stop (1.22s)

=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:158: (dbg) Run:  out/minikube-linux-arm64 stop -p NoKubernetes-978390
no_kubernetes_test.go:158: (dbg) Done: out/minikube-linux-arm64 stop -p NoKubernetes-978390: (1.216790307s)
--- PASS: TestNoKubernetes/serial/Stop (1.22s)

TestNoKubernetes/serial/StartNoArgs (7.77s)

=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:191: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-978390 --driver=docker  --container-runtime=containerd
no_kubernetes_test.go:191: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-978390 --driver=docker  --container-runtime=containerd: (7.76997643s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (7.77s)

TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.34s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-arm64 ssh -p NoKubernetes-978390 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-arm64 ssh -p NoKubernetes-978390 "sudo systemctl is-active --quiet service kubelet": exit status 1 (343.490819ms)

** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.34s)

TestStoppedBinaryUpgrade/Setup (0.73s)

=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (0.73s)

TestStoppedBinaryUpgrade/Upgrade (121.23s)

=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /tmp/minikube-v1.26.0.3212019751 start -p stopped-upgrade-774073 --memory=2200 --vm-driver=docker  --container-runtime=containerd
E0917 18:24:41.175621  299255 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19662-293874/.minikube/profiles/functional-566937/client.crt: no such file or directory" logger="UnhandledError"
version_upgrade_test.go:183: (dbg) Done: /tmp/minikube-v1.26.0.3212019751 start -p stopped-upgrade-774073 --memory=2200 --vm-driver=docker  --container-runtime=containerd: (46.371214415s)
version_upgrade_test.go:192: (dbg) Run:  /tmp/minikube-v1.26.0.3212019751 -p stopped-upgrade-774073 stop
version_upgrade_test.go:192: (dbg) Done: /tmp/minikube-v1.26.0.3212019751 -p stopped-upgrade-774073 stop: (20.000699595s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-linux-arm64 start -p stopped-upgrade-774073 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
E0917 18:25:43.598231  299255 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19662-293874/.minikube/profiles/addons-029117/client.crt: no such file or directory" logger="UnhandledError"
version_upgrade_test.go:198: (dbg) Done: out/minikube-linux-arm64 start -p stopped-upgrade-774073 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (54.854276421s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (121.23s)

TestStoppedBinaryUpgrade/MinikubeLogs (1.15s)

=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-linux-arm64 logs -p stopped-upgrade-774073
version_upgrade_test.go:206: (dbg) Done: out/minikube-linux-arm64 logs -p stopped-upgrade-774073: (1.150575482s)
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (1.15s)

TestPause/serial/Start (92.73s)

=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-arm64 start -p pause-716000 --memory=2048 --install-addons=false --wait=all --driver=docker  --container-runtime=containerd
pause_test.go:80: (dbg) Done: out/minikube-linux-arm64 start -p pause-716000 --memory=2048 --install-addons=false --wait=all --driver=docker  --container-runtime=containerd: (1m32.727204027s)
--- PASS: TestPause/serial/Start (92.73s)

TestNetworkPlugins/group/false (3.86s)

=== RUN   TestNetworkPlugins/group/false
net_test.go:246: (dbg) Run:  out/minikube-linux-arm64 start -p false-335096 --memory=2048 --alsologtostderr --cni=false --driver=docker  --container-runtime=containerd
net_test.go:246: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p false-335096 --memory=2048 --alsologtostderr --cni=false --driver=docker  --container-runtime=containerd: exit status 14 (180.696937ms)

-- stdout --
	* [false-335096] minikube v1.34.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=19662
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19662-293874/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19662-293874/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on user configuration
	
	

-- /stdout --
** stderr ** 
	I0917 18:28:59.629327  485500 out.go:345] Setting OutFile to fd 1 ...
	I0917 18:28:59.629457  485500 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0917 18:28:59.629466  485500 out.go:358] Setting ErrFile to fd 2...
	I0917 18:28:59.629474  485500 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0917 18:28:59.629706  485500 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19662-293874/.minikube/bin
	I0917 18:28:59.630187  485500 out.go:352] Setting JSON to false
	I0917 18:28:59.631167  485500 start.go:129] hostinfo: {"hostname":"ip-172-31-31-251","uptime":7886,"bootTime":1726589854,"procs":206,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1070-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I0917 18:28:59.631255  485500 start.go:139] virtualization:  
	I0917 18:28:59.633932  485500 out.go:177] * [false-335096] minikube v1.34.0 on Ubuntu 20.04 (arm64)
	I0917 18:28:59.636631  485500 out.go:177]   - MINIKUBE_LOCATION=19662
	I0917 18:28:59.636764  485500 notify.go:220] Checking for updates...
	I0917 18:28:59.640728  485500 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0917 18:28:59.642726  485500 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19662-293874/kubeconfig
	I0917 18:28:59.644376  485500 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19662-293874/.minikube
	I0917 18:28:59.646220  485500 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0917 18:28:59.647889  485500 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0917 18:28:59.650660  485500 config.go:182] Loaded profile config "pause-716000": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.31.1
	I0917 18:28:59.650748  485500 driver.go:394] Setting default libvirt URI to qemu:///system
	I0917 18:28:59.671389  485500 docker.go:123] docker version: linux-27.2.1:Docker Engine - Community
	I0917 18:28:59.671522  485500 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0917 18:28:59.744523  485500 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:37 OomKillDisable:true NGoroutines:53 SystemTime:2024-09-17 18:28:59.73480786 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1070-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarc
h64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214839296 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:27.2.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerError
s:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.16.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.2]] Warnings:<nil>}}
	I0917 18:28:59.744637  485500 docker.go:318] overlay module found
	I0917 18:28:59.746579  485500 out.go:177] * Using the docker driver based on user configuration
	I0917 18:28:59.748815  485500 start.go:297] selected driver: docker
	I0917 18:28:59.748842  485500 start.go:901] validating driver "docker" against <nil>
	I0917 18:28:59.748856  485500 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0917 18:28:59.751645  485500 out.go:201] 
	W0917 18:28:59.753473  485500 out.go:270] X Exiting due to MK_USAGE: The "containerd" container runtime requires CNI
	X Exiting due to MK_USAGE: The "containerd" container runtime requires CNI
	I0917 18:28:59.755720  485500 out.go:201] 

** /stderr **
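This MK_USAGE exit is the expected result: the test deliberately passes `--cni=false` to confirm minikube rejects it, because the containerd runtime requires a CNI plugin. A hypothetical restatement of that guard as a shell function (`validate_cni` is illustrative, not minikube's actual code; the assumption that the docker runtime tolerates `--cni=false` is mine):

```shell
# Hypothetical restatement of the CNI guard seen in the log above:
# container runtimes other than docker need a CNI, so --cni=false is
# rejected with a usage error (assumed exit status 14, as with MK_USAGE).
validate_cni() {  # validate_cni <runtime> <cni>
  runtime=$1
  cni=$2
  if [ "$cni" = "false" ] && [ "$runtime" != "docker" ]; then
    echo "X Exiting due to MK_USAGE: The \"$runtime\" container runtime requires CNI" >&2
    return 14
  fi
}

validate_cni containerd false || echo "rejected: exit $?"
```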
net_test.go:88: 
----------------------- debugLogs start: false-335096 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: false-335096

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: false-335096

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: false-335096

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: false-335096

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: false-335096

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: false-335096

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: false-335096

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: false-335096

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: false-335096

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: false-335096

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "false-335096" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-335096"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "false-335096" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-335096"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "false-335096" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-335096"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: false-335096

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "false-335096" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-335096"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "false-335096" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-335096"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "false-335096" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "false-335096" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "false-335096" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "false-335096" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "false-335096" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "false-335096" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "false-335096" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "false-335096" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "false-335096" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-335096"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "false-335096" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-335096"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "false-335096" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-335096"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "false-335096" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-335096"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "false-335096" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-335096"

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "false-335096" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "false-335096" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "false-335096" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "false-335096" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-335096"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "false-335096" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-335096"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "false-335096" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-335096"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "false-335096" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-335096"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "false-335096" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-335096"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/19662-293874/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Tue, 17 Sep 2024 18:28:16 UTC
        provider: minikube.sigs.k8s.io
        version: v1.34.0
      name: cluster_info
    server: https://192.168.85.2:8443
  name: pause-716000
contexts:
- context:
    cluster: pause-716000
    extensions:
    - extension:
        last-update: Tue, 17 Sep 2024 18:28:16 UTC
        provider: minikube.sigs.k8s.io
        version: v1.34.0
      name: context_info
    namespace: default
    user: pause-716000
  name: pause-716000
current-context: ""
kind: Config
preferences: {}
users:
- name: pause-716000
  user:
    client-certificate: /home/jenkins/minikube-integration/19662-293874/.minikube/profiles/pause-716000/client.crt
    client-key: /home/jenkins/minikube-integration/19662-293874/.minikube/profiles/pause-716000/client.key

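The repeated `context was not found for specified context: false-335096` errors in this debug dump follow directly from the kubeconfig above: the only context defined is `pause-716000`, and `current-context` is empty. A minimal sketch of the lookup that produces that error (an illustrative model with the names from the dump, not kubectl's actual implementation):

```python
# Illustrative model of a kubeconfig context lookup; names mirror the
# dump above, but this is not kubectl's code.
kubeconfig = {
    "contexts": [{"name": "pause-716000", "context": {"cluster": "pause-716000"}}],
    "current-context": "",
}

def find_context(cfg, name):
    """Return the named context, or fail the way the debug dump does."""
    for ctx in cfg["contexts"]:
        if ctx["name"] == name:
            return ctx
    raise KeyError("context was not found for specified context: " + name)

find_context(kubeconfig, "pause-716000")      # defined above: succeeds
try:
    find_context(kubeconfig, "false-335096")  # profile was never started
except KeyError as err:
    print(err)
```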
>>> k8s: cms:
Error in configuration: context was not found for specified context: false-335096

>>> host: docker daemon status:
* Profile "false-335096" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-335096"

>>> host: docker daemon config:
* Profile "false-335096" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-335096"

>>> host: /etc/docker/daemon.json:
* Profile "false-335096" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-335096"

>>> host: docker system info:
* Profile "false-335096" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-335096"

>>> host: cri-docker daemon status:
* Profile "false-335096" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-335096"

>>> host: cri-docker daemon config:
* Profile "false-335096" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-335096"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "false-335096" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-335096"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "false-335096" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-335096"

>>> host: cri-dockerd version:
* Profile "false-335096" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-335096"

>>> host: containerd daemon status:
* Profile "false-335096" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-335096"

>>> host: containerd daemon config:
* Profile "false-335096" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-335096"

>>> host: /lib/systemd/system/containerd.service:
* Profile "false-335096" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-335096"

>>> host: /etc/containerd/config.toml:
* Profile "false-335096" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-335096"

>>> host: containerd config dump:
* Profile "false-335096" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-335096"

>>> host: crio daemon status:
* Profile "false-335096" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-335096"

>>> host: crio daemon config:
* Profile "false-335096" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-335096"

>>> host: /etc/crio:
* Profile "false-335096" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-335096"

>>> host: crio config:
* Profile "false-335096" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-335096"

----------------------- debugLogs end: false-335096 [took: 3.532539559s] --------------------------------
helpers_test.go:175: Cleaning up "false-335096" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p false-335096
--- PASS: TestNetworkPlugins/group/false (3.86s)

                                                
                                    
TestPause/serial/SecondStartNoReconfiguration (7.71s)

                                                
                                                
=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-arm64 start -p pause-716000 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
pause_test.go:92: (dbg) Done: out/minikube-linux-arm64 start -p pause-716000 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (7.689260884s)
--- PASS: TestPause/serial/SecondStartNoReconfiguration (7.71s)

                                                
                                    
TestPause/serial/Pause (0.98s)

                                                
                                                
=== RUN   TestPause/serial/Pause
pause_test.go:110: (dbg) Run:  out/minikube-linux-arm64 pause -p pause-716000 --alsologtostderr -v=5
--- PASS: TestPause/serial/Pause (0.98s)

                                                
                                    
TestPause/serial/VerifyStatus (0.45s)

                                                
                                                
=== RUN   TestPause/serial/VerifyStatus
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p pause-716000 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p pause-716000 --output=json --layout=cluster: exit status 2 (450.784962ms)

                                                
                                                
-- stdout --
	{"Name":"pause-716000","StatusCode":418,"StatusName":"Paused","Step":"Done","StepDetail":"* Paused 7 containers in: kube-system, kubernetes-dashboard, storage-gluster, istio-operator","BinaryVersion":"v1.34.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":200,"StatusName":"OK"}},"Nodes":[{"Name":"pause-716000","StatusCode":200,"StatusName":"OK","Components":{"apiserver":{"Name":"apiserver","StatusCode":418,"StatusName":"Paused"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
--- PASS: TestPause/serial/VerifyStatus (0.45s)
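The `--output=json --layout=cluster` payload above reuses HTTP-style status codes (200 OK, 405 Stopped, 418 Paused), and the command deliberately exits 2 while the cluster is paused. A sketch of how a caller might decode that payload (field subset copied from the stdout above; this is an illustrative consumer, not part of minikube):

```python
import json

# Subset of the VerifyStatus stdout above; minikube reports cluster and
# component health with HTTP-style status codes.
payload = """
{"Name":"pause-716000","StatusCode":418,"StatusName":"Paused",
 "Nodes":[{"Name":"pause-716000","StatusCode":200,"StatusName":"OK",
   "Components":{"apiserver":{"Name":"apiserver","StatusCode":418,"StatusName":"Paused"},
                 "kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}
"""

OK, STOPPED, PAUSED = 200, 405, 418
status = json.loads(payload)
assert status["StatusCode"] == PAUSED           # whole cluster paused
for node in status["Nodes"]:
    for name, comp in sorted(node["Components"].items()):
        print(f"{name}: {comp['StatusName']} ({comp['StatusCode']})")
```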

                                                
                                    
TestPause/serial/Unpause (0.91s)

                                                
                                                
=== RUN   TestPause/serial/Unpause
pause_test.go:121: (dbg) Run:  out/minikube-linux-arm64 unpause -p pause-716000 --alsologtostderr -v=5
--- PASS: TestPause/serial/Unpause (0.91s)

                                                
                                    
TestPause/serial/PauseAgain (1.16s)

                                                
                                                
=== RUN   TestPause/serial/PauseAgain
pause_test.go:110: (dbg) Run:  out/minikube-linux-arm64 pause -p pause-716000 --alsologtostderr -v=5
pause_test.go:110: (dbg) Done: out/minikube-linux-arm64 pause -p pause-716000 --alsologtostderr -v=5: (1.163800757s)
--- PASS: TestPause/serial/PauseAgain (1.16s)

                                                
                                    
TestPause/serial/DeletePaused (2.75s)

                                                
                                                
=== RUN   TestPause/serial/DeletePaused
pause_test.go:132: (dbg) Run:  out/minikube-linux-arm64 delete -p pause-716000 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-arm64 delete -p pause-716000 --alsologtostderr -v=5: (2.753967868s)
--- PASS: TestPause/serial/DeletePaused (2.75s)

                                                
                                    
TestPause/serial/VerifyDeletedResources (0.47s)

                                                
                                                
=== RUN   TestPause/serial/VerifyDeletedResources
pause_test.go:142: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
pause_test.go:168: (dbg) Run:  docker ps -a
pause_test.go:173: (dbg) Run:  docker volume inspect pause-716000
pause_test.go:173: (dbg) Non-zero exit: docker volume inspect pause-716000: exit status 1 (18.363548ms)

                                                
                                                
-- stdout --
	[]

                                                
                                                
-- /stdout --
** stderr ** 
	Error response from daemon: get pause-716000: no such volume

                                                
                                                
** /stderr **
pause_test.go:178: (dbg) Run:  docker network ls
--- PASS: TestPause/serial/VerifyDeletedResources (0.47s)
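`docker volume inspect` on a missing volume exits 1 but still prints a JSON array (`[]`) on stdout, with the error on stderr; that combination is what lets VerifyDeletedResources conclude the `pause-716000` volume is really gone. A sketch of that check, using the captured values from the log above rather than a live docker call:

```python
import json

# Values captured from the failed `docker volume inspect pause-716000` above.
exit_status = 1
stdout = "[]"
stderr = "Error response from daemon: get pause-716000: no such volume"

volumes = json.loads(stdout)              # docker emits a JSON array either way
volume_deleted = exit_status != 0 and volumes == []
assert volume_deleted
print("volume gone:", "no such volume" in stderr)
```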

                                                
                                    
TestStartStop/group/old-k8s-version/serial/FirstStart (147.89s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p old-k8s-version-603581 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.20.0
E0917 18:30:43.598277  299255 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19662-293874/.minikube/profiles/addons-029117/client.crt: no such file or directory" logger="UnhandledError"
E0917 18:32:44.239430  299255 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19662-293874/.minikube/profiles/functional-566937/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p old-k8s-version-603581 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.20.0: (2m27.889048991s)
--- PASS: TestStartStop/group/old-k8s-version/serial/FirstStart (147.89s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/DeployApp (9.68s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-603581 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [acfb4172-a9dc-4612-803d-6a9aa21e373a] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [acfb4172-a9dc-4612-803d-6a9aa21e373a] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: integration-test=busybox healthy within 9.003291798s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-603581 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/old-k8s-version/serial/DeployApp (9.68s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (1.15s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p old-k8s-version-603581 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p old-k8s-version-603581 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.021419184s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context old-k8s-version-603581 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (1.15s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/Stop (12.76s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p old-k8s-version-603581 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p old-k8s-version-603581 --alsologtostderr -v=3: (12.760004023s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (12.76s)

                                                
                                    
TestStartStop/group/no-preload/serial/FirstStart (76.5s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p no-preload-168407 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.31.1
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p no-preload-168407 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.31.1: (1m16.500782351s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (76.50s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.34s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-603581 -n old-k8s-version-603581
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-603581 -n old-k8s-version-603581: exit status 7 (161.012363ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p old-k8s-version-603581 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.34s)
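Note the `(may be ok)` in the log: the harness tolerates a non-zero `minikube status` exit when the host is merely stopped, since addons can still be enabled on a stopped profile for the next start. A hedged sketch of that tolerance, assuming (from this log only, not from a documented exit-code table) that exit status 7 with `Stopped` on stdout means the profile exists but is down:

```python
# Assumption: exit status 7 plus "Stopped" on stdout marks a stopped-but-
# existing profile, as seen in the EnableAddonAfterStop log above.
def host_state(exit_status: int, stdout: str) -> str:
    if exit_status == 0:
        return "running"
    if stdout.strip() == "Stopped":
        return "stopped"  # non-fatal: addon changes apply at next start
    raise RuntimeError(f"minikube status failed: exit {exit_status}")

print(host_state(7, "\tStopped\n"))  # the case from the log above
```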

                                                
                                    
TestStartStop/group/old-k8s-version/serial/SecondStart (308.95s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p old-k8s-version-603581 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.20.0
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-arm64 start -p old-k8s-version-603581 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.20.0: (5m8.531502096s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-603581 -n old-k8s-version-603581
--- PASS: TestStartStop/group/old-k8s-version/serial/SecondStart (308.95s)

                                                
                                    
TestStartStop/group/no-preload/serial/DeployApp (9.36s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-168407 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [6e7afa2f-596d-489d-bd69-e1a3088c5210] Pending
helpers_test.go:344: "busybox" [6e7afa2f-596d-489d-bd69-e1a3088c5210] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [6e7afa2f-596d-489d-bd69-e1a3088c5210] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 9.004374632s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-168407 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (9.36s)

                                                
                                    
TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.24s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p no-preload-168407 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p no-preload-168407 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.120266631s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context no-preload-168407 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.24s)

                                                
                                    
TestStartStop/group/no-preload/serial/Stop (12.17s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p no-preload-168407 --alsologtostderr -v=3
E0917 18:34:41.175089  299255 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19662-293874/.minikube/profiles/functional-566937/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p no-preload-168407 --alsologtostderr -v=3: (12.173116926s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (12.17s)

                                                
                                    
TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.2s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-168407 -n no-preload-168407
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-168407 -n no-preload-168407: exit status 7 (77.535584ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p no-preload-168407 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.20s)

                                                
                                    
TestStartStop/group/no-preload/serial/SecondStart (268.35s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p no-preload-168407 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.31.1
E0917 18:35:43.598349  299255 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19662-293874/.minikube/profiles/addons-029117/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-arm64 start -p no-preload-168407 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.31.1: (4m27.843247035s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-168407 -n no-preload-168407
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (268.35s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6.01s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-cd95d586-wxfgr" [eb0e1066-b7c6-4ac6-a018-59ad07afa21c] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.004208078s
--- PASS: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6.01s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.11s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-cd95d586-wxfgr" [eb0e1066-b7c6-4ac6-a018-59ad07afa21c] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.005200016s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context old-k8s-version-603581 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.11s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.25s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 -p old-k8s-version-603581 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20240202-8f1494ea
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20240813-c6f155d6
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.25s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/Pause (3.08s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p old-k8s-version-603581 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-603581 -n old-k8s-version-603581
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-603581 -n old-k8s-version-603581: exit status 2 (326.368684ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p old-k8s-version-603581 -n old-k8s-version-603581
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p old-k8s-version-603581 -n old-k8s-version-603581: exit status 2 (361.118697ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p old-k8s-version-603581 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-603581 -n old-k8s-version-603581
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p old-k8s-version-603581 -n old-k8s-version-603581
--- PASS: TestStartStop/group/old-k8s-version/serial/Pause (3.08s)

                                                
                                    
TestStartStop/group/embed-certs/serial/FirstStart (93.32s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p embed-certs-385792 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.31.1
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p embed-certs-385792 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.31.1: (1m33.320685197s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (93.32s)

                                                
                                    
TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6.00s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-xkg9g" [027e8b85-0e1a-42b4-8ca3-f2790eb8a000] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003057948s
--- PASS: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6.00s)

                                                
                                    
TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.1s)

=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-xkg9g" [027e8b85-0e1a-42b4-8ca3-f2790eb8a000] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.004483029s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context no-preload-168407 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.10s)

                                                
                                    
TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.25s)

=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 -p no-preload-168407 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20240813-c6f155d6
--- PASS: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.25s)

                                                
                                    
TestStartStop/group/no-preload/serial/Pause (3.19s)

=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p no-preload-168407 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-168407 -n no-preload-168407
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-168407 -n no-preload-168407: exit status 2 (330.434778ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p no-preload-168407 -n no-preload-168407
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p no-preload-168407 -n no-preload-168407: exit status 2 (330.468091ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p no-preload-168407 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-168407 -n no-preload-168407
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p no-preload-168407 -n no-preload-168407
--- PASS: TestStartStop/group/no-preload/serial/Pause (3.19s)
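In the Pause runs above, `minikube status` deliberately exits non-zero while components are paused (exit status 2 here; the log also shows exit status 7 when the host is stopped, in the EnableAddonAfterStop runs), and the test logs "status error: exit status N (may be ok)" instead of failing. A minimal sketch of that tolerant status check, where `check_status` is a hypothetical stand-in for `out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-168407`:

```shell
#!/bin/sh
# Sketch only: check_status stands in for
#   out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-168407
# After `minikube pause`, the real command prints "Paused" and exits 2,
# as shown in the log above.
check_status() {
  echo "Paused"
  return 2
}

# Capture both the output and the exit status in a way that is safe
# even under `set -e`, then tolerate the non-zero statuses the log
# marks as "(may be ok)".
if out=$(check_status); then rc=0; else rc=$?; fi
case $rc in
  0|2|7) echo "status=$out exit=$rc (may be ok)" ;;
  *)     echo "unexpected exit status $rc" >&2; exit 1 ;;
esac
```

The `if out=$(...)` form matters: a bare `out=$(check_status); rc=$?` would abort the script under `set -e` as soon as the command exits non-zero.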

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/FirstStart (82.27s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p default-k8s-diff-port-131370 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.31.1
E0917 18:39:41.175159  299255 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19662-293874/.minikube/profiles/functional-566937/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p default-k8s-diff-port-131370 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.31.1: (1m22.271061554s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (82.27s)

                                                
                                    
TestStartStop/group/embed-certs/serial/DeployApp (9.36s)

=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-385792 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [4de6b305-dbe6-4789-8673-aa824cf1d65e] Pending
helpers_test.go:344: "busybox" [4de6b305-dbe6-4789-8673-aa824cf1d65e] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [4de6b305-dbe6-4789-8673-aa824cf1d65e] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 9.00465967s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-385792 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (9.36s)
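The DeployApp checks above end by running `ulimit -n` inside the busybox pod (`kubectl --context embed-certs-385792 exec busybox -- /bin/sh -c "ulimit -n"`); the test only requires the command to succeed, not a particular value. The same probe, run in a local shell for illustration:

```shell
#!/bin/sh
# Same probe the test runs inside the pod, executed locally:
# print the shell's soft limit on open file descriptors.
limit=$(sh -c 'ulimit -n')
echo "open file descriptor limit: $limit"
```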

                                                
                                    
TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.22s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p embed-certs-385792 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p embed-certs-385792 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.107003405s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context embed-certs-385792 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.22s)

                                                
                                    
TestStartStop/group/embed-certs/serial/Stop (12.09s)

=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p embed-certs-385792 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p embed-certs-385792 --alsologtostderr -v=3: (12.092289824s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (12.09s)

                                                
                                    
TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.19s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-385792 -n embed-certs-385792
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-385792 -n embed-certs-385792: exit status 7 (75.450949ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p embed-certs-385792 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.19s)

                                                
                                    
TestStartStop/group/embed-certs/serial/SecondStart (281.26s)

=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p embed-certs-385792 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.31.1
E0917 18:40:43.598517  299255 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19662-293874/.minikube/profiles/addons-029117/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-arm64 start -p embed-certs-385792 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.31.1: (4m40.896759538s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-385792 -n embed-certs-385792
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (281.26s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/DeployApp (10.4s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-131370 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [b1634411-343c-47d1-9c3c-283cf1ad0aaa] Pending
helpers_test.go:344: "busybox" [b1634411-343c-47d1-9c3c-283cf1ad0aaa] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [b1634411-343c-47d1-9c3c-283cf1ad0aaa] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 10.003661517s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-131370 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (10.40s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.12s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p default-k8s-diff-port-131370 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p default-k8s-diff-port-131370 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.022886805s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context default-k8s-diff-port-131370 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.12s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/Stop (12.12s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p default-k8s-diff-port-131370 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p default-k8s-diff-port-131370 --alsologtostderr -v=3: (12.119236296s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (12.12s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.21s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-131370 -n default-k8s-diff-port-131370
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-131370 -n default-k8s-diff-port-131370: exit status 7 (72.803342ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p default-k8s-diff-port-131370 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.21s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/SecondStart (269.95s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p default-k8s-diff-port-131370 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.31.1
E0917 18:42:51.704734  299255 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19662-293874/.minikube/profiles/old-k8s-version-603581/client.crt: no such file or directory" logger="UnhandledError"
E0917 18:42:51.711305  299255 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19662-293874/.minikube/profiles/old-k8s-version-603581/client.crt: no such file or directory" logger="UnhandledError"
E0917 18:42:51.722641  299255 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19662-293874/.minikube/profiles/old-k8s-version-603581/client.crt: no such file or directory" logger="UnhandledError"
E0917 18:42:51.744131  299255 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19662-293874/.minikube/profiles/old-k8s-version-603581/client.crt: no such file or directory" logger="UnhandledError"
E0917 18:42:51.785604  299255 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19662-293874/.minikube/profiles/old-k8s-version-603581/client.crt: no such file or directory" logger="UnhandledError"
E0917 18:42:51.867128  299255 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19662-293874/.minikube/profiles/old-k8s-version-603581/client.crt: no such file or directory" logger="UnhandledError"
E0917 18:42:52.028649  299255 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19662-293874/.minikube/profiles/old-k8s-version-603581/client.crt: no such file or directory" logger="UnhandledError"
E0917 18:42:52.350507  299255 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19662-293874/.minikube/profiles/old-k8s-version-603581/client.crt: no such file or directory" logger="UnhandledError"
E0917 18:42:52.992007  299255 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19662-293874/.minikube/profiles/old-k8s-version-603581/client.crt: no such file or directory" logger="UnhandledError"
E0917 18:42:54.273896  299255 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19662-293874/.minikube/profiles/old-k8s-version-603581/client.crt: no such file or directory" logger="UnhandledError"
E0917 18:42:56.835538  299255 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19662-293874/.minikube/profiles/old-k8s-version-603581/client.crt: no such file or directory" logger="UnhandledError"
E0917 18:43:01.957250  299255 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19662-293874/.minikube/profiles/old-k8s-version-603581/client.crt: no such file or directory" logger="UnhandledError"
E0917 18:43:12.198679  299255 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19662-293874/.minikube/profiles/old-k8s-version-603581/client.crt: no such file or directory" logger="UnhandledError"
E0917 18:43:32.680242  299255 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19662-293874/.minikube/profiles/old-k8s-version-603581/client.crt: no such file or directory" logger="UnhandledError"
E0917 18:44:13.642468  299255 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19662-293874/.minikube/profiles/old-k8s-version-603581/client.crt: no such file or directory" logger="UnhandledError"
E0917 18:44:28.422065  299255 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19662-293874/.minikube/profiles/no-preload-168407/client.crt: no such file or directory" logger="UnhandledError"
E0917 18:44:28.428460  299255 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19662-293874/.minikube/profiles/no-preload-168407/client.crt: no such file or directory" logger="UnhandledError"
E0917 18:44:28.439929  299255 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19662-293874/.minikube/profiles/no-preload-168407/client.crt: no such file or directory" logger="UnhandledError"
E0917 18:44:28.461310  299255 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19662-293874/.minikube/profiles/no-preload-168407/client.crt: no such file or directory" logger="UnhandledError"
E0917 18:44:28.502750  299255 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19662-293874/.minikube/profiles/no-preload-168407/client.crt: no such file or directory" logger="UnhandledError"
E0917 18:44:28.584299  299255 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19662-293874/.minikube/profiles/no-preload-168407/client.crt: no such file or directory" logger="UnhandledError"
E0917 18:44:28.745880  299255 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19662-293874/.minikube/profiles/no-preload-168407/client.crt: no such file or directory" logger="UnhandledError"
E0917 18:44:29.067604  299255 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19662-293874/.minikube/profiles/no-preload-168407/client.crt: no such file or directory" logger="UnhandledError"
E0917 18:44:29.709273  299255 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19662-293874/.minikube/profiles/no-preload-168407/client.crt: no such file or directory" logger="UnhandledError"
E0917 18:44:30.990886  299255 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19662-293874/.minikube/profiles/no-preload-168407/client.crt: no such file or directory" logger="UnhandledError"
E0917 18:44:33.552199  299255 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19662-293874/.minikube/profiles/no-preload-168407/client.crt: no such file or directory" logger="UnhandledError"
E0917 18:44:38.674371  299255 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19662-293874/.minikube/profiles/no-preload-168407/client.crt: no such file or directory" logger="UnhandledError"
E0917 18:44:41.175196  299255 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19662-293874/.minikube/profiles/functional-566937/client.crt: no such file or directory" logger="UnhandledError"
E0917 18:44:48.916624  299255 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19662-293874/.minikube/profiles/no-preload-168407/client.crt: no such file or directory" logger="UnhandledError"
E0917 18:45:09.397987  299255 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19662-293874/.minikube/profiles/no-preload-168407/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-arm64 start -p default-k8s-diff-port-131370 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.31.1: (4m29.510388423s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-131370 -n default-k8s-diff-port-131370
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (269.95s)

                                                
                                    
TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6.01s)

=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-fzxbh" [ed58de3b-4662-4b15-9af1-3d7f51804ea7] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.004925419s
--- PASS: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6.01s)

                                                
                                    
TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.13s)

=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-fzxbh" [ed58de3b-4662-4b15-9af1-3d7f51804ea7] Running
E0917 18:45:26.666073  299255 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19662-293874/.minikube/profiles/addons-029117/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.005298868s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context embed-certs-385792 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.13s)

                                                
                                    
TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.26s)

=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 -p embed-certs-385792 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20240813-c6f155d6
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.26s)

                                                
                                    
TestStartStop/group/embed-certs/serial/Pause (3.2s)

=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p embed-certs-385792 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-385792 -n embed-certs-385792
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-385792 -n embed-certs-385792: exit status 2 (347.397397ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p embed-certs-385792 -n embed-certs-385792
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p embed-certs-385792 -n embed-certs-385792: exit status 2 (356.181806ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p embed-certs-385792 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-385792 -n embed-certs-385792
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p embed-certs-385792 -n embed-certs-385792
--- PASS: TestStartStop/group/embed-certs/serial/Pause (3.20s)

                                                
                                    
TestStartStop/group/newest-cni/serial/FirstStart (37.51s)

=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-arm64 start -p newest-cni-875158 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.31.1
E0917 18:45:43.598184  299255 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19662-293874/.minikube/profiles/addons-029117/client.crt: no such file or directory" logger="UnhandledError"
E0917 18:45:50.360158  299255 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19662-293874/.minikube/profiles/no-preload-168407/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-arm64 start -p newest-cni-875158 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.31.1: (37.511476302s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (37.51s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6.01s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-whfnf" [9700e7bc-ff2c-4f0e-9d9d-8f8fa5821f38] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.004731996s
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6.01s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.12s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-whfnf" [9700e7bc-ff2c-4f0e-9d9d-8f8fa5821f38] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.003695323s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context default-k8s-diff-port-131370 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.12s)

TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.26s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 -p default-k8s-diff-port-131370 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20240813-c6f155d6
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.26s)

TestStartStop/group/default-k8s-diff-port/serial/Pause (4.02s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p default-k8s-diff-port-131370 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Done: out/minikube-linux-arm64 pause -p default-k8s-diff-port-131370 --alsologtostderr -v=1: (1.010677903s)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-131370 -n default-k8s-diff-port-131370
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-131370 -n default-k8s-diff-port-131370: exit status 2 (361.956396ms)
-- stdout --
	Paused
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p default-k8s-diff-port-131370 -n default-k8s-diff-port-131370
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p default-k8s-diff-port-131370 -n default-k8s-diff-port-131370: exit status 2 (401.43047ms)
-- stdout --
	Stopped

start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p default-k8s-diff-port-131370 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Done: out/minikube-linux-arm64 unpause -p default-k8s-diff-port-131370 --alsologtostderr -v=1: (1.069224608s)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-131370 -n default-k8s-diff-port-131370
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p default-k8s-diff-port-131370 -n default-k8s-diff-port-131370
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Pause (4.02s)

TestNetworkPlugins/group/auto/Start (99.27s)

=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p auto-335096 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p auto-335096 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=containerd: (1m39.267566951s)
--- PASS: TestNetworkPlugins/group/auto/Start (99.27s)

TestStartStop/group/newest-cni/serial/DeployApp (0s)

=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.74s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p newest-cni-875158 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p newest-cni-875158 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.740573514s)
start_stop_delete_test.go:211: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.74s)

TestStartStop/group/newest-cni/serial/Stop (3.37s)

=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-arm64 stop -p newest-cni-875158 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-arm64 stop -p newest-cni-875158 --alsologtostderr -v=3: (3.373779581s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (3.37s)

TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.25s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-875158 -n newest-cni-875158
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-875158 -n newest-cni-875158: exit status 7 (106.048606ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p newest-cni-875158 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.25s)

TestStartStop/group/newest-cni/serial/SecondStart (24.21s)

=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-arm64 start -p newest-cni-875158 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.31.1
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-arm64 start -p newest-cni-875158 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.31.1: (23.7107726s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-875158 -n newest-cni-875158
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (24.21s)

TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:273: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:284: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.36s)

=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-arm64 -p newest-cni-875158 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20240813-c6f155d6
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.36s)

TestStartStop/group/newest-cni/serial/Pause (3.95s)

=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 pause -p newest-cni-875158 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-875158 -n newest-cni-875158
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-875158 -n newest-cni-875158: exit status 2 (532.66535ms)
-- stdout --
	Paused
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p newest-cni-875158 -n newest-cni-875158
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p newest-cni-875158 -n newest-cni-875158: exit status 2 (496.513695ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 unpause -p newest-cni-875158 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-875158 -n newest-cni-875158
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p newest-cni-875158 -n newest-cni-875158
--- PASS: TestStartStop/group/newest-cni/serial/Pause (3.95s)
E0917 18:52:21.645634  299255 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19662-293874/.minikube/profiles/default-k8s-diff-port-131370/client.crt: no such file or directory" logger="UnhandledError"

TestNetworkPlugins/group/kindnet/Start (53.86s)

=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p kindnet-335096 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=containerd
E0917 18:47:12.281513  299255 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19662-293874/.minikube/profiles/no-preload-168407/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p kindnet-335096 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=containerd: (53.861456423s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (53.86s)

TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:344: "kindnet-2s947" [4ded13dd-2c46-487d-98e8-57617ee3d00d] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 6.003653395s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

TestNetworkPlugins/group/kindnet/KubeletFlags (0.31s)

=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p kindnet-335096 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.31s)

TestNetworkPlugins/group/kindnet/NetCatPod (9.29s)

=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-335096 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-vk7h6" [c95037aa-1da6-4e2f-b971-f5ec1a286c4f] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-vk7h6" [c95037aa-1da6-4e2f-b971-f5ec1a286c4f] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 9.004676414s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (9.29s)

TestNetworkPlugins/group/auto/KubeletFlags (0.31s)

=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p auto-335096 "pgrep -a kubelet"
E0917 18:47:51.704912  299255 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19662-293874/.minikube/profiles/old-k8s-version-603581/client.crt: no such file or directory" logger="UnhandledError"
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.31s)

TestNetworkPlugins/group/auto/NetCatPod (8.37s)

=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-335096 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-vvpdm" [a0139c21-6072-472c-a8ef-23d927bec6d6] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-vvpdm" [a0139c21-6072-472c-a8ef-23d927bec6d6] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 8.03353444s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (8.37s)

TestNetworkPlugins/group/kindnet/DNS (0.64s)

=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kindnet-335096 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.64s)

TestNetworkPlugins/group/auto/DNS (0.59s)

=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-335096 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.59s)

TestNetworkPlugins/group/kindnet/Localhost (0.22s)

=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kindnet-335096 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.22s)

TestNetworkPlugins/group/auto/Localhost (0.22s)

=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-335096 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.22s)

TestNetworkPlugins/group/kindnet/HairPin (0.24s)

=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kindnet-335096 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.24s)

TestNetworkPlugins/group/auto/HairPin (0.21s)

=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-335096 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.21s)

TestNetworkPlugins/group/calico/Start (70.92s)

=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p calico-335096 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p calico-335096 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=containerd: (1m10.923083565s)
--- PASS: TestNetworkPlugins/group/calico/Start (70.92s)

TestNetworkPlugins/group/custom-flannel/Start (57.66s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p custom-flannel-335096 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=containerd
E0917 18:49:24.241477  299255 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19662-293874/.minikube/profiles/functional-566937/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p custom-flannel-335096 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=containerd: (57.664107958s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (57.66s)

TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.4s)

=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p custom-flannel-335096 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.40s)

TestNetworkPlugins/group/custom-flannel/NetCatPod (11.39s)

=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-335096 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-5c46v" [82a863ff-3e95-4ea5-a01f-76a4d97e68d4] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E0917 18:49:28.422414  299255 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19662-293874/.minikube/profiles/no-preload-168407/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:344: "netcat-6fc964789b-5c46v" [82a863ff-3e95-4ea5-a01f-76a4d97e68d4] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 11.004347474s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (11.39s)

TestNetworkPlugins/group/custom-flannel/DNS (0.22s)

=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-335096 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.22s)

TestNetworkPlugins/group/custom-flannel/Localhost (0.15s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-335096 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.15s)

TestNetworkPlugins/group/custom-flannel/HairPin (0.2s)

=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-335096 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.20s)

TestNetworkPlugins/group/calico/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:344: "calico-node-b7zwh" [5a87619f-e794-4f88-943a-1d2a76e73494] Running
E0917 18:49:41.175113  299255 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19662-293874/.minikube/profiles/functional-566937/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 6.00846432s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (6.01s)

TestNetworkPlugins/group/calico/KubeletFlags (0.41s)

=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p calico-335096 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.41s)

TestNetworkPlugins/group/calico/NetCatPod (10.42s)

=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context calico-335096 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-df8zx" [032b5eda-3095-44ac-9d64-997da5d06160] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-df8zx" [032b5eda-3095-44ac-9d64-997da5d06160] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 10.007735293s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (10.42s)

TestNetworkPlugins/group/calico/DNS (0.27s)

=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:175: (dbg) Run:  kubectl --context calico-335096 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.27s)

TestNetworkPlugins/group/calico/Localhost (0.22s)

=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:194: (dbg) Run:  kubectl --context calico-335096 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.22s)

TestNetworkPlugins/group/calico/HairPin (0.22s)

=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:264: (dbg) Run:  kubectl --context calico-335096 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.22s)

TestNetworkPlugins/group/enable-default-cni/Start (53.57s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p enable-default-cni-335096 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p enable-default-cni-335096 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=containerd: (53.572714s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (53.57s)

TestNetworkPlugins/group/flannel/Start (54.51s)

=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p flannel-335096 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=containerd
E0917 18:50:43.597771  299255 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19662-293874/.minikube/profiles/addons-029117/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p flannel-335096 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=containerd: (54.514644641s)
--- PASS: TestNetworkPlugins/group/flannel/Start (54.51s)

TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.37s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p enable-default-cni-335096 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.37s)

TestNetworkPlugins/group/enable-default-cni/NetCatPod (10.34s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-335096 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-p79g6" [a3fe668e-b869-474a-abf1-aa097c3db793] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-p79g6" [a3fe668e-b869-474a-abf1-aa097c3db793] Running
E0917 18:50:59.691943  299255 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19662-293874/.minikube/profiles/default-k8s-diff-port-131370/client.crt: no such file or directory" logger="UnhandledError"
E0917 18:50:59.698465  299255 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19662-293874/.minikube/profiles/default-k8s-diff-port-131370/client.crt: no such file or directory" logger="UnhandledError"
E0917 18:50:59.709957  299255 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19662-293874/.minikube/profiles/default-k8s-diff-port-131370/client.crt: no such file or directory" logger="UnhandledError"
E0917 18:50:59.731340  299255 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19662-293874/.minikube/profiles/default-k8s-diff-port-131370/client.crt: no such file or directory" logger="UnhandledError"
E0917 18:50:59.772681  299255 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19662-293874/.minikube/profiles/default-k8s-diff-port-131370/client.crt: no such file or directory" logger="UnhandledError"
E0917 18:50:59.854136  299255 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19662-293874/.minikube/profiles/default-k8s-diff-port-131370/client.crt: no such file or directory" logger="UnhandledError"
E0917 18:51:00.020546  299255 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19662-293874/.minikube/profiles/default-k8s-diff-port-131370/client.crt: no such file or directory" logger="UnhandledError"
E0917 18:51:00.342667  299255 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19662-293874/.minikube/profiles/default-k8s-diff-port-131370/client.crt: no such file or directory" logger="UnhandledError"
E0917 18:51:00.985422  299255 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19662-293874/.minikube/profiles/default-k8s-diff-port-131370/client.crt: no such file or directory" logger="UnhandledError"
E0917 18:51:02.266727  299255 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19662-293874/.minikube/profiles/default-k8s-diff-port-131370/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 10.003786524s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (10.34s)

                                                
                                    
=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-335096 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.22s)

                                                
                                    
=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:194: (dbg) Run:  kubectl --context enable-default-cni-335096 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.20s)

                                                
                                    
=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:264: (dbg) Run:  kubectl --context enable-default-cni-335096 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.19s)

                                                
                                    
=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:344: "kube-flannel-ds-t9csp" [110fbe53-2f4c-4468-9d3b-4675a04dda9f] Running
E0917 18:51:20.195550  299255 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19662-293874/.minikube/profiles/default-k8s-diff-port-131370/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 6.004206937s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

                                                
                                    
=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p flannel-335096 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.36s)

                                                
                                    
=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-335096 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-rh2gz" [977258f8-c203-4214-beee-c2d93ca036ce] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-rh2gz" [977258f8-c203-4214-beee-c2d93ca036ce] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 10.003712418s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (10.36s)

                                                
                                    
=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p bridge-335096 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p bridge-335096 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=containerd: (1m17.6199523s)
--- PASS: TestNetworkPlugins/group/bridge/Start (77.62s)

                                                
                                    
=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context flannel-335096 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.23s)

                                                
                                    
=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context flannel-335096 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.17s)

                                                
                                    
=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context flannel-335096 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.21s)

                                                
                                    
=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p bridge-335096 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.28s)

                                                
                                    
=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context bridge-335096 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-2rvvc" [3007df5f-1122-4d38-946a-6e4b94a2c29d] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E0917 18:52:44.366111  299255 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19662-293874/.minikube/profiles/kindnet-335096/client.crt: no such file or directory" logger="UnhandledError"
E0917 18:52:44.372790  299255 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19662-293874/.minikube/profiles/kindnet-335096/client.crt: no such file or directory" logger="UnhandledError"
E0917 18:52:44.384624  299255 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19662-293874/.minikube/profiles/kindnet-335096/client.crt: no such file or directory" logger="UnhandledError"
E0917 18:52:44.406157  299255 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19662-293874/.minikube/profiles/kindnet-335096/client.crt: no such file or directory" logger="UnhandledError"
E0917 18:52:44.447706  299255 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19662-293874/.minikube/profiles/kindnet-335096/client.crt: no such file or directory" logger="UnhandledError"
E0917 18:52:44.529148  299255 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19662-293874/.minikube/profiles/kindnet-335096/client.crt: no such file or directory" logger="UnhandledError"
E0917 18:52:44.690762  299255 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19662-293874/.minikube/profiles/kindnet-335096/client.crt: no such file or directory" logger="UnhandledError"
E0917 18:52:45.026786  299255 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19662-293874/.minikube/profiles/kindnet-335096/client.crt: no such file or directory" logger="UnhandledError"
E0917 18:52:45.668474  299255 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19662-293874/.minikube/profiles/kindnet-335096/client.crt: no such file or directory" logger="UnhandledError"
E0917 18:52:46.950344  299255 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19662-293874/.minikube/profiles/kindnet-335096/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:344: "netcat-6fc964789b-2rvvc" [3007df5f-1122-4d38-946a-6e4b94a2c29d] Running
E0917 18:52:49.512376  299255 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19662-293874/.minikube/profiles/kindnet-335096/client.crt: no such file or directory" logger="UnhandledError"
E0917 18:52:51.704986  299255 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19662-293874/.minikube/profiles/old-k8s-version-603581/client.crt: no such file or directory" logger="UnhandledError"
E0917 18:52:52.029763  299255 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19662-293874/.minikube/profiles/auto-335096/client.crt: no such file or directory" logger="UnhandledError"
E0917 18:52:52.036293  299255 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19662-293874/.minikube/profiles/auto-335096/client.crt: no such file or directory" logger="UnhandledError"
E0917 18:52:52.047718  299255 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19662-293874/.minikube/profiles/auto-335096/client.crt: no such file or directory" logger="UnhandledError"
E0917 18:52:52.069303  299255 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19662-293874/.minikube/profiles/auto-335096/client.crt: no such file or directory" logger="UnhandledError"
E0917 18:52:52.110924  299255 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19662-293874/.minikube/profiles/auto-335096/client.crt: no such file or directory" logger="UnhandledError"
E0917 18:52:52.192419  299255 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19662-293874/.minikube/profiles/auto-335096/client.crt: no such file or directory" logger="UnhandledError"
E0917 18:52:52.354110  299255 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19662-293874/.minikube/profiles/auto-335096/client.crt: no such file or directory" logger="UnhandledError"
E0917 18:52:52.675818  299255 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19662-293874/.minikube/profiles/auto-335096/client.crt: no such file or directory" logger="UnhandledError"
E0917 18:52:53.317329  299255 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19662-293874/.minikube/profiles/auto-335096/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 10.003895936s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (10.27s)

                                                
                                    
=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:175: (dbg) Run:  kubectl --context bridge-335096 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.17s)

                                                
                                    
=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:194: (dbg) Run:  kubectl --context bridge-335096 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.15s)

                                                
                                    
=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:264: (dbg) Run:  kubectl --context bridge-335096 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.14s)

Test skip (28/328)

=== RUN   TestDownloadOnly/v1.20.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.20.0/cached-images (0.00s)

                                                
                                    
=== RUN   TestDownloadOnly/v1.20.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.20.0/binaries (0.00s)

                                                
                                    
=== RUN   TestDownloadOnly/v1.20.0/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.20.0/kubectl (0.00s)

                                                
                                    
=== RUN   TestDownloadOnly/v1.31.1/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.31.1/cached-images (0.00s)

                                                
                                    
=== RUN   TestDownloadOnly/v1.31.1/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.31.1/binaries (0.00s)

                                                
                                    
=== RUN   TestDownloadOnly/v1.31.1/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.31.1/kubectl (0.00s)

                                                
                                    
=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:232: (dbg) Run:  out/minikube-linux-arm64 start --download-only -p download-docker-569909 --alsologtostderr --driver=docker  --container-runtime=containerd
aaa_download_only_test.go:244: Skip for arm64 platform. See https://github.com/kubernetes/minikube/issues/10144
helpers_test.go:175: Cleaning up "download-docker-569909" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p download-docker-569909
--- SKIP: TestDownloadOnlyKic (0.54s)

                                                
                                    
=== RUN   TestOffline
=== PAUSE TestOffline
=== CONT  TestOffline
aab_offline_test.go:35: skipping TestOffline - only docker runtime supported on arm64. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestOffline (0.00s)

                                                
                                    
=== RUN   TestAddons/parallel/HelmTiller
=== PAUSE TestAddons/parallel/HelmTiller
=== CONT  TestAddons/parallel/HelmTiller
addons_test.go:446: skip Helm test on arm64
--- SKIP: TestAddons/parallel/HelmTiller (0.00s)

                                                
                                    
=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm
=== CONT  TestAddons/parallel/Olm
addons_test.go:500: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

                                                
                                    
=== RUN   TestDockerFlags
docker_test.go:41: skipping: only runs with docker container runtime, currently testing containerd
--- SKIP: TestDockerFlags (0.00s)

                                                
                                    
=== RUN   TestKVMDriverInstallOrUpdate
driver_install_or_update_test.go:45: Skip if arm64. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestKVMDriverInstallOrUpdate (0.00s)

                                                
                                    
=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:105: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

                                                
                                    
=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:169: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

                                                
                                    
=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1787: arm64 is not supported by mysql. Skip the test. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestFunctional/parallel/MySQL (0.00s)

                                                
                                    
=== RUN   TestFunctional/parallel/DockerEnv
=== PAUSE TestFunctional/parallel/DockerEnv
=== CONT  TestFunctional/parallel/DockerEnv
functional_test.go:463: only validate docker env with docker container runtime, currently testing containerd
--- SKIP: TestFunctional/parallel/DockerEnv (0.00s)

                                                
                                    
=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv
=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:550: only validate podman env with docker container runtime, currently testing containerd
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

                                                
                                    
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.00s)

                                                
                                    
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.00s)

                                                
                                    
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.00s)

                                                
                                    
=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

                                                
                                    
=== RUN   TestImageBuild
image_test.go:33: 
--- SKIP: TestImageBuild (0.00s)

                                                
                                    
=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

                                                
                                    
=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

                                                
                                    
=== RUN   TestSkaffold
skaffold_test.go:45: skaffold requires docker-env, currently testing containerd container runtime
--- SKIP: TestSkaffold (0.00s)

                                                
                                    
=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts
=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:103: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-905609" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p disable-driver-mounts-905609
--- SKIP: TestStartStop/group/disable-driver-mounts (0.18s)

                                                
                                    
=== RUN   TestNetworkPlugins/group/kubenet
net_test.go:93: Skipping the test as containerd container runtimes requires CNI
panic.go:629: 
----------------------- debugLogs start: kubenet-335096 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-335096

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: kubenet-335096

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-335096

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: kubenet-335096

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: kubenet-335096

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: kubenet-335096

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: kubenet-335096

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: kubenet-335096

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: kubenet-335096

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: kubenet-335096

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "kubenet-335096" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-335096"

>>> host: /etc/hosts:
* Profile "kubenet-335096" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-335096"

>>> host: /etc/resolv.conf:
* Profile "kubenet-335096" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-335096"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: kubenet-335096

>>> host: crictl pods:
* Profile "kubenet-335096" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-335096"

>>> host: crictl containers:
* Profile "kubenet-335096" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-335096"

>>> k8s: describe netcat deployment:
error: context "kubenet-335096" does not exist

>>> k8s: describe netcat pod(s):
error: context "kubenet-335096" does not exist

>>> k8s: netcat logs:
error: context "kubenet-335096" does not exist

>>> k8s: describe coredns deployment:
error: context "kubenet-335096" does not exist

>>> k8s: describe coredns pods:
error: context "kubenet-335096" does not exist

>>> k8s: coredns logs:
error: context "kubenet-335096" does not exist

>>> k8s: describe api server pod(s):
error: context "kubenet-335096" does not exist

>>> k8s: api server logs:
error: context "kubenet-335096" does not exist

>>> host: /etc/cni:
* Profile "kubenet-335096" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-335096"

>>> host: ip a s:
* Profile "kubenet-335096" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-335096"

>>> host: ip r s:
* Profile "kubenet-335096" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-335096"

>>> host: iptables-save:
* Profile "kubenet-335096" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-335096"

>>> host: iptables table nat:
* Profile "kubenet-335096" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-335096"

>>> k8s: describe kube-proxy daemon set:
error: context "kubenet-335096" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "kubenet-335096" does not exist

>>> k8s: kube-proxy logs:
error: context "kubenet-335096" does not exist

>>> host: kubelet daemon status:
* Profile "kubenet-335096" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-335096"

>>> host: kubelet daemon config:
* Profile "kubenet-335096" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-335096"

>>> k8s: kubelet logs:
* Profile "kubenet-335096" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-335096"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "kubenet-335096" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-335096"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "kubenet-335096" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-335096"

>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/19662-293874/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Tue, 17 Sep 2024 18:28:16 UTC
        provider: minikube.sigs.k8s.io
        version: v1.34.0
      name: cluster_info
    server: https://192.168.85.2:8443
  name: pause-716000
contexts:
- context:
    cluster: pause-716000
    extensions:
    - extension:
        last-update: Tue, 17 Sep 2024 18:28:16 UTC
        provider: minikube.sigs.k8s.io
        version: v1.34.0
      name: context_info
    namespace: default
    user: pause-716000
  name: pause-716000
current-context: ""
kind: Config
preferences: {}
users:
- name: pause-716000
  user:
    client-certificate: /home/jenkins/minikube-integration/19662-293874/.minikube/profiles/pause-716000/client.crt
    client-key: /home/jenkins/minikube-integration/19662-293874/.minikube/profiles/pause-716000/client.key

>>> k8s: cms:
Error in configuration: context was not found for specified context: kubenet-335096

>>> host: docker daemon status:
* Profile "kubenet-335096" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-335096"

>>> host: docker daemon config:
* Profile "kubenet-335096" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-335096"

>>> host: /etc/docker/daemon.json:
* Profile "kubenet-335096" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-335096"

>>> host: docker system info:
* Profile "kubenet-335096" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-335096"

>>> host: cri-docker daemon status:
* Profile "kubenet-335096" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-335096"

>>> host: cri-docker daemon config:
* Profile "kubenet-335096" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-335096"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "kubenet-335096" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-335096"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "kubenet-335096" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-335096"

>>> host: cri-dockerd version:
* Profile "kubenet-335096" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-335096"

>>> host: containerd daemon status:
* Profile "kubenet-335096" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-335096"

>>> host: containerd daemon config:
* Profile "kubenet-335096" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-335096"

>>> host: /lib/systemd/system/containerd.service:
* Profile "kubenet-335096" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-335096"

>>> host: /etc/containerd/config.toml:
* Profile "kubenet-335096" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-335096"

>>> host: containerd config dump:
* Profile "kubenet-335096" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-335096"

>>> host: crio daemon status:
* Profile "kubenet-335096" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-335096"

>>> host: crio daemon config:
* Profile "kubenet-335096" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-335096"

>>> host: /etc/crio:
* Profile "kubenet-335096" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-335096"

>>> host: crio config:
* Profile "kubenet-335096" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-335096"

----------------------- debugLogs end: kubenet-335096 [took: 3.28274824s] --------------------------------
helpers_test.go:175: Cleaning up "kubenet-335096" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p kubenet-335096
--- SKIP: TestNetworkPlugins/group/kubenet (3.44s)

TestNetworkPlugins/group/cilium (3.87s)

=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:629: 
----------------------- debugLogs start: cilium-335096 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-335096

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-335096

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-335096

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-335096

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-335096

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-335096

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-335096

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-335096

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-335096

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-335096

>>> host: /etc/nsswitch.conf:
* Profile "cilium-335096" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-335096"

>>> host: /etc/hosts:
* Profile "cilium-335096" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-335096"

>>> host: /etc/resolv.conf:
* Profile "cilium-335096" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-335096"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-335096

>>> host: crictl pods:
* Profile "cilium-335096" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-335096"

>>> host: crictl containers:
* Profile "cilium-335096" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-335096"

>>> k8s: describe netcat deployment:
error: context "cilium-335096" does not exist

>>> k8s: describe netcat pod(s):
error: context "cilium-335096" does not exist

>>> k8s: netcat logs:
error: context "cilium-335096" does not exist

>>> k8s: describe coredns deployment:
error: context "cilium-335096" does not exist

>>> k8s: describe coredns pods:
error: context "cilium-335096" does not exist

>>> k8s: coredns logs:
error: context "cilium-335096" does not exist

>>> k8s: describe api server pod(s):
error: context "cilium-335096" does not exist

>>> k8s: api server logs:
error: context "cilium-335096" does not exist

>>> host: /etc/cni:
* Profile "cilium-335096" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-335096"

>>> host: ip a s:
* Profile "cilium-335096" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-335096"

>>> host: ip r s:
* Profile "cilium-335096" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-335096"

>>> host: iptables-save:
* Profile "cilium-335096" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-335096"

>>> host: iptables table nat:
* Profile "cilium-335096" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-335096"

>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-335096

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-335096

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-335096" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-335096" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-335096

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-335096

                                                
                                                

                                                
                                                
>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-335096" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-335096" does not exist

>>> k8s: describe kube-proxy daemon set:
error: context "cilium-335096" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "cilium-335096" does not exist

>>> k8s: kube-proxy logs:
error: context "cilium-335096" does not exist

>>> host: kubelet daemon status:
* Profile "cilium-335096" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-335096"

>>> host: kubelet daemon config:
* Profile "cilium-335096" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-335096"

>>> k8s: kubelet logs:
* Profile "cilium-335096" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-335096"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-335096" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-335096"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-335096" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-335096"

>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/19662-293874/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Tue, 17 Sep 2024 18:28:16 UTC
        provider: minikube.sigs.k8s.io
        version: v1.34.0
      name: cluster_info
    server: https://192.168.85.2:8443
  name: pause-716000
contexts:
- context:
    cluster: pause-716000
    extensions:
    - extension:
        last-update: Tue, 17 Sep 2024 18:28:16 UTC
        provider: minikube.sigs.k8s.io
        version: v1.34.0
      name: context_info
    namespace: default
    user: pause-716000
  name: pause-716000
current-context: ""
kind: Config
preferences: {}
users:
- name: pause-716000
  user:
    client-certificate: /home/jenkins/minikube-integration/19662-293874/.minikube/profiles/pause-716000/client.crt
    client-key: /home/jenkins/minikube-integration/19662-293874/.minikube/profiles/pause-716000/client.key

>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-335096

>>> host: docker daemon status:
* Profile "cilium-335096" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-335096"

>>> host: docker daemon config:
* Profile "cilium-335096" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-335096"

>>> host: /etc/docker/daemon.json:
* Profile "cilium-335096" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-335096"

>>> host: docker system info:
* Profile "cilium-335096" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-335096"

>>> host: cri-docker daemon status:
* Profile "cilium-335096" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-335096"

>>> host: cri-docker daemon config:
* Profile "cilium-335096" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-335096"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-335096" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-335096"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-335096" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-335096"

>>> host: cri-dockerd version:
* Profile "cilium-335096" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-335096"

>>> host: containerd daemon status:
* Profile "cilium-335096" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-335096"

>>> host: containerd daemon config:
* Profile "cilium-335096" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-335096"

>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-335096" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-335096"

>>> host: /etc/containerd/config.toml:
* Profile "cilium-335096" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-335096"

>>> host: containerd config dump:
* Profile "cilium-335096" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-335096"

>>> host: crio daemon status:
* Profile "cilium-335096" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-335096"

>>> host: crio daemon config:
* Profile "cilium-335096" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-335096"

>>> host: /etc/crio:
* Profile "cilium-335096" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-335096"

>>> host: crio config:
* Profile "cilium-335096" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-335096"

----------------------- debugLogs end: cilium-335096 [took: 3.665306502s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-335096" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p cilium-335096
--- SKIP: TestNetworkPlugins/group/cilium (3.87s)
