// SPDX-License-Identifier: GPL-2.0-or-later
/*
 * HID driver for Kye/Genius devices not fully compliant with HID standard
 *
 * Copyright (c) 2009 Jiri Kosina
 * Copyright (c) 2009 Tomas Hanak
 * Copyright (c) 2012 Nikolai Kondrashov
 * Copyright (c) 2023 David Yang
 */

#include <linux/unaligned.h>
#include <linux/device.h>
#include <linux/hid.h>
#include <linux/module.h>

#include "hid-ids.h"

/* Data gathered from Database/VID0458_PID????/Vista/TBoard/default.xml in ioTablet driver
 *
 * TODO:
 *  - Add battery and sleep support for EasyPen M406W and MousePen M508WX
 *  - Investigate ScrollZ.MiceFMT buttons of EasyPen M406
 */

static const __u8 easypen_m406_control_rdesc[] = {
	0x05, 0x0C, /* Usage Page (Consumer), */
	0x09, 0x01, /* Usage (Consumer Control), */
	0xA1, 0x01, /* Collection (Application), */
	0x85, 0x12, /* Report ID (18), */
	0x0A, 0x45, 0x02, /* Usage (AC Rotate), */
	0x09, 0x40, /* Usage (Menu), */
	0x0A, 0x2F, 0x02, /* Usage (AC Zoom), */
	0x0A, 0x46, 0x02, /* Usage (AC Resize), */
	0x0A, 0x1A, 0x02, /* Usage (AC Undo), */
	0x0A, 0x6A, 0x02, /* Usage (AC Delete), */
	0x0A, 0x24, 0x02, /* Usage (AC Back), */
	0x0A, 0x25, 0x02, /* Usage (AC Forward), */
	0x14, /* Logical Minimum (0), */
	0x25, 0x01, /* Logical Maximum (1), */
	0x75, 0x01, /* Report Size (1), */
	0x95, 0x08, /* Report Count (8), */
	0x81, 0x02, /* Input (Variable), */
	0x95, 0x30, /* Report Count (48), */
	0x81, 0x01, /* Input (Constant), */
	0xC0 /* End Collection */
};

static const __u8 easypen_m506_control_rdesc[] = {
	0x05, 0x0C, /* Usage Page (Consumer), */
	0x09, 0x01, /* Usage (Consumer Control), */
	0xA1, 0x01, /* Collection (Application), */
	0x85, 0x12, /* Report ID (18), */
	0x0A, 0x6A, 0x02, /* Usage (AC Delete), */
	0x0A, 0x1A, 0x02, /* Usage (AC Undo), */
	0x0A, 0x2D, 0x02, /* Usage (AC Zoom In), */
	0x0A, 0x2E, 0x02, /* Usage (AC Zoom Out), */
	0x14, /* Logical Minimum (0), */
	0x25, 0x01, /* Logical Maximum (1), */
	0x75, 0x01, /* Report Size (1), */
	0x95, 0x04, /* Report Count (4), */
	0x81, 0x02, /* Input (Variable), */
	0x95, 0x34, /* Report Count (52), */
	0x81, 0x01, /* Input (Constant), */
	0xC0 /* End Collection */
};

static const __u8 easypen_m406w_control_rdesc[] = {
	0x05, 0x0C, /* Usage Page (Consumer), */
	0x09, 0x01, /* Usage (Consumer Control), */
	0xA1, 0x01, /* Collection (Application), */
	0x85, 0x12, /* Report ID (18), */
	0x0A, 0x6A, 0x02, /* Usage (AC Delete), */
	0x0A, 0x1A, 0x02, /* Usage (AC Undo), */
	0x0A, 0x01, 0x02, /* Usage (AC New), */
	0x09, 0x40, /* Usage (Menu), */
	0x14, /* Logical Minimum (0), */
	0x25, 0x01, /* Logical Maximum (1), */
	0x75, 0x01, /* Report Size (1), */
	0x95, 0x04, /* Report Count (4), */
	0x81, 0x02, /* Input (Variable), */
	0x95, 0x34, /* Report Count (52), */
	0x81, 0x01, /* Input (Constant), */
	0xC0 /* End Collection */
};

static const __u8 easypen_m610x_control_rdesc[] = {
	0x05, 0x0C, /* Usage Page (Consumer), */
	0x09, 0x01, /* Usage (Consumer Control), */
	0xA1, 0x01, /* Collection (Application), */
	0x85, 0x12, /* Report ID (18), */
	0x0A, 0x1A, 0x02, /* Usage (AC Undo), */
	0x0A, 0x79, 0x02, /* Usage (AC Redo Or Repeat), */
	0x0A, 0x2D, 0x02, /* Usage (AC Zoom In), */
	0x0A, 0x2E, 0x02, /* Usage (AC Zoom Out), */
	0x14, /* Logical Minimum (0), */
	0x25, 0x01, /* Logical Maximum (1), */
	0x75, 0x01, /* Report Size (1), */
	0x95, 0x04, /* Report Count (4), */
	0x81, 0x02, /* Input (Variable), */
	0x95, 0x34, /* Report Count (52), */
	0x81, 0x01, /* Input (Constant), */
	0xC0 /* End Collection */
};

static const __u8 pensketch_m912_control_rdesc[] = {
	0x05, 0x0C, /* Usage Page (Consumer), */
	0x09, 0x01, /* Usage (Consumer Control), */
	0xA1, 0x01, /* Collection (Application), */
	0x85, 0x12, /* Report ID (18), */
	0x14, /* Logical Minimum (0), */
	0x25, 0x01, /* Logical Maximum (1), */
	0x75, 0x01, /* Report Size (1), */
	0x95, 0x08, /* Report Count (8), */
	0x05, 0x0C, /* Usage Page (Consumer), */
	0x0A, 0x6A, 0x02, /* Usage (AC Delete), */
	0x0A, 0x1A, 0x02, /* Usage (AC Undo), */
	0x0A, 0x01, 0x02, /* Usage (AC New), */
	0x0A, 0x2F, 0x02, /* Usage (AC Zoom), */
	0x0A, 0x25, 0x02, /* Usage (AC Forward), */
	0x0A, 0x24, 0x02, /* Usage (AC Back), */
	0x0A, 0x2D, 0x02, /* Usage (AC Zoom In), */
	0x0A, 0x2E, 0x02, /* Usage (AC Zoom Out), */
	0x81, 0x02, /* Input (Variable), */
	0x95, 0x30, /* Report Count (48), */
	0x81, 0x03, /* Input (Constant, Variable), */
	0xC0 /* End Collection */
};

static const __u8 mousepen_m508wx_control_rdesc[] = {
	0x05, 0x0C, /* Usage Page (Consumer), */
	0x09, 0x01, /* Usage (Consumer Control), */
	0xA1, 0x01, /* Collection (Application), */
	0x85, 0x12, /* Report ID (18), */
	0x0A, 0x1A, 0x02, /* Usage (AC Undo), */
	0x0A, 0x6A, 0x02, /* Usage (AC Delete), */
	0x0A, 0x2D, 0x02, /* Usage (AC Zoom In), */
	0x0A, 0x2E, 0x02, /* Usage (AC Zoom Out), */
	0x14, /* Logical Minimum (0), */
	0x25, 0x01, /* Logical Maximum (1), */
	0x75, 0x01, /* Report Size (1), */
	0x95, 0x04, /* Report Count (4), */
	0x81, 0x02, /* Input (Variable), */
	0x95, 0x34, /* Report Count (52), */
	0x81, 0x01, /* Input (Constant), */
	0xC0 /* End Collection */
};

static const __u8 mousepen_m508x_control_rdesc[] = {
	0x05, 0x0C, /* Usage Page (Consumer), */
	0x09, 0x01, /* Usage (Consumer Control), */
	0xA1, 0x01, /* Collection (Application), */
	0x85, 0x12, /* Report ID (18), */
	0x0A, 0x01, 0x02, /* Usage (AC New), */
	0x09, 0x40, /* Usage (Menu), */
	0x0A, 0x6A, 0x02, /* Usage (AC Delete), */
	0x0A, 0x1A, 0x02, /* Usage (AC Undo), */
	0x14, /* Logical Minimum (0), */
	0x25, 0x01, /* Logical Maximum (1), */
	0x75, 0x01, /* Report Size (1), */
	0x95, 0x04, /* Report Count (4), */
	0x81, 0x02, /* Input (Variable), */
	0x81, 0x01, /* Input (Constant), */
	0x15, 0xFF, /* Logical Minimum (-1), */
	0x95, 0x10, /* Report Count (16), */
	0x81, 0x01, /* Input (Constant), */
	0x0A, 0x35, 0x02, /* Usage (AC Scroll), */
	0x0A, 0x2F, 0x02, /* Usage (AC Zoom), */
	0x0A, 0x38, 0x02, /* Usage (AC Pan), */
	0x75, 0x08, /* Report Size (8), */
	0x95, 0x03, /* Report Count (3), */
	0x81, 0x06, /* Input (Variable, Relative), */
	0x95, 0x01, /* Report Count (1), */
	0x81, 0x01, /* Input (Constant), */
	0xC0 /* End Collection */
};

static const __u8 easypen_m406xe_control_rdesc[] = {
	0x05, 0x0C, /* Usage Page (Consumer), */
	0x09, 0x01, /* Usage (Consumer Control), */
	0xA1, 0x01, /* Collection (Application), */
	0x85, 0x12, /* Report ID (18), */
	0x14, /* Logical Minimum (0), */
	0x25, 0x01, /* Logical Maximum (1), */
	0x75, 0x01, /* Report Size (1), */
	0x95, 0x04, /* Report Count (4), */
	0x0A, 0x79, 0x02, /* Usage (AC Redo Or Repeat), */
	0x0A, 0x1A, 0x02, /* Usage (AC Undo), */
	0x0A, 0x2D, 0x02, /* Usage (AC Zoom In), */
	0x0A, 0x2E, 0x02, /* Usage (AC Zoom Out), */
	0x81, 0x02, /* Input (Variable), */
	0x95, 0x34, /* Report Count (52), */
	0x81, 0x03, /* Input (Constant, Variable), */
	0xC0 /* End Collection */
};

static const __u8 pensketch_t609a_control_rdesc[] = {
	0x05, 0x0C, /* Usage Page (Consumer), */
	0x09, 0x01, /* Usage (Consumer Control), */
	0xA1, 0x01, /* Collection (Application), */
	0x85, 0x12, /* Report ID (18), */
	0x0A, 0x6A, 0x02, /* Usage (AC Delete), */
	0x14, /* Logical Minimum (0), */
	0x25, 0x01, /* Logical Maximum (1), */
	0x75, 0x01, /* Report Size (1), */
	0x95, 0x08, /* Report Count (8), */
	0x81, 0x02, /* Input (Variable), */
	0x95, 0x37, /* Report Count (55), */
	0x81, 0x01, /* Input (Constant), */
	0xC0 /* End Collection */
};

/* Fix indexes in kye_tablet_fixup() if you change this */
static const __u8 kye_tablet_rdesc[] = {
	0x06, 0x00, 0xFF, /* Usage Page (FF00h), */
	0x09, 0x01, /* Usage (01h), */
	0xA1, 0x01, /* Collection (Application), */
	0x85, 0x05, /* Report ID (5), */
	0x09, 0x01, /* Usage (01h), */
	0x15, 0x81, /* Logical Minimum (-127), */
	0x25, 0x7F, /* Logical Maximum (127), */
	0x75, 0x08, /* Report Size (8), */
	0x95, 0x07, /* Report Count (7), */
	0xB1, 0x02, /* Feature (Variable), */
	0xC0, /* End Collection, */
	0x05, 0x0D, /* Usage Page (Digitizer), */
	0x09, 0x01, /* Usage (Digitizer), */
	0xA1, 0x01, /* Collection (Application), */
	0x85, 0x10, /* Report ID (16), */
	0x09, 0x20, /* Usage (Stylus), */
	0xA0, /* Collection (Physical), */
	0x09, 0x42, /* Usage (Tip Switch), */
	0x09, 0x44, /* Usage (Barrel Switch), */
	0x09, 0x46, /* Usage (Tablet Pick), */
	0x14, /* Logical Minimum (0), */
	0x25, 0x01, /* Logical Maximum (1), */
	0x75, 0x01, /* Report Size (1), */
	0x95, 0x03, /* Report Count (3), */
	0x81, 0x02, /* Input (Variable), */
	0x95, 0x04, /* Report Count (4), */
	0x81, 0x01, /* Input (Constant), */
	0x09, 0x32, /* Usage (In Range), */
	0x95, 0x01, /* Report Count (1), */
	0x81, 0x02, /* Input (Variable), */
	0x75, 0x10, /* Report Size (16), */
	0xA4, /* Push, */
	0x05, 0x01, /* Usage Page (Desktop), */
	0x09, 0x30, /* Usage (X), */
	0x27, 0xFF, 0x7F, 0x00, 0x00, /* Logical Maximum (32767), */
	0x34, /* Physical Minimum (0), */
	0x47, 0x00, 0x00, 0x00, 0x00, /* Physical Maximum (0), */
	0x65, 0x11, /* Unit (Centimeter), */
	0x55, 0x00, /* Unit Exponent (0), */
	0x75, 0x10, /* Report Size (16), */
	0x81, 0x02, /* Input (Variable), */
	0x09, 0x31, /* Usage (Y), */
	0x27, 0xFF, 0x7F, 0x00, 0x00, /* Logical Maximum (32767), */
	0x47, 0x00, 0x00, 0x00, 0x00, /* Physical Maximum (0), */
	0x81, 0x02, /* Input (Variable), */
	0xB4, /* Pop, */
	0x05, 0x0D, /* Usage Page (Digitizer), */
	0x09, 0x30, /* Usage (Tip Pressure), */
	0x27, 0xFF, 0x07, 0x00, 0x00, /* Logical Maximum (2047), */
	0x81, 0x02, /* Input (Variable), */
	0xC0, /* End Collection, */
	0xC0 /* End Collection, */
};

/* Fix indexes in kye_tablet_fixup() if you change this */
static const __u8 kye_tablet_mouse_rdesc[] = {
	0x05, 0x01, /* Usage Page (Desktop), */
	0x09, 0x02, /* Usage (Mouse), */
	0xA1, 0x01, /* Collection (Application), */
	0x85, 0x11, /* Report ID (17), */
	0x09, 0x01, /* Usage (Pointer), */
	0xA0, /* Collection (Physical), */
	0x05, 0x09, /* Usage Page (Button), */
	0x19, 0x01, /* Usage Minimum (01h), */
	0x29, 0x03, /* Usage Maximum (03h), */
	0x14, /* Logical Minimum (0), */
	0x25, 0x01, /* Logical Maximum (1), */
	0x75, 0x01, /* Report Size (1), */
	0x95, 0x03, /* Report Count (3), */
	0x81, 0x02, /* Input (Variable), */
	0x95, 0x04, /* Report Count (4), */
	0x81, 0x01, /* Input (Constant), */
	0x05, 0x0D, /* Usage Page (Digitizer), */
	0x09, 0x37, /* Usage (Data Valid), */
	0x95, 0x01, /* Report Count (1), */
	0x81, 0x02, /* Input (Variable), */
	0x05, 0x01, /* Usage Page (Desktop), */
	0xA4, /* Push, */
	0x09, 0x30, /* Usage (X), */
	0x27, 0xFF, 0x7F, 0x00, 0x00, /* Logical Maximum (32767), */
	0x34, /* Physical Minimum (0), */
	0x47, 0x00, 0x00, 0x00, 0x00, /* Physical Maximum (0), */
	0x65, 0x11, /* Unit (Centimeter), */
	0x55, 0x00, /* Unit Exponent (0), */
	0x75, 0x10, /* Report Size (16), */
	0x81, 0x02, /* Input (Variable), */
	0x09, 0x31, /* Usage (Y), */
	0x27, 0xFF, 0x7F, 0x00, 0x00, /* Logical Maximum (32767), */
	0x47, 0x00, 0x00, 0x00, 0x00, /* Physical Maximum (0), */
	0x81, 0x02, /* Input (Variable), */
	0xB4, /* Pop, */
	0x09, 0x38, /* Usage (Wheel), */
	0x15, 0xFF, /* Logical Minimum (-1), */
	0x75, 0x08, /* Report Size (8), */
	0x95, 0x01, /* Report Count (1), */
	0x81, 0x06, /* Input (Variable, Relative), */
	0x81, 0x01, /* Input (Constant), */
	0xC0, /* End Collection, */
	0xC0 /* End Collection */
};

static const struct kye_tablet_info {
	__u32 product;
	__s32 x_logical_maximum;
	__s32 y_logical_maximum;
	__s32 pressure_logical_maximum;
	__s32 x_physical_maximum;
	__s32 y_physical_maximum;
	__s8 unit_exponent;
	__s8 unit;
	bool has_mouse;
	unsigned int control_rsize;
	const __u8 *control_rdesc;
} kye_tablets_info[] = {
	{USB_DEVICE_ID_KYE_EASYPEN_M406, /* 0x5005 */
	 15360, 10240, 1023, 6, 4, 0, 0x13, false,
	 sizeof(easypen_m406_control_rdesc), easypen_m406_control_rdesc},
	{USB_DEVICE_ID_KYE_EASYPEN_M506, /* 0x500F */
	 24576, 20480, 1023, 6, 5, 0, 0x13, false,
	 sizeof(easypen_m506_control_rdesc), easypen_m506_control_rdesc},
	{USB_DEVICE_ID_KYE_EASYPEN_I405X, /* 0x5010 */
	 14080, 10240, 1023, 55, 40, -1, 0x13, false},
	{USB_DEVICE_ID_KYE_MOUSEPEN_I608X, /* 0x5011 */
	 20480, 15360, 2047, 8, 6, 0, 0x13, true},
	{USB_DEVICE_ID_KYE_EASYPEN_M406W, /* 0x5012 */
	 15360, 10240, 1023, 6, 4, 0, 0x13, false,
	 sizeof(easypen_m406w_control_rdesc), easypen_m406w_control_rdesc},
	{USB_DEVICE_ID_KYE_EASYPEN_M610X, /* 0x5013 */
	 40960, 25600, 1023, 1000, 625, -2, 0x13, false,
	 sizeof(easypen_m610x_control_rdesc), easypen_m610x_control_rdesc},
	{USB_DEVICE_ID_KYE_EASYPEN_340, /* 0x5014 */
	 10240, 7680, 1023, 4, 3, 0, 0x13, false},
	{USB_DEVICE_ID_KYE_PENSKETCH_M912, /* 0x5015 */
	 61440, 46080, 2047, 12, 9, 0, 0x13, true,
	 sizeof(pensketch_m912_control_rdesc), pensketch_m912_control_rdesc},
	{USB_DEVICE_ID_KYE_MOUSEPEN_M508WX, /* 0x5016 */
	 40960, 25600, 2047, 8, 5, 0, 0x13, true,
	 sizeof(mousepen_m508wx_control_rdesc), mousepen_m508wx_control_rdesc},
	{USB_DEVICE_ID_KYE_MOUSEPEN_M508X, /* 0x5017 */
	 40960, 25600, 2047, 8, 5, 0, 0x13, true,
	 sizeof(mousepen_m508x_control_rdesc), mousepen_m508x_control_rdesc},
	{USB_DEVICE_ID_KYE_EASYPEN_M406XE, /* 0x5019 */
	 15360, 10240, 1023, 6, 4, 0, 0x13, false,
	 sizeof(easypen_m406xe_control_rdesc), easypen_m406xe_control_rdesc},
	{USB_DEVICE_ID_KYE_MOUSEPEN_I608X_V2, /* 0x501A */
	 40960, 30720, 2047, 8, 6, 0, 0x13, true},
	{USB_DEVICE_ID_KYE_PENSKETCH_T609A, /* 0x501B */
	 43520, 28160, 1023, 85, 55, -1, 0x13, false,
	 sizeof(pensketch_t609a_control_rdesc), pensketch_t609a_control_rdesc},
	{}
};

static __u8 *kye_consumer_control_fixup(struct hid_device *hdev, __u8 *rdesc,
		unsigned int *rsize, int offset, const char *device_name)
{
	/*
	 * The fixup that needs to be done: change the Usage Maximum in the
	 * Consumer Control (report ID 3) to a reasonable value.
	 */
	if (*rsize >= offset + 31 &&
	    /* Usage Page (Consumer Devices) */
	    rdesc[offset] == 0x05 && rdesc[offset + 1] == 0x0c &&
	    /* Usage (Consumer Control) */
	    rdesc[offset + 2] == 0x09 && rdesc[offset + 3] == 0x01 &&
	    /* Usage Maximum > 12287 */
	    rdesc[offset + 10] == 0x2a && rdesc[offset + 12] > 0x2f) {
		hid_info(hdev, "fixing up %s report descriptor\n", device_name);
		rdesc[offset + 12] = 0x2f;
	}
	return rdesc;
}

/*
 * Fix the tablet descriptor of so-called "DataFormat 2" devices.
 *
 * Though we could derive a usable descriptor from the original
 * vendor-defined one, some problems exist:
 *  - Their Logical Maximums never exceed 32767 (0x7FFF), even though the
 *    devices do report values greater than that;
 *  - Physical Maximums are arbitrarily filled in (always equal to the
 *    Logical Maximum);
 *  - Details of the control buttons are not provided (only a
 *    vendor-defined Usage Page with fixed content).
 *
 * Thus we use a pre-defined parameter table rather than digging the
 * values out of the original descriptor.
 *
 * We could also write a fallback routine for unrecognized Kye tablets,
 * but Kye is unlikely to produce new models in the foreseeable future,
 * so we simply enumerate all known models.
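 *
 * For a concrete sense of the parameters (values from kye_tablets_info
 * above): the EasyPen M610X entry sets an X Logical Maximum of 40960 and
 * an X Physical Maximum of 1000 at Unit Exponent -2 with Unit 0x13
 * (HID English Linear, i.e. inches), giving 10.00 in of travel and thus
 * 40960 / 10 = 4096 logical units per inch.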
 */
static __u8 *kye_tablet_fixup(struct hid_device *hdev, __u8 *rdesc, unsigned int *rsize)
{
	const struct kye_tablet_info *info;
	__u8 *newdesc = rdesc;

	if (*rsize < sizeof(kye_tablet_rdesc)) {
		hid_warn(hdev, "tablet report size too small, or kye_tablet_rdesc unexpectedly large\n");
		return rdesc;
	}

	for (info = kye_tablets_info; info->product; info++) {
		if (hdev->product == info->product)
			break;
	}

	if (!info->product) {
		hid_err(hdev, "unknown tablet; did someone forget to add a kye_tablet_info entry?\n");
		return rdesc;
	}

	memcpy(newdesc, kye_tablet_rdesc, sizeof(kye_tablet_rdesc));
	put_unaligned_le32(info->x_logical_maximum, newdesc + 66);
	put_unaligned_le32(info->x_physical_maximum, newdesc + 72);
	newdesc[77] = info->unit;
	newdesc[79] = info->unit_exponent;
	put_unaligned_le32(info->y_logical_maximum, newdesc + 87);
	put_unaligned_le32(info->y_physical_maximum, newdesc + 92);
	put_unaligned_le32(info->pressure_logical_maximum, newdesc + 104);
	newdesc += sizeof(kye_tablet_rdesc);

	if (info->has_mouse) {
		if (newdesc + sizeof(kye_tablet_mouse_rdesc) > rdesc + *rsize)
			hid_err(hdev, "mouse desc unexpectedly large\n");
		else {
			memcpy(newdesc, kye_tablet_mouse_rdesc, sizeof(kye_tablet_mouse_rdesc));
			put_unaligned_le32(info->x_logical_maximum, newdesc + 44);
			put_unaligned_le32(info->x_physical_maximum, newdesc + 50);
			newdesc[55] = info->unit;
			newdesc[57] = info->unit_exponent;
			put_unaligned_le32(info->y_logical_maximum, newdesc + 65);
			put_unaligned_le32(info->y_physical_maximum, newdesc + 70);
			newdesc += sizeof(kye_tablet_mouse_rdesc);
		}
	}

	if (info->control_rsize) {
		if (newdesc + info->control_rsize > rdesc + *rsize)
			hid_err(hdev, "control desc unexpectedly large\n");
		else {
			memcpy(newdesc, info->control_rdesc, info->control_rsize);
			newdesc += info->control_rsize;
		}
	}

	*rsize = newdesc - rdesc;
	return rdesc;
}

static const __u8 *kye_report_fixup(struct hid_device *hdev, __u8 *rdesc,
		unsigned int *rsize)
{
	switch (hdev->product) {
	case USB_DEVICE_ID_KYE_ERGO_525V:
		/*
		 * The fixups that need to be done:
		 *  - change led usage page to button for extra buttons
		 *  - report size 8 count 1 must be size 1 count 8 for
		 *    button bitfield
		 *  - change the button usage range to 4-7 for the extra
		 *    buttons
		 */
		if (*rsize >= 75 &&
		    rdesc[61] == 0x05 && rdesc[62] == 0x08 &&
		    rdesc[63] == 0x19 && rdesc[64] == 0x08 &&
		    rdesc[65] == 0x29 && rdesc[66] == 0x0f &&
		    rdesc[71] == 0x75 && rdesc[72] == 0x08 &&
		    rdesc[73] == 0x95 && rdesc[74] == 0x01) {
			hid_info(hdev, "fixing up Kye/Genius Ergo Mouse report descriptor\n");
			rdesc[62] = 0x09;
			rdesc[64] = 0x04;
			rdesc[66] = 0x07;
			rdesc[72] = 0x01;
			rdesc[74] = 0x08;
		}
		break;
	case USB_DEVICE_ID_GENIUS_GILA_GAMING_MOUSE:
		rdesc = kye_consumer_control_fixup(hdev, rdesc, rsize, 104,
					"Genius Gila Gaming Mouse");
		break;
	case USB_DEVICE_ID_GENIUS_MANTICORE:
		rdesc = kye_consumer_control_fixup(hdev, rdesc, rsize, 104,
					"Genius Manticore Keyboard");
		break;
	case USB_DEVICE_ID_GENIUS_GX_IMPERATOR:
		rdesc = kye_consumer_control_fixup(hdev, rdesc, rsize, 83,
					"Genius Gx Imperator Keyboard");
		break;
	case USB_DEVICE_ID_KYE_EASYPEN_M406:
	case USB_DEVICE_ID_KYE_EASYPEN_M506:
	case USB_DEVICE_ID_KYE_EASYPEN_I405X:
	case USB_DEVICE_ID_KYE_MOUSEPEN_I608X:
	case USB_DEVICE_ID_KYE_EASYPEN_M406W:
	case USB_DEVICE_ID_KYE_EASYPEN_M610X:
	case USB_DEVICE_ID_KYE_EASYPEN_340:
	case USB_DEVICE_ID_KYE_PENSKETCH_M912:
	case USB_DEVICE_ID_KYE_MOUSEPEN_M508WX:
	case USB_DEVICE_ID_KYE_MOUSEPEN_M508X:
	case USB_DEVICE_ID_KYE_EASYPEN_M406XE:
	case USB_DEVICE_ID_KYE_MOUSEPEN_I608X_V2:
	case USB_DEVICE_ID_KYE_PENSKETCH_T609A:
		rdesc = kye_tablet_fixup(hdev, rdesc, rsize);
		break;
	}
	return rdesc;
}

static int kye_tablet_enable(struct hid_device *hdev)
{
	struct list_head *list;
	struct list_head *head;
	struct hid_report *report;
	__s32 *value;

	list = &hdev->report_enum[HID_FEATURE_REPORT].report_list;
	list_for_each(head, list) {
		report = list_entry(head, struct hid_report, list);
		if (report->id == 5)
			break;
	}

	if (head == list) {
		hid_err(hdev, "tablet-enabling feature report not found\n");
		return -ENODEV;
	}

	if (report->maxfield < 1 || report->field[0]->report_count < 7) {
		hid_err(hdev, "invalid tablet-enabling feature report\n");
		return -ENODEV;
	}

	value = report->field[0]->value;

	/*
	 * These magic values are for DataFormat 2 of the config XML. They
	 * have no obvious meaning (at least none that is configurable in the
	 * Windows driver) beyond enabling the fully-functional tablet mode
	 * (absolute positioning); otherwise the tablet acts as a relative
	 * mouse.
	 *
	 * Magic values also exist for DataFormat 3 and 4, but no devices use
	 * those DataFormats.
	 */
	value[0] = 0x12;
	value[1] = 0x10;
	value[2] = 0x11;
	value[3] = 0x12;
	value[4] = 0x00;
	value[5] = 0x00;
	value[6] = 0x00;

	hid_hw_request(hdev, report, HID_REQ_SET_REPORT);
	return 0;
}

static int kye_probe(struct hid_device *hdev, const struct hid_device_id *id)
{
	int ret;

	ret = hid_parse(hdev);
	if (ret) {
		hid_err(hdev, "parse failed\n");
		goto err;
	}

	ret = hid_hw_start(hdev, HID_CONNECT_DEFAULT);
	if (ret) {
		hid_err(hdev, "hw start failed\n");
		goto err;
	}

	switch (id->product) {
	case USB_DEVICE_ID_GENIUS_MANTICORE:
		/*
		 * The manticore keyboard needs to have all the interfaces
		 * opened at least once to be fully functional.
		 */
		if (hid_hw_open(hdev))
			hid_hw_close(hdev);
		break;
	case USB_DEVICE_ID_KYE_EASYPEN_M406:
	case USB_DEVICE_ID_KYE_EASYPEN_M506:
	case USB_DEVICE_ID_KYE_EASYPEN_I405X:
	case USB_DEVICE_ID_KYE_MOUSEPEN_I608X:
	case USB_DEVICE_ID_KYE_EASYPEN_M406W:
	case USB_DEVICE_ID_KYE_EASYPEN_M610X:
	case USB_DEVICE_ID_KYE_EASYPEN_340:
	case USB_DEVICE_ID_KYE_PENSKETCH_M912:
	case USB_DEVICE_ID_KYE_MOUSEPEN_M508WX:
	case USB_DEVICE_ID_KYE_MOUSEPEN_M508X:
	case USB_DEVICE_ID_KYE_EASYPEN_M406XE:
	case USB_DEVICE_ID_KYE_MOUSEPEN_I608X_V2:
	case USB_DEVICE_ID_KYE_PENSKETCH_T609A:
		ret = kye_tablet_enable(hdev);
		if (ret) {
			hid_err(hdev, "tablet enabling failed\n");
			goto enabling_err;
		}
		break;
	}

	return 0;

enabling_err:
	hid_hw_stop(hdev);
err:
	return ret;
}

static const struct hid_device_id kye_devices[] = {
	{ HID_USB_DEVICE(USB_VENDOR_ID_KYE, USB_DEVICE_ID_KYE_ERGO_525V) },
	{ HID_USB_DEVICE(USB_VENDOR_ID_KYE, USB_DEVICE_ID_GENIUS_GILA_GAMING_MOUSE) },
	{ HID_USB_DEVICE(USB_VENDOR_ID_KYE, USB_DEVICE_ID_GENIUS_MANTICORE) },
	{ HID_USB_DEVICE(USB_VENDOR_ID_KYE, USB_DEVICE_ID_GENIUS_GX_IMPERATOR) },
	{ HID_USB_DEVICE(USB_VENDOR_ID_KYE, USB_DEVICE_ID_KYE_EASYPEN_M406) },
	{ HID_USB_DEVICE(USB_VENDOR_ID_KYE, USB_DEVICE_ID_KYE_EASYPEN_M506) },
	{ HID_USB_DEVICE(USB_VENDOR_ID_KYE, USB_DEVICE_ID_KYE_EASYPEN_I405X) },
	{ HID_USB_DEVICE(USB_VENDOR_ID_KYE, USB_DEVICE_ID_KYE_MOUSEPEN_I608X) },
	{ HID_USB_DEVICE(USB_VENDOR_ID_KYE, USB_DEVICE_ID_KYE_EASYPEN_M406W) },
	{ HID_USB_DEVICE(USB_VENDOR_ID_KYE, USB_DEVICE_ID_KYE_EASYPEN_M610X) },
	{ HID_USB_DEVICE(USB_VENDOR_ID_KYE, USB_DEVICE_ID_KYE_EASYPEN_340) },
	{ HID_USB_DEVICE(USB_VENDOR_ID_KYE, USB_DEVICE_ID_KYE_PENSKETCH_M912) },
	{ HID_USB_DEVICE(USB_VENDOR_ID_KYE, USB_DEVICE_ID_KYE_MOUSEPEN_M508WX) },
	{ HID_USB_DEVICE(USB_VENDOR_ID_KYE, USB_DEVICE_ID_KYE_MOUSEPEN_M508X) },
	{ HID_USB_DEVICE(USB_VENDOR_ID_KYE, USB_DEVICE_ID_KYE_EASYPEN_M406XE) },
	{ HID_USB_DEVICE(USB_VENDOR_ID_KYE, USB_DEVICE_ID_KYE_MOUSEPEN_I608X_V2) },
	{ HID_USB_DEVICE(USB_VENDOR_ID_KYE, USB_DEVICE_ID_KYE_PENSKETCH_T609A) },
	{ }
};
MODULE_DEVICE_TABLE(hid, kye_devices);

static struct hid_driver kye_driver = {
	.name = "kye",
	.id_table = kye_devices,
	.probe = kye_probe,
	.report_fixup = kye_report_fixup,
};
module_hid_driver(kye_driver);

MODULE_DESCRIPTION("HID driver for Kye/Genius devices not fully compliant with HID standard");
MODULE_LICENSE("GPL");
// SPDX-License-Identifier: GPL-2.0
#include <linux/pagewalk.h>
#include <linux/mm_inline.h>
#include <linux/hugetlb.h>
#include <linux/huge_mm.h>
#include <linux/mount.h>
#include <linux/ksm.h>
#include <linux/seq_file.h>
#include <linux/highmem.h>
#include <linux/ptrace.h>
#include <linux/slab.h>
#include <linux/pagemap.h>
#include <linux/mempolicy.h>
#include <linux/rmap.h>
#include <linux/swap.h>
#include <linux/sched/mm.h>
#include <linux/swapops.h>
#include <linux/mmu_notifier.h>
#include <linux/page_idle.h>
#include <linux/shmem_fs.h>
#include <linux/uaccess.h>
#include <linux/pkeys.h>
#include <linux/minmax.h>
#include <linux/overflow.h>
#include <linux/buildid.h>

#include <asm/elf.h>
#include <asm/tlb.h>
#include <asm/tlbflush.h>
#include "internal.h"

#define SEQ_PUT_DEC(str, val) \
		seq_put_decimal_ull_width(m, str, (val) << (PAGE_SHIFT-10), 8)
void task_mem(struct seq_file *m, struct mm_struct *mm)
{
	unsigned long text, lib, swap, anon, file, shmem;
	unsigned long hiwater_vm, total_vm, hiwater_rss, total_rss;

	anon = get_mm_counter(mm, MM_ANONPAGES);
	file = get_mm_counter(mm, MM_FILEPAGES);
	shmem = get_mm_counter(mm, MM_SHMEMPAGES);

	/*
	 * Note: to minimize their overhead, mm maintains hiwater_vm and
	 * hiwater_rss only when about to *lower* total_vm or rss.  Any
	 * collector of these hiwater stats must therefore get total_vm
	 * and rss too, which will usually be the higher.  Barriers? not
	 * worth the effort, such snapshots can always be inconsistent.
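	 * That is why the code below reports the larger of each fresh
	 * snapshot and its recorded hiwater mark.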
	 */
	hiwater_vm = total_vm = mm->total_vm;
	if (hiwater_vm < mm->hiwater_vm)
		hiwater_vm = mm->hiwater_vm;
	hiwater_rss = total_rss = anon + file + shmem;
	if (hiwater_rss < mm->hiwater_rss)
		hiwater_rss = mm->hiwater_rss;

	/* split executable areas between text and lib */
	text = PAGE_ALIGN(mm->end_code) - (mm->start_code & PAGE_MASK);
	text = min(text, mm->exec_vm << PAGE_SHIFT);
	lib = (mm->exec_vm << PAGE_SHIFT) - text;

	swap = get_mm_counter(mm, MM_SWAPENTS);
	SEQ_PUT_DEC("VmPeak:\t", hiwater_vm);
	SEQ_PUT_DEC(" kB\nVmSize:\t", total_vm);
	SEQ_PUT_DEC(" kB\nVmLck:\t", mm->locked_vm);
	SEQ_PUT_DEC(" kB\nVmPin:\t", atomic64_read(&mm->pinned_vm));
	SEQ_PUT_DEC(" kB\nVmHWM:\t", hiwater_rss);
	SEQ_PUT_DEC(" kB\nVmRSS:\t", total_rss);
	SEQ_PUT_DEC(" kB\nRssAnon:\t", anon);
	SEQ_PUT_DEC(" kB\nRssFile:\t", file);
	SEQ_PUT_DEC(" kB\nRssShmem:\t", shmem);
	SEQ_PUT_DEC(" kB\nVmData:\t", mm->data_vm);
	SEQ_PUT_DEC(" kB\nVmStk:\t", mm->stack_vm);
	seq_put_decimal_ull_width(m, " kB\nVmExe:\t", text >> 10, 8);
	seq_put_decimal_ull_width(m, " kB\nVmLib:\t", lib >> 10, 8);
	seq_put_decimal_ull_width(m, " kB\nVmPTE:\t",
				  mm_pgtables_bytes(mm) >> 10, 8);
	SEQ_PUT_DEC(" kB\nVmSwap:\t", swap);
	seq_puts(m, " kB\n");
	hugetlb_report_usage(m, mm);
}
#undef SEQ_PUT_DEC

unsigned long task_vsize(struct mm_struct *mm)
{
	return PAGE_SIZE * mm->total_vm;
}

unsigned long task_statm(struct mm_struct *mm,
			 unsigned long *shared, unsigned long *text,
			 unsigned long *data, unsigned long *resident)
{
	*shared = get_mm_counter(mm, MM_FILEPAGES) +
			get_mm_counter(mm, MM_SHMEMPAGES);
	*text = (PAGE_ALIGN(mm->end_code) - (mm->start_code & PAGE_MASK))
								>> PAGE_SHIFT;
	*data = mm->data_vm + mm->stack_vm;
	*resident = *shared + get_mm_counter(mm, MM_ANONPAGES);
	return mm->total_vm;
}

#ifdef CONFIG_NUMA
/*
 * Save get_task_policy() for show_numa_map().
 */
static void hold_task_mempolicy(struct proc_maps_private *priv)
{
	struct task_struct *task = priv->task;

	task_lock(task);
	priv->task_mempolicy = get_task_policy(task);
	mpol_get(priv->task_mempolicy);
	task_unlock(task);
}

static void release_task_mempolicy(struct proc_maps_private *priv)
{
	mpol_put(priv->task_mempolicy);
}
#else
static void hold_task_mempolicy(struct proc_maps_private *priv)
{
}

static void release_task_mempolicy(struct proc_maps_private *priv)
{
}
#endif

static struct vm_area_struct *proc_get_vma(struct proc_maps_private *priv,
						loff_t *ppos)
{
	struct vm_area_struct *vma = vma_next(&priv->iter);

	if (vma) {
		*ppos = vma->vm_start;
	} else {
		*ppos = -2UL;
		vma = get_gate_vma(priv->mm);
	}

	return vma;
}

static void *m_start(struct seq_file *m, loff_t *ppos)
{
	struct proc_maps_private *priv = m->private;
	unsigned long last_addr = *ppos;
	struct mm_struct *mm;

	/* See m_next(). Zero at the start or after lseek.
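	 * (-2UL selects the gate VMA pass; -1UL means the walk is done;
	 * see proc_get_vma() and m_next().)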
	 */
	if (last_addr == -1UL)
		return NULL;

	priv->task = get_proc_task(priv->inode);
	if (!priv->task)
		return ERR_PTR(-ESRCH);

	mm = priv->mm;
	if (!mm || !mmget_not_zero(mm)) {
		put_task_struct(priv->task);
		priv->task = NULL;
		return NULL;
	}

	if (mmap_read_lock_killable(mm)) {
		mmput(mm);
		put_task_struct(priv->task);
		priv->task = NULL;
		return ERR_PTR(-EINTR);
	}

	vma_iter_init(&priv->iter, mm, last_addr);
	hold_task_mempolicy(priv);
	if (last_addr == -2UL)
		return get_gate_vma(mm);

	return proc_get_vma(priv, ppos);
}

static void *m_next(struct seq_file *m, void *v, loff_t *ppos)
{
	if (*ppos == -2UL) {
		*ppos = -1UL;
		return NULL;
	}
	return proc_get_vma(m->private, ppos);
}

static void m_stop(struct seq_file *m, void *v)
{
	struct proc_maps_private *priv = m->private;
	struct mm_struct *mm = priv->mm;

	if (!priv->task)
		return;

	release_task_mempolicy(priv);
	mmap_read_unlock(mm);
	mmput(mm);
	put_task_struct(priv->task);
	priv->task = NULL;
}

static int proc_maps_open(struct inode *inode, struct file *file,
			const struct seq_operations *ops, int psize)
{
	struct proc_maps_private *priv = __seq_open_private(file, ops, psize);

	if (!priv)
		return -ENOMEM;

	priv->inode = inode;
	priv->mm = proc_mem_open(inode, PTRACE_MODE_READ);
	if (IS_ERR(priv->mm)) {
		int err = PTR_ERR(priv->mm);

		seq_release_private(inode, file);
		return err;
	}

	return 0;
}

static int proc_map_release(struct inode *inode, struct file *file)
{
	struct seq_file *seq = file->private_data;
	struct proc_maps_private *priv = seq->private;

	if (priv->mm)
		mmdrop(priv->mm);

	return seq_release_private(inode, file);
}

static int do_maps_open(struct inode *inode, struct file *file,
			const struct seq_operations *ops)
{
	return proc_maps_open(inode, file, ops,
				sizeof(struct proc_maps_private));
}

static void get_vma_name(struct vm_area_struct *vma,
			 const struct path **path,
			 const char **name,
			 const char **name_fmt)
{
	struct anon_vma_name *anon_name = vma->vm_mm ? anon_vma_name(vma) : NULL;

	*name = NULL;
	*path = NULL;
	*name_fmt = NULL;

	/*
	 * Print the dentry name for named mappings, and a
	 * special [heap] marker for the heap:
	 */
	if (vma->vm_file) {
		/*
		 * If user named this anon shared memory via
		 * prctl(PR_SET_VMA, ...), use the provided name.
		 */
		if (anon_name) {
			*name_fmt = "[anon_shmem:%s]";
			*name = anon_name->name;
		} else {
			*path = file_user_path(vma->vm_file);
		}
		return;
	}

	if (vma->vm_ops && vma->vm_ops->name) {
		*name = vma->vm_ops->name(vma);
		if (*name)
			return;
	}

	*name = arch_vma_name(vma);
	if (*name)
		return;

	if (!vma->vm_mm) {
		*name = "[vdso]";
		return;
	}

	if (vma_is_initial_heap(vma)) {
		*name = "[heap]";
		return;
	}

	if (vma_is_initial_stack(vma)) {
		*name = "[stack]";
		return;
	}

	if (anon_name) {
		*name_fmt = "[anon:%s]";
		*name = anon_name->name;
		return;
	}
}

static void show_vma_header_prefix(struct seq_file *m,
				   unsigned long start, unsigned long end,
				   vm_flags_t flags, unsigned long long pgoff,
				   dev_t dev, unsigned long ino)
{
	seq_setwidth(m, 25 + sizeof(void *) * 6 - 1);
	seq_put_hex_ll(m, NULL, start, 8);
	seq_put_hex_ll(m, "-", end, 8);
	seq_putc(m, ' ');
	seq_putc(m, flags & VM_READ ? 'r' : '-');
	seq_putc(m, flags & VM_WRITE ? 'w' : '-');
	seq_putc(m, flags & VM_EXEC ? 'x' : '-');
	seq_putc(m, flags & VM_MAYSHARE ? 's' : 'p');
	seq_put_hex_ll(m, " ", pgoff, 8);
	seq_put_hex_ll(m, " ", MAJOR(dev), 2);
	seq_put_hex_ll(m, ":", MINOR(dev), 2);
	seq_put_decimal_ull(m, " ", ino);
	seq_putc(m, ' ');
}

static void show_map_vma(struct seq_file *m, struct vm_area_struct *vma)
{
	const struct path *path;
	const char *name_fmt, *name;
	vm_flags_t flags = vma->vm_flags;
	unsigned long ino = 0;
	unsigned long long pgoff = 0;
	unsigned long start, end;
	dev_t dev = 0;

	if (vma->vm_file) {
		const struct inode *inode = file_user_inode(vma->vm_file);

		dev = inode->i_sb->s_dev;
		ino = inode->i_ino;
		pgoff = ((loff_t)vma->vm_pgoff) << PAGE_SHIFT;
	}

	start = vma->vm_start;
	end = vma->vm_end;
	show_vma_header_prefix(m, start, end, flags, pgoff, dev, ino);

	get_vma_name(vma, &path, &name, &name_fmt);
	if (path) {
		seq_pad(m, ' ');
		seq_path(m, path, "\n");
	} else if (name_fmt) {
		seq_pad(m, ' ');
		seq_printf(m, name_fmt, name);
	} else if (name) {
		seq_pad(m, ' ');
		seq_puts(m, name);
	}
	seq_putc(m, '\n');
}

static int show_map(struct seq_file *m, void *v)
{
	show_map_vma(m, v);
	return 0;
}

static const struct seq_operations proc_pid_maps_op = {
	.start	= m_start,
	.next	= m_next,
	.stop	= m_stop,
	.show	= show_map
};

static int pid_maps_open(struct inode *inode, struct file *file)
{
	return do_maps_open(inode, file, &proc_pid_maps_op);
}

#define PROCMAP_QUERY_VMA_FLAGS (			\
		PROCMAP_QUERY_VMA_READABLE |		\
		PROCMAP_QUERY_VMA_WRITABLE |		\
		PROCMAP_QUERY_VMA_EXECUTABLE |		\
		PROCMAP_QUERY_VMA_SHARED		\
)

#define PROCMAP_QUERY_VALID_FLAGS_MASK (		\
		PROCMAP_QUERY_COVERING_OR_NEXT_VMA |	\
		PROCMAP_QUERY_FILE_BACKED_VMA |		\
		PROCMAP_QUERY_VMA_FLAGS			\
)

static int query_vma_setup(struct mm_struct *mm)
{
	return mmap_read_lock_killable(mm);
}

static void query_vma_teardown(struct mm_struct *mm, struct vm_area_struct *vma)
{
	mmap_read_unlock(mm);
}

static struct vm_area_struct *query_vma_find_by_addr(struct mm_struct *mm,
						     unsigned long addr)
{
	return find_vma(mm, addr);
}

static struct vm_area_struct *query_matching_vma(struct mm_struct *mm,
						 unsigned long addr, u32 flags)
{
	struct vm_area_struct *vma;

next_vma:
	vma = query_vma_find_by_addr(mm, addr);
	if (!vma)
		goto no_vma;

	/* user requested only file-backed VMA, keep iterating */
	if ((flags & PROCMAP_QUERY_FILE_BACKED_VMA) && !vma->vm_file)
		goto skip_vma;

	/* VMA permissions should satisfy query flags */
	if (flags & PROCMAP_QUERY_VMA_FLAGS) {
		u32 perm = 0;

		if (flags & PROCMAP_QUERY_VMA_READABLE)
			perm |= VM_READ;
		if (flags & PROCMAP_QUERY_VMA_WRITABLE)
			perm |= VM_WRITE;
		if (flags & PROCMAP_QUERY_VMA_EXECUTABLE)
			perm |= VM_EXEC;
		if (flags & PROCMAP_QUERY_VMA_SHARED)
			perm |= VM_MAYSHARE;

		if ((vma->vm_flags & perm) != perm)
			goto skip_vma;
	}

	/* found covering VMA or user is OK with the matching next VMA */
	if ((flags & PROCMAP_QUERY_COVERING_OR_NEXT_VMA) || vma->vm_start <= addr)
		return vma;

skip_vma:
	/*
	 * If the user needs closest matching VMA, keep iterating.
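	 * (E.g. with PROCMAP_QUERY_COVERING_OR_NEXT_VMA set, querying an
	 * address not covered by a suitable VMA returns the next matching
	 * VMA above it rather than failing with -ENOENT.)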
	 */
	addr = vma->vm_end;
	if (flags & PROCMAP_QUERY_COVERING_OR_NEXT_VMA)
		goto next_vma;

no_vma:
	return ERR_PTR(-ENOENT);
}

static int do_procmap_query(struct proc_maps_private *priv, void __user *uarg)
{
	struct procmap_query karg;
	struct vm_area_struct *vma;
	struct mm_struct *mm;
	const char *name = NULL;
	char build_id_buf[BUILD_ID_SIZE_MAX], *name_buf = NULL;
	__u64 usize;
	int err;

	if (copy_from_user(&usize, (void __user *)uarg, sizeof(usize)))
		return -EFAULT;
	/* argument struct can never be that large, reject abuse */
	if (usize > PAGE_SIZE)
		return -E2BIG;
	/* argument struct should have at least query_flags and query_addr fields */
	if (usize < offsetofend(struct procmap_query, query_addr))
		return -EINVAL;
	err = copy_struct_from_user(&karg, sizeof(karg), uarg, usize);
	if (err)
		return err;

	/* reject unknown flags */
	if (karg.query_flags & ~PROCMAP_QUERY_VALID_FLAGS_MASK)
		return -EINVAL;
	/* either both buffer address and size are set, or both should be zero */
	if (!!karg.vma_name_size != !!karg.vma_name_addr)
		return -EINVAL;
	if (!!karg.build_id_size != !!karg.build_id_addr)
		return -EINVAL;

	mm = priv->mm;
	if (!mm || !mmget_not_zero(mm))
		return -ESRCH;

	err = query_vma_setup(mm);
	if (err) {
		mmput(mm);
		return err;
	}

	vma = query_matching_vma(mm, karg.query_addr, karg.query_flags);
	if (IS_ERR(vma)) {
		err = PTR_ERR(vma);
		vma = NULL;
		goto out;
	}

	karg.vma_start = vma->vm_start;
	karg.vma_end = vma->vm_end;

	karg.vma_flags = 0;
	if (vma->vm_flags & VM_READ)
		karg.vma_flags |= PROCMAP_QUERY_VMA_READABLE;
	if (vma->vm_flags & VM_WRITE)
		karg.vma_flags |= PROCMAP_QUERY_VMA_WRITABLE;
	if (vma->vm_flags & VM_EXEC)
		karg.vma_flags |= PROCMAP_QUERY_VMA_EXECUTABLE;
	if (vma->vm_flags & VM_MAYSHARE)
		karg.vma_flags |= PROCMAP_QUERY_VMA_SHARED;

	karg.vma_page_size = vma_kernel_pagesize(vma);

	if (vma->vm_file) {
		const struct inode *inode = file_user_inode(vma->vm_file);

		karg.vma_offset = ((__u64)vma->vm_pgoff) << PAGE_SHIFT;
		karg.dev_major = MAJOR(inode->i_sb->s_dev);
		karg.dev_minor = MINOR(inode->i_sb->s_dev);
		karg.inode = inode->i_ino;
	} else {
		karg.vma_offset = 0;
		karg.dev_major = 0;
		karg.dev_minor = 0;
		karg.inode = 0;
	}

	if (karg.build_id_size) {
		__u32 build_id_sz;

		err = build_id_parse(vma, build_id_buf, &build_id_sz);
		if (err) {
			karg.build_id_size = 0;
		} else {
			if (karg.build_id_size < build_id_sz) {
				err = -ENAMETOOLONG;
				goto out;
			}
			karg.build_id_size = build_id_sz;
		}
	}

	if (karg.vma_name_size) {
		size_t name_buf_sz = min_t(size_t, PATH_MAX, karg.vma_name_size);
		const struct path *path;
		const char *name_fmt;
		size_t name_sz = 0;

		get_vma_name(vma, &path, &name, &name_fmt);

		if (path || name_fmt || name) {
			name_buf = kmalloc(name_buf_sz, GFP_KERNEL);
			if (!name_buf) {
				err = -ENOMEM;
				goto out;
			}
		}
		if (path) {
			name = d_path(path, name_buf, name_buf_sz);
			if (IS_ERR(name)) {
				err = PTR_ERR(name);
				goto out;
			}
			name_sz = name_buf + name_buf_sz - name;
		} else if (name || name_fmt) {
			name_sz = 1 + snprintf(name_buf, name_buf_sz,
					       name_fmt ?: "%s", name);
			name = name_buf;
		}
		if (name_sz > name_buf_sz) {
			err = -ENAMETOOLONG;
			goto out;
		}
		karg.vma_name_size = name_sz;
	}

	/* unlock vma or mmap_lock, and put mm_struct before copying data to user */
	query_vma_teardown(mm, vma);
	mmput(mm);

	if (karg.vma_name_size && copy_to_user(u64_to_user_ptr(karg.vma_name_addr),
					       name, karg.vma_name_size)) {
		kfree(name_buf);
		return -EFAULT;
	}
	kfree(name_buf);

	if (karg.build_id_size && copy_to_user(u64_to_user_ptr(karg.build_id_addr),
					       build_id_buf, karg.build_id_size))
		return -EFAULT;

	if (copy_to_user(uarg, &karg, min_t(size_t, sizeof(karg), usize)))
		return -EFAULT;
	return 0;

out:
	query_vma_teardown(mm, vma);
	mmput(mm);
	kfree(name_buf);
	return err;
}

static long procfs_procmap_ioctl(struct file *file, unsigned int cmd,
				 unsigned long arg)
{
	struct seq_file *seq = file->private_data;
	struct proc_maps_private *priv = seq->private;

	switch (cmd) {
	case PROCMAP_QUERY:
		return do_procmap_query(priv, (void __user *)arg);
	default:
		return -ENOIOCTLCMD;
	}
}

const struct file_operations proc_pid_maps_operations = {
	.open		= pid_maps_open,
	.read		= seq_read,
	.llseek		= seq_lseek,
	.release	= proc_map_release,
	.unlocked_ioctl = procfs_procmap_ioctl,
	.compat_ioctl	= compat_ptr_ioctl,
};

/*
 * Proportional Set Size (PSS): my share of RSS.
 *
 * PSS of a process is the count of pages it has in memory, where each
 * page is divided by the number of processes sharing it.  So if a
 * process has 1000 pages all to itself, and 1000 shared with one other
 * process, its PSS will be 1500.
 *
 * To keep (accumulated) division errors low, we use a 64-bit
 * fixed-point pss counter.  So (pss >> PSS_SHIFT) would be the real
 * byte count.
 *
 * A shift of 12 before division means (assuming 4K page size):
 * 	- 1M 3-user-pages add up to 8KB errors;
 * 	- supports mapcount up to 2^24, or 16M;
 * 	- supports PSS up to 2^52 bytes, or 4PB.
 */
#define PSS_SHIFT 12

#ifdef CONFIG_PROC_PAGE_MONITOR
struct mem_size_stats {
	unsigned long resident;
	unsigned long shared_clean;
	unsigned long shared_dirty;
	unsigned long private_clean;
	unsigned long private_dirty;
	unsigned long referenced;
	unsigned long anonymous;
	unsigned long lazyfree;
	unsigned long anonymous_thp;
	unsigned long shmem_thp;
	unsigned long file_thp;
	unsigned long swap;
	unsigned long shared_hugetlb;
	unsigned long private_hugetlb;
	unsigned long ksm;
	u64 pss;
	u64 pss_anon;
	u64 pss_file;
	u64 pss_shmem;
	u64 pss_dirty;
	u64 pss_locked;
	u64 swap_pss;
};

static void smaps_page_accumulate(struct mem_size_stats *mss,
		struct folio *folio, unsigned long size, unsigned long pss,
		bool dirty, bool locked, bool private)
{
	mss->pss += pss;

	if (folio_test_anon(folio))
		mss->pss_anon += pss;
	else if (folio_test_swapbacked(folio))
		mss->pss_shmem += pss;
	else
		mss->pss_file += pss;

	if (locked)
		mss->pss_locked += pss;

	if (dirty || folio_test_dirty(folio)) {
		mss->pss_dirty += pss;
		if (private)
			mss->private_dirty += size;
		else
			mss->shared_dirty += size;
	} else {
		if (private)
			mss->private_clean += size;
		else
			mss->shared_clean += size;
	}
}

static void smaps_account(struct mem_size_stats *mss, struct page *page,
		bool compound, bool young, bool dirty, bool locked,
		bool present)
{
	struct folio *folio = page_folio(page);
	int i, nr = compound ? compound_nr(page) : 1;
	unsigned long size = nr * PAGE_SIZE;
	bool exclusive;
	int mapcount;

	/*
	 * First accumulate quantities that depend only on |size| and the type
	 * of the compound page.
	 */
	if (folio_test_anon(folio)) {
		mss->anonymous += size;
		if (!folio_test_swapbacked(folio) && !dirty &&
		    !folio_test_dirty(folio))
			mss->lazyfree += size;
	}

	if (folio_test_ksm(folio))
		mss->ksm += size;

	mss->resident += size;
	/* Accumulate the size in pages that have been accessed. */
	if (young || folio_test_young(folio) || folio_test_referenced(folio))
		mss->referenced += size;

	/*
	 * Then accumulate quantities that may depend on sharing, or that may
	 * differ page-by-page.
	 *
	 * refcount == 1 for present entries guarantees that the folio is mapped
	 * exactly once. For large folios this implies that exactly one
	 * PTE/PMD/... maps (a part of) this folio.
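	 * Such a folio is accounted below as fully exclusive: its whole
	 * size goes into pss as size << PSS_SHIFT, with no division by a
	 * mapcount.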
* * Treat all non-present entries (where relying on the mapcount and * refcount doesn't make sense) as "maybe shared, but not sure how * often". We treat device private entries as being fake-present. * * Note that it would not be safe to read the mapcount especially for * pages referenced by migration entries, even with the PTL held. */ if (folio_ref_count(folio) == 1 || !present) { smaps_page_accumulate(mss, folio, size, size << PSS_SHIFT, dirty, locked, present); return; } if (IS_ENABLED(CONFIG_NO_PAGE_MAPCOUNT)) { mapcount = folio_average_page_mapcount(folio); exclusive = !folio_maybe_mapped_shared(folio); } /* * We obtain a snapshot of the mapcount. Without holding the folio lock * this snapshot can be slightly wrong as we cannot always read the * mapcount atomically. */ for (i = 0; i < nr; i++, page++) { unsigned long pss = PAGE_SIZE << PSS_SHIFT; if (IS_ENABLED(CONFIG_PAGE_MAPCOUNT)) { mapcount = folio_precise_page_mapcount(folio, page); exclusive = mapcount < 2; } if (mapcount >= 2) pss /= mapcount; smaps_page_accumulate(mss, folio, PAGE_SIZE, pss, dirty, locked, exclusive); } } #ifdef CONFIG_SHMEM static int smaps_pte_hole(unsigned long addr, unsigned long end, __always_unused int depth, struct mm_walk *walk) { struct mem_size_stats *mss = walk->private; struct vm_area_struct *vma = walk->vma; mss->swap += shmem_partial_swap_usage(walk->vma->vm_file->f_mapping, linear_page_index(vma, addr), linear_page_index(vma, end)); return 0; } #else #define smaps_pte_hole NULL #endif /* CONFIG_SHMEM */ static void smaps_pte_hole_lookup(unsigned long addr, struct mm_walk *walk) { #ifdef CONFIG_SHMEM if (walk->ops->pte_hole) { /* depth is not used */ smaps_pte_hole(addr, addr + PAGE_SIZE, 0, walk); } #endif } static void smaps_pte_entry(pte_t *pte, unsigned long addr, struct mm_walk *walk) { struct mem_size_stats *mss = walk->private; struct vm_area_struct *vma = walk->vma; bool locked = !!(vma->vm_flags & VM_LOCKED); struct page *page = NULL; bool present = false, young = false, dirty = false; pte_t ptent = ptep_get(pte); if (pte_present(ptent)) { page = vm_normal_page(vma, addr, ptent); young = pte_young(ptent); dirty = pte_dirty(ptent); present = true; } else if (is_swap_pte(ptent)) { swp_entry_t swpent = pte_to_swp_entry(ptent); if (!non_swap_entry(swpent)) { int mapcount; mss->swap += PAGE_SIZE; mapcount = swp_swapcount(swpent); if (mapcount >= 2) { u64 pss_delta = (u64)PAGE_SIZE << PSS_SHIFT; do_div(pss_delta, mapcount); mss->swap_pss += pss_delta; } else { mss->swap_pss += (u64)PAGE_SIZE << PSS_SHIFT; } } else if (is_pfn_swap_entry(swpent)) { if (is_device_private_entry(swpent)) present = true; page = pfn_swap_entry_to_page(swpent); } } else { smaps_pte_hole_lookup(addr, walk); return; } if (!page) return; smaps_account(mss, page, false, young, dirty, locked, present); } #ifdef CONFIG_TRANSPARENT_HUGEPAGE static void smaps_pmd_entry(pmd_t *pmd, unsigned long addr, struct mm_walk *walk) { struct mem_size_stats *mss = walk->private; struct vm_area_struct *vma = walk->vma; bool locked = !!(vma->vm_flags & VM_LOCKED); struct page *page = NULL; bool present = false; struct folio *folio; if (pmd_present(*pmd)) { page = vm_normal_page_pmd(vma, addr, *pmd); present = true; } else if (unlikely(thp_migration_supported() && is_swap_pmd(*pmd))) { swp_entry_t entry = pmd_to_swp_entry(*pmd); if (is_pfn_swap_entry(entry)) page = pfn_swap_entry_to_page(entry); } if (IS_ERR_OR_NULL(page)) return; folio = page_folio(page); if (folio_test_anon(folio)) mss->anonymous_thp += HPAGE_PMD_SIZE; else if 
(folio_test_swapbacked(folio)) mss->shmem_thp += HPAGE_PMD_SIZE; else if (folio_is_zone_device(folio)) /* pass */; else mss->file_thp += HPAGE_PMD_SIZE; smaps_account(mss, page, true, pmd_young(*pmd), pmd_dirty(*pmd), locked, present); } #else static void smaps_pmd_entry(pmd_t *pmd, unsigned long addr, struct mm_walk *walk) { } #endif static int smaps_pte_range(pmd_t *pmd, unsigned long addr, unsigned long end, struct mm_walk *walk) { struct vm_area_struct *vma = walk->vma; pte_t *pte; spinlock_t *ptl; ptl = pmd_trans_huge_lock(pmd, vma); if (ptl) { smaps_pmd_entry(pmd, addr, walk); spin_unlock(ptl); goto out; } pte = pte_offset_map_lock(vma->vm_mm, pmd, addr, &ptl); if (!pte) { walk->action = ACTION_AGAIN; return 0; } for (; addr != end; pte++, addr += PAGE_SIZE) smaps_pte_entry(pte, addr, walk); pte_unmap_unlock(pte - 1, ptl); out: cond_resched(); return 0; } static void show_smap_vma_flags(struct seq_file *m, struct vm_area_struct *vma) { /* * Don't forget to update Documentation/ on changes. * * The length of the second argument of mnemonics[] * needs to be 3 instead of previously set 2 * (i.e. from [BITS_PER_LONG][2] to [BITS_PER_LONG][3]) * to avoid spurious * -Werror=unterminated-string-initialization warning * with GCC 15 */ static const char mnemonics[BITS_PER_LONG][3] = { /* * In case if we meet a flag we don't know about. */ [0 ... (BITS_PER_LONG-1)] = "??", [ilog2(VM_READ)] = "rd", [ilog2(VM_WRITE)] = "wr", [ilog2(VM_EXEC)] = "ex", [ilog2(VM_SHARED)] = "sh", [ilog2(VM_MAYREAD)] = "mr", [ilog2(VM_MAYWRITE)] = "mw", [ilog2(VM_MAYEXEC)] = "me", [ilog2(VM_MAYSHARE)] = "ms", [ilog2(VM_GROWSDOWN)] = "gd", [ilog2(VM_PFNMAP)] = "pf", [ilog2(VM_LOCKED)] = "lo", [ilog2(VM_IO)] = "io", [ilog2(VM_SEQ_READ)] = "sr", [ilog2(VM_RAND_READ)] = "rr", [ilog2(VM_DONTCOPY)] = "dc", [ilog2(VM_DONTEXPAND)] = "de", [ilog2(VM_LOCKONFAULT)] = "lf", [ilog2(VM_ACCOUNT)] = "ac", [ilog2(VM_NORESERVE)] = "nr", [ilog2(VM_HUGETLB)] = "ht", [ilog2(VM_SYNC)] = "sf", [ilog2(VM_ARCH_1)] = "ar", [ilog2(VM_WIPEONFORK)] = "wf", [ilog2(VM_DONTDUMP)] = "dd", #ifdef CONFIG_ARM64_BTI [ilog2(VM_ARM64_BTI)] = "bt", #endif #ifdef CONFIG_MEM_SOFT_DIRTY [ilog2(VM_SOFTDIRTY)] = "sd", #endif [ilog2(VM_MIXEDMAP)] = "mm", [ilog2(VM_HUGEPAGE)] = "hg", [ilog2(VM_NOHUGEPAGE)] = "nh", [ilog2(VM_MERGEABLE)] = "mg", [ilog2(VM_UFFD_MISSING)]= "um", [ilog2(VM_UFFD_WP)] = "uw", #ifdef CONFIG_ARM64_MTE [ilog2(VM_MTE)] = "mt", [ilog2(VM_MTE_ALLOWED)] = "", #endif #ifdef CONFIG_ARCH_HAS_PKEYS /* These come out via ProtectionKey: */ [ilog2(VM_PKEY_BIT0)] = "", [ilog2(VM_PKEY_BIT1)] = "", [ilog2(VM_PKEY_BIT2)] = "", #if VM_PKEY_BIT3 [ilog2(VM_PKEY_BIT3)] = "", #endif #if VM_PKEY_BIT4 [ilog2(VM_PKEY_BIT4)] = "", #endif #endif /* CONFIG_ARCH_HAS_PKEYS */ #ifdef CONFIG_HAVE_ARCH_USERFAULTFD_MINOR [ilog2(VM_UFFD_MINOR)] = "ui", #endif /* CONFIG_HAVE_ARCH_USERFAULTFD_MINOR */ #ifdef CONFIG_ARCH_HAS_USER_SHADOW_STACK [ilog2(VM_SHADOW_STACK)] = "ss", #endif #if defined(CONFIG_64BIT) || defined(CONFIG_PPC32) [ilog2(VM_DROPPABLE)] = "dp", #endif #ifdef CONFIG_64BIT [ilog2(VM_SEALED)] = "sl", #endif }; size_t i; seq_puts(m, "VmFlags: "); for (i = 0; i < BITS_PER_LONG; i++) { if (!mnemonics[i][0]) continue; if (vma->vm_flags & (1UL << i)) seq_printf(m, "%s ", mnemonics[i]); } seq_putc(m, '\n'); } #ifdef CONFIG_HUGETLB_PAGE static int smaps_hugetlb_range(pte_t *pte, unsigned long hmask, unsigned long addr, unsigned long end, struct mm_walk *walk) { struct mem_size_stats *mss = walk->private; struct vm_area_struct *vma = walk->vma; pte_t ptent = 
huge_ptep_get(walk->mm, addr, pte); struct folio *folio = NULL; bool present = false; if (pte_present(ptent)) { folio = page_folio(pte_page(ptent)); present = true; } else if (is_swap_pte(ptent)) { swp_entry_t swpent = pte_to_swp_entry(ptent); if (is_pfn_swap_entry(swpent)) folio = pfn_swap_entry_folio(swpent); } if (folio) { /* We treat non-present entries as "maybe shared". */ if (!present || folio_maybe_mapped_shared(folio) || hugetlb_pmd_shared(pte)) mss->shared_hugetlb += huge_page_size(hstate_vma(vma)); else mss->private_hugetlb += huge_page_size(hstate_vma(vma)); } return 0; } #else #define smaps_hugetlb_range NULL #endif /* HUGETLB_PAGE */ static const struct mm_walk_ops smaps_walk_ops = { .pmd_entry = smaps_pte_range, .hugetlb_entry = smaps_hugetlb_range, .walk_lock = PGWALK_RDLOCK, }; static const struct mm_walk_ops smaps_shmem_walk_ops = { .pmd_entry = smaps_pte_range, .hugetlb_entry = smaps_hugetlb_range, .pte_hole = smaps_pte_hole, .walk_lock = PGWALK_RDLOCK, }; /* * Gather mem stats from @vma with the indicated beginning * address @start, and keep them in @mss. * * Use vm_start of @vma as the beginning address if @start is 0. */ static void smap_gather_stats(struct vm_area_struct *vma, struct mem_size_stats *mss, unsigned long start) { const struct mm_walk_ops *ops = &smaps_walk_ops; /* Invalid start */ if (start >= vma->vm_end) return; if (vma->vm_file && shmem_mapping(vma->vm_file->f_mapping)) { /* * For shared or readonly shmem mappings we know that all * swapped out pages belong to the shmem object, and we can * obtain the swap value much more efficiently. For private * writable mappings, we might have COW pages that are * not affected by the parent swapped out pages of the shmem * object, so we have to distinguish them during the page walk. * Unless we know that the shmem object (or the part mapped by * our VMA) has no swapped out pages at all. */ unsigned long shmem_swapped = shmem_swap_usage(vma); if (!start && (!shmem_swapped || (vma->vm_flags & VM_SHARED) || !(vma->vm_flags & VM_WRITE))) { mss->swap += shmem_swapped; } else { ops = &smaps_shmem_walk_ops; } } /* mmap_lock is held in m_start */ if (!start) walk_page_vma(vma, ops, mss); else walk_page_range(vma->vm_mm, start, vma->vm_end, ops, mss); } #define SEQ_PUT_DEC(str, val) \ seq_put_decimal_ull_width(m, str, (val) >> 10, 8) /* Show the contents common for smaps and smaps_rollup */ static void __show_smap(struct seq_file *m, const struct mem_size_stats *mss, bool rollup_mode) { SEQ_PUT_DEC("Rss: ", mss->resident); SEQ_PUT_DEC(" kB\nPss: ", mss->pss >> PSS_SHIFT); SEQ_PUT_DEC(" kB\nPss_Dirty: ", mss->pss_dirty >> PSS_SHIFT); if (rollup_mode) { /* * These are meaningful only for smaps_rollup, otherwise two of * them are zero, and the other one is the same as Pss. 
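		 *
		 * (A cross-check, not enforced here: smaps_page_accumulate()
		 * adds each pss contribution to exactly one of pss_anon,
		 * pss_file or pss_shmem, so in rollup mode Pss_Anon +
		 * Pss_File + Pss_Shmem adds up to Pss.)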
*/ SEQ_PUT_DEC(" kB\nPss_Anon: ", mss->pss_anon >> PSS_SHIFT); SEQ_PUT_DEC(" kB\nPss_File: ", mss->pss_file >> PSS_SHIFT); SEQ_PUT_DEC(" kB\nPss_Shmem: ", mss->pss_shmem >> PSS_SHIFT); } SEQ_PUT_DEC(" kB\nShared_Clean: ", mss->shared_clean); SEQ_PUT_DEC(" kB\nShared_Dirty: ", mss->shared_dirty); SEQ_PUT_DEC(" kB\nPrivate_Clean: ", mss->private_clean); SEQ_PUT_DEC(" kB\nPrivate_Dirty: ", mss->private_dirty); SEQ_PUT_DEC(" kB\nReferenced: ", mss->referenced); SEQ_PUT_DEC(" kB\nAnonymous: ", mss->anonymous); SEQ_PUT_DEC(" kB\nKSM: ", mss->ksm); SEQ_PUT_DEC(" kB\nLazyFree: ", mss->lazyfree); SEQ_PUT_DEC(" kB\nAnonHugePages: ", mss->anonymous_thp); SEQ_PUT_DEC(" kB\nShmemPmdMapped: ", mss->shmem_thp); SEQ_PUT_DEC(" kB\nFilePmdMapped: ", mss->file_thp); SEQ_PUT_DEC(" kB\nShared_Hugetlb: ", mss->shared_hugetlb); seq_put_decimal_ull_width(m, " kB\nPrivate_Hugetlb: ", mss->private_hugetlb >> 10, 7); SEQ_PUT_DEC(" kB\nSwap: ", mss->swap); SEQ_PUT_DEC(" kB\nSwapPss: ", mss->swap_pss >> PSS_SHIFT); SEQ_PUT_DEC(" kB\nLocked: ", mss->pss_locked >> PSS_SHIFT); seq_puts(m, " kB\n"); } static int show_smap(struct seq_file *m, void *v) { struct vm_area_struct *vma = v; struct mem_size_stats mss = {}; smap_gather_stats(vma, &mss, 0); show_map_vma(m, vma); SEQ_PUT_DEC("Size: ", vma->vm_end - vma->vm_start); SEQ_PUT_DEC(" kB\nKernelPageSize: ", vma_kernel_pagesize(vma)); SEQ_PUT_DEC(" kB\nMMUPageSize: ", vma_mmu_pagesize(vma)); seq_puts(m, " kB\n"); __show_smap(m, &mss, false); seq_printf(m, "THPeligible: %8u\n", !!thp_vma_allowable_orders(vma, vma->vm_flags, TVA_SMAPS | TVA_ENFORCE_SYSFS, THP_ORDERS_ALL)); if (arch_pkeys_enabled()) seq_printf(m, "ProtectionKey: %8u\n", vma_pkey(vma)); show_smap_vma_flags(m, vma); return 0; } static int show_smaps_rollup(struct seq_file *m, void *v) { struct proc_maps_private *priv = m->private; struct mem_size_stats mss = {}; struct mm_struct *mm = priv->mm; struct vm_area_struct *vma; unsigned long vma_start = 0, last_vma_end = 0; int ret = 0; VMA_ITERATOR(vmi, mm, 0); priv->task = get_proc_task(priv->inode); if (!priv->task) return -ESRCH; if (!mm || !mmget_not_zero(mm)) { ret = -ESRCH; goto out_put_task; } ret = mmap_read_lock_killable(mm); if (ret) goto out_put_mm; hold_task_mempolicy(priv); vma = vma_next(&vmi); if (unlikely(!vma)) goto empty_set; vma_start = vma->vm_start; do { smap_gather_stats(vma, &mss, 0); last_vma_end = vma->vm_end; /* * Release mmap_lock temporarily if someone wants to * access it for write request. */ if (mmap_lock_is_contended(mm)) { vma_iter_invalidate(&vmi); mmap_read_unlock(mm); ret = mmap_read_lock_killable(mm); if (ret) { release_task_mempolicy(priv); goto out_put_mm; } /* * After dropping the lock, there are four cases to * consider. See the following example for explanation. * * +------+------+-----------+ * | VMA1 | VMA2 | VMA3 | * +------+------+-----------+ * | | | | * 4k 8k 16k 400k * * Suppose we drop the lock after reading VMA2 due to * contention, then we get: * * last_vma_end = 16k * * 1) VMA2 is freed, but VMA3 exists: * * vma_next(vmi) will return VMA3. * In this case, just continue from VMA3. * * 2) VMA2 still exists: * * vma_next(vmi) will return VMA3. * In this case, just continue from VMA3. * * 3) No more VMAs can be found: * * vma_next(vmi) will return NULL. * No more things to do, just break. * * 4) (last_vma_end - 1) is the middle of a vma (VMA'): * * vma_next(vmi) will return VMA' whose range * contains last_vma_end. * Iterate VMA' from last_vma_end. 
*/ vma = vma_next(&vmi); /* Case 3 above */ if (!vma) break; /* Case 1 and 2 above */ if (vma->vm_start >= last_vma_end) { smap_gather_stats(vma, &mss, 0); last_vma_end = vma->vm_end; continue; } /* Case 4 above */ if (vma->vm_end > last_vma_end) { smap_gather_stats(vma, &mss, last_vma_end); last_vma_end = vma->vm_end; } } } for_each_vma(vmi, vma); empty_set: show_vma_header_prefix(m, vma_start, last_vma_end, 0, 0, 0, 0); seq_pad(m, ' '); seq_puts(m, "[rollup]\n"); __show_smap(m, &mss, true); release_task_mempolicy(priv); mmap_read_unlock(mm); out_put_mm: mmput(mm); out_put_task: put_task_struct(priv->task); priv->task = NULL; return ret; } #undef SEQ_PUT_DEC static const struct seq_operations proc_pid_smaps_op = { .start = m_start, .next = m_next, .stop = m_stop, .show = show_smap }; static int pid_smaps_open(struct inode *inode, struct file *file) { return do_maps_open(inode, file, &proc_pid_smaps_op); } static int smaps_rollup_open(struct inode *inode, struct file *file) { int ret; struct proc_maps_private *priv; priv = kzalloc(sizeof(*priv), GFP_KERNEL_ACCOUNT); if (!priv) return -ENOMEM; ret = single_open(file, show_smaps_rollup, priv); if (ret) goto out_free; priv->inode = inode; priv->mm = proc_mem_open(inode, PTRACE_MODE_READ); if (IS_ERR(priv->mm)) { ret = PTR_ERR(priv->mm); single_release(inode, file); goto out_free; } return 0; out_free: kfree(priv); return ret; } static int smaps_rollup_release(struct inode *inode, struct file *file) { struct seq_file *seq = file->private_data; struct proc_maps_private *priv = seq->private; if (priv->mm) mmdrop(priv->mm); kfree(priv); return single_release(inode, file); } const struct file_operations proc_pid_smaps_operations = { .open = pid_smaps_open, .read = seq_read, .llseek = seq_lseek, .release = proc_map_release, }; const struct file_operations proc_pid_smaps_rollup_operations = { .open = smaps_rollup_open, .read = seq_read, .llseek = seq_lseek, .release = smaps_rollup_release, }; enum clear_refs_types { CLEAR_REFS_ALL = 1, CLEAR_REFS_ANON, CLEAR_REFS_MAPPED, CLEAR_REFS_SOFT_DIRTY, CLEAR_REFS_MM_HIWATER_RSS, CLEAR_REFS_LAST, }; struct clear_refs_private { enum clear_refs_types type; }; #ifdef CONFIG_MEM_SOFT_DIRTY static inline bool pte_is_pinned(struct vm_area_struct *vma, unsigned long addr, pte_t pte) { struct folio *folio; if (!pte_write(pte)) return false; if (!is_cow_mapping(vma->vm_flags)) return false; if (likely(!test_bit(MMF_HAS_PINNED, &vma->vm_mm->flags))) return false; folio = vm_normal_folio(vma, addr, pte); if (!folio) return false; return folio_maybe_dma_pinned(folio); } static inline void clear_soft_dirty(struct vm_area_struct *vma, unsigned long addr, pte_t *pte) { /* * The soft-dirty tracker uses #PF-s to catch writes * to pages, so write-protect the pte as well. See the * Documentation/admin-guide/mm/soft-dirty.rst for full description * of how soft-dirty works. 
*/ pte_t ptent = ptep_get(pte); if (pte_present(ptent)) { pte_t old_pte; if (pte_is_pinned(vma, addr, ptent)) return; old_pte = ptep_modify_prot_start(vma, addr, pte); ptent = pte_wrprotect(old_pte); ptent = pte_clear_soft_dirty(ptent); ptep_modify_prot_commit(vma, addr, pte, old_pte, ptent); } else if (is_swap_pte(ptent)) { ptent = pte_swp_clear_soft_dirty(ptent); set_pte_at(vma->vm_mm, addr, pte, ptent); } } #else static inline void clear_soft_dirty(struct vm_area_struct *vma, unsigned long addr, pte_t *pte) { } #endif #if defined(CONFIG_MEM_SOFT_DIRTY) && defined(CONFIG_TRANSPARENT_HUGEPAGE) static inline void clear_soft_dirty_pmd(struct vm_area_struct *vma, unsigned long addr, pmd_t *pmdp) { pmd_t old, pmd = *pmdp; if (pmd_present(pmd)) { /* See comment in change_huge_pmd() */ old = pmdp_invalidate(vma, addr, pmdp); if (pmd_dirty(old)) pmd = pmd_mkdirty(pmd); if (pmd_young(old)) pmd = pmd_mkyoung(pmd); pmd = pmd_wrprotect(pmd); pmd = pmd_clear_soft_dirty(pmd); set_pmd_at(vma->vm_mm, addr, pmdp, pmd); } else if (is_migration_entry(pmd_to_swp_entry(pmd))) { pmd = pmd_swp_clear_soft_dirty(pmd); set_pmd_at(vma->vm_mm, addr, pmdp, pmd); } } #else static inline void clear_soft_dirty_pmd(struct vm_area_struct *vma, unsigned long addr, pmd_t *pmdp) { } #endif static int clear_refs_pte_range(pmd_t *pmd, unsigned long addr, unsigned long end, struct mm_walk *walk) { struct clear_refs_private *cp = walk->private; struct vm_area_struct *vma = walk->vma; pte_t *pte, ptent; spinlock_t *ptl; struct folio *folio; ptl = pmd_trans_huge_lock(pmd, vma); if (ptl) { if (cp->type == CLEAR_REFS_SOFT_DIRTY) { clear_soft_dirty_pmd(vma, addr, pmd); goto out; } if (!pmd_present(*pmd)) goto out; folio = pmd_folio(*pmd); /* Clear accessed and referenced bits. */ pmdp_test_and_clear_young(vma, addr, pmd); folio_test_clear_young(folio); folio_clear_referenced(folio); out: spin_unlock(ptl); return 0; } pte = pte_offset_map_lock(vma->vm_mm, pmd, addr, &ptl); if (!pte) { walk->action = ACTION_AGAIN; return 0; } for (; addr != end; pte++, addr += PAGE_SIZE) { ptent = ptep_get(pte); if (cp->type == CLEAR_REFS_SOFT_DIRTY) { clear_soft_dirty(vma, addr, pte); continue; } if (!pte_present(ptent)) continue; folio = vm_normal_folio(vma, addr, ptent); if (!folio) continue; /* Clear accessed and referenced bits. */ ptep_test_and_clear_young(vma, addr, pte); folio_test_clear_young(folio); folio_clear_referenced(folio); } pte_unmap_unlock(pte - 1, ptl); cond_resched(); return 0; } static int clear_refs_test_walk(unsigned long start, unsigned long end, struct mm_walk *walk) { struct clear_refs_private *cp = walk->private; struct vm_area_struct *vma = walk->vma; if (vma->vm_flags & VM_PFNMAP) return 1; /* * Writing 1 to /proc/pid/clear_refs affects all pages. * Writing 2 to /proc/pid/clear_refs only affects anonymous pages. * Writing 3 to /proc/pid/clear_refs only affects file mapped pages. * Writing 4 to /proc/pid/clear_refs affects all pages. 
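	 *
	 * For example, a soft-dirty monitor would typically (re)arm
	 * tracking by writing "4" (an illustrative userspace sketch,
	 * error handling omitted; 1234 is a made-up pid):
	 *
	 *	int fd = open("/proc/1234/clear_refs", O_WRONLY);
	 *
	 *	write(fd, "4", 1);
	 *	close(fd);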
*/ if (cp->type == CLEAR_REFS_ANON && vma->vm_file) return 1; if (cp->type == CLEAR_REFS_MAPPED && !vma->vm_file) return 1; return 0; } static const struct mm_walk_ops clear_refs_walk_ops = { .pmd_entry = clear_refs_pte_range, .test_walk = clear_refs_test_walk, .walk_lock = PGWALK_WRLOCK, }; static ssize_t clear_refs_write(struct file *file, const char __user *buf, size_t count, loff_t *ppos) { struct task_struct *task; char buffer[PROC_NUMBUF] = {}; struct mm_struct *mm; struct vm_area_struct *vma; enum clear_refs_types type; int itype; int rv; if (count > sizeof(buffer) - 1) count = sizeof(buffer) - 1; if (copy_from_user(buffer, buf, count)) return -EFAULT; rv = kstrtoint(strstrip(buffer), 10, &itype); if (rv < 0) return rv; type = (enum clear_refs_types)itype; if (type < CLEAR_REFS_ALL || type >= CLEAR_REFS_LAST) return -EINVAL; task = get_proc_task(file_inode(file)); if (!task) return -ESRCH; mm = get_task_mm(task); if (mm) { VMA_ITERATOR(vmi, mm, 0); struct mmu_notifier_range range; struct clear_refs_private cp = { .type = type, }; if (mmap_write_lock_killable(mm)) { count = -EINTR; goto out_mm; } if (type == CLEAR_REFS_MM_HIWATER_RSS) { /* * Writing 5 to /proc/pid/clear_refs resets the peak * resident set size to this mm's current rss value. */ reset_mm_hiwater_rss(mm); goto out_unlock; } if (type == CLEAR_REFS_SOFT_DIRTY) { for_each_vma(vmi, vma) { if (!(vma->vm_flags & VM_SOFTDIRTY)) continue; vm_flags_clear(vma, VM_SOFTDIRTY); vma_set_page_prot(vma); } inc_tlb_flush_pending(mm); mmu_notifier_range_init(&range, MMU_NOTIFY_SOFT_DIRTY, 0, mm, 0, -1UL); mmu_notifier_invalidate_range_start(&range); } walk_page_range(mm, 0, -1, &clear_refs_walk_ops, &cp); if (type == CLEAR_REFS_SOFT_DIRTY) { mmu_notifier_invalidate_range_end(&range); flush_tlb_mm(mm); dec_tlb_flush_pending(mm); } out_unlock: mmap_write_unlock(mm); out_mm: mmput(mm); } put_task_struct(task); return count; } const struct file_operations proc_clear_refs_operations = { .write = clear_refs_write, .llseek = noop_llseek, }; typedef struct { u64 pme; } pagemap_entry_t; struct pagemapread { int pos, len; /* units: PM_ENTRY_BYTES, not bytes */ pagemap_entry_t *buffer; bool show_pfn; }; #define PAGEMAP_WALK_SIZE (PMD_SIZE) #define PAGEMAP_WALK_MASK (PMD_MASK) #define PM_ENTRY_BYTES sizeof(pagemap_entry_t) #define PM_PFRAME_BITS 55 #define PM_PFRAME_MASK GENMASK_ULL(PM_PFRAME_BITS - 1, 0) #define PM_SOFT_DIRTY BIT_ULL(55) #define PM_MMAP_EXCLUSIVE BIT_ULL(56) #define PM_UFFD_WP BIT_ULL(57) #define PM_GUARD_REGION BIT_ULL(58) #define PM_FILE BIT_ULL(61) #define PM_SWAP BIT_ULL(62) #define PM_PRESENT BIT_ULL(63) #define PM_END_OF_BUFFER 1 static inline pagemap_entry_t make_pme(u64 frame, u64 flags) { return (pagemap_entry_t) { .pme = (frame & PM_PFRAME_MASK) | flags }; } static int add_to_pagemap(pagemap_entry_t *pme, struct pagemapread *pm) { pm->buffer[pm->pos++] = *pme; if (pm->pos >= pm->len) return PM_END_OF_BUFFER; return 0; } static bool __folio_page_mapped_exclusively(struct folio *folio, struct page *page) { if (IS_ENABLED(CONFIG_PAGE_MAPCOUNT)) return folio_precise_page_mapcount(folio, page) == 1; return !folio_maybe_mapped_shared(folio); } static int pagemap_pte_hole(unsigned long start, unsigned long end, __always_unused int depth, struct mm_walk *walk) { struct pagemapread *pm = walk->private; unsigned long addr = start; int err = 0; while (addr < end) { struct vm_area_struct *vma = find_vma(walk->mm, addr); pagemap_entry_t pme = make_pme(0, 0); /* End of address space hole, which we mark as non-present. 
*/ unsigned long hole_end; if (vma) hole_end = min(end, vma->vm_start); else hole_end = end; for (; addr < hole_end; addr += PAGE_SIZE) { err = add_to_pagemap(&pme, pm); if (err) goto out; } if (!vma) break; /* Addresses in the VMA. */ if (vma->vm_flags & VM_SOFTDIRTY) pme = make_pme(0, PM_SOFT_DIRTY); for (; addr < min(end, vma->vm_end); addr += PAGE_SIZE) { err = add_to_pagemap(&pme, pm); if (err) goto out; } } out: return err; } static pagemap_entry_t pte_to_pagemap_entry(struct pagemapread *pm, struct vm_area_struct *vma, unsigned long addr, pte_t pte) { u64 frame = 0, flags = 0; struct page *page = NULL; struct folio *folio; if (pte_present(pte)) { if (pm->show_pfn) frame = pte_pfn(pte); flags |= PM_PRESENT; page = vm_normal_page(vma, addr, pte); if (pte_soft_dirty(pte)) flags |= PM_SOFT_DIRTY; if (pte_uffd_wp(pte)) flags |= PM_UFFD_WP; } else if (is_swap_pte(pte)) { swp_entry_t entry; if (pte_swp_soft_dirty(pte)) flags |= PM_SOFT_DIRTY; if (pte_swp_uffd_wp(pte)) flags |= PM_UFFD_WP; entry = pte_to_swp_entry(pte); if (pm->show_pfn) { pgoff_t offset; /* * For PFN swap offsets, keeping the offset field * to be PFN only to be compatible with old smaps. */ if (is_pfn_swap_entry(entry)) offset = swp_offset_pfn(entry); else offset = swp_offset(entry); frame = swp_type(entry) | (offset << MAX_SWAPFILES_SHIFT); } flags |= PM_SWAP; if (is_pfn_swap_entry(entry)) page = pfn_swap_entry_to_page(entry); if (pte_marker_entry_uffd_wp(entry)) flags |= PM_UFFD_WP; if (is_guard_swp_entry(entry)) flags |= PM_GUARD_REGION; } if (page) { folio = page_folio(page); if (!folio_test_anon(folio)) flags |= PM_FILE; if ((flags & PM_PRESENT) && __folio_page_mapped_exclusively(folio, page)) flags |= PM_MMAP_EXCLUSIVE; } if (vma->vm_flags & VM_SOFTDIRTY) flags |= PM_SOFT_DIRTY; return make_pme(frame, flags); } static int pagemap_pmd_range(pmd_t *pmdp, unsigned long addr, unsigned long end, struct mm_walk *walk) { struct vm_area_struct *vma = walk->vma; struct pagemapread *pm = walk->private; spinlock_t *ptl; pte_t *pte, *orig_pte; int err = 0; #ifdef CONFIG_TRANSPARENT_HUGEPAGE ptl = pmd_trans_huge_lock(pmdp, vma); if (ptl) { unsigned int idx = (addr & ~PMD_MASK) >> PAGE_SHIFT; u64 flags = 0, frame = 0; pmd_t pmd = *pmdp; struct page *page = NULL; struct folio *folio = NULL; if (vma->vm_flags & VM_SOFTDIRTY) flags |= PM_SOFT_DIRTY; if (pmd_present(pmd)) { page = pmd_page(pmd); flags |= PM_PRESENT; if (pmd_soft_dirty(pmd)) flags |= PM_SOFT_DIRTY; if (pmd_uffd_wp(pmd)) flags |= PM_UFFD_WP; if (pm->show_pfn) frame = pmd_pfn(pmd) + idx; } #ifdef CONFIG_ARCH_ENABLE_THP_MIGRATION else if (is_swap_pmd(pmd)) { swp_entry_t entry = pmd_to_swp_entry(pmd); unsigned long offset; if (pm->show_pfn) { if (is_pfn_swap_entry(entry)) offset = swp_offset_pfn(entry) + idx; else offset = swp_offset(entry) + idx; frame = swp_type(entry) | (offset << MAX_SWAPFILES_SHIFT); } flags |= PM_SWAP; if (pmd_swp_soft_dirty(pmd)) flags |= PM_SOFT_DIRTY; if (pmd_swp_uffd_wp(pmd)) flags |= PM_UFFD_WP; VM_BUG_ON(!is_pmd_migration_entry(pmd)); page = pfn_swap_entry_to_page(entry); } #endif if (page) { folio = page_folio(page); if (!folio_test_anon(folio)) flags |= PM_FILE; } for (; addr != end; addr += PAGE_SIZE, idx++) { u64 cur_flags = flags; pagemap_entry_t pme; if (folio && (flags & PM_PRESENT) && __folio_page_mapped_exclusively(folio, page)) cur_flags |= PM_MMAP_EXCLUSIVE; pme = make_pme(frame, cur_flags); err = add_to_pagemap(&pme, pm); if (err) break; if (pm->show_pfn) { if (flags & PM_PRESENT) frame++; else if (flags & PM_SWAP) frame += (1 << 
MAX_SWAPFILES_SHIFT); } } spin_unlock(ptl); return err; } #endif /* CONFIG_TRANSPARENT_HUGEPAGE */ /* * We can assume that @vma always points to a valid one and @end never * goes beyond vma->vm_end. */ orig_pte = pte = pte_offset_map_lock(walk->mm, pmdp, addr, &ptl); if (!pte) { walk->action = ACTION_AGAIN; return err; } for (; addr < end; pte++, addr += PAGE_SIZE) { pagemap_entry_t pme; pme = pte_to_pagemap_entry(pm, vma, addr, ptep_get(pte)); err = add_to_pagemap(&pme, pm); if (err) break; } pte_unmap_unlock(orig_pte, ptl); cond_resched(); return err; } #ifdef CONFIG_HUGETLB_PAGE /* This function walks within one hugetlb entry in the single call */ static int pagemap_hugetlb_range(pte_t *ptep, unsigned long hmask, unsigned long addr, unsigned long end, struct mm_walk *walk) { struct pagemapread *pm = walk->private; struct vm_area_struct *vma = walk->vma; u64 flags = 0, frame = 0; int err = 0; pte_t pte; if (vma->vm_flags & VM_SOFTDIRTY) flags |= PM_SOFT_DIRTY; pte = huge_ptep_get(walk->mm, addr, ptep); if (pte_present(pte)) { struct folio *folio = page_folio(pte_page(pte)); if (!folio_test_anon(folio)) flags |= PM_FILE; if (!folio_maybe_mapped_shared(folio) && !hugetlb_pmd_shared(ptep)) flags |= PM_MMAP_EXCLUSIVE; if (huge_pte_uffd_wp(pte)) flags |= PM_UFFD_WP; flags |= PM_PRESENT; if (pm->show_pfn) frame = pte_pfn(pte) + ((addr & ~hmask) >> PAGE_SHIFT); } else if (pte_swp_uffd_wp_any(pte)) { flags |= PM_UFFD_WP; } for (; addr != end; addr += PAGE_SIZE) { pagemap_entry_t pme = make_pme(frame, flags); err = add_to_pagemap(&pme, pm); if (err) return err; if (pm->show_pfn && (flags & PM_PRESENT)) frame++; } cond_resched(); return err; } #else #define pagemap_hugetlb_range NULL #endif /* HUGETLB_PAGE */ static const struct mm_walk_ops pagemap_ops = { .pmd_entry = pagemap_pmd_range, .pte_hole = pagemap_pte_hole, .hugetlb_entry = pagemap_hugetlb_range, .walk_lock = PGWALK_RDLOCK, }; /* * /proc/pid/pagemap - an array mapping virtual pages to pfns * * For each page in the address space, this file contains one 64-bit entry * consisting of the following: * * Bits 0-54 page frame number (PFN) if present * Bits 0-4 swap type if swapped * Bits 5-54 swap offset if swapped * Bit 55 pte is soft-dirty (see Documentation/admin-guide/mm/soft-dirty.rst) * Bit 56 page exclusively mapped * Bit 57 pte is uffd-wp write-protected * Bit 58 pte is a guard region * Bits 59-60 zero * Bit 61 page is file-page or shared-anon * Bit 62 page swapped * Bit 63 page present * * If the page is not present but in swap, then the PFN contains an * encoding of the swap file number and the page's offset into the * swap. Unmapped pages return a null PFN. This allows determining * precisely which pages are mapped (or in swap) and comparing mapped * pages between processes. * * Efficient users of this interface will use /proc/pid/maps to * determine which areas of memory are actually mapped and llseek to * skip over unmapped regions. 
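 *
 * For example, a reader can fetch the entry for the page containing a
 * virtual address vaddr as follows (an illustrative userspace sketch;
 * without CAP_SYS_ADMIN the PFN field reads back as zero):
 *
 *	uint64_t ent;
 *	long psz = sysconf(_SC_PAGESIZE);
 *	int fd = open("/proc/self/pagemap", O_RDONLY);
 *
 *	pread(fd, &ent, sizeof(ent), (vaddr / psz) * sizeof(ent));
 *	present = ent & (1ULL << 63);
 *	pfn = ent & ((1ULL << 55) - 1);		(bits 0-54)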
*/ static ssize_t pagemap_read(struct file *file, char __user *buf, size_t count, loff_t *ppos) { struct mm_struct *mm = file->private_data; struct pagemapread pm; unsigned long src; unsigned long svpfn; unsigned long start_vaddr; unsigned long end_vaddr; int ret = 0, copied = 0; if (!mm || !mmget_not_zero(mm)) goto out; ret = -EINVAL; /* file position must be aligned */ if ((*ppos % PM_ENTRY_BYTES) || (count % PM_ENTRY_BYTES)) goto out_mm; ret = 0; if (!count) goto out_mm; /* do not disclose physical addresses: attack vector */ pm.show_pfn = file_ns_capable(file, &init_user_ns, CAP_SYS_ADMIN); pm.len = (PAGEMAP_WALK_SIZE >> PAGE_SHIFT); pm.buffer = kmalloc_array(pm.len, PM_ENTRY_BYTES, GFP_KERNEL); ret = -ENOMEM; if (!pm.buffer) goto out_mm; src = *ppos; svpfn = src / PM_ENTRY_BYTES; end_vaddr = mm->task_size; /* watch out for wraparound */ start_vaddr = end_vaddr; if (svpfn <= (ULONG_MAX >> PAGE_SHIFT)) { unsigned long end; ret = mmap_read_lock_killable(mm); if (ret) goto out_free; start_vaddr = untagged_addr_remote(mm, svpfn << PAGE_SHIFT); mmap_read_unlock(mm); end = start_vaddr + ((count / PM_ENTRY_BYTES) << PAGE_SHIFT); if (end >= start_vaddr && end < mm->task_size) end_vaddr = end; } /* Ensure the address is inside the task */ if (start_vaddr > mm->task_size) start_vaddr = end_vaddr; ret = 0; while (count && (start_vaddr < end_vaddr)) { int len; unsigned long end; pm.pos = 0; end = (start_vaddr + PAGEMAP_WALK_SIZE) & PAGEMAP_WALK_MASK; /* overflow ? */ if (end < start_vaddr || end > end_vaddr) end = end_vaddr; ret = mmap_read_lock_killable(mm); if (ret) goto out_free; ret = walk_page_range(mm, start_vaddr, end, &pagemap_ops, &pm); mmap_read_unlock(mm); start_vaddr = end; len = min(count, PM_ENTRY_BYTES * pm.pos); if (copy_to_user(buf, pm.buffer, len)) { ret = -EFAULT; goto out_free; } copied += len; buf += len; count -= len; } *ppos += copied; if (!ret || ret == PM_END_OF_BUFFER) ret = copied; out_free: kfree(pm.buffer); out_mm: mmput(mm); out: return ret; } static int pagemap_open(struct inode *inode, struct file *file) { struct mm_struct *mm; mm = proc_mem_open(inode, PTRACE_MODE_READ); if (IS_ERR(mm)) return PTR_ERR(mm); file->private_data = mm; return 0; } static int pagemap_release(struct inode *inode, struct file *file) { struct mm_struct *mm = file->private_data; if (mm) mmdrop(mm); return 0; } #define PM_SCAN_CATEGORIES (PAGE_IS_WPALLOWED | PAGE_IS_WRITTEN | \ PAGE_IS_FILE | PAGE_IS_PRESENT | \ PAGE_IS_SWAPPED | PAGE_IS_PFNZERO | \ PAGE_IS_HUGE | PAGE_IS_SOFT_DIRTY) #define PM_SCAN_FLAGS (PM_SCAN_WP_MATCHING | PM_SCAN_CHECK_WPASYNC) struct pagemap_scan_private { struct pm_scan_arg arg; unsigned long masks_of_interest, cur_vma_category; struct page_region *vec_buf; unsigned long vec_buf_len, vec_buf_index, found_pages; struct page_region __user *vec_out; }; static unsigned long pagemap_page_category(struct pagemap_scan_private *p, struct vm_area_struct *vma, unsigned long addr, pte_t pte) { unsigned long categories = 0; if (pte_present(pte)) { struct page *page; categories |= PAGE_IS_PRESENT; if (!pte_uffd_wp(pte)) categories |= PAGE_IS_WRITTEN; if (p->masks_of_interest & PAGE_IS_FILE) { page = vm_normal_page(vma, addr, pte); if (page && !PageAnon(page)) categories |= PAGE_IS_FILE; } if (is_zero_pfn(pte_pfn(pte))) categories |= PAGE_IS_PFNZERO; if (pte_soft_dirty(pte)) categories |= PAGE_IS_SOFT_DIRTY; } else if (is_swap_pte(pte)) { swp_entry_t swp; categories |= PAGE_IS_SWAPPED; if (!pte_swp_uffd_wp_any(pte)) categories |= PAGE_IS_WRITTEN; if (p->masks_of_interest & 
PAGE_IS_FILE) { swp = pte_to_swp_entry(pte); if (is_pfn_swap_entry(swp) && !folio_test_anon(pfn_swap_entry_folio(swp))) categories |= PAGE_IS_FILE; } if (pte_swp_soft_dirty(pte)) categories |= PAGE_IS_SOFT_DIRTY; } return categories; } static void make_uffd_wp_pte(struct vm_area_struct *vma, unsigned long addr, pte_t *pte, pte_t ptent) { if (pte_present(ptent)) { pte_t old_pte; old_pte = ptep_modify_prot_start(vma, addr, pte); ptent = pte_mkuffd_wp(old_pte); ptep_modify_prot_commit(vma, addr, pte, old_pte, ptent); } else if (is_swap_pte(ptent)) { ptent = pte_swp_mkuffd_wp(ptent); set_pte_at(vma->vm_mm, addr, pte, ptent); } else { set_pte_at(vma->vm_mm, addr, pte, make_pte_marker(PTE_MARKER_UFFD_WP)); } } #ifdef CONFIG_TRANSPARENT_HUGEPAGE static unsigned long pagemap_thp_category(struct pagemap_scan_private *p, struct vm_area_struct *vma, unsigned long addr, pmd_t pmd) { unsigned long categories = PAGE_IS_HUGE; if (pmd_present(pmd)) { struct page *page; categories |= PAGE_IS_PRESENT; if (!pmd_uffd_wp(pmd)) categories |= PAGE_IS_WRITTEN; if (p->masks_of_interest & PAGE_IS_FILE) { page = vm_normal_page_pmd(vma, addr, pmd); if (page && !PageAnon(page)) categories |= PAGE_IS_FILE; } if (is_zero_pfn(pmd_pfn(pmd))) categories |= PAGE_IS_PFNZERO; if (pmd_soft_dirty(pmd)) categories |= PAGE_IS_SOFT_DIRTY; } else if (is_swap_pmd(pmd)) { swp_entry_t swp; categories |= PAGE_IS_SWAPPED; if (!pmd_swp_uffd_wp(pmd)) categories |= PAGE_IS_WRITTEN; if (pmd_swp_soft_dirty(pmd)) categories |= PAGE_IS_SOFT_DIRTY; if (p->masks_of_interest & PAGE_IS_FILE) { swp = pmd_to_swp_entry(pmd); if (is_pfn_swap_entry(swp) && !folio_test_anon(pfn_swap_entry_folio(swp))) categories |= PAGE_IS_FILE; } } return categories; } static void make_uffd_wp_pmd(struct vm_area_struct *vma, unsigned long addr, pmd_t *pmdp) { pmd_t old, pmd = *pmdp; if (pmd_present(pmd)) { old = pmdp_invalidate_ad(vma, addr, pmdp); pmd = pmd_mkuffd_wp(old); set_pmd_at(vma->vm_mm, addr, pmdp, pmd); } else if (is_migration_entry(pmd_to_swp_entry(pmd))) { pmd = pmd_swp_mkuffd_wp(pmd); set_pmd_at(vma->vm_mm, addr, pmdp, pmd); } } #endif /* CONFIG_TRANSPARENT_HUGEPAGE */ #ifdef CONFIG_HUGETLB_PAGE static unsigned long pagemap_hugetlb_category(pte_t pte) { unsigned long categories = PAGE_IS_HUGE; /* * According to pagemap_hugetlb_range(), file-backed HugeTLB * page cannot be swapped. So PAGE_IS_FILE is not checked for * swapped pages. 
*/ if (pte_present(pte)) { categories |= PAGE_IS_PRESENT; if (!huge_pte_uffd_wp(pte)) categories |= PAGE_IS_WRITTEN; if (!PageAnon(pte_page(pte))) categories |= PAGE_IS_FILE; if (is_zero_pfn(pte_pfn(pte))) categories |= PAGE_IS_PFNZERO; if (pte_soft_dirty(pte)) categories |= PAGE_IS_SOFT_DIRTY; } else if (is_swap_pte(pte)) { categories |= PAGE_IS_SWAPPED; if (!pte_swp_uffd_wp_any(pte)) categories |= PAGE_IS_WRITTEN; if (pte_swp_soft_dirty(pte)) categories |= PAGE_IS_SOFT_DIRTY; } return categories; } static void make_uffd_wp_huge_pte(struct vm_area_struct *vma, unsigned long addr, pte_t *ptep, pte_t ptent) { unsigned long psize; if (is_hugetlb_entry_hwpoisoned(ptent) || is_pte_marker(ptent)) return; psize = huge_page_size(hstate_vma(vma)); if (is_hugetlb_entry_migration(ptent)) set_huge_pte_at(vma->vm_mm, addr, ptep, pte_swp_mkuffd_wp(ptent), psize); else if (!huge_pte_none(ptent)) huge_ptep_modify_prot_commit(vma, addr, ptep, ptent, huge_pte_mkuffd_wp(ptent)); else set_huge_pte_at(vma->vm_mm, addr, ptep, make_pte_marker(PTE_MARKER_UFFD_WP), psize); } #endif /* CONFIG_HUGETLB_PAGE */ #if defined(CONFIG_TRANSPARENT_HUGEPAGE) || defined(CONFIG_HUGETLB_PAGE) static void pagemap_scan_backout_range(struct pagemap_scan_private *p, unsigned long addr, unsigned long end) { struct page_region *cur_buf = &p->vec_buf[p->vec_buf_index]; if (cur_buf->start != addr) cur_buf->end = addr; else cur_buf->start = cur_buf->end = 0; p->found_pages -= (end - addr) / PAGE_SIZE; } #endif static bool pagemap_scan_is_interesting_page(unsigned long categories, const struct pagemap_scan_private *p) { categories ^= p->arg.category_inverted; if ((categories & p->arg.category_mask) != p->arg.category_mask) return false; if (p->arg.category_anyof_mask && !(categories & p->arg.category_anyof_mask)) return false; return true; } static bool pagemap_scan_is_interesting_vma(unsigned long categories, const struct pagemap_scan_private *p) { unsigned long required = p->arg.category_mask & PAGE_IS_WPALLOWED; categories ^= p->arg.category_inverted; if ((categories & required) != required) return false; return true; } static int pagemap_scan_test_walk(unsigned long start, unsigned long end, struct mm_walk *walk) { struct pagemap_scan_private *p = walk->private; struct vm_area_struct *vma = walk->vma; unsigned long vma_category = 0; bool wp_allowed = userfaultfd_wp_async(vma) && userfaultfd_wp_use_markers(vma); if (!wp_allowed) { /* User requested explicit failure over wp-async capability */ if (p->arg.flags & PM_SCAN_CHECK_WPASYNC) return -EPERM; /* * User requires wr-protect, and allows silently skipping * unsupported vmas. */ if (p->arg.flags & PM_SCAN_WP_MATCHING) return 1; /* * Then the request doesn't involve wr-protects at all, * fall through to the rest checks, and allow vma walk. */ } if (vma->vm_flags & VM_PFNMAP) return 1; if (wp_allowed) vma_category |= PAGE_IS_WPALLOWED; if (vma->vm_flags & VM_SOFTDIRTY) vma_category |= PAGE_IS_SOFT_DIRTY; if (!pagemap_scan_is_interesting_vma(vma_category, p)) return 1; p->cur_vma_category = vma_category; return 0; } static bool pagemap_scan_push_range(unsigned long categories, struct pagemap_scan_private *p, unsigned long addr, unsigned long end) { struct page_region *cur_buf = &p->vec_buf[p->vec_buf_index]; /* * When there is no output buffer provided at all, the sentinel values * won't match here. There is no other way for `cur_buf->end` to be * non-zero other than it being non-empty. 
	 */
*/ if (addr == cur_buf->end && categories == cur_buf->categories) { cur_buf->end = end; return true; } if (cur_buf->end) { if (p->vec_buf_index >= p->vec_buf_len - 1) return false; cur_buf = &p->vec_buf[++p->vec_buf_index]; } cur_buf->start = addr; cur_buf->end = end; cur_buf->categories = categories; return true; } static int pagemap_scan_output(unsigned long categories, struct pagemap_scan_private *p, unsigned long addr, unsigned long *end) { unsigned long n_pages, total_pages; int ret = 0; if (!p->vec_buf) return 0; categories &= p->arg.return_mask; n_pages = (*end - addr) / PAGE_SIZE; if (check_add_overflow(p->found_pages, n_pages, &total_pages) || total_pages > p->arg.max_pages) { size_t n_too_much = total_pages - p->arg.max_pages; *end -= n_too_much * PAGE_SIZE; n_pages -= n_too_much; ret = -ENOSPC; } if (!pagemap_scan_push_range(categories, p, addr, *end)) { *end = addr; n_pages = 0; ret = -ENOSPC; } p->found_pages += n_pages; if (ret) p->arg.walk_end = *end; return ret; } static int pagemap_scan_thp_entry(pmd_t *pmd, unsigned long start, unsigned long end, struct mm_walk *walk) { #ifdef CONFIG_TRANSPARENT_HUGEPAGE struct pagemap_scan_private *p = walk->private; struct vm_area_struct *vma = walk->vma; unsigned long categories; spinlock_t *ptl; int ret = 0; ptl = pmd_trans_huge_lock(pmd, vma); if (!ptl) return -ENOENT; categories = p->cur_vma_category | pagemap_thp_category(p, vma, start, *pmd); if (!pagemap_scan_is_interesting_page(categories, p)) goto out_unlock; ret = pagemap_scan_output(categories, p, start, &end); if (start == end) goto out_unlock; if (~p->arg.flags & PM_SCAN_WP_MATCHING) goto out_unlock; if (~categories & PAGE_IS_WRITTEN) goto out_unlock; /* * Break huge page into small pages if the WP operation * needs to be performed on a portion of the huge page. 
*/ if (end != start + HPAGE_SIZE) { spin_unlock(ptl); split_huge_pmd(vma, pmd, start); pagemap_scan_backout_range(p, start, end); /* Report as if there was no THP */ return -ENOENT; } make_uffd_wp_pmd(vma, start, pmd); flush_tlb_range(vma, start, end); out_unlock: spin_unlock(ptl); return ret; #else /* !CONFIG_TRANSPARENT_HUGEPAGE */ return -ENOENT; #endif } static int pagemap_scan_pmd_entry(pmd_t *pmd, unsigned long start, unsigned long end, struct mm_walk *walk) { struct pagemap_scan_private *p = walk->private; struct vm_area_struct *vma = walk->vma; unsigned long addr, flush_end = 0; pte_t *pte, *start_pte; spinlock_t *ptl; int ret; ret = pagemap_scan_thp_entry(pmd, start, end, walk); if (ret != -ENOENT) return ret; ret = 0; start_pte = pte = pte_offset_map_lock(vma->vm_mm, pmd, start, &ptl); if (!pte) { walk->action = ACTION_AGAIN; return 0; } arch_enter_lazy_mmu_mode(); if ((p->arg.flags & PM_SCAN_WP_MATCHING) && !p->vec_out) { /* Fast path for performing exclusive WP */ for (addr = start; addr != end; pte++, addr += PAGE_SIZE) { pte_t ptent = ptep_get(pte); if ((pte_present(ptent) && pte_uffd_wp(ptent)) || pte_swp_uffd_wp_any(ptent)) continue; make_uffd_wp_pte(vma, addr, pte, ptent); if (!flush_end) start = addr; flush_end = addr + PAGE_SIZE; } goto flush_and_return; } if (!p->arg.category_anyof_mask && !p->arg.category_inverted && p->arg.category_mask == PAGE_IS_WRITTEN && p->arg.return_mask == PAGE_IS_WRITTEN) { for (addr = start; addr < end; pte++, addr += PAGE_SIZE) { unsigned long next = addr + PAGE_SIZE; pte_t ptent = ptep_get(pte); if ((pte_present(ptent) && pte_uffd_wp(ptent)) || pte_swp_uffd_wp_any(ptent)) continue; ret = pagemap_scan_output(p->cur_vma_category | PAGE_IS_WRITTEN, p, addr, &next); if (next == addr) break; if (~p->arg.flags & PM_SCAN_WP_MATCHING) continue; make_uffd_wp_pte(vma, addr, pte, ptent); if (!flush_end) start = addr; flush_end = next; } goto flush_and_return; } for (addr = start; addr != end; pte++, addr += PAGE_SIZE) { pte_t ptent = ptep_get(pte); unsigned long categories = p->cur_vma_category | pagemap_page_category(p, vma, addr, ptent); unsigned long next = addr + PAGE_SIZE; if (!pagemap_scan_is_interesting_page(categories, p)) continue; ret = pagemap_scan_output(categories, p, addr, &next); if (next == addr) break; if (~p->arg.flags & PM_SCAN_WP_MATCHING) continue; if (~categories & PAGE_IS_WRITTEN) continue; make_uffd_wp_pte(vma, addr, pte, ptent); if (!flush_end) start = addr; flush_end = next; } flush_and_return: if (flush_end) flush_tlb_range(vma, start, addr); arch_leave_lazy_mmu_mode(); pte_unmap_unlock(start_pte, ptl); cond_resched(); return ret; } #ifdef CONFIG_HUGETLB_PAGE static int pagemap_scan_hugetlb_entry(pte_t *ptep, unsigned long hmask, unsigned long start, unsigned long end, struct mm_walk *walk) { struct pagemap_scan_private *p = walk->private; struct vm_area_struct *vma = walk->vma; unsigned long categories; spinlock_t *ptl; int ret = 0; pte_t pte; if (~p->arg.flags & PM_SCAN_WP_MATCHING) { /* Go the short route when not write-protecting pages. 
*/ pte = huge_ptep_get(walk->mm, start, ptep); categories = p->cur_vma_category | pagemap_hugetlb_category(pte); if (!pagemap_scan_is_interesting_page(categories, p)) return 0; return pagemap_scan_output(categories, p, start, &end); } i_mmap_lock_write(vma->vm_file->f_mapping); ptl = huge_pte_lock(hstate_vma(vma), vma->vm_mm, ptep); pte = huge_ptep_get(walk->mm, start, ptep); categories = p->cur_vma_category | pagemap_hugetlb_category(pte); if (!pagemap_scan_is_interesting_page(categories, p)) goto out_unlock; ret = pagemap_scan_output(categories, p, start, &end); if (start == end) goto out_unlock; if (~categories & PAGE_IS_WRITTEN) goto out_unlock; if (end != start + HPAGE_SIZE) { /* Partial HugeTLB page WP isn't possible. */ pagemap_scan_backout_range(p, start, end); p->arg.walk_end = start; ret = 0; goto out_unlock; } make_uffd_wp_huge_pte(vma, start, ptep, pte); flush_hugetlb_tlb_range(vma, start, end); out_unlock: spin_unlock(ptl); i_mmap_unlock_write(vma->vm_file->f_mapping); return ret; } #else #define pagemap_scan_hugetlb_entry NULL #endif static int pagemap_scan_pte_hole(unsigned long addr, unsigned long end, int depth, struct mm_walk *walk) { struct pagemap_scan_private *p = walk->private; struct vm_area_struct *vma = walk->vma; int ret, err; if (!vma || !pagemap_scan_is_interesting_page(p->cur_vma_category, p)) return 0; ret = pagemap_scan_output(p->cur_vma_category, p, addr, &end); if (addr == end) return ret; if (~p->arg.flags & PM_SCAN_WP_MATCHING) return ret; err = uffd_wp_range(vma, addr, end - addr, true); if (err < 0) ret = err; return ret; } static const struct mm_walk_ops pagemap_scan_ops = { .test_walk = pagemap_scan_test_walk, .pmd_entry = pagemap_scan_pmd_entry, .pte_hole = pagemap_scan_pte_hole, .hugetlb_entry = pagemap_scan_hugetlb_entry, }; static int pagemap_scan_get_args(struct pm_scan_arg *arg, unsigned long uarg) { if (copy_from_user(arg, (void __user *)uarg, sizeof(*arg))) return -EFAULT; if (arg->size != sizeof(struct pm_scan_arg)) return -EINVAL; /* Validate requested features */ if (arg->flags & ~PM_SCAN_FLAGS) return -EINVAL; if ((arg->category_inverted | arg->category_mask | arg->category_anyof_mask | arg->return_mask) & ~PM_SCAN_CATEGORIES) return -EINVAL; arg->start = untagged_addr((unsigned long)arg->start); arg->end = untagged_addr((unsigned long)arg->end); arg->vec = untagged_addr((unsigned long)arg->vec); /* Validate memory pointers */ if (!IS_ALIGNED(arg->start, PAGE_SIZE)) return -EINVAL; if (!access_ok((void __user *)(long)arg->start, arg->end - arg->start)) return -EFAULT; if (!arg->vec && arg->vec_len) return -EINVAL; if (UINT_MAX == SIZE_MAX && arg->vec_len > SIZE_MAX) return -EINVAL; if (arg->vec && !access_ok((void __user *)(long)arg->vec, size_mul(arg->vec_len, sizeof(struct page_region)))) return -EFAULT; /* Fixup default values */ arg->end = ALIGN(arg->end, PAGE_SIZE); arg->walk_end = 0; if (!arg->max_pages) arg->max_pages = ULONG_MAX; return 0; } static int pagemap_scan_writeback_args(struct pm_scan_arg *arg, unsigned long uargl) { struct pm_scan_arg __user *uarg = (void __user *)uargl; if (copy_to_user(&uarg->walk_end, &arg->walk_end, sizeof(arg->walk_end))) return -EFAULT; return 0; } static int pagemap_scan_init_bounce_buffer(struct pagemap_scan_private *p) { if (!p->arg.vec_len) return 0; p->vec_buf_len = min_t(size_t, PAGEMAP_WALK_SIZE >> PAGE_SHIFT, p->arg.vec_len); p->vec_buf = kmalloc_array(p->vec_buf_len, sizeof(*p->vec_buf), GFP_KERNEL); if (!p->vec_buf) return -ENOMEM; p->vec_buf->start = p->vec_buf->end = 0; p->vec_out = 
(struct page_region __user *)(long)p->arg.vec; return 0; } static long pagemap_scan_flush_buffer(struct pagemap_scan_private *p) { const struct page_region *buf = p->vec_buf; long n = p->vec_buf_index; if (!p->vec_buf) return 0; if (buf[n].end != buf[n].start) n++; if (!n) return 0; if (copy_to_user(p->vec_out, buf, n * sizeof(*buf))) return -EFAULT; p->arg.vec_len -= n; p->vec_out += n; p->vec_buf_index = 0; p->vec_buf_len = min_t(size_t, p->vec_buf_len, p->arg.vec_len); p->vec_buf->start = p->vec_buf->end = 0; return n; } static long do_pagemap_scan(struct mm_struct *mm, unsigned long uarg) { struct pagemap_scan_private p = {0}; unsigned long walk_start; size_t n_ranges_out = 0; int ret; ret = pagemap_scan_get_args(&p.arg, uarg); if (ret) return ret; p.masks_of_interest = p.arg.category_mask | p.arg.category_anyof_mask | p.arg.return_mask; ret = pagemap_scan_init_bounce_buffer(&p); if (ret) return ret; for (walk_start = p.arg.start; walk_start < p.arg.end; walk_start = p.arg.walk_end) { struct mmu_notifier_range range; long n_out; if (fatal_signal_pending(current)) { ret = -EINTR; break; } ret = mmap_read_lock_killable(mm); if (ret) break; /* Protection change for the range is going to happen. */ if (p.arg.flags & PM_SCAN_WP_MATCHING) { mmu_notifier_range_init(&range, MMU_NOTIFY_PROTECTION_VMA, 0, mm, walk_start, p.arg.end); mmu_notifier_invalidate_range_start(&range); } ret = walk_page_range(mm, walk_start, p.arg.end, &pagemap_scan_ops, &p); if (p.arg.flags & PM_SCAN_WP_MATCHING) mmu_notifier_invalidate_range_end(&range); mmap_read_unlock(mm); n_out = pagemap_scan_flush_buffer(&p); if (n_out < 0) ret = n_out; else n_ranges_out += n_out; if (ret != -ENOSPC) break; if (p.arg.vec_len == 0 || p.found_pages == p.arg.max_pages) break; } /* ENOSPC signifies early stop (buffer full) from the walk. 
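	 * The ranges flushed so far are still reported: the statement
	 * below turns -ENOSPC into the number of ranges written out.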
*/ if (!ret || ret == -ENOSPC) ret = n_ranges_out; /* The walk_end isn't set when ret is zero */ if (!p.arg.walk_end) p.arg.walk_end = p.arg.end; if (pagemap_scan_writeback_args(&p.arg, uarg)) ret = -EFAULT; kfree(p.vec_buf); return ret; } static long do_pagemap_cmd(struct file *file, unsigned int cmd, unsigned long arg) { struct mm_struct *mm = file->private_data; switch (cmd) { case PAGEMAP_SCAN: return do_pagemap_scan(mm, arg); default: return -EINVAL; } } const struct file_operations proc_pagemap_operations = { .llseek = mem_lseek, /* borrow this */ .read = pagemap_read, .open = pagemap_open, .release = pagemap_release, .unlocked_ioctl = do_pagemap_cmd, .compat_ioctl = do_pagemap_cmd, }; #endif /* CONFIG_PROC_PAGE_MONITOR */ #ifdef CONFIG_NUMA struct numa_maps { unsigned long pages; unsigned long anon; unsigned long active; unsigned long writeback; unsigned long mapcount_max; unsigned long dirty; unsigned long swapcache; unsigned long node[MAX_NUMNODES]; }; struct numa_maps_private { struct proc_maps_private proc_maps; struct numa_maps md; }; static void gather_stats(struct page *page, struct numa_maps *md, int pte_dirty, unsigned long nr_pages) { struct folio *folio = page_folio(page); int count; if (IS_ENABLED(CONFIG_PAGE_MAPCOUNT)) count = folio_precise_page_mapcount(folio, page); else count = folio_average_page_mapcount(folio); md->pages += nr_pages; if (pte_dirty || folio_test_dirty(folio)) md->dirty += nr_pages; if (folio_test_swapcache(folio)) md->swapcache += nr_pages; if (folio_test_active(folio) || folio_test_unevictable(folio)) md->active += nr_pages; if (folio_test_writeback(folio)) md->writeback += nr_pages; if (folio_test_anon(folio)) md->anon += nr_pages; if (count > md->mapcount_max) md->mapcount_max = count; md->node[folio_nid(folio)] += nr_pages; } static struct page *can_gather_numa_stats(pte_t pte, struct vm_area_struct *vma, unsigned long addr) { struct page *page; int nid; if (!pte_present(pte)) return NULL; page = vm_normal_page(vma, addr, pte); if (!page || is_zone_device_page(page)) return NULL; if (PageReserved(page)) return NULL; nid = page_to_nid(page); if (!node_isset(nid, node_states[N_MEMORY])) return NULL; return page; } #ifdef CONFIG_TRANSPARENT_HUGEPAGE static struct page *can_gather_numa_stats_pmd(pmd_t pmd, struct vm_area_struct *vma, unsigned long addr) { struct page *page; int nid; if (!pmd_present(pmd)) return NULL; page = vm_normal_page_pmd(vma, addr, pmd); if (!page) return NULL; if (PageReserved(page)) return NULL; nid = page_to_nid(page); if (!node_isset(nid, node_states[N_MEMORY])) return NULL; return page; } #endif static int gather_pte_stats(pmd_t *pmd, unsigned long addr, unsigned long end, struct mm_walk *walk) { struct numa_maps *md = walk->private; struct vm_area_struct *vma = walk->vma; spinlock_t *ptl; pte_t *orig_pte; pte_t *pte; #ifdef CONFIG_TRANSPARENT_HUGEPAGE ptl = pmd_trans_huge_lock(pmd, vma); if (ptl) { struct page *page; page = can_gather_numa_stats_pmd(*pmd, vma, addr); if (page) gather_stats(page, md, pmd_dirty(*pmd), HPAGE_PMD_SIZE/PAGE_SIZE); spin_unlock(ptl); return 0; } #endif orig_pte = pte = pte_offset_map_lock(walk->mm, pmd, addr, &ptl); if (!pte) { walk->action = ACTION_AGAIN; return 0; } do { pte_t ptent = ptep_get(pte); struct page *page = can_gather_numa_stats(ptent, vma, addr); if (!page) continue; gather_stats(page, md, pte_dirty(ptent), 1); } while (pte++, addr += PAGE_SIZE, addr != end); pte_unmap_unlock(orig_pte, ptl); cond_resched(); return 0; } #ifdef CONFIG_HUGETLB_PAGE static int 
gather_hugetlb_stats(pte_t *pte, unsigned long hmask, unsigned long addr, unsigned long end, struct mm_walk *walk) { pte_t huge_pte = huge_ptep_get(walk->mm, addr, pte); struct numa_maps *md; struct page *page; if (!pte_present(huge_pte)) return 0; page = pte_page(huge_pte); md = walk->private; gather_stats(page, md, pte_dirty(huge_pte), 1); return 0; } #else static int gather_hugetlb_stats(pte_t *pte, unsigned long hmask, unsigned long addr, unsigned long end, struct mm_walk *walk) { return 0; } #endif static const struct mm_walk_ops show_numa_ops = { .hugetlb_entry = gather_hugetlb_stats, .pmd_entry = gather_pte_stats, .walk_lock = PGWALK_RDLOCK, }; /* * Display pages allocated per node and memory policy via /proc. */ static int show_numa_map(struct seq_file *m, void *v) { struct numa_maps_private *numa_priv = m->private; struct proc_maps_private *proc_priv = &numa_priv->proc_maps; struct vm_area_struct *vma = v; struct numa_maps *md = &numa_priv->md; struct file *file = vma->vm_file; struct mm_struct *mm = vma->vm_mm; char buffer[64]; struct mempolicy *pol; pgoff_t ilx; int nid; if (!mm) return 0; /* Ensure we start with an empty set of numa_maps statistics. */ memset(md, 0, sizeof(*md)); pol = __get_vma_policy(vma, vma->vm_start, &ilx); if (pol) { mpol_to_str(buffer, sizeof(buffer), pol); mpol_cond_put(pol); } else { mpol_to_str(buffer, sizeof(buffer), proc_priv->task_mempolicy); } seq_printf(m, "%08lx %s", vma->vm_start, buffer); if (file) { seq_puts(m, " file="); seq_path(m, file_user_path(file), "\n\t= "); } else if (vma_is_initial_heap(vma)) { seq_puts(m, " heap"); } else if (vma_is_initial_stack(vma)) { seq_puts(m, " stack"); } if (is_vm_hugetlb_page(vma)) seq_puts(m, " huge"); /* mmap_lock is held by m_start */ walk_page_vma(vma, &show_numa_ops, md); if (!md->pages) goto out; if (md->anon) seq_printf(m, " anon=%lu", md->anon); if (md->dirty) seq_printf(m, " dirty=%lu", md->dirty); if (md->pages != md->anon && md->pages != md->dirty) seq_printf(m, " mapped=%lu", md->pages); if (md->mapcount_max > 1) seq_printf(m, " mapmax=%lu", md->mapcount_max); if (md->swapcache) seq_printf(m, " swapcache=%lu", md->swapcache); if (md->active < md->pages && !is_vm_hugetlb_page(vma)) seq_printf(m, " active=%lu", md->active); if (md->writeback) seq_printf(m, " writeback=%lu", md->writeback); for_each_node_state(nid, N_MEMORY) if (md->node[nid]) seq_printf(m, " N%d=%lu", nid, md->node[nid]); seq_printf(m, " kernelpagesize_kB=%lu", vma_kernel_pagesize(vma) >> 10); out: seq_putc(m, '\n'); return 0; } static const struct seq_operations proc_pid_numa_maps_op = { .start = m_start, .next = m_next, .stop = m_stop, .show = show_numa_map, }; static int pid_numa_maps_open(struct inode *inode, struct file *file) { return proc_maps_open(inode, file, &proc_pid_numa_maps_op, sizeof(struct numa_maps_private)); } const struct file_operations proc_pid_numa_maps_operations = { .open = pid_numa_maps_open, .read = seq_read, .llseek = seq_lseek, .release = proc_map_release, }; #endif /* CONFIG_NUMA */ |
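/*
 * An illustrative /proc/<pid>/numa_maps line as produced by
 * show_numa_map() above (all values made up):
 *
 *	7f4e2a600000 default file=/usr/lib/libc.so.6 anon=12 dirty=12
 *		mapped=398 mapmax=35 active=240 N0=300 N1=98
 *		kernelpagesize_kB=4
 *
 * (Emitted as a single line; wrapped here for readability.)
 */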
// SPDX-License-Identifier: GPL-2.0-or-later
/*
 * Squashfs - a compressed read only filesystem for Linux
 *
 * Copyright (c) 2002, 2003, 2004, 2005, 2006, 2007, 2008
 * Phillip Lougher <phillip@squashfs.org.uk>
 *
 * cache.c
 */

/*
 * Blocks in Squashfs are compressed.  To avoid repeatedly decompressing
 * recently accessed data, Squashfs uses two small metadata and fragment
 * caches.
 *
 * This file implements a generic cache implementation used for both caches,
 * plus functions layered on top of the generic cache implementation to
 * access the metadata and fragment caches.
 *
 * To avoid out of memory and fragmentation issues with vmalloc the cache
 * uses sequences of kmalloced PAGE_SIZE buffers.
 *
 * It should be noted that the cache is not used for file datablocks; these
 * are decompressed and cached in the page-cache in the normal way.  The
 * cache is only used to temporarily cache fragment and metadata blocks
 * which have been read as a result of a metadata (i.e. inode or
 * directory) or fragment access.  Because metadata and fragments are packed
 * together into blocks (to gain greater compression) the read of a particular
 * piece of metadata or fragment will retrieve other metadata/fragments which
 * have been packed with it; these, because of locality-of-reference, may be
 * read in the near future.  Temporarily caching them ensures they are
 * available for near future access without requiring an additional read and
 * decompress.
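 *
 * The typical calling pattern for the cache implemented below is
 * (an illustrative sketch; msblk->block_cache is the metadata cache
 * set up at mount time):
 *
 *	entry = squashfs_cache_get(sb, msblk->block_cache, block, length);
 *	if (entry->error)
 *		err = entry->error;
 *	else
 *		squashfs_copy_data(buffer, entry, offset, length);
 *	squashfs_cache_put(entry);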
*/ #include <linux/fs.h> #include <linux/vfs.h> #include <linux/slab.h> #include <linux/vmalloc.h> #include <linux/sched.h> #include <linux/spinlock.h> #include <linux/wait.h> #include <linux/pagemap.h> #include "squashfs_fs.h" #include "squashfs_fs_sb.h" #include "squashfs.h" #include "page_actor.h" /* * Look-up block in cache, and increment usage count. If not in cache, read * and decompress it from disk. */ struct squashfs_cache_entry *squashfs_cache_get(struct super_block *sb, struct squashfs_cache *cache, u64 block, int length) { int i, n; struct squashfs_cache_entry *entry; spin_lock(&cache->lock); while (1) { for (i = cache->curr_blk, n = 0; n < cache->entries; n++) { if (cache->entry[i].block == block) { cache->curr_blk = i; break; } i = (i + 1) % cache->entries; } if (n == cache->entries) { /* * Block not in cache; if all cache entries are used, * go to sleep waiting for one to become available. */ if (cache->unused == 0) { cache->num_waiters++; spin_unlock(&cache->lock); wait_event(cache->wait_queue, cache->unused); spin_lock(&cache->lock); cache->num_waiters--; continue; } /* * At least one unused cache entry. A simple * round-robin strategy is used to choose the entry to * be evicted from the cache. */ i = cache->next_blk; for (n = 0; n < cache->entries; n++) { if (cache->entry[i].refcount == 0) break; i = (i + 1) % cache->entries; } cache->next_blk = (i + 1) % cache->entries; entry = &cache->entry[i]; /* * Initialise chosen cache entry, and fill it in from * disk. */ cache->unused--; entry->block = block; entry->refcount = 1; entry->pending = 1; entry->num_waiters = 0; entry->error = 0; spin_unlock(&cache->lock); entry->length = squashfs_read_data(sb, block, length, &entry->next_index, entry->actor); spin_lock(&cache->lock); if (entry->length < 0) entry->error = entry->length; entry->pending = 0; /* * While filling this entry one or more other processes * have looked it up in the cache, and have slept * waiting for it to become available. */ if (entry->num_waiters) { spin_unlock(&cache->lock); wake_up_all(&entry->wait_queue); } else spin_unlock(&cache->lock); goto out; } /* * Block already in cache. Increment refcount so it doesn't * get reused until we're finished with it; if it was * previously unused there's one less cache entry available * for reuse. */ entry = &cache->entry[i]; if (entry->refcount == 0) cache->unused--; entry->refcount++; /* * If the entry is currently being filled in by another process * go to sleep waiting for it to become available. */ if (entry->pending) { entry->num_waiters++; spin_unlock(&cache->lock); wait_event(entry->wait_queue, !entry->pending); } else spin_unlock(&cache->lock); goto out; } out: TRACE("Got %s %d, start block %lld, refcount %d, error %d\n", cache->name, i, entry->block, entry->refcount, entry->error); if (entry->error) ERROR("Unable to read %s cache entry [%llx]\n", cache->name, block); return entry; } /* * Release cache entry; once its usage count is zero it can be reused. */ void squashfs_cache_put(struct squashfs_cache_entry *entry) { struct squashfs_cache *cache = entry->cache; spin_lock(&cache->lock); entry->refcount--; if (entry->refcount == 0) { cache->unused++; /* * If there are any processes waiting for a block to become * available, wake one up. */ if (cache->num_waiters) { spin_unlock(&cache->lock); wake_up(&cache->wait_queue); return; } } spin_unlock(&cache->lock); } /* * Delete cache reclaiming all kmalloced buffers. 
*/ void squashfs_cache_delete(struct squashfs_cache *cache) { int i, j; if (IS_ERR(cache) || cache == NULL) return; for (i = 0; i < cache->entries; i++) { if (cache->entry[i].data) { for (j = 0; j < cache->pages; j++) kfree(cache->entry[i].data[j]); kfree(cache->entry[i].data); } kfree(cache->entry[i].actor); } kfree(cache->entry); kfree(cache); } /* * Initialise cache allocating the specified number of entries, each of * size block_size. To avoid vmalloc fragmentation issues each entry * is allocated as a sequence of kmalloced PAGE_SIZE buffers. */ struct squashfs_cache *squashfs_cache_init(char *name, int entries, int block_size) { int i, j; struct squashfs_cache *cache; if (entries == 0) return NULL; cache = kzalloc(sizeof(*cache), GFP_KERNEL); if (cache == NULL) { ERROR("Failed to allocate %s cache\n", name); return ERR_PTR(-ENOMEM); } cache->entry = kcalloc(entries, sizeof(*(cache->entry)), GFP_KERNEL); if (cache->entry == NULL) { ERROR("Failed to allocate %s cache\n", name); goto cleanup; } cache->curr_blk = 0; cache->next_blk = 0; cache->unused = entries; cache->entries = entries; cache->block_size = block_size; cache->pages = block_size >> PAGE_SHIFT; cache->pages = cache->pages ? cache->pages : 1; cache->name = name; cache->num_waiters = 0; spin_lock_init(&cache->lock); init_waitqueue_head(&cache->wait_queue); for (i = 0; i < entries; i++) { struct squashfs_cache_entry *entry = &cache->entry[i]; init_waitqueue_head(&cache->entry[i].wait_queue); entry->cache = cache; entry->block = SQUASHFS_INVALID_BLK; entry->data = kcalloc(cache->pages, sizeof(void *), GFP_KERNEL); if (entry->data == NULL) { ERROR("Failed to allocate %s cache entry\n", name); goto cleanup; } for (j = 0; j < cache->pages; j++) { entry->data[j] = kmalloc(PAGE_SIZE, GFP_KERNEL); if (entry->data[j] == NULL) { ERROR("Failed to allocate %s buffer\n", name); goto cleanup; } } entry->actor = squashfs_page_actor_init(entry->data, cache->pages, 0); if (entry->actor == NULL) { ERROR("Failed to allocate %s cache entry\n", name); goto cleanup; } } return cache; cleanup: squashfs_cache_delete(cache); return ERR_PTR(-ENOMEM); } /* * Copy up to length bytes from cache entry to buffer starting at offset bytes * into the cache entry. If there are fewer than length bytes then copy the * number of bytes available. In all cases return the number of bytes copied. */ int squashfs_copy_data(void *buffer, struct squashfs_cache_entry *entry, int offset, int length) { int remaining = length; if (length == 0) return 0; else if (buffer == NULL) return min(length, entry->length - offset); while (offset < entry->length) { void *buff = entry->data[offset / PAGE_SIZE] + (offset % PAGE_SIZE); int bytes = min_t(int, entry->length - offset, PAGE_SIZE - (offset % PAGE_SIZE)); if (bytes >= remaining) { memcpy(buffer, buff, remaining); remaining = 0; break; } memcpy(buffer, buff, bytes); buffer += bytes; remaining -= bytes; offset += bytes; } return length - remaining; } /* * Read length bytes from metadata position <block, offset> (block is the * start of the compressed block on disk, and offset is the offset into * the block once decompressed). Data is packed into consecutive blocks, * and length bytes may require reading more than one block. 
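 *
 * Worked example (illustrative, assuming the usual 8 KiB decompressed
 * metadata block size): reading a 300-byte inode at <block, 8000> copies
 * 192 bytes (8192 - 8000) from the first cache entry, then follows
 * entry->next_index with *offset reset to 0 and copies the remaining
 * 108 bytes from the following block.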
*/ int squashfs_read_metadata(struct super_block *sb, void *buffer, u64 *block, int *offset, int length) { struct squashfs_sb_info *msblk = sb->s_fs_info; int bytes, res = length; struct squashfs_cache_entry *entry; TRACE("Entered squashfs_read_metadata [%llx:%x]\n", *block, *offset); if (unlikely(length < 0)) return -EIO; while (length) { entry = squashfs_cache_get(sb, msblk->block_cache, *block, 0); if (entry->error) { res = entry->error; goto error; } else if (*offset >= entry->length) { res = -EIO; goto error; } bytes = squashfs_copy_data(buffer, entry, *offset, length); if (buffer) buffer += bytes; length -= bytes; *offset += bytes; if (*offset == entry->length) { *block = entry->next_index; *offset = 0; } squashfs_cache_put(entry); } return res; error: squashfs_cache_put(entry); return res; } /* * Look-up in the fragment cache the fragment located at <start_block> in the * filesystem. If necessary read and decompress it from disk. */ struct squashfs_cache_entry *squashfs_get_fragment(struct super_block *sb, u64 start_block, int length) { struct squashfs_sb_info *msblk = sb->s_fs_info; return squashfs_cache_get(sb, msblk->fragment_cache, start_block, length); } /* * Read and decompress the datablock located at <start_block> in the * filesystem. The cache is used here to avoid duplicating locking and * read/decompress code. */ struct squashfs_cache_entry *squashfs_get_datablock(struct super_block *sb, u64 start_block, int length) { struct squashfs_sb_info *msblk = sb->s_fs_info; return squashfs_cache_get(sb, msblk->read_page, start_block, length); } /* * Read a filesystem table (uncompressed sequence of bytes) from disk */ void *squashfs_read_table(struct super_block *sb, u64 block, int length) { int pages = (length + PAGE_SIZE - 1) >> PAGE_SHIFT; int i, res; void *table, *buffer, **data; struct squashfs_page_actor *actor; table = buffer = kmalloc(length, GFP_KERNEL); if (table == NULL) return ERR_PTR(-ENOMEM); data = kcalloc(pages, sizeof(void *), GFP_KERNEL); if (data == NULL) { res = -ENOMEM; goto failed; } actor = squashfs_page_actor_init(data, pages, length); if (actor == NULL) { res = -ENOMEM; goto failed2; } for (i = 0; i < pages; i++, buffer += PAGE_SIZE) data[i] = buffer; res = squashfs_read_data(sb, block, length | SQUASHFS_COMPRESSED_BIT_BLOCK, NULL, actor); kfree(data); kfree(actor); if (res < 0) goto failed; return table; failed2: kfree(data); failed: kfree(table); return ERR_PTR(res); }
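/*
 * Hedged usage sketch (illustrative, not part of cache.c; the function name
 * example_read_fragment is hypothetical): the reference-counting contract of
 * the cache above is get -> copy -> put, using only helpers defined in this
 * file.
 */
static int example_read_fragment(struct super_block *sb, u64 start_block,
				 int length, void *dst, int offset, int bytes)
{
	struct squashfs_cache_entry *entry;
	int res;

	/* Takes a reference; may sleep until the entry has been filled in */
	entry = squashfs_get_fragment(sb, start_block, length);
	if (entry->error)
		res = entry->error;
	else
		res = squashfs_copy_data(dst, entry, offset, bytes);
	/* Drop the reference so the entry becomes eligible for reuse */
	squashfs_cache_put(entry);
	return res;
}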
// SPDX-License-Identifier: GPL-2.0-only /* * Line 6 Linux USB driver * * Copyright (C) 2004-2010 Markus Grabner (line6@grabner-graz.at) */ #include <linux/slab.h> #include <linux/export.h> #include <sound/core.h> #include <sound/control.h> #include <sound/pcm.h> #include <sound/pcm_params.h> #include "capture.h" #include "driver.h" #include "playback.h" /* impulse response volume controls */ static int snd_line6_impulse_volume_info(struct snd_kcontrol *kcontrol, struct snd_ctl_elem_info *uinfo) { uinfo->type = SNDRV_CTL_ELEM_TYPE_INTEGER; uinfo->count = 1; uinfo->value.integer.min = 0; uinfo->value.integer.max = 255; return 0; } static int snd_line6_impulse_volume_get(struct snd_kcontrol *kcontrol, struct snd_ctl_elem_value *ucontrol) { struct snd_line6_pcm *line6pcm = snd_kcontrol_chip(kcontrol); ucontrol->value.integer.value[0] = line6pcm->impulse_volume; return 0; } static int snd_line6_impulse_volume_put(struct snd_kcontrol *kcontrol, struct snd_ctl_elem_value *ucontrol) { struct snd_line6_pcm *line6pcm = snd_kcontrol_chip(kcontrol); int value = ucontrol->value.integer.value[0]; int err; if (line6pcm->impulse_volume == value) return 0; 
line6pcm->impulse_volume = value; if (value > 0) { err = line6_pcm_acquire(line6pcm, LINE6_STREAM_IMPULSE, true); if (err < 0) { line6pcm->impulse_volume = 0; return err; } } else { line6_pcm_release(line6pcm, LINE6_STREAM_IMPULSE); } return 1; } /* impulse response period controls */ static int snd_line6_impulse_period_info(struct snd_kcontrol *kcontrol, struct snd_ctl_elem_info *uinfo) { uinfo->type = SNDRV_CTL_ELEM_TYPE_INTEGER; uinfo->count = 1; uinfo->value.integer.min = 0; uinfo->value.integer.max = 2000; return 0; } static int snd_line6_impulse_period_get(struct snd_kcontrol *kcontrol, struct snd_ctl_elem_value *ucontrol) { struct snd_line6_pcm *line6pcm = snd_kcontrol_chip(kcontrol); ucontrol->value.integer.value[0] = line6pcm->impulse_period; return 0; } static int snd_line6_impulse_period_put(struct snd_kcontrol *kcontrol, struct snd_ctl_elem_value *ucontrol) { struct snd_line6_pcm *line6pcm = snd_kcontrol_chip(kcontrol); int value = ucontrol->value.integer.value[0]; if (line6pcm->impulse_period == value) return 0; line6pcm->impulse_period = value; return 1; } /* Unlink all currently active URBs. */ static void line6_unlink_audio_urbs(struct snd_line6_pcm *line6pcm, struct line6_pcm_stream *pcms) { int i; for (i = 0; i < line6pcm->line6->iso_buffers; i++) { if (test_bit(i, &pcms->active_urbs)) { if (!test_and_set_bit(i, &pcms->unlink_urbs)) usb_unlink_urb(pcms->urbs[i]); } } } /* Wait until unlinking of all currently active URBs has been finished. */ static void line6_wait_clear_audio_urbs(struct snd_line6_pcm *line6pcm, struct line6_pcm_stream *pcms) { int timeout = HZ; int i; int alive; do { alive = 0; for (i = 0; i < line6pcm->line6->iso_buffers; i++) { if (test_bit(i, &pcms->active_urbs)) alive++; } if (!alive) break; set_current_state(TASK_UNINTERRUPTIBLE); schedule_timeout(1); } while (--timeout > 0); if (alive) dev_err(line6pcm->line6->ifcdev, "timeout: still %d active urbs..\n", alive); } static inline struct line6_pcm_stream * get_stream(struct snd_line6_pcm *line6pcm, int direction) { return (direction == SNDRV_PCM_STREAM_PLAYBACK) ? &line6pcm->out : &line6pcm->in; } /* allocate a buffer if not opened yet; * call this in line6pcm.state_mutex */ static int line6_buffer_acquire(struct snd_line6_pcm *line6pcm, struct line6_pcm_stream *pstr, int direction, int type) { const int pkt_size = (direction == SNDRV_PCM_STREAM_PLAYBACK) ? 
line6pcm->max_packet_size_out : line6pcm->max_packet_size_in; /* Invoked multiple times in a row so allocate once only */ if (!test_and_set_bit(type, &pstr->opened) && !pstr->buffer) { pstr->buffer = kmalloc(array3_size(line6pcm->line6->iso_buffers, LINE6_ISO_PACKETS, pkt_size), GFP_KERNEL); if (!pstr->buffer) return -ENOMEM; } return 0; } /* free a buffer if all streams are closed; * call this in line6pcm.state_mutex */ static void line6_buffer_release(struct snd_line6_pcm *line6pcm, struct line6_pcm_stream *pstr, int type) { clear_bit(type, &pstr->opened); if (!pstr->opened) { line6_wait_clear_audio_urbs(line6pcm, pstr); kfree(pstr->buffer); pstr->buffer = NULL; } } /* start a PCM stream */ static int line6_stream_start(struct snd_line6_pcm *line6pcm, int direction, int type) { unsigned long flags; struct line6_pcm_stream *pstr = get_stream(line6pcm, direction); int ret = 0; spin_lock_irqsave(&pstr->lock, flags); if (!test_and_set_bit(type, &pstr->running) && !(pstr->active_urbs || pstr->unlink_urbs)) { pstr->count = 0; /* Submit all currently available URBs */ if (direction == SNDRV_PCM_STREAM_PLAYBACK) ret = line6_submit_audio_out_all_urbs(line6pcm); else ret = line6_submit_audio_in_all_urbs(line6pcm); } if (ret < 0) clear_bit(type, &pstr->running); spin_unlock_irqrestore(&pstr->lock, flags); return ret; } /* stop a PCM stream; this doesn't sync with the unlinked URBs */ static void line6_stream_stop(struct snd_line6_pcm *line6pcm, int direction, int type) { unsigned long flags; struct line6_pcm_stream *pstr = get_stream(line6pcm, direction); spin_lock_irqsave(&pstr->lock, flags); clear_bit(type, &pstr->running); if (!pstr->running) { spin_unlock_irqrestore(&pstr->lock, flags); line6_unlink_audio_urbs(line6pcm, pstr); spin_lock_irqsave(&pstr->lock, flags); if (direction == SNDRV_PCM_STREAM_CAPTURE) { line6pcm->prev_fbuf = NULL; line6pcm->prev_fsize = 0; } } spin_unlock_irqrestore(&pstr->lock, flags); } /* common PCM trigger callback */ int snd_line6_trigger(struct snd_pcm_substream *substream, int cmd) { struct snd_line6_pcm *line6pcm = snd_pcm_substream_chip(substream); struct snd_pcm_substream *s; int err; clear_bit(LINE6_FLAG_PREPARED, &line6pcm->flags); snd_pcm_group_for_each_entry(s, substream) { if (s->pcm->card != substream->pcm->card) continue; switch (cmd) { case SNDRV_PCM_TRIGGER_START: case SNDRV_PCM_TRIGGER_RESUME: if (s->stream == SNDRV_PCM_STREAM_CAPTURE && (line6pcm->line6->properties->capabilities & LINE6_CAP_IN_NEEDS_OUT)) { err = line6_stream_start(line6pcm, SNDRV_PCM_STREAM_PLAYBACK, LINE6_STREAM_CAPTURE_HELPER); if (err < 0) return err; } err = line6_stream_start(line6pcm, s->stream, LINE6_STREAM_PCM); if (err < 0) return err; break; case SNDRV_PCM_TRIGGER_STOP: case SNDRV_PCM_TRIGGER_SUSPEND: if (s->stream == SNDRV_PCM_STREAM_CAPTURE && (line6pcm->line6->properties->capabilities & LINE6_CAP_IN_NEEDS_OUT)) { line6_stream_stop(line6pcm, SNDRV_PCM_STREAM_PLAYBACK, LINE6_STREAM_CAPTURE_HELPER); } line6_stream_stop(line6pcm, s->stream, LINE6_STREAM_PCM); break; case SNDRV_PCM_TRIGGER_PAUSE_PUSH: if (s->stream != SNDRV_PCM_STREAM_PLAYBACK) return -EINVAL; set_bit(LINE6_FLAG_PAUSE_PLAYBACK, &line6pcm->flags); break; case SNDRV_PCM_TRIGGER_PAUSE_RELEASE: if (s->stream != SNDRV_PCM_STREAM_PLAYBACK) return -EINVAL; clear_bit(LINE6_FLAG_PAUSE_PLAYBACK, &line6pcm->flags); break; default: return -EINVAL; } } return 0; } /* common PCM pointer callback */ snd_pcm_uframes_t snd_line6_pointer(struct snd_pcm_substream *substream) { struct snd_line6_pcm *line6pcm = 
snd_pcm_substream_chip(substream); struct line6_pcm_stream *pstr = get_stream(line6pcm, substream->stream); return pstr->pos_done; } /* Acquire and optionally start duplex streams: * type is either LINE6_STREAM_IMPULSE or LINE6_STREAM_MONITOR */ int line6_pcm_acquire(struct snd_line6_pcm *line6pcm, int type, bool start) { struct line6_pcm_stream *pstr; int ret = 0, dir; /* TODO: We should assert SNDRV_PCM_STREAM_PLAYBACK/CAPTURE == 0/1 */ mutex_lock(&line6pcm->state_mutex); for (dir = 0; dir < 2; dir++) { pstr = get_stream(line6pcm, dir); ret = line6_buffer_acquire(line6pcm, pstr, dir, type); if (ret < 0) goto error; if (!pstr->running) line6_wait_clear_audio_urbs(line6pcm, pstr); } if (start) { for (dir = 0; dir < 2; dir++) { ret = line6_stream_start(line6pcm, dir, type); if (ret < 0) goto error; } } error: mutex_unlock(&line6pcm->state_mutex); if (ret < 0) line6_pcm_release(line6pcm, type); return ret; } EXPORT_SYMBOL_GPL(line6_pcm_acquire); /* Stop and release duplex streams */ void line6_pcm_release(struct snd_line6_pcm *line6pcm, int type) { struct line6_pcm_stream *pstr; int dir; mutex_lock(&line6pcm->state_mutex); for (dir = 0; dir < 2; dir++) line6_stream_stop(line6pcm, dir, type); for (dir = 0; dir < 2; dir++) { pstr = get_stream(line6pcm, dir); line6_buffer_release(line6pcm, pstr, type); } mutex_unlock(&line6pcm->state_mutex); } EXPORT_SYMBOL_GPL(line6_pcm_release); /* common PCM hw_params callback */ int snd_line6_hw_params(struct snd_pcm_substream *substream, struct snd_pcm_hw_params *hw_params) { int ret; struct snd_line6_pcm *line6pcm = snd_pcm_substream_chip(substream); struct line6_pcm_stream *pstr = get_stream(line6pcm, substream->stream); mutex_lock(&line6pcm->state_mutex); ret = line6_buffer_acquire(line6pcm, pstr, substream->stream, LINE6_STREAM_PCM); if (ret < 0) goto error; pstr->period = params_period_bytes(hw_params); error: mutex_unlock(&line6pcm->state_mutex); return ret; } /* common PCM hw_free callback */ int snd_line6_hw_free(struct snd_pcm_substream *substream) { struct snd_line6_pcm *line6pcm = snd_pcm_substream_chip(substream); struct line6_pcm_stream *pstr = get_stream(line6pcm, substream->stream); mutex_lock(&line6pcm->state_mutex); line6_buffer_release(line6pcm, pstr, LINE6_STREAM_PCM); mutex_unlock(&line6pcm->state_mutex); return 0; } /* control info callback */ static int snd_line6_control_playback_info(struct snd_kcontrol *kcontrol, struct snd_ctl_elem_info *uinfo) { uinfo->type = SNDRV_CTL_ELEM_TYPE_INTEGER; uinfo->count = 2; uinfo->value.integer.min = 0; uinfo->value.integer.max = 256; return 0; } /* control get callback */ static int snd_line6_control_playback_get(struct snd_kcontrol *kcontrol, struct snd_ctl_elem_value *ucontrol) { int i; struct snd_line6_pcm *line6pcm = snd_kcontrol_chip(kcontrol); for (i = 0; i < 2; i++) ucontrol->value.integer.value[i] = line6pcm->volume_playback[i]; return 0; } /* control put callback */ static int snd_line6_control_playback_put(struct snd_kcontrol *kcontrol, struct snd_ctl_elem_value *ucontrol) { int i, changed = 0; struct snd_line6_pcm *line6pcm = snd_kcontrol_chip(kcontrol); for (i = 0; i < 2; i++) if (line6pcm->volume_playback[i] != ucontrol->value.integer.value[i]) { line6pcm->volume_playback[i] = ucontrol->value.integer.value[i]; changed = 1; } return changed; } /* control definition */ static const struct snd_kcontrol_new line6_controls[] = { { .iface = SNDRV_CTL_ELEM_IFACE_MIXER, .name = "PCM Playback Volume", .info = snd_line6_control_playback_info, .get = snd_line6_control_playback_get, .put = 
snd_line6_control_playback_put }, { .iface = SNDRV_CTL_ELEM_IFACE_MIXER, .name = "Impulse Response Volume", .info = snd_line6_impulse_volume_info, .get = snd_line6_impulse_volume_get, .put = snd_line6_impulse_volume_put }, { .iface = SNDRV_CTL_ELEM_IFACE_MIXER, .name = "Impulse Response Period", .info = snd_line6_impulse_period_info, .get = snd_line6_impulse_period_get, .put = snd_line6_impulse_period_put }, }; /* Cleanup the PCM device. */ static void cleanup_urbs(struct line6_pcm_stream *pcms, int iso_buffers) { int i; /* Most likely impossible in current code... */ if (pcms->urbs == NULL) return; for (i = 0; i < iso_buffers; i++) { if (pcms->urbs[i]) { usb_kill_urb(pcms->urbs[i]); usb_free_urb(pcms->urbs[i]); } } kfree(pcms->urbs); pcms->urbs = NULL; } static void line6_cleanup_pcm(struct snd_pcm *pcm) { struct snd_line6_pcm *line6pcm = snd_pcm_chip(pcm); cleanup_urbs(&line6pcm->out, line6pcm->line6->iso_buffers); cleanup_urbs(&line6pcm->in, line6pcm->line6->iso_buffers); kfree(line6pcm); } /* create a PCM device */ static int snd_line6_new_pcm(struct usb_line6 *line6, struct snd_pcm **pcm_ret) { struct snd_pcm *pcm; int err; err = snd_pcm_new(line6->card, (char *)line6->properties->name, 0, 1, 1, pcm_ret); if (err < 0) return err; pcm = *pcm_ret; strcpy(pcm->name, line6->properties->name); /* set operators */ snd_pcm_set_ops(pcm, SNDRV_PCM_STREAM_PLAYBACK, &snd_line6_playback_ops); snd_pcm_set_ops(pcm, SNDRV_PCM_STREAM_CAPTURE, &snd_line6_capture_ops); /* pre-allocation of buffers */ snd_pcm_set_managed_buffer_all(pcm, SNDRV_DMA_TYPE_CONTINUOUS, NULL, 64 * 1024, 128 * 1024); return 0; } /* Sync with PCM stream stops. */ void line6_pcm_disconnect(struct snd_line6_pcm *line6pcm) { line6_unlink_audio_urbs(line6pcm, &line6pcm->out); line6_unlink_audio_urbs(line6pcm, &line6pcm->in); line6_wait_clear_audio_urbs(line6pcm, &line6pcm->out); line6_wait_clear_audio_urbs(line6pcm, &line6pcm->in); } /* Create and register the PCM device and mixer entries. Create URBs for playback and capture. 
*/ int line6_init_pcm(struct usb_line6 *line6, struct line6_pcm_properties *properties) { int i, err; unsigned ep_read = line6->properties->ep_audio_r; unsigned ep_write = line6->properties->ep_audio_w; struct snd_pcm *pcm; struct snd_line6_pcm *line6pcm; if (!(line6->properties->capabilities & LINE6_CAP_PCM)) return 0; /* skip PCM initialization and report success */ err = snd_line6_new_pcm(line6, &pcm); if (err < 0) return err; line6pcm = kzalloc(sizeof(*line6pcm), GFP_KERNEL); if (!line6pcm) return -ENOMEM; mutex_init(&line6pcm->state_mutex); line6pcm->pcm = pcm; line6pcm->properties = properties; line6pcm->volume_playback[0] = line6pcm->volume_playback[1] = 255; line6pcm->volume_monitor = 255; line6pcm->line6 = line6; spin_lock_init(&line6pcm->out.lock); spin_lock_init(&line6pcm->in.lock); line6pcm->impulse_period = LINE6_IMPULSE_DEFAULT_PERIOD; line6->line6pcm = line6pcm; pcm->private_data = line6pcm; pcm->private_free = line6_cleanup_pcm; line6pcm->max_packet_size_in = usb_maxpacket(line6->usbdev, usb_rcvisocpipe(line6->usbdev, ep_read)); line6pcm->max_packet_size_out = usb_maxpacket(line6->usbdev, usb_sndisocpipe(line6->usbdev, ep_write)); if (!line6pcm->max_packet_size_in || !line6pcm->max_packet_size_out) { dev_err(line6pcm->line6->ifcdev, "cannot get proper max packet size\n"); return -EINVAL; } err = line6_create_audio_out_urbs(line6pcm); if (err < 0) return err; err = line6_create_audio_in_urbs(line6pcm); if (err < 0) return err; /* mixer: */ for (i = 0; i < ARRAY_SIZE(line6_controls); i++) { err = snd_ctl_add(line6->card, snd_ctl_new1(&line6_controls[i], line6pcm)); if (err < 0) return err; } return 0; } EXPORT_SYMBOL_GPL(line6_init_pcm); /* prepare pcm callback */ int snd_line6_prepare(struct snd_pcm_substream *substream) { struct snd_line6_pcm *line6pcm = snd_pcm_substream_chip(substream); struct line6_pcm_stream *pstr = get_stream(line6pcm, substream->stream); mutex_lock(&line6pcm->state_mutex); if (!pstr->running) line6_wait_clear_audio_urbs(line6pcm, pstr); if (!test_and_set_bit(LINE6_FLAG_PREPARED, &line6pcm->flags)) { line6pcm->out.count = 0; line6pcm->out.pos = 0; line6pcm->out.pos_done = 0; line6pcm->out.bytes = 0; line6pcm->in.count = 0; line6pcm->in.pos_done = 0; line6pcm->in.bytes = 0; } mutex_unlock(&line6pcm->state_mutex); return 0; } |
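/*
 * Hedged usage sketch (illustrative, not part of pcm.c; example_set_monitor
 * is hypothetical): a monitor-volume control would pin and release the
 * duplex streams through the exported helpers above, mirroring the impulse
 * control at the top of this file.
 */
static int example_set_monitor(struct snd_line6_pcm *line6pcm, int volume)
{
	int err;

	line6pcm->volume_monitor = volume;
	if (volume > 0) {
		/* Allocates buffers for both directions and starts the URBs */
		err = line6_pcm_acquire(line6pcm, LINE6_STREAM_MONITOR, true);
		if (err < 0) {
			line6pcm->volume_monitor = 0;
			return err;
		}
	} else {
		/* Stops the streams and frees buffers once all users are gone */
		line6_pcm_release(line6pcm, LINE6_STREAM_MONITOR);
	}
	return 0;
}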
// SPDX-License-Identifier: GPL-2.0-only /* * sysctl.c: General linux system control interface * * Begun 24 March 1995, Stephen Tweedie * Added /proc support, Dec 1995 * Added bdflush entry and intvec min/max checking, 2/23/96, Tom Dyas. * Added hooks for /proc/sys/net (minor, minor patch), 96/4/1, Mike Shaver. * Added kernel/java-{interpreter,appletviewer}, 96/5/10, Mike Shaver. * Dynamic registration fixes, Stephen Tweedie. * Added kswapd-interval, ctrl-alt-del, printk stuff, 1/8/97, Chris Horn. * Made sysctl support optional via CONFIG_SYSCTL, 1/10/97, Chris * Horn. * Added proc_doulongvec_ms_jiffies_minmax, 09/08/99, Carlos H. Bauer. * Added proc_doulongvec_minmax, 09/08/99, Carlos H. Bauer. * Changed linked lists to use list.h instead of lists.h, 02/24/00, Bill * Wendling. * The list_for_each() macro wasn't appropriate for the sysctl loop. 
* Removed it and replaced it with older style, 03/23/00, Bill Wendling */ #include <linux/module.h> #include <linux/sysctl.h> #include <linux/bitmap.h> #include <linux/signal.h> #include <linux/panic.h> #include <linux/printk.h> #include <linux/proc_fs.h> #include <linux/security.h> #include <linux/ctype.h> #include <linux/filter.h> #include <linux/fs.h> #include <linux/init.h> #include <linux/kernel.h> #include <linux/kobject.h> #include <linux/net.h> #include <linux/sysrq.h> #include <linux/highuid.h> #include <linux/writeback.h> #include <linux/ratelimit.h> #include <linux/initrd.h> #include <linux/key.h> #include <linux/times.h> #include <linux/limits.h> #include <linux/syscalls.h> #include <linux/nfs_fs.h> #include <linux/acpi.h> #include <linux/reboot.h> #include <linux/ftrace.h> #include <linux/kmod.h> #include <linux/capability.h> #include <linux/binfmts.h> #include <linux/sched/sysctl.h> #include <linux/mount.h> #include <linux/pid.h> #include "../lib/kstrtox.h" #include <linux/uaccess.h> #include <asm/processor.h> #ifdef CONFIG_X86 #include <asm/nmi.h> #include <asm/stacktrace.h> #include <asm/io.h> #endif #ifdef CONFIG_SPARC #include <asm/setup.h> #endif #ifdef CONFIG_RT_MUTEXES #include <linux/rtmutex.h> #endif /* shared constants to be used in various sysctls */ const int sysctl_vals[] = { 0, 1, 2, 3, 4, 100, 200, 1000, 3000, INT_MAX, 65535, -1 }; EXPORT_SYMBOL(sysctl_vals); const unsigned long sysctl_long_vals[] = { 0, 1, LONG_MAX }; EXPORT_SYMBOL_GPL(sysctl_long_vals); #if defined(CONFIG_SYSCTL) /* Constants used for minimum and maximum */ static const int ngroups_max = NGROUPS_MAX; static const int cap_last_cap = CAP_LAST_CAP; #ifdef CONFIG_PROC_SYSCTL /** * enum sysctl_writes_mode - supported sysctl write modes * * @SYSCTL_WRITES_LEGACY: each write syscall must fully contain the sysctl value * to be written, and multiple writes on the same sysctl file descriptor * will rewrite the sysctl value, regardless of file position. No warning * is issued when the initial position is not 0. * @SYSCTL_WRITES_WARN: same as above but warn when the initial file position is * not 0. * @SYSCTL_WRITES_STRICT: writes to numeric sysctl entries must always be at * file position 0 and the value must be fully contained in the buffer * sent to the write syscall. If dealing with strings respect the file * position, but restrict this to the max length of the buffer; anything * past the max length will be ignored. Multiple writes will append * to the buffer. * * These write modes control how current file position affects the behavior of * updating sysctl values through the proc interface on each write. */ enum sysctl_writes_mode { SYSCTL_WRITES_LEGACY = -1, SYSCTL_WRITES_WARN = 0, SYSCTL_WRITES_STRICT = 1, }; static enum sysctl_writes_mode sysctl_writes_strict = SYSCTL_WRITES_STRICT; #endif /* CONFIG_PROC_SYSCTL */ #endif /* CONFIG_SYSCTL */ /* * /proc/sys support */ #ifdef CONFIG_PROC_SYSCTL static int _proc_do_string(char *data, int maxlen, int write, char *buffer, size_t *lenp, loff_t *ppos) { size_t len; char c, *p; if (!data || !maxlen || !*lenp) { *lenp = 0; return 0; } if (write) { if (sysctl_writes_strict == SYSCTL_WRITES_STRICT) { /* Only continue writes not past the end of buffer. */ len = strlen(data); if (len > maxlen - 1) len = maxlen - 1; if (*ppos > len) return 0; len = *ppos; } else { /* Start writing from beginning of buffer. 
*/ len = 0; } *ppos += *lenp; p = buffer; while ((p - buffer) < *lenp && len < maxlen - 1) { c = *(p++); if (c == 0 || c == '\n') break; data[len++] = c; } data[len] = 0; } else { len = strlen(data); if (len > maxlen) len = maxlen; if (*ppos > len) { *lenp = 0; return 0; } data += *ppos; len -= *ppos; if (len > *lenp) len = *lenp; if (len) memcpy(buffer, data, len); if (len < *lenp) { buffer[len] = '\n'; len++; } *lenp = len; *ppos += len; } return 0; } static void warn_sysctl_write(const struct ctl_table *table) { pr_warn_once("%s wrote to %s when file position was not 0!\n" "This will not be supported in the future. To silence this\n" "warning, set kernel.sysctl_writes_strict = -1\n", current->comm, table->procname); } /** * proc_first_pos_non_zero_ignore - check if first position is allowed * @ppos: file position * @table: the sysctl table * * Returns true if the first position is non-zero and the sysctl_writes_strict * mode indicates this is not allowed for numeric input types. String proc * handlers can ignore the return value. */ static bool proc_first_pos_non_zero_ignore(loff_t *ppos, const struct ctl_table *table) { if (!*ppos) return false; switch (sysctl_writes_strict) { case SYSCTL_WRITES_STRICT: return true; case SYSCTL_WRITES_WARN: warn_sysctl_write(table); return false; default: return false; } } /** * proc_dostring - read a string sysctl * @table: the sysctl table * @write: %TRUE if this is a write to the sysctl file * @buffer: the user buffer * @lenp: the size of the user buffer * @ppos: file position * * Reads/writes a string from/to the user buffer. If the kernel * buffer provided is not large enough to hold the string, the * string is truncated. The copied string is %NULL-terminated. * If the string is being read by the user process, it is copied * and a newline '\n' is added. It is truncated if the buffer is * not large enough. * * Returns 0 on success. */ int proc_dostring(const struct ctl_table *table, int write, void *buffer, size_t *lenp, loff_t *ppos) { if (write) proc_first_pos_non_zero_ignore(ppos, table); return _proc_do_string(table->data, table->maxlen, write, buffer, lenp, ppos); } static void proc_skip_spaces(char **buf, size_t *size) { while (*size) { if (!isspace(**buf)) break; (*size)--; (*buf)++; } } static void proc_skip_char(char **buf, size_t *size, const char v) { while (*size) { if (**buf != v) break; (*size)--; (*buf)++; } } /** * strtoul_lenient - parse an ASCII formatted integer from a buffer and only * fail on overflow * * @cp: kernel buffer containing the string to parse * @endp: pointer to store the trailing characters * @base: the base to use * @res: where the parsed integer will be stored * * In case of success 0 is returned and @res will contain the parsed integer; * @endp will hold any trailing characters. * This function will fail the parse on overflow. If there wasn't an overflow * the function will defer to the caller the decision of which characters * count as invalid. 
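 *
 * Illustrative behaviour (not part of the original comment): parsing
 * "123abc" with base 0 returns 0 with *res == 123 and *endp pointing at
 * "abc"; a string whose value overflows unsigned long returns -ERANGE
 * no matter what follows the digits.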
*/ static int strtoul_lenient(const char *cp, char **endp, unsigned int base, unsigned long *res) { unsigned long long result; unsigned int rv; cp = _parse_integer_fixup_radix(cp, &base); rv = _parse_integer(cp, base, &result); if ((rv & KSTRTOX_OVERFLOW) || (result != (unsigned long)result)) return -ERANGE; cp += rv; if (endp) *endp = (char *)cp; *res = (unsigned long)result; return 0; } #define TMPBUFLEN 22 /** * proc_get_long - reads an ASCII formatted integer from a user buffer * * @buf: a kernel buffer * @size: size of the kernel buffer * @val: this is where the number will be stored * @neg: set to %TRUE if number is negative * @perm_tr: a vector which contains the allowed trailers * @perm_tr_len: size of the perm_tr vector * @tr: pointer to store the trailer character * * In case of success %0 is returned and @buf and @size are updated with * the number of bytes read. If @tr is non-NULL and a trailing * character exists (size is non-zero after returning from this * function), @tr is updated with the trailing character. */ static int proc_get_long(char **buf, size_t *size, unsigned long *val, bool *neg, const char *perm_tr, unsigned perm_tr_len, char *tr) { char *p, tmp[TMPBUFLEN]; ssize_t len = *size; if (len <= 0) return -EINVAL; if (len > TMPBUFLEN - 1) len = TMPBUFLEN - 1; memcpy(tmp, *buf, len); tmp[len] = 0; p = tmp; if (*p == '-' && *size > 1) { *neg = true; p++; } else *neg = false; if (!isdigit(*p)) return -EINVAL; if (strtoul_lenient(p, &p, 0, val)) return -EINVAL; len = p - tmp; /* We don't know if the next char is whitespace thus we may accept * invalid integers (e.g. 1234...a) or two integers instead of one * (e.g. 123...1). So let's not allow such large numbers. */ if (len == TMPBUFLEN - 1) return -EINVAL; if (len < *size && perm_tr_len && !memchr(perm_tr, *p, perm_tr_len)) return -EINVAL; if (tr && (len < *size)) *tr = *p; *buf += len; *size -= len; return 0; } /** * proc_put_long - converts an integer to a decimal ASCII formatted string * * @buf: the user buffer * @size: the size of the user buffer * @val: the integer to be converted * @neg: sign of the number, %TRUE for negative * * In case of success @buf and @size are updated with the number of bytes * written. */ static void proc_put_long(void **buf, size_t *size, unsigned long val, bool neg) { int len; char tmp[TMPBUFLEN], *p = tmp; sprintf(p, "%s%lu", neg ? 
"-" : "", val); len = strlen(tmp); if (len > *size) len = *size; memcpy(*buf, tmp, len); *size -= len; *buf += len; } #undef TMPBUFLEN static void proc_put_char(void **buf, size_t *size, char c) { if (*size) { char **buffer = (char **)buf; **buffer = c; (*size)--; (*buffer)++; *buf = *buffer; } } static int do_proc_dointvec_conv(bool *negp, unsigned long *lvalp, int *valp, int write, void *data) { if (write) { if (*negp) { if (*lvalp > (unsigned long) INT_MAX + 1) return -EINVAL; WRITE_ONCE(*valp, -*lvalp); } else { if (*lvalp > (unsigned long) INT_MAX) return -EINVAL; WRITE_ONCE(*valp, *lvalp); } } else { int val = READ_ONCE(*valp); if (val < 0) { *negp = true; *lvalp = -(unsigned long)val; } else { *negp = false; *lvalp = (unsigned long)val; } } return 0; } static int do_proc_douintvec_conv(unsigned long *lvalp, unsigned int *valp, int write, void *data) { if (write) { if (*lvalp > UINT_MAX) return -EINVAL; WRITE_ONCE(*valp, *lvalp); } else { unsigned int val = READ_ONCE(*valp); *lvalp = (unsigned long)val; } return 0; } static const char proc_wspace_sep[] = { ' ', '\t', '\n' }; static int __do_proc_dointvec(void *tbl_data, const struct ctl_table *table, int write, void *buffer, size_t *lenp, loff_t *ppos, int (*conv)(bool *negp, unsigned long *lvalp, int *valp, int write, void *data), void *data) { int *i, vleft, first = 1, err = 0; size_t left; char *p; if (!tbl_data || !table->maxlen || !*lenp || (*ppos && !write)) { *lenp = 0; return 0; } i = (int *) tbl_data; vleft = table->maxlen / sizeof(*i); left = *lenp; if (!conv) conv = do_proc_dointvec_conv; if (write) { if (proc_first_pos_non_zero_ignore(ppos, table)) goto out; if (left > PAGE_SIZE - 1) left = PAGE_SIZE - 1; p = buffer; } for (; left && vleft--; i++, first=0) { unsigned long lval; bool neg; if (write) { proc_skip_spaces(&p, &left); if (!left) break; err = proc_get_long(&p, &left, &lval, &neg, proc_wspace_sep, sizeof(proc_wspace_sep), NULL); if (err) break; if (conv(&neg, &lval, i, 1, data)) { err = -EINVAL; break; } } else { if (conv(&neg, &lval, i, 0, data)) { err = -EINVAL; break; } if (!first) proc_put_char(&buffer, &left, '\t'); proc_put_long(&buffer, &left, lval, neg); } } if (!write && !first && left && !err) proc_put_char(&buffer, &left, '\n'); if (write && !err && left) proc_skip_spaces(&p, &left); if (write && first) return err ? 
: -EINVAL; *lenp -= left; out: *ppos += *lenp; return err; } static int do_proc_dointvec(const struct ctl_table *table, int write, void *buffer, size_t *lenp, loff_t *ppos, int (*conv)(bool *negp, unsigned long *lvalp, int *valp, int write, void *data), void *data) { return __do_proc_dointvec(table->data, table, write, buffer, lenp, ppos, conv, data); } static int do_proc_douintvec_w(unsigned int *tbl_data, const struct ctl_table *table, void *buffer, size_t *lenp, loff_t *ppos, int (*conv)(unsigned long *lvalp, unsigned int *valp, int write, void *data), void *data) { unsigned long lval; int err = 0; size_t left; bool neg; char *p = buffer; left = *lenp; if (proc_first_pos_non_zero_ignore(ppos, table)) goto bail_early; if (left > PAGE_SIZE - 1) left = PAGE_SIZE - 1; proc_skip_spaces(&p, &left); if (!left) { err = -EINVAL; goto out_free; } err = proc_get_long(&p, &left, &lval, &neg, proc_wspace_sep, sizeof(proc_wspace_sep), NULL); if (err || neg) { err = -EINVAL; goto out_free; } if (conv(&lval, tbl_data, 1, data)) { err = -EINVAL; goto out_free; } if (!err && left) proc_skip_spaces(&p, &left); out_free: if (err) return -EINVAL; return 0; /* This is in keeping with old __do_proc_dointvec() */ bail_early: *ppos += *lenp; return err; } static int do_proc_douintvec_r(unsigned int *tbl_data, void *buffer, size_t *lenp, loff_t *ppos, int (*conv)(unsigned long *lvalp, unsigned int *valp, int write, void *data), void *data) { unsigned long lval; int err = 0; size_t left; left = *lenp; if (conv(&lval, tbl_data, 0, data)) { err = -EINVAL; goto out; } proc_put_long(&buffer, &left, lval, false); if (!left) goto out; proc_put_char(&buffer, &left, '\n'); out: *lenp -= left; *ppos += *lenp; return err; } static int __do_proc_douintvec(void *tbl_data, const struct ctl_table *table, int write, void *buffer, size_t *lenp, loff_t *ppos, int (*conv)(unsigned long *lvalp, unsigned int *valp, int write, void *data), void *data) { unsigned int *i, vleft; if (!tbl_data || !table->maxlen || !*lenp || (*ppos && !write)) { *lenp = 0; return 0; } i = (unsigned int *) tbl_data; vleft = table->maxlen / sizeof(*i); /* * Arrays are not supported, keep this simple. *Do not* add * support for them. */ if (vleft != 1) { *lenp = 0; return -EINVAL; } if (!conv) conv = do_proc_douintvec_conv; if (write) return do_proc_douintvec_w(i, table, buffer, lenp, ppos, conv, data); return do_proc_douintvec_r(i, buffer, lenp, ppos, conv, data); } int do_proc_douintvec(const struct ctl_table *table, int write, void *buffer, size_t *lenp, loff_t *ppos, int (*conv)(unsigned long *lvalp, unsigned int *valp, int write, void *data), void *data) { return __do_proc_douintvec(table->data, table, write, buffer, lenp, ppos, conv, data); } /** * proc_dobool - read/write a bool * @table: the sysctl table * @write: %TRUE if this is a write to the sysctl file * @buffer: the user buffer * @lenp: the size of the user buffer * @ppos: file position * * Reads/writes one integer value from/to the user buffer, * treated as an ASCII string. * * table->data must point to a bool variable and table->maxlen must * be sizeof(bool). * * Returns 0 on success. */ int proc_dobool(const struct ctl_table *table, int write, void *buffer, size_t *lenp, loff_t *ppos) { struct ctl_table tmp; bool *data = table->data; int res, val; /* Do not support arrays yet. 
*/ if (table->maxlen != sizeof(bool)) return -EINVAL; tmp = *table; tmp.maxlen = sizeof(val); tmp.data = &val; val = READ_ONCE(*data); res = proc_dointvec(&tmp, write, buffer, lenp, ppos); if (res) return res; if (write) WRITE_ONCE(*data, val); return 0; } /** * proc_dointvec - read a vector of integers * @table: the sysctl table * @write: %TRUE if this is a write to the sysctl file * @buffer: the user buffer * @lenp: the size of the user buffer * @ppos: file position * * Reads/writes up to table->maxlen/sizeof(unsigned int) integer * values from/to the user buffer, treated as an ASCII string. * * Returns 0 on success. */ int proc_dointvec(const struct ctl_table *table, int write, void *buffer, size_t *lenp, loff_t *ppos) { return do_proc_dointvec(table, write, buffer, lenp, ppos, NULL, NULL); } /** * proc_douintvec - read a vector of unsigned integers * @table: the sysctl table * @write: %TRUE if this is a write to the sysctl file * @buffer: the user buffer * @lenp: the size of the user buffer * @ppos: file position * * Reads/writes up to table->maxlen/sizeof(unsigned int) unsigned integer * values from/to the user buffer, treated as an ASCII string. * * Returns 0 on success. */ int proc_douintvec(const struct ctl_table *table, int write, void *buffer, size_t *lenp, loff_t *ppos) { return do_proc_douintvec(table, write, buffer, lenp, ppos, do_proc_douintvec_conv, NULL); } /* * Taint values can only be increased * This means we can safely use a temporary. */ static int proc_taint(const struct ctl_table *table, int write, void *buffer, size_t *lenp, loff_t *ppos) { struct ctl_table t; unsigned long tmptaint = get_taint(); int err; if (write && !capable(CAP_SYS_ADMIN)) return -EPERM; t = *table; t.data = &tmptaint; err = proc_doulongvec_minmax(&t, write, buffer, lenp, ppos); if (err < 0) return err; if (write) { int i; /* * If we are relying on panic_on_taint not producing * false positives due to userspace input, bail out * before setting the requested taint flags. */ if (panic_on_taint_nousertaint && (tmptaint & panic_on_taint)) return -EINVAL; /* * Poor man's atomic or. Not worth adding a primitive * to everyone's atomic.h for this */ for (i = 0; i < TAINT_FLAGS_COUNT; i++) if ((1UL << i) & tmptaint) add_taint(i, LOCKDEP_STILL_OK); } return err; } /** * struct do_proc_dointvec_minmax_conv_param - proc_dointvec_minmax() range checking structure * @min: pointer to minimum allowable value * @max: pointer to maximum allowable value * * The do_proc_dointvec_minmax_conv_param structure provides the * minimum and maximum values for doing range checking for those sysctl * parameters that use the proc_dointvec_minmax() handler. */ struct do_proc_dointvec_minmax_conv_param { int *min; int *max; }; static int do_proc_dointvec_minmax_conv(bool *negp, unsigned long *lvalp, int *valp, int write, void *data) { int tmp, ret; struct do_proc_dointvec_minmax_conv_param *param = data; /* * If writing, first do so via a temporary local int so we can * bounds-check it before touching *valp. */ int *ip = write ? 
&tmp : valp; ret = do_proc_dointvec_conv(negp, lvalp, ip, write, data); if (ret) return ret; if (write) { if ((param->min && *param->min > tmp) || (param->max && *param->max < tmp)) return -EINVAL; WRITE_ONCE(*valp, tmp); } return 0; } /** * proc_dointvec_minmax - read a vector of integers with min/max values * @table: the sysctl table * @write: %TRUE if this is a write to the sysctl file * @buffer: the user buffer * @lenp: the size of the user buffer * @ppos: file position * * Reads/writes up to table->maxlen/sizeof(unsigned int) integer * values from/to the user buffer, treated as an ASCII string. * * This routine will ensure the values are within the range specified by * table->extra1 (min) and table->extra2 (max). * * Returns 0 on success or -EINVAL on write when the range check fails. */ int proc_dointvec_minmax(const struct ctl_table *table, int write, void *buffer, size_t *lenp, loff_t *ppos) { struct do_proc_dointvec_minmax_conv_param param = { .min = (int *) table->extra1, .max = (int *) table->extra2, }; return do_proc_dointvec(table, write, buffer, lenp, ppos, do_proc_dointvec_minmax_conv, &param); } /** * struct do_proc_douintvec_minmax_conv_param - proc_douintvec_minmax() range checking structure * @min: pointer to minimum allowable value * @max: pointer to maximum allowable value * * The do_proc_douintvec_minmax_conv_param structure provides the * minimum and maximum values for doing range checking for those sysctl * parameters that use the proc_douintvec_minmax() handler. */ struct do_proc_douintvec_minmax_conv_param { unsigned int *min; unsigned int *max; }; static int do_proc_douintvec_minmax_conv(unsigned long *lvalp, unsigned int *valp, int write, void *data) { int ret; unsigned int tmp; struct do_proc_douintvec_minmax_conv_param *param = data; /* write via temporary local uint for bounds-checking */ unsigned int *up = write ? &tmp : valp; ret = do_proc_douintvec_conv(lvalp, up, write, data); if (ret) return ret; if (write) { if ((param->min && *param->min > tmp) || (param->max && *param->max < tmp)) return -ERANGE; WRITE_ONCE(*valp, tmp); } return 0; } /** * proc_douintvec_minmax - read a vector of unsigned ints with min/max values * @table: the sysctl table * @write: %TRUE if this is a write to the sysctl file * @buffer: the user buffer * @lenp: the size of the user buffer * @ppos: file position * * Reads/writes up to table->maxlen/sizeof(unsigned int) unsigned integer * values from/to the user buffer, treated as an ASCII string. Negative * strings are not allowed. * * This routine will ensure the values are within the range specified by * table->extra1 (min) and table->extra2 (max). There is a final sanity * check for UINT_MAX to avoid having to support wrap around uses from * userspace. * * Returns 0 on success or -ERANGE on write when the range check fails. */ int proc_douintvec_minmax(const struct ctl_table *table, int write, void *buffer, size_t *lenp, loff_t *ppos) { struct do_proc_douintvec_minmax_conv_param param = { .min = (unsigned int *) table->extra1, .max = (unsigned int *) table->extra2, }; return do_proc_douintvec(table, write, buffer, lenp, ppos, do_proc_douintvec_minmax_conv, &param); } /** * proc_dou8vec_minmax - read a vector of unsigned chars with min/max values * @table: the sysctl table * @write: %TRUE if this is a write to the sysctl file * @buffer: the user buffer * @lenp: the size of the user buffer * @ppos: file position * * Reads/writes up to table->maxlen/sizeof(u8) unsigned char * values from/to the user buffer, treated as an ASCII string. 
Negative * strings are not allowed. * * This routine will ensure the values are within the range specified by * table->extra1 (min) and table->extra2 (max). * * Returns 0 on success or an error on write when the range check fails. */ int proc_dou8vec_minmax(const struct ctl_table *table, int write, void *buffer, size_t *lenp, loff_t *ppos) { struct ctl_table tmp; unsigned int min = 0, max = 255U, val; u8 *data = table->data; struct do_proc_douintvec_minmax_conv_param param = { .min = &min, .max = &max, }; int res; /* Do not support arrays yet. */ if (table->maxlen != sizeof(u8)) return -EINVAL; if (table->extra1) min = *(unsigned int *) table->extra1; if (table->extra2) max = *(unsigned int *) table->extra2; tmp = *table; tmp.maxlen = sizeof(val); tmp.data = &val; val = READ_ONCE(*data); res = do_proc_douintvec(&tmp, write, buffer, lenp, ppos, do_proc_douintvec_minmax_conv, &param); if (res) return res; if (write) WRITE_ONCE(*data, val); return 0; } EXPORT_SYMBOL_GPL(proc_dou8vec_minmax); #ifdef CONFIG_MAGIC_SYSRQ static int sysrq_sysctl_handler(const struct ctl_table *table, int write, void *buffer, size_t *lenp, loff_t *ppos) { int tmp, ret; tmp = sysrq_mask(); ret = __do_proc_dointvec(&tmp, table, write, buffer, lenp, ppos, NULL, NULL); if (ret || !write) return ret; if (write) sysrq_toggle_support(tmp); return 0; } #endif static int __do_proc_doulongvec_minmax(void *data, const struct ctl_table *table, int write, void *buffer, size_t *lenp, loff_t *ppos, unsigned long convmul, unsigned long convdiv) { unsigned long *i, *min, *max; int vleft, first = 1, err = 0; size_t left; char *p; if (!data || !table->maxlen || !*lenp || (*ppos && !write)) { *lenp = 0; return 0; } i = data; min = table->extra1; max = table->extra2; vleft = table->maxlen / sizeof(unsigned long); left = *lenp; if (write) { if (proc_first_pos_non_zero_ignore(ppos, table)) goto out; if (left > PAGE_SIZE - 1) left = PAGE_SIZE - 1; p = buffer; } for (; left && vleft--; i++, first = 0) { unsigned long val; if (write) { bool neg; proc_skip_spaces(&p, &left); if (!left) break; err = proc_get_long(&p, &left, &val, &neg, proc_wspace_sep, sizeof(proc_wspace_sep), NULL); if (err || neg) { err = -EINVAL; break; } val = convmul * val / convdiv; if ((min && val < *min) || (max && val > *max)) { err = -EINVAL; break; } WRITE_ONCE(*i, val); } else { val = convdiv * READ_ONCE(*i) / convmul; if (!first) proc_put_char(&buffer, &left, '\t'); proc_put_long(&buffer, &left, val, false); } } if (!write && !first && left && !err) proc_put_char(&buffer, &left, '\n'); if (write && !err) proc_skip_spaces(&p, &left); if (write && first) return err ? : -EINVAL; *lenp -= left; out: *ppos += *lenp; return err; } static int do_proc_doulongvec_minmax(const struct ctl_table *table, int write, void *buffer, size_t *lenp, loff_t *ppos, unsigned long convmul, unsigned long convdiv) { return __do_proc_doulongvec_minmax(table->data, table, write, buffer, lenp, ppos, convmul, convdiv); } /** * proc_doulongvec_minmax - read a vector of long integers with min/max values * @table: the sysctl table * @write: %TRUE if this is a write to the sysctl file * @buffer: the user buffer * @lenp: the size of the user buffer * @ppos: file position * * Reads/writes up to table->maxlen/sizeof(unsigned long) unsigned long * values from/to the user buffer, treated as an ASCII string. * * This routine will ensure the values are within the range specified by * table->extra1 (min) and table->extra2 (max). * * Returns 0 on success.
*/ int proc_doulongvec_minmax(const struct ctl_table *table, int write, void *buffer, size_t *lenp, loff_t *ppos) { return do_proc_doulongvec_minmax(table, write, buffer, lenp, ppos, 1l, 1l); } /** * proc_doulongvec_ms_jiffies_minmax - read a vector of millisecond values with min/max values * @table: the sysctl table * @write: %TRUE if this is a write to the sysctl file * @buffer: the user buffer * @lenp: the size of the user buffer * @ppos: file position * * Reads/writes up to table->maxlen/sizeof(unsigned long) unsigned long * values from/to the user buffer, treated as an ASCII string. The values * are treated as milliseconds, and converted to jiffies when they are stored. * * This routine will ensure the values are within the range specified by * table->extra1 (min) and table->extra2 (max). * * Returns 0 on success. */ int proc_doulongvec_ms_jiffies_minmax(const struct ctl_table *table, int write, void *buffer, size_t *lenp, loff_t *ppos) { return do_proc_doulongvec_minmax(table, write, buffer, lenp, ppos, HZ, 1000l); } static int do_proc_dointvec_jiffies_conv(bool *negp, unsigned long *lvalp, int *valp, int write, void *data) { if (write) { if (*lvalp > INT_MAX / HZ) return 1; if (*negp) WRITE_ONCE(*valp, -*lvalp * HZ); else WRITE_ONCE(*valp, *lvalp * HZ); } else { int val = READ_ONCE(*valp); unsigned long lval; if (val < 0) { *negp = true; lval = -(unsigned long)val; } else { *negp = false; lval = (unsigned long)val; } *lvalp = lval / HZ; } return 0; } static int do_proc_dointvec_userhz_jiffies_conv(bool *negp, unsigned long *lvalp, int *valp, int write, void *data) { if (write) { if (USER_HZ < HZ && *lvalp > (LONG_MAX / HZ) * USER_HZ) return 1; *valp = clock_t_to_jiffies(*negp ? -*lvalp : *lvalp); } else { int val = *valp; unsigned long lval; if (val < 0) { *negp = true; lval = -(unsigned long)val; } else { *negp = false; lval = (unsigned long)val; } *lvalp = jiffies_to_clock_t(lval); } return 0; } static int do_proc_dointvec_ms_jiffies_conv(bool *negp, unsigned long *lvalp, int *valp, int write, void *data) { if (write) { unsigned long jif = msecs_to_jiffies(*negp ? -*lvalp : *lvalp); if (jif > INT_MAX) return 1; WRITE_ONCE(*valp, (int)jif); } else { int val = READ_ONCE(*valp); unsigned long lval; if (val < 0) { *negp = true; lval = -(unsigned long)val; } else { *negp = false; lval = (unsigned long)val; } *lvalp = jiffies_to_msecs(lval); } return 0; } static int do_proc_dointvec_ms_jiffies_minmax_conv(bool *negp, unsigned long *lvalp, int *valp, int write, void *data) { int tmp, ret; struct do_proc_dointvec_minmax_conv_param *param = data; /* * If writing, first do so via a temporary local int so we can * bounds-check it before touching *valp. */ int *ip = write ? &tmp : valp; ret = do_proc_dointvec_ms_jiffies_conv(negp, lvalp, ip, write, data); if (ret) return ret; if (write) { if ((param->min && *param->min > tmp) || (param->max && *param->max < tmp)) return -EINVAL; *valp = tmp; } return 0; } /** * proc_dointvec_jiffies - read a vector of integers as seconds * @table: the sysctl table * @write: %TRUE if this is a write to the sysctl file * @buffer: the user buffer * @lenp: the size of the user buffer * @ppos: file position * * Reads/writes up to table->maxlen/sizeof(unsigned int) integer * values from/to the user buffer, treated as an ASCII string. * The values read are assumed to be in seconds, and are converted into * jiffies. * * Returns 0 on success. 
*/ int proc_dointvec_jiffies(const struct ctl_table *table, int write, void *buffer, size_t *lenp, loff_t *ppos) { return do_proc_dointvec(table, write, buffer, lenp, ppos, do_proc_dointvec_jiffies_conv, NULL); } int proc_dointvec_ms_jiffies_minmax(const struct ctl_table *table, int write, void *buffer, size_t *lenp, loff_t *ppos) { struct do_proc_dointvec_minmax_conv_param param = { .min = (int *) table->extra1, .max = (int *) table->extra2, }; return do_proc_dointvec(table, write, buffer, lenp, ppos, do_proc_dointvec_ms_jiffies_minmax_conv, &param); } /** * proc_dointvec_userhz_jiffies - read a vector of integers as 1/USER_HZ seconds * @table: the sysctl table * @write: %TRUE if this is a write to the sysctl file * @buffer: the user buffer * @lenp: the size of the user buffer * @ppos: pointer to the file position * * Reads/writes up to table->maxlen/sizeof(unsigned int) integer * values from/to the user buffer, treated as an ASCII string. * The values read are assumed to be in 1/USER_HZ seconds, and * are converted into jiffies. * * Returns 0 on success. */ int proc_dointvec_userhz_jiffies(const struct ctl_table *table, int write, void *buffer, size_t *lenp, loff_t *ppos) { return do_proc_dointvec(table, write, buffer, lenp, ppos, do_proc_dointvec_userhz_jiffies_conv, NULL); } /** * proc_dointvec_ms_jiffies - read a vector of integers in milliseconds * @table: the sysctl table * @write: %TRUE if this is a write to the sysctl file * @buffer: the user buffer * @lenp: the size of the user buffer * @ppos: the current position in the file * * Reads/writes up to table->maxlen/sizeof(unsigned int) integer * values from/to the user buffer, treated as an ASCII string. * The values read are assumed to be in 1/1000 seconds, and * are converted into jiffies. * * Returns 0 on success. */ int proc_dointvec_ms_jiffies(const struct ctl_table *table, int write, void *buffer, size_t *lenp, loff_t *ppos) { return do_proc_dointvec(table, write, buffer, lenp, ppos, do_proc_dointvec_ms_jiffies_conv, NULL); } static int proc_do_cad_pid(const struct ctl_table *table, int write, void *buffer, size_t *lenp, loff_t *ppos) { struct pid *new_pid; pid_t tmp; int r; tmp = pid_vnr(cad_pid); r = __do_proc_dointvec(&tmp, table, write, buffer, lenp, ppos, NULL, NULL); if (r || !write) return r; new_pid = find_get_pid(tmp); if (!new_pid) return -ESRCH; put_pid(xchg(&cad_pid, new_pid)); return 0; } /** * proc_do_large_bitmap - read/write from/to a large bitmap * @table: the sysctl table * @write: %TRUE if this is a write to the sysctl file * @buffer: the user buffer * @lenp: the size of the user buffer * @ppos: file position * * The bitmap is stored at table->data and the bitmap length (in bits) * in table->maxlen. * * We use a comma-separated range format (e.g. 1,3-4,10-10) so that * large bitmaps may be represented in a compact manner. Writing into * the file will clear the bitmap then update it with the given input. * * Returns 0 on success.
*/ int proc_do_large_bitmap(const struct ctl_table *table, int write, void *buffer, size_t *lenp, loff_t *ppos) { int err = 0; size_t left = *lenp; unsigned long bitmap_len = table->maxlen; unsigned long *bitmap = *(unsigned long **) table->data; unsigned long *tmp_bitmap = NULL; char tr_a[] = { '-', ',', '\n' }, tr_b[] = { ',', '\n', 0 }, c; if (!bitmap || !bitmap_len || !left || (*ppos && !write)) { *lenp = 0; return 0; } if (write) { char *p = buffer; size_t skipped = 0; if (left > PAGE_SIZE - 1) { left = PAGE_SIZE - 1; /* How much of the buffer we'll skip this pass */ skipped = *lenp - left; } tmp_bitmap = bitmap_zalloc(bitmap_len, GFP_KERNEL); if (!tmp_bitmap) return -ENOMEM; proc_skip_char(&p, &left, '\n'); while (!err && left) { unsigned long val_a, val_b; bool neg; size_t saved_left; /* In case we stop parsing mid-number, we can reset */ saved_left = left; err = proc_get_long(&p, &left, &val_a, &neg, tr_a, sizeof(tr_a), &c); /* * If we consumed the entirety of a truncated buffer or * only one char is left (may be a "-"), then stop here, * reset, & come back for more. */ if ((left <= 1) && skipped) { left = saved_left; break; } if (err) break; if (val_a >= bitmap_len || neg) { err = -EINVAL; break; } val_b = val_a; if (left) { p++; left--; } if (c == '-') { err = proc_get_long(&p, &left, &val_b, &neg, tr_b, sizeof(tr_b), &c); /* * If we consumed all of a truncated buffer, * stop here, reset, & come back for more. */ if (!left && skipped) { left = saved_left; break; } if (err) break; if (val_b >= bitmap_len || neg || val_a > val_b) { err = -EINVAL; break; } if (left) { p++; left--; } } bitmap_set(tmp_bitmap, val_a, val_b - val_a + 1); proc_skip_char(&p, &left, '\n'); } left += skipped; } else { unsigned long bit_a, bit_b = 0; bool first = 1; while (left) { bit_a = find_next_bit(bitmap, bitmap_len, bit_b); if (bit_a >= bitmap_len) break; bit_b = find_next_zero_bit(bitmap, bitmap_len, bit_a + 1) - 1; if (!first) proc_put_char(&buffer, &left, ','); proc_put_long(&buffer, &left, bit_a, false); if (bit_a != bit_b) { proc_put_char(&buffer, &left, '-'); proc_put_long(&buffer, &left, bit_b, false); } first = 0; bit_b++; } proc_put_char(&buffer, &left, '\n'); } if (!err) { if (write) { if (*ppos) bitmap_or(bitmap, bitmap, tmp_bitmap, bitmap_len); else bitmap_copy(bitmap, tmp_bitmap, bitmap_len); } *lenp -= left; *ppos += *lenp; } bitmap_free(tmp_bitmap); return err; } #else /* CONFIG_PROC_SYSCTL */ int proc_dostring(const struct ctl_table *table, int write, void *buffer, size_t *lenp, loff_t *ppos) { return -ENOSYS; } int proc_dobool(const struct ctl_table *table, int write, void *buffer, size_t *lenp, loff_t *ppos) { return -ENOSYS; } int proc_dointvec(const struct ctl_table *table, int write, void *buffer, size_t *lenp, loff_t *ppos) { return -ENOSYS; } int proc_douintvec(const struct ctl_table *table, int write, void *buffer, size_t *lenp, loff_t *ppos) { return -ENOSYS; } int proc_dointvec_minmax(const struct ctl_table *table, int write, void *buffer, size_t *lenp, loff_t *ppos) { return -ENOSYS; } int proc_douintvec_minmax(const struct ctl_table *table, int write, void *buffer, size_t *lenp, loff_t *ppos) { return -ENOSYS; } int proc_dou8vec_minmax(const struct ctl_table *table, int write, void *buffer, size_t *lenp, loff_t *ppos) { return -ENOSYS; } int proc_dointvec_jiffies(const struct ctl_table *table, int write, void *buffer, size_t *lenp, loff_t *ppos) { return -ENOSYS; } int proc_dointvec_ms_jiffies_minmax(const struct ctl_table *table, int write, void *buffer, size_t
*lenp, loff_t *ppos) { return -ENOSYS; } int proc_dointvec_userhz_jiffies(const struct ctl_table *table, int write, void *buffer, size_t *lenp, loff_t *ppos) { return -ENOSYS; } int proc_dointvec_ms_jiffies(const struct ctl_table *table, int write, void *buffer, size_t *lenp, loff_t *ppos) { return -ENOSYS; } int proc_doulongvec_minmax(const struct ctl_table *table, int write, void *buffer, size_t *lenp, loff_t *ppos) { return -ENOSYS; } int proc_doulongvec_ms_jiffies_minmax(const struct ctl_table *table, int write, void *buffer, size_t *lenp, loff_t *ppos) { return -ENOSYS; } int proc_do_large_bitmap(const struct ctl_table *table, int write, void *buffer, size_t *lenp, loff_t *ppos) { return -ENOSYS; } #endif /* CONFIG_PROC_SYSCTL */ #if defined(CONFIG_SYSCTL) int proc_do_static_key(const struct ctl_table *table, int write, void *buffer, size_t *lenp, loff_t *ppos) { struct static_key *key = (struct static_key *)table->data; static DEFINE_MUTEX(static_key_mutex); int val, ret; struct ctl_table tmp = { .data = &val, .maxlen = sizeof(val), .mode = table->mode, .extra1 = SYSCTL_ZERO, .extra2 = SYSCTL_ONE, }; if (write && !capable(CAP_SYS_ADMIN)) return -EPERM; mutex_lock(&static_key_mutex); val = static_key_enabled(key); ret = proc_dointvec_minmax(&tmp, write, buffer, lenp, ppos); if (write && !ret) { if (val) static_key_enable(key); else static_key_disable(key); } mutex_unlock(&static_key_mutex); return ret; } static const struct ctl_table kern_table[] = { { .procname = "panic", .data = &panic_timeout, .maxlen = sizeof(int), .mode = 0644, .proc_handler = proc_dointvec, }, #ifdef CONFIG_PROC_SYSCTL { .procname = "tainted", .maxlen = sizeof(long), .mode = 0644, .proc_handler = proc_taint, }, { .procname = "sysctl_writes_strict", .data = &sysctl_writes_strict, .maxlen = sizeof(int), .mode = 0644, .proc_handler = proc_dointvec_minmax, .extra1 = SYSCTL_NEG_ONE, .extra2 = SYSCTL_ONE, }, #endif { .procname = "print-fatal-signals", .data = &print_fatal_signals, .maxlen = sizeof(int), .mode = 0644, .proc_handler = proc_dointvec, }, #ifdef CONFIG_SPARC { .procname = "reboot-cmd", .data = reboot_command, .maxlen = 256, .mode = 0644, .proc_handler = proc_dostring, }, { .procname = "stop-a", .data = &stop_a_enabled, .maxlen = sizeof (int), .mode = 0644, .proc_handler = proc_dointvec, }, { .procname = "scons-poweroff", .data = &scons_pwroff, .maxlen = sizeof (int), .mode = 0644, .proc_handler = proc_dointvec, }, #endif #ifdef CONFIG_SPARC64 { .procname = "tsb-ratio", .data = &sysctl_tsb_ratio, .maxlen = sizeof (int), .mode = 0644, .proc_handler = proc_dointvec, }, #endif #ifdef CONFIG_PARISC { .procname = "soft-power", .data = &pwrsw_enabled, .maxlen = sizeof (int), .mode = 0644, .proc_handler = proc_dointvec, }, #endif #ifdef CONFIG_SYSCTL_ARCH_UNALIGN_ALLOW { .procname = "unaligned-trap", .data = &unaligned_enabled, .maxlen = sizeof (int), .mode = 0644, .proc_handler = proc_dointvec, }, #endif #ifdef CONFIG_STACK_TRACER { .procname = "stack_tracer_enabled", .data = &stack_tracer_enabled, .maxlen = sizeof(int), .mode = 0644, .proc_handler = stack_trace_sysctl, }, #endif #ifdef CONFIG_TRACING { .procname = "ftrace_dump_on_oops", .data = &ftrace_dump_on_oops, .maxlen = MAX_TRACER_SIZE, .mode = 0644, .proc_handler = proc_dostring, }, { .procname = "traceoff_on_warning", .data = &__disable_trace_on_warning, .maxlen = sizeof(__disable_trace_on_warning), .mode = 0644, .proc_handler = proc_dointvec, }, { .procname = "tracepoint_printk", .data = &tracepoint_printk, .maxlen = sizeof(tracepoint_printk), .mode = 
0644, .proc_handler = tracepoint_printk_sysctl, }, #endif #ifdef CONFIG_MODULES { .procname = "modprobe", .data = &modprobe_path, .maxlen = KMOD_PATH_LEN, .mode = 0644, .proc_handler = proc_dostring, }, { .procname = "modules_disabled", .data = &modules_disabled, .maxlen = sizeof(int), .mode = 0644, /* only handle a transition from default "0" to "1" */ .proc_handler = proc_dointvec_minmax, .extra1 = SYSCTL_ONE, .extra2 = SYSCTL_ONE, }, #endif #ifdef CONFIG_UEVENT_HELPER { .procname = "hotplug", .data = &uevent_helper, .maxlen = UEVENT_HELPER_PATH_LEN, .mode = 0644, .proc_handler = proc_dostring, }, #endif #ifdef CONFIG_MAGIC_SYSRQ { .procname = "sysrq", .data = NULL, .maxlen = sizeof (int), .mode = 0644, .proc_handler = sysrq_sysctl_handler, }, #endif #ifdef CONFIG_PROC_SYSCTL { .procname = "cad_pid", .data = NULL, .maxlen = sizeof (int), .mode = 0600, .proc_handler = proc_do_cad_pid, }, #endif { .procname = "threads-max", .data = NULL, .maxlen = sizeof(int), .mode = 0644, .proc_handler = sysctl_max_threads, }, { .procname = "overflowuid", .data = &overflowuid, .maxlen = sizeof(int), .mode = 0644, .proc_handler = proc_dointvec_minmax, .extra1 = SYSCTL_ZERO, .extra2 = SYSCTL_MAXOLDUID, }, { .procname = "overflowgid", .data = &overflowgid, .maxlen = sizeof(int), .mode = 0644, .proc_handler = proc_dointvec_minmax, .extra1 = SYSCTL_ZERO, .extra2 = SYSCTL_MAXOLDUID, }, { .procname = "panic_on_oops", .data = &panic_on_oops, .maxlen = sizeof(int), .mode = 0644, .proc_handler = proc_dointvec, }, { .procname = "panic_print", .data = &panic_print, .maxlen = sizeof(unsigned long), .mode = 0644, .proc_handler = proc_doulongvec_minmax, }, { .procname = "ngroups_max", .data = (void *)&ngroups_max, .maxlen = sizeof (int), .mode = 0444, .proc_handler = proc_dointvec, }, { .procname = "cap_last_cap", .data = (void *)&cap_last_cap, .maxlen = sizeof(int), .mode = 0444, .proc_handler = proc_dointvec, }, #if (defined(CONFIG_X86_32) || defined(CONFIG_PARISC)) && \ defined(CONFIG_DEBUG_STACKOVERFLOW) { .procname = "panic_on_stackoverflow", .data = &sysctl_panic_on_stackoverflow, .maxlen = sizeof(int), .mode = 0644, .proc_handler = proc_dointvec, }, #endif #if defined(CONFIG_MMU) { .procname = "randomize_va_space", .data = &randomize_va_space, .maxlen = sizeof(int), .mode = 0644, .proc_handler = proc_dointvec, }, #endif #ifdef CONFIG_SYSCTL_ARCH_UNALIGN_NO_WARN { .procname = "ignore-unaligned-usertrap", .data = &no_unaligned_warning, .maxlen = sizeof (int), .mode = 0644, .proc_handler = proc_dointvec, }, #endif #ifdef CONFIG_RT_MUTEXES { .procname = "max_lock_depth", .data = &max_lock_depth, .maxlen = sizeof(int), .mode = 0644, .proc_handler = proc_dointvec, }, #endif { .procname = "panic_on_warn", .data = &panic_on_warn, .maxlen = sizeof(int), .mode = 0644, .proc_handler = proc_dointvec_minmax, .extra1 = SYSCTL_ZERO, .extra2 = SYSCTL_ONE, }, #ifdef CONFIG_TREE_RCU { .procname = "panic_on_rcu_stall", .data = &sysctl_panic_on_rcu_stall, .maxlen = sizeof(sysctl_panic_on_rcu_stall), .mode = 0644, .proc_handler = proc_dointvec_minmax, .extra1 = SYSCTL_ZERO, .extra2 = SYSCTL_ONE, }, { .procname = "max_rcu_stall_to_panic", .data = &sysctl_max_rcu_stall_to_panic, .maxlen = sizeof(sysctl_max_rcu_stall_to_panic), .mode = 0644, .proc_handler = proc_dointvec_minmax, .extra1 = SYSCTL_ONE, .extra2 = SYSCTL_INT_MAX, }, #endif }; int __init sysctl_init_bases(void) { register_sysctl_init("kernel", kern_table); return 0; } #endif /* CONFIG_SYSCTL */ /* * No sense putting this after each symbol definition, twice, * exception 
granted :-) */ EXPORT_SYMBOL(proc_dobool); EXPORT_SYMBOL(proc_dointvec); EXPORT_SYMBOL(proc_douintvec); EXPORT_SYMBOL(proc_dointvec_jiffies); EXPORT_SYMBOL(proc_dointvec_minmax); EXPORT_SYMBOL_GPL(proc_douintvec_minmax); EXPORT_SYMBOL(proc_dointvec_userhz_jiffies); EXPORT_SYMBOL(proc_dointvec_ms_jiffies); EXPORT_SYMBOL(proc_dostring); EXPORT_SYMBOL(proc_doulongvec_minmax); EXPORT_SYMBOL(proc_doulongvec_ms_jiffies_minmax); EXPORT_SYMBOL(proc_do_large_bitmap);
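/*
 * Editor's illustrative sketch (not part of kernel/sysctl.c): how a
 * subsystem might expose an integer knob clamped to [0, 100] through
 * proc_dointvec_minmax(), as documented above. The "example" directory
 * and "example_value" names are hypothetical, chosen for this sketch.
 */
#include <linux/module.h>
#include <linux/sysctl.h>

static int example_value;

static const struct ctl_table example_table[] = {
	{
		.procname	= "example_value",
		.data		= &example_value,
		.maxlen		= sizeof(int),
		.mode		= 0644,
		.proc_handler	= proc_dointvec_minmax,
		.extra1		= SYSCTL_ZERO,		/* reject writes below 0 */
		.extra2		= SYSCTL_ONE_HUNDRED,	/* ...and above 100 */
	},
};

static struct ctl_table_header *example_header;

static int __init example_sysctl_init(void)
{
	/* Creates /proc/sys/example/example_value */
	example_header = register_sysctl("example", example_table);
	return example_header ? 0 : -ENOMEM;
}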
// SPDX-License-Identifier: GPL-2.0-only /* * GHASH: hash function for GCM (Galois/Counter Mode). * * Copyright (c) 2007 Nokia Siemens Networks - Mikko Herranen <mh1@iki.fi> * Copyright (c) 2009 Intel Corp. * Author: Huang Ying <ying.huang@intel.com> */ /* * GHASH is a keyed hash function used in GCM authentication tag generation. * * The original GCM paper [1] presents GHASH as a function GHASH(H, A, C) which * takes a 16-byte hash key H, additional authenticated data A, and a ciphertext * C. It formats A and C into a single byte string X, interprets X as a * polynomial over GF(2^128), and evaluates this polynomial at the point H. * * However, the NIST standard for GCM [2] presents GHASH as GHASH(H, X) where X * is the already-formatted byte string containing both A and C. * * "ghash" in the Linux crypto API uses the 'X' (pre-formatted) convention, * since the API supports only a single data stream per hash. Thus, the * formatting of 'A' and 'C' is done in the "gcm" template, not in "ghash". * * The reason "ghash" is separate from "gcm" is to allow "gcm" to use an * accelerated "ghash" when a standalone accelerated "gcm(aes)" is unavailable. * It is generally inappropriate to use "ghash" for other purposes, since it is * an "ε-almost-XOR-universal hash function", not a cryptographic hash function. * It can only be used securely in crypto modes specially designed to use it.
* * [1] The Galois/Counter Mode of Operation (GCM) * (http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.694.695&rep=rep1&type=pdf) * [2] Recommendation for Block Cipher Modes of Operation: Galois/Counter Mode (GCM) and GMAC * (https://csrc.nist.gov/publications/detail/sp/800-38d/final) */ #include <crypto/algapi.h> #include <crypto/gf128mul.h> #include <crypto/ghash.h> #include <crypto/internal/hash.h> #include <linux/crypto.h> #include <linux/init.h> #include <linux/kernel.h> #include <linux/module.h> static int ghash_init(struct shash_desc *desc) { struct ghash_desc_ctx *dctx = shash_desc_ctx(desc); memset(dctx, 0, sizeof(*dctx)); return 0; } static int ghash_setkey(struct crypto_shash *tfm, const u8 *key, unsigned int keylen) { struct ghash_ctx *ctx = crypto_shash_ctx(tfm); be128 k; if (keylen != GHASH_BLOCK_SIZE) return -EINVAL; if (ctx->gf128) gf128mul_free_4k(ctx->gf128); BUILD_BUG_ON(sizeof(k) != GHASH_BLOCK_SIZE); memcpy(&k, key, GHASH_BLOCK_SIZE); /* avoid violating alignment rules */ ctx->gf128 = gf128mul_init_4k_lle(&k); memzero_explicit(&k, GHASH_BLOCK_SIZE); if (!ctx->gf128) return -ENOMEM; return 0; } static int ghash_update(struct shash_desc *desc, const u8 *src, unsigned int srclen) { struct ghash_desc_ctx *dctx = shash_desc_ctx(desc); struct ghash_ctx *ctx = crypto_shash_ctx(desc->tfm); u8 *dst = dctx->buffer; if (dctx->bytes) { int n = min(srclen, dctx->bytes); u8 *pos = dst + (GHASH_BLOCK_SIZE - dctx->bytes); dctx->bytes -= n; srclen -= n; while (n--) *pos++ ^= *src++; if (!dctx->bytes) gf128mul_4k_lle((be128 *)dst, ctx->gf128); } while (srclen >= GHASH_BLOCK_SIZE) { crypto_xor(dst, src, GHASH_BLOCK_SIZE); gf128mul_4k_lle((be128 *)dst, ctx->gf128); src += GHASH_BLOCK_SIZE; srclen -= GHASH_BLOCK_SIZE; } if (srclen) { dctx->bytes = GHASH_BLOCK_SIZE - srclen; while (srclen--) *dst++ ^= *src++; } return 0; } static void ghash_flush(struct ghash_ctx *ctx, struct ghash_desc_ctx *dctx) { u8 *dst = dctx->buffer; if (dctx->bytes) { u8 *tmp = dst + (GHASH_BLOCK_SIZE - dctx->bytes); while (dctx->bytes--) *tmp++ ^= 0; gf128mul_4k_lle((be128 *)dst, ctx->gf128); } dctx->bytes = 0; } static int ghash_final(struct shash_desc *desc, u8 *dst) { struct ghash_desc_ctx *dctx = shash_desc_ctx(desc); struct ghash_ctx *ctx = crypto_shash_ctx(desc->tfm); u8 *buf = dctx->buffer; ghash_flush(ctx, dctx); memcpy(dst, buf, GHASH_BLOCK_SIZE); return 0; } static void ghash_exit_tfm(struct crypto_tfm *tfm) { struct ghash_ctx *ctx = crypto_tfm_ctx(tfm); if (ctx->gf128) gf128mul_free_4k(ctx->gf128); } static struct shash_alg ghash_alg = { .digestsize = GHASH_DIGEST_SIZE, .init = ghash_init, .update = ghash_update, .final = ghash_final, .setkey = ghash_setkey, .descsize = sizeof(struct ghash_desc_ctx), .base = { .cra_name = "ghash", .cra_driver_name = "ghash-generic", .cra_priority = 100, .cra_blocksize = GHASH_BLOCK_SIZE, .cra_ctxsize = sizeof(struct ghash_ctx), .cra_module = THIS_MODULE, .cra_exit = ghash_exit_tfm, }, }; static int __init ghash_mod_init(void) { return crypto_register_shash(&ghash_alg); } static void __exit ghash_mod_exit(void) { crypto_unregister_shash(&ghash_alg); } subsys_initcall(ghash_mod_init); module_exit(ghash_mod_exit); MODULE_LICENSE("GPL"); MODULE_DESCRIPTION("GHASH hash function"); MODULE_ALIAS_CRYPTO("ghash"); MODULE_ALIAS_CRYPTO("ghash-generic");
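/*
 * Editor's illustrative sketch (not part of ghash-generic.c): computing a
 * GHASH digest over an already-formatted buffer through the synchronous
 * shash API. Error handling is abbreviated, and the function name is
 * hypothetical. As the header comment above stresses, GHASH is only safe
 * inside modes designed for it (e.g. the "gcm" template).
 */
#include <crypto/hash.h>
#include <crypto/ghash.h>

static int ghash_digest_example(const u8 key[GHASH_BLOCK_SIZE],
				const u8 *data, unsigned int len,
				u8 out[GHASH_DIGEST_SIZE])
{
	struct crypto_shash *tfm;
	int err;

	/* Picks ghash-generic unless an accelerated driver wins on priority */
	tfm = crypto_alloc_shash("ghash", 0, 0);
	if (IS_ERR(tfm))
		return PTR_ERR(tfm);

	err = crypto_shash_setkey(tfm, key, GHASH_BLOCK_SIZE);
	if (!err) {
		SHASH_DESC_ON_STACK(desc, tfm);

		desc->tfm = tfm;
		err = crypto_shash_digest(desc, data, len, out);
	}
	crypto_free_shash(tfm);
	return err;
}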
// SPDX-License-Identifier: GPL-2.0-or-later /* * * Copyright (C) Jonathan Naylor G4KLX (g4klx@g4klx.demon.co.uk) */ #include <linux/capability.h> #include <linux/errno.h> #include <linux/types.h> #include <linux/socket.h> #include <linux/in.h> #include <linux/kernel.h> #include <linux/timer.h> #include <linux/string.h> #include <linux/sockios.h> #include <linux/net.h> #include <linux/spinlock.h> #include <linux/slab.h> #include <net/ax25.h> #include <linux/inet.h> #include <linux/netdevice.h> #include <linux/if_arp.h> #include <linux/skbuff.h> #include <net/sock.h> #include <linux/uaccess.h> #include <linux/fcntl.h> #include <linux/mm.h> #include <linux/interrupt.h> #include <linux/list.h> #include <linux/notifier.h> #include <linux/proc_fs.h> #include <linux/seq_file.h> #include <linux/stat.h> #include <linux/sysctl.h> #include <linux/export.h> #include <net/ip.h> #include <net/arp.h> /* * Callsign/UID mapper. This is in kernel space for security on multi-amateur machines. */ static HLIST_HEAD(ax25_uid_list); static DEFINE_RWLOCK(ax25_uid_lock); int ax25_uid_policy; EXPORT_SYMBOL(ax25_uid_policy); ax25_uid_assoc *ax25_findbyuid(kuid_t uid) { ax25_uid_assoc *ax25_uid, *res = NULL; read_lock(&ax25_uid_lock); ax25_uid_for_each(ax25_uid, &ax25_uid_list) { if (uid_eq(ax25_uid->uid, uid)) { ax25_uid_hold(ax25_uid); res = ax25_uid; break; } } read_unlock(&ax25_uid_lock); return res; } EXPORT_SYMBOL(ax25_findbyuid); int ax25_uid_ioctl(int cmd, struct sockaddr_ax25 *sax) { ax25_uid_assoc *ax25_uid; ax25_uid_assoc *user; unsigned long res; switch (cmd) { case SIOCAX25GETUID: res = -ENOENT; read_lock(&ax25_uid_lock); ax25_uid_for_each(ax25_uid, &ax25_uid_list) { if (ax25cmp(&sax->sax25_call, &ax25_uid->call) == 0) { res = from_kuid_munged(current_user_ns(), ax25_uid->uid); break; } } read_unlock(&ax25_uid_lock); return res; case SIOCAX25ADDUID: { kuid_t sax25_kuid; if (!capable(CAP_NET_ADMIN)) return -EPERM; sax25_kuid = make_kuid(current_user_ns(), sax->sax25_uid); if (!uid_valid(sax25_kuid)) return -EINVAL; user = ax25_findbyuid(sax25_kuid); if (user) { ax25_uid_put(user); return -EEXIST; } if (sax->sax25_uid == 0) return -EINVAL; if ((ax25_uid = kmalloc(sizeof(*ax25_uid), GFP_KERNEL)) == NULL) return -ENOMEM; refcount_set(&ax25_uid->refcount, 1); ax25_uid->uid = sax25_kuid; ax25_uid->call = sax->sax25_call; write_lock(&ax25_uid_lock); hlist_add_head(&ax25_uid->uid_node, &ax25_uid_list); write_unlock(&ax25_uid_lock); return 0; } case SIOCAX25DELUID: if (!capable(CAP_NET_ADMIN)) return -EPERM; ax25_uid = NULL; write_lock(&ax25_uid_lock); ax25_uid_for_each(ax25_uid, &ax25_uid_list) { if (ax25cmp(&sax->sax25_call, &ax25_uid->call) == 0) break; } if (ax25_uid == NULL) { write_unlock(&ax25_uid_lock); return -ENOENT; }
hlist_del_init(&ax25_uid->uid_node); ax25_uid_put(ax25_uid); write_unlock(&ax25_uid_lock); return 0; default: return -EINVAL; } return -EINVAL; /*NOTREACHED */ } #ifdef CONFIG_PROC_FS static void *ax25_uid_seq_start(struct seq_file *seq, loff_t *pos) __acquires(ax25_uid_lock) { read_lock(&ax25_uid_lock); return seq_hlist_start_head(&ax25_uid_list, *pos); } static void *ax25_uid_seq_next(struct seq_file *seq, void *v, loff_t *pos) { return seq_hlist_next(v, &ax25_uid_list, pos); } static void ax25_uid_seq_stop(struct seq_file *seq, void *v) __releases(ax25_uid_lock) { read_unlock(&ax25_uid_lock); } static int ax25_uid_seq_show(struct seq_file *seq, void *v) { char buf[11]; if (v == SEQ_START_TOKEN) seq_printf(seq, "Policy: %d\n", ax25_uid_policy); else { struct ax25_uid_assoc *pt; pt = hlist_entry(v, struct ax25_uid_assoc, uid_node); seq_printf(seq, "%6d %s\n", from_kuid_munged(seq_user_ns(seq), pt->uid), ax2asc(buf, &pt->call)); } return 0; } const struct seq_operations ax25_uid_seqops = { .start = ax25_uid_seq_start, .next = ax25_uid_seq_next, .stop = ax25_uid_seq_stop, .show = ax25_uid_seq_show, }; #endif /* * Free all memory associated with UID/Callsign structures. */ void __exit ax25_uid_free(void) { ax25_uid_assoc *ax25_uid; write_lock(&ax25_uid_lock); again: ax25_uid_for_each(ax25_uid, &ax25_uid_list) { hlist_del_init(&ax25_uid->uid_node); ax25_uid_put(ax25_uid); goto again; } write_unlock(&ax25_uid_lock); }
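/*
 * Editor's illustrative userspace sketch (not part of this file): adding a
 * callsign/UID association via SIOCAX25ADDUID, which is handled by
 * ax25_uid_ioctl() above (CAP_NET_ADMIN and a nonzero uid are required).
 * AX.25 addresses store each ASCII character shifted left one bit,
 * space-padded to six characters, with the SSID in the seventh byte. "fd"
 * is assumed to be an open AX.25 socket; error handling is abbreviated.
 */
#include <string.h>
#include <sys/ioctl.h>
#include <sys/socket.h>
#include <linux/ax25.h>

static int add_uid_mapping(int fd, const char *call, int ssid, int uid)
{
	struct sockaddr_ax25 sax;
	size_t i, len = strlen(call);

	memset(&sax, 0, sizeof(sax));
	sax.sax25_family = AF_AX25;
	for (i = 0; i < 6; i++)
		sax.sax25_call.ax25_call[i] = (i < len ? call[i] : ' ') << 1;
	sax.sax25_call.ax25_call[6] = (ssid & 0x0f) << 1;
	sax.sax25_uid = uid;	/* sax25_uid aliases sax25_ndigis */

	return ioctl(fd, SIOCAX25ADDUID, &sax);
}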
/* * Copyright (c) 2004-2011 Atheros Communications Inc. * * Permission to use, copy, modify, and/or distribute this software for any * purpose with or without fee is hereby granted, provided that the above * copyright notice and this permission notice appear in all copies. * * THE SOFTWARE IS PROVIDED "AS IS" AND THE AUTHOR DISCLAIMS ALL WARRANTIES * WITH REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF * MERCHANTABILITY AND FITNESS. IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR * ANY SPECIAL, DIRECT, INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES * WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN AN * ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT OF * OR IN CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE. */ #ifndef HTC_OPS_H #define HTC_OPS_H #include "htc.h" #include "debug.h" static inline void *ath6kl_htc_create(struct ath6kl *ar) { return ar->htc_ops->create(ar); } static inline int ath6kl_htc_wait_target(struct htc_target *target) { return target->dev->ar->htc_ops->wait_target(target); } static inline int ath6kl_htc_start(struct htc_target *target) { return target->dev->ar->htc_ops->start(target); } static inline int ath6kl_htc_conn_service(struct htc_target *target, struct htc_service_connect_req *req, struct htc_service_connect_resp *resp) { return target->dev->ar->htc_ops->conn_service(target, req, resp); } static inline int ath6kl_htc_tx(struct htc_target *target, struct htc_packet *packet) { return target->dev->ar->htc_ops->tx(target, packet); } static inline void ath6kl_htc_stop(struct htc_target *target) { return target->dev->ar->htc_ops->stop(target); } static inline void ath6kl_htc_cleanup(struct htc_target *target) { return target->dev->ar->htc_ops->cleanup(target); } static inline void ath6kl_htc_flush_txep(struct htc_target *target, enum htc_endpoint_id endpoint, u16 tag) { return target->dev->ar->htc_ops->flush_txep(target, endpoint, tag); } static inline void ath6kl_htc_flush_rx_buf(struct htc_target *target) { return target->dev->ar->htc_ops->flush_rx_buf(target); } static inline void ath6kl_htc_activity_changed(struct htc_target *target, enum htc_endpoint_id endpoint, bool active) { return target->dev->ar->htc_ops->activity_changed(target, endpoint, active); } static inline int ath6kl_htc_get_rxbuf_num(struct htc_target *target, enum htc_endpoint_id endpoint) { return target->dev->ar->htc_ops->get_rxbuf_num(target, endpoint); } static inline int ath6kl_htc_add_rxbuf_multiple(struct htc_target *target, struct list_head *pktq) { return target->dev->ar->htc_ops->add_rxbuf_multiple(target, pktq); } static inline int ath6kl_htc_credit_setup(struct htc_target *target, struct ath6kl_htc_credit_info *info) { return target->dev->ar->htc_ops->credit_setup(target, info); } static inline void ath6kl_htc_tx_complete(struct ath6kl *ar, struct sk_buff *skb) { ar->htc_ops->tx_complete(ar, skb); } static inline void ath6kl_htc_rx_complete(struct ath6kl *ar, struct sk_buff *skb, u8 pipe) { ar->htc_ops->rx_complete(ar, skb, pipe); } #endif
// SPDX-License-Identifier: GPL-2.0-or-later /* * net/sched/sch_red.c Random Early Detection queue. * * Authors: Alexey Kuznetsov, <kuznet@ms2.inr.ac.ru> * * Changes: * J Hadi Salim 980914: computation fixes * Alexey Makarenko <makar@phoenix.kharkov.ua> 990814: qave on idle link was calculated incorrectly. * J Hadi Salim 980816: ECN support */ #include <linux/module.h> #include <linux/types.h> #include <linux/kernel.h> #include <linux/skbuff.h> #include <net/pkt_sched.h> #include <net/pkt_cls.h> #include <net/inet_ecn.h> #include <net/red.h> /* Parameters, settable by user: ----------------------------- limit - bytes (must be > qth_max + burst) Hard limit on queue length, should be chosen >qth_max to allow packet bursts. This parameter does not affect the algorithm's behaviour and can be chosen arbitrarily high (well, less than ram size) Really, this limit will never be reached if RED works correctly. */ struct red_sched_data { u32 limit; /* HARD maximal queue length */ unsigned char flags; /* Non-flags in tc_red_qopt.flags.
*/ unsigned char userbits; struct timer_list adapt_timer; struct Qdisc *sch; struct red_parms parms; struct red_vars vars; struct red_stats stats; struct Qdisc *qdisc; struct tcf_qevent qe_early_drop; struct tcf_qevent qe_mark; }; #define TC_RED_SUPPORTED_FLAGS (TC_RED_HISTORIC_FLAGS | TC_RED_NODROP) static inline int red_use_ecn(struct red_sched_data *q) { return q->flags & TC_RED_ECN; } static inline int red_use_harddrop(struct red_sched_data *q) { return q->flags & TC_RED_HARDDROP; } static int red_use_nodrop(struct red_sched_data *q) { return q->flags & TC_RED_NODROP; } static int red_enqueue(struct sk_buff *skb, struct Qdisc *sch, struct sk_buff **to_free) { enum skb_drop_reason reason = SKB_DROP_REASON_QDISC_CONGESTED; struct red_sched_data *q = qdisc_priv(sch); struct Qdisc *child = q->qdisc; unsigned int len; int ret; q->vars.qavg = red_calc_qavg(&q->parms, &q->vars, child->qstats.backlog); if (red_is_idling(&q->vars)) red_end_of_idle_period(&q->vars); switch (red_action(&q->parms, &q->vars, q->vars.qavg)) { case RED_DONT_MARK: break; case RED_PROB_MARK: qdisc_qstats_overlimit(sch); if (!red_use_ecn(q)) { q->stats.prob_drop++; goto congestion_drop; } if (INET_ECN_set_ce(skb)) { q->stats.prob_mark++; skb = tcf_qevent_handle(&q->qe_mark, sch, skb, to_free, &ret); if (!skb) return NET_XMIT_CN | ret; } else if (!red_use_nodrop(q)) { q->stats.prob_drop++; goto congestion_drop; } /* Non-ECT packet in ECN nodrop mode: queue it. */ break; case RED_HARD_MARK: reason = SKB_DROP_REASON_QDISC_OVERLIMIT; qdisc_qstats_overlimit(sch); if (red_use_harddrop(q) || !red_use_ecn(q)) { q->stats.forced_drop++; goto congestion_drop; } if (INET_ECN_set_ce(skb)) { q->stats.forced_mark++; skb = tcf_qevent_handle(&q->qe_mark, sch, skb, to_free, &ret); if (!skb) return NET_XMIT_CN | ret; } else if (!red_use_nodrop(q)) { q->stats.forced_drop++; goto congestion_drop; } /* Non-ECT packet in ECN nodrop mode: queue it. 
*/ break; } len = qdisc_pkt_len(skb); ret = qdisc_enqueue(skb, child, to_free); if (likely(ret == NET_XMIT_SUCCESS)) { sch->qstats.backlog += len; sch->q.qlen++; } else if (net_xmit_drop_count(ret)) { q->stats.pdrop++; qdisc_qstats_drop(sch); } return ret; congestion_drop: skb = tcf_qevent_handle(&q->qe_early_drop, sch, skb, to_free, &ret); if (!skb) return NET_XMIT_CN | ret; qdisc_drop_reason(skb, sch, to_free, reason); return NET_XMIT_CN; } static struct sk_buff *red_dequeue(struct Qdisc *sch) { struct sk_buff *skb; struct red_sched_data *q = qdisc_priv(sch); struct Qdisc *child = q->qdisc; skb = child->dequeue(child); if (skb) { qdisc_bstats_update(sch, skb); qdisc_qstats_backlog_dec(sch, skb); sch->q.qlen--; } else { if (!red_is_idling(&q->vars)) red_start_of_idle_period(&q->vars); } return skb; } static struct sk_buff *red_peek(struct Qdisc *sch) { struct red_sched_data *q = qdisc_priv(sch); struct Qdisc *child = q->qdisc; return child->ops->peek(child); } static void red_reset(struct Qdisc *sch) { struct red_sched_data *q = qdisc_priv(sch); qdisc_reset(q->qdisc); red_restart(&q->vars); } static int red_offload(struct Qdisc *sch, bool enable) { struct red_sched_data *q = qdisc_priv(sch); struct net_device *dev = qdisc_dev(sch); struct tc_red_qopt_offload opt = { .handle = sch->handle, .parent = sch->parent, }; if (!tc_can_offload(dev) || !dev->netdev_ops->ndo_setup_tc) return -EOPNOTSUPP; if (enable) { opt.command = TC_RED_REPLACE; opt.set.min = q->parms.qth_min >> q->parms.Wlog; opt.set.max = q->parms.qth_max >> q->parms.Wlog; opt.set.probability = q->parms.max_P; opt.set.limit = q->limit; opt.set.is_ecn = red_use_ecn(q); opt.set.is_harddrop = red_use_harddrop(q); opt.set.is_nodrop = red_use_nodrop(q); opt.set.qstats = &sch->qstats; } else { opt.command = TC_RED_DESTROY; } return dev->netdev_ops->ndo_setup_tc(dev, TC_SETUP_QDISC_RED, &opt); } static void red_destroy(struct Qdisc *sch) { struct red_sched_data *q = qdisc_priv(sch); tcf_qevent_destroy(&q->qe_mark, sch); tcf_qevent_destroy(&q->qe_early_drop, sch); timer_delete_sync(&q->adapt_timer); red_offload(sch, false); qdisc_put(q->qdisc); } static const struct nla_policy red_policy[TCA_RED_MAX + 1] = { [TCA_RED_UNSPEC] = { .strict_start_type = TCA_RED_FLAGS }, [TCA_RED_PARMS] = { .len = sizeof(struct tc_red_qopt) }, [TCA_RED_STAB] = { .len = RED_STAB_SIZE }, [TCA_RED_MAX_P] = { .type = NLA_U32 }, [TCA_RED_FLAGS] = NLA_POLICY_BITFIELD32(TC_RED_SUPPORTED_FLAGS), [TCA_RED_EARLY_DROP_BLOCK] = { .type = NLA_U32 }, [TCA_RED_MARK_BLOCK] = { .type = NLA_U32 }, }; static int __red_change(struct Qdisc *sch, struct nlattr **tb, struct netlink_ext_ack *extack) { struct Qdisc *old_child = NULL, *child = NULL; struct red_sched_data *q = qdisc_priv(sch); struct nla_bitfield32 flags_bf; struct tc_red_qopt *ctl; unsigned char userbits; unsigned char flags; int err; u32 max_P; u8 *stab; if (tb[TCA_RED_PARMS] == NULL || tb[TCA_RED_STAB] == NULL) return -EINVAL; max_P = nla_get_u32_default(tb[TCA_RED_MAX_P], 0); ctl = nla_data(tb[TCA_RED_PARMS]); stab = nla_data(tb[TCA_RED_STAB]); if (!red_check_params(ctl->qth_min, ctl->qth_max, ctl->Wlog, ctl->Scell_log, stab)) return -EINVAL; err = red_get_flags(ctl->flags, TC_RED_HISTORIC_FLAGS, tb[TCA_RED_FLAGS], TC_RED_SUPPORTED_FLAGS, &flags_bf, &userbits, extack); if (err) return err; if (ctl->limit > 0) { child = fifo_create_dflt(sch, &bfifo_qdisc_ops, ctl->limit, extack); if (IS_ERR(child)) return PTR_ERR(child); /* child is fifo, no need to check for noop_qdisc */ qdisc_hash_add(child, true); } 
sch_tree_lock(sch); flags = (q->flags & ~flags_bf.selector) | flags_bf.value; err = red_validate_flags(flags, extack); if (err) goto unlock_out; q->flags = flags; q->userbits = userbits; q->limit = ctl->limit; if (child) { qdisc_tree_flush_backlog(q->qdisc); old_child = q->qdisc; q->qdisc = child; } red_set_parms(&q->parms, ctl->qth_min, ctl->qth_max, ctl->Wlog, ctl->Plog, ctl->Scell_log, stab, max_P); red_set_vars(&q->vars); timer_delete(&q->adapt_timer); if (ctl->flags & TC_RED_ADAPTATIVE) mod_timer(&q->adapt_timer, jiffies + HZ/2); if (!q->qdisc->q.qlen) red_start_of_idle_period(&q->vars); sch_tree_unlock(sch); red_offload(sch, true); if (old_child) qdisc_put(old_child); return 0; unlock_out: sch_tree_unlock(sch); if (child) qdisc_put(child); return err; } static inline void red_adaptative_timer(struct timer_list *t) { struct red_sched_data *q = from_timer(q, t, adapt_timer); struct Qdisc *sch = q->sch; spinlock_t *root_lock; rcu_read_lock(); root_lock = qdisc_lock(qdisc_root_sleeping(sch)); spin_lock(root_lock); red_adaptative_algo(&q->parms, &q->vars); mod_timer(&q->adapt_timer, jiffies + HZ/2); spin_unlock(root_lock); rcu_read_unlock(); } static int red_init(struct Qdisc *sch, struct nlattr *opt, struct netlink_ext_ack *extack) { struct red_sched_data *q = qdisc_priv(sch); struct nlattr *tb[TCA_RED_MAX + 1]; int err; q->qdisc = &noop_qdisc; q->sch = sch; timer_setup(&q->adapt_timer, red_adaptative_timer, 0); if (!opt) return -EINVAL; err = nla_parse_nested_deprecated(tb, TCA_RED_MAX, opt, red_policy, extack); if (err < 0) return err; err = __red_change(sch, tb, extack); if (err) return err; err = tcf_qevent_init(&q->qe_early_drop, sch, FLOW_BLOCK_BINDER_TYPE_RED_EARLY_DROP, tb[TCA_RED_EARLY_DROP_BLOCK], extack); if (err) return err; return tcf_qevent_init(&q->qe_mark, sch, FLOW_BLOCK_BINDER_TYPE_RED_MARK, tb[TCA_RED_MARK_BLOCK], extack); } static int red_change(struct Qdisc *sch, struct nlattr *opt, struct netlink_ext_ack *extack) { struct red_sched_data *q = qdisc_priv(sch); struct nlattr *tb[TCA_RED_MAX + 1]; int err; err = nla_parse_nested_deprecated(tb, TCA_RED_MAX, opt, red_policy, extack); if (err < 0) return err; err = tcf_qevent_validate_change(&q->qe_early_drop, tb[TCA_RED_EARLY_DROP_BLOCK], extack); if (err) return err; err = tcf_qevent_validate_change(&q->qe_mark, tb[TCA_RED_MARK_BLOCK], extack); if (err) return err; return __red_change(sch, tb, extack); } static int red_dump_offload_stats(struct Qdisc *sch) { struct tc_red_qopt_offload hw_stats = { .command = TC_RED_STATS, .handle = sch->handle, .parent = sch->parent, { .stats.bstats = &sch->bstats, .stats.qstats = &sch->qstats, }, }; return qdisc_offload_dump_helper(sch, TC_SETUP_QDISC_RED, &hw_stats); } static int red_dump(struct Qdisc *sch, struct sk_buff *skb) { struct red_sched_data *q = qdisc_priv(sch); struct nlattr *opts = NULL; struct tc_red_qopt opt = { .limit = q->limit, .flags = (q->flags & TC_RED_HISTORIC_FLAGS) | q->userbits, .qth_min = q->parms.qth_min >> q->parms.Wlog, .qth_max = q->parms.qth_max >> q->parms.Wlog, .Wlog = q->parms.Wlog, .Plog = q->parms.Plog, .Scell_log = q->parms.Scell_log, }; int err; err = red_dump_offload_stats(sch); if (err) goto nla_put_failure; opts = nla_nest_start_noflag(skb, TCA_OPTIONS); if (opts == NULL) goto nla_put_failure; if (nla_put(skb, TCA_RED_PARMS, sizeof(opt), &opt) || nla_put_u32(skb, TCA_RED_MAX_P, q->parms.max_P) || nla_put_bitfield32(skb, TCA_RED_FLAGS, q->flags, TC_RED_SUPPORTED_FLAGS) || tcf_qevent_dump(skb, TCA_RED_MARK_BLOCK, &q->qe_mark) || 
tcf_qevent_dump(skb, TCA_RED_EARLY_DROP_BLOCK, &q->qe_early_drop)) goto nla_put_failure; return nla_nest_end(skb, opts); nla_put_failure: nla_nest_cancel(skb, opts); return -EMSGSIZE; } static int red_dump_stats(struct Qdisc *sch, struct gnet_dump *d) { struct red_sched_data *q = qdisc_priv(sch); struct net_device *dev = qdisc_dev(sch); struct tc_red_xstats st = {0}; if (sch->flags & TCQ_F_OFFLOADED) { struct tc_red_qopt_offload hw_stats_request = { .command = TC_RED_XSTATS, .handle = sch->handle, .parent = sch->parent, { .xstats = &q->stats, }, }; dev->netdev_ops->ndo_setup_tc(dev, TC_SETUP_QDISC_RED, &hw_stats_request); } st.early = q->stats.prob_drop + q->stats.forced_drop; st.pdrop = q->stats.pdrop; st.marked = q->stats.prob_mark + q->stats.forced_mark; return gnet_stats_copy_app(d, &st, sizeof(st)); } static int red_dump_class(struct Qdisc *sch, unsigned long cl, struct sk_buff *skb, struct tcmsg *tcm) { struct red_sched_data *q = qdisc_priv(sch); tcm->tcm_handle |= TC_H_MIN(1); tcm->tcm_info = q->qdisc->handle; return 0; } static void red_graft_offload(struct Qdisc *sch, struct Qdisc *new, struct Qdisc *old, struct netlink_ext_ack *extack) { struct tc_red_qopt_offload graft_offload = { .handle = sch->handle, .parent = sch->parent, .child_handle = new->handle, .command = TC_RED_GRAFT, }; qdisc_offload_graft_helper(qdisc_dev(sch), sch, new, old, TC_SETUP_QDISC_RED, &graft_offload, extack); } static int red_graft(struct Qdisc *sch, unsigned long arg, struct Qdisc *new, struct Qdisc **old, struct netlink_ext_ack *extack) { struct red_sched_data *q = qdisc_priv(sch); if (new == NULL) new = &noop_qdisc; *old = qdisc_replace(sch, new, &q->qdisc); red_graft_offload(sch, new, *old, extack); return 0; } static struct Qdisc *red_leaf(struct Qdisc *sch, unsigned long arg) { struct red_sched_data *q = qdisc_priv(sch); return q->qdisc; } static unsigned long red_find(struct Qdisc *sch, u32 classid) { return 1; } static void red_walk(struct Qdisc *sch, struct qdisc_walker *walker) { if (!walker->stop) { tc_qdisc_stats_dump(sch, 1, walker); } } static const struct Qdisc_class_ops red_class_ops = { .graft = red_graft, .leaf = red_leaf, .find = red_find, .walk = red_walk, .dump = red_dump_class, }; static struct Qdisc_ops red_qdisc_ops __read_mostly = { .id = "red", .priv_size = sizeof(struct red_sched_data), .cl_ops = &red_class_ops, .enqueue = red_enqueue, .dequeue = red_dequeue, .peek = red_peek, .init = red_init, .reset = red_reset, .destroy = red_destroy, .change = red_change, .dump = red_dump, .dump_stats = red_dump_stats, .owner = THIS_MODULE, }; MODULE_ALIAS_NET_SCH("red"); static int __init red_module_init(void) { return register_qdisc(&red_qdisc_ops); } static void __exit red_module_exit(void) { unregister_qdisc(&red_qdisc_ops); } module_init(red_module_init) module_exit(red_module_exit) MODULE_LICENSE("GPL"); MODULE_DESCRIPTION("Random Early Detection qdisc");
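/*
 * Editor's illustrative note (not part of sch_red.c): a typical tc(8)
 * configuration for this qdisc. The numbers are only an example; per the
 * comment at the top of this file, "limit" should comfortably exceed the
 * max threshold so that RED, not the hard limit, does the work:
 *
 *   tc qdisc add dev eth0 root red \
 *       limit 400000 min 30000 max 90000 avpkt 1000 \
 *       burst 55 ecn adaptive bandwidth 10Mbit
 *
 * "ecn" sets TC_RED_ECN in tc_red_qopt.flags (see red_use_ecn() above),
 * and "adaptive" sets TC_RED_ADAPTATIVE, which arms adapt_timer in
 * __red_change() to retune max_P every HZ/2 jiffies.
 */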
/* SPDX-License-Identifier: GPL-2.0 */ #ifndef _NF_CONNTRACK_TSTAMP_H #define _NF_CONNTRACK_TSTAMP_H #include <net/net_namespace.h> #include <linux/netfilter/nf_conntrack_common.h> #include <linux/netfilter/nf_conntrack_tuple_common.h> #include <net/netfilter/nf_conntrack.h> #include <net/netfilter/nf_conntrack_extend.h> struct nf_conn_tstamp { u_int64_t start; u_int64_t stop; }; static inline struct nf_conn_tstamp *nf_conn_tstamp_find(const struct nf_conn *ct) { #ifdef CONFIG_NF_CONNTRACK_TIMESTAMP return nf_ct_ext_find(ct, NF_CT_EXT_TSTAMP); #else return NULL; #endif } static inline struct nf_conn_tstamp *nf_ct_tstamp_ext_add(struct nf_conn *ct, gfp_t gfp) { #ifdef CONFIG_NF_CONNTRACK_TIMESTAMP struct net *net = nf_ct_net(ct); if (!net->ct.sysctl_tstamp) return NULL; return nf_ct_ext_add(ct, NF_CT_EXT_TSTAMP, gfp); #else return NULL; #endif } #ifdef CONFIG_NF_CONNTRACK_TIMESTAMP void nf_conntrack_tstamp_pernet_init(struct net *net); #else static inline void nf_conntrack_tstamp_pernet_init(struct net *net) {} #endif /* CONFIG_NF_CONNTRACK_TIMESTAMP */ #endif /* _NF_CONNTRACK_TSTAMP_H */
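/*
 * Editor's illustrative sketch (not part of this header, shown outside the
 * include guard): using the extension to compute how long a flow lived.
 * Assumes the conntrack core filled both fields in nanoseconds; returns 0
 * when the extension is absent or the flow has not been destroyed yet.
 * The function name is hypothetical.
 */
static inline u_int64_t example_ct_lifetime_ns(const struct nf_conn *ct)
{
	const struct nf_conn_tstamp *tstamp = nf_conn_tstamp_find(ct);

	if (!tstamp || !tstamp->stop)
		return 0;
	return tstamp->stop - tstamp->start;
}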
2289 2290 2291 2292 2293 2294 2295 2296 2297 2298 2299 2300 2301 2302 2303 2304 2305 2306 2307 2308 2309 2310 2311 2312 2313 2314 2315 2316 2317 2318 2319 2320 2321 2322 2323 2324 2325 2326 2327 2328 2329 2330 2331 2332 2333 2334 2335 2336 2337 2338 2339 2340 2341 2342 2343 2344 2345 2346 2347 2348 2349 2350 2351 2352 2353 2354 2355 2356 2357 2358 2359 2360 2361 2362 2363 2364 2365 2366 2367 2368 2369 2370 2371 2372 2373 2374 2375 2376 2377 2378 2379 2380 2381 2382 2383 2384 2385 2386 2387 2388 2389 2390 2391 2392 2393 2394 2395 2396 2397 2398 2399 2400 2401 2402 2403 2404 2405 2406 2407 2408 2409 2410 2411 2412 2413 2414 2415 2416 2417 2418 2419 2420 2421 2422 2423 2424 2425 2426 2427 2428 2429 2430 2431 2432 2433 2434 2435 2436 2437 2438 2439 2440 2441 2442 2443 2444 2445 2446 2447 2448 2449 2450 2451 2452 2453 2454 2455 2456 2457 2458 2459 2460 2461 2462 2463 2464 2465 2466 2467 2468 2469 2470 2471 2472 2473 2474 2475 2476 2477 2478 2479 2480 2481 2482 2483 2484 2485 2486 2487 2488 2489 2490 2491 2492 2493 2494 2495 2496 2497 2498 2499 2500 2501 2502 2503 2504 2505 2506 2507 2508 2509 2510 2511 2512 2513 2514 2515 2516 2517 2518 2519 2520 2521 2522 2523 2524 2525 2526 2527 2528 2529 2530 2531 2532 2533 2534 2535 2536 2537 2538 2539 2540 2541 2542 2543 2544 2545 2546 2547 2548 2549 2550 2551 2552 2553 2554 2555 2556 2557 2558 2559 2560 2561 2562 2563 2564 2565 2566 2567 2568 2569 2570 2571 2572 2573 2574 2575 2576 2577 2578 2579 2580 2581 2582 2583 2584 2585 2586 2587 2588 2589 2590 2591 2592 2593 2594 2595 2596 2597 2598 2599 2600 2601 2602 2603 2604 2605 2606 2607 2608 2609 2610 2611 2612 2613 2614 2615 2616 2617 2618 2619 2620 2621 2622 2623 2624 2625 2626 2627 2628 2629 2630 2631 2632 2633 2634 2635 2636 2637 2638 2639 2640 2641 2642 2643 2644 2645 2646 2647 2648 2649 2650 2651 2652 2653 2654 2655 2656 2657 2658 2659 2660 2661 2662 2663 2664 2665 2666 2667 2668 2669 2670 2671 2672 2673 2674 2675 2676 2677 2678 2679 2680 2681 2682 2683 2684 2685 2686 2687 2688 2689 2690 2691 2692 2693 2694 2695 2696 2697 2698 2699 2700 2701 2702 2703 2704 2705 2706 2707 2708 2709 2710 2711 2712 2713 2714 2715 2716 2717 2718 2719 2720 2721 2722 2723 2724 2725 2726 2727 2728 2729 2730 2731 2732 2733 2734 2735 2736 2737 2738 2739 2740 2741 2742 2743 2744 2745 2746 2747 2748 2749 2750 2751 2752 2753 2754 2755 2756 2757 2758 2759 2760 2761 2762 2763 2764 2765 2766 2767 2768 2769 2770 2771 2772 2773 2774 2775 2776 2777 2778 2779 2780 2781 2782 2783 2784 2785 2786 2787 2788 2789 2790 2791 2792 2793 2794 2795 2796 2797 2798 2799 2800 2801 2802 2803 2804 2805 2806 2807 2808 2809 2810 2811 2812 2813 2814 2815 2816 2817 2818 2819 2820 2821 2822 2823 2824 2825 2826 2827 2828 2829 2830 2831 2832 2833 2834 2835 2836 2837 2838 2839 2840 2841 2842 2843 2844 2845 2846 2847 2848 2849 2850 2851 2852 2853 2854 2855 2856 2857 2858 2859 2860 2861 2862 2863 2864 2865 2866 2867 2868 2869 2870 2871 2872 2873 2874 2875 2876 2877 2878 2879 2880 2881 2882 2883 2884 2885 2886 2887 2888 2889 2890 2891 2892 2893 2894 2895 2896 2897 2898 2899 2900 2901 2902 2903 2904 2905 2906 2907 2908 2909 2910 2911 2912 2913 2914 2915 2916 2917 2918 2919 2920 2921 2922 2923 2924 2925 2926 2927 2928 2929 2930 2931 2932 2933 2934 2935 2936 2937 2938 2939 2940 2941 2942 2943 2944 2945 2946 2947 2948 2949 2950 2951 2952 2953 2954 2955 2956 2957 2958 2959 2960 2961 2962 2963 2964 2965 2966 2967 2968 2969 2970 2971 2972 2973 2974 2975 2976 2977 2978 2979 2980 2981 2982 2983 2984 2985 2986 2987 2988 2989 2990 2991 2992 2993 2994 2995 2996 2997 2998 2999 
3000 3001 3002 3003 3004 3005 3006 3007 3008 3009 3010 3011 3012 3013 3014 3015 3016 3017 3018 3019 3020 3021 3022 3023 3024 3025 3026 3027 3028 3029 3030 3031 3032 3033 3034 3035 3036 3037 3038 3039 3040 3041 3042 3043 3044 3045 3046 3047 3048 3049 3050 3051 3052 3053 3054 3055 3056 3057 3058 3059 3060 3061 3062 3063 3064 3065 3066 3067 3068 3069 3070 3071 3072 3073 3074 3075 3076 3077 3078 3079 3080 3081 3082 3083 3084 3085 3086 3087 3088 3089 3090 3091 3092 3093 3094 3095 3096 3097 3098 3099 3100 3101 3102 3103 3104 3105 3106 3107 3108 3109 3110 3111 3112 3113 3114 3115 3116 3117 3118 3119 3120 3121 3122 3123 3124 3125 3126 3127 3128 3129 3130 3131 3132 3133 3134 3135 3136 3137 3138 3139 3140 3141 3142 3143 3144 3145 3146 3147 3148 3149 3150 3151 3152 3153 3154 3155 3156 3157 3158 3159 3160 3161 3162 3163 3164 3165 3166 3167 3168 3169 3170 3171 3172 3173 3174 3175 3176 3177 3178 3179 3180 3181 3182 3183 3184 3185 3186 3187 3188 3189 3190 3191 3192 3193 3194 3195 3196 3197 3198 3199 3200 3201 3202 3203 3204 3205 3206 3207 3208 3209 3210 3211 3212 3213 3214 3215 3216 3217 3218 3219 3220 3221 3222 3223 3224 3225 3226 3227 3228 3229 3230 3231 3232 3233 3234 3235 3236 3237 3238 3239 3240 3241 3242 3243 3244 3245 3246 3247 3248 3249 3250 3251 3252 3253 3254 3255 3256 3257 3258 3259 3260 3261 3262 3263 3264 3265 3266 3267 3268 3269 3270 3271 3272 3273 3274 3275 3276 3277 3278 3279 3280 3281 3282 3283 3284 3285 3286 3287 3288 3289 3290 3291 3292 3293 3294 3295 3296 3297 3298 3299 3300 3301 3302 3303 3304 3305 3306 3307 3308 3309 3310 3311 3312 3313 3314 3315 3316 3317 3318 3319 3320 3321 3322 3323 3324 3325 3326 3327 3328 3329 3330 3331 3332 3333 3334 3335 3336 3337 3338 3339 3340 3341 3342 3343 3344 3345 3346 3347 3348 3349 3350 3351 3352 3353 3354 3355 3356 3357 3358 3359 3360 3361 3362 3363 3364 3365 3366 3367 3368 3369 3370 3371 3372 3373 3374 3375 3376 3377 3378 3379 3380 3381 3382 3383 3384 3385 3386 3387 3388 3389 3390 3391 3392 3393 3394 3395 3396 3397 3398 3399 3400 3401 3402 3403 3404 3405 3406 3407 3408 3409 3410 3411 3412 3413 3414 3415 3416 3417 3418 3419 3420 3421 3422 3423 3424 3425 3426 3427 3428 3429 3430 3431 3432 3433 3434 3435 3436 3437 3438 3439 3440 3441 3442 3443 3444 3445 3446 3447 3448 3449 3450 3451 3452 3453 3454 3455 3456 3457 3458 3459 3460 3461 3462 3463 3464 3465 3466 3467 3468 3469 3470 3471 3472 3473 3474 3475 3476 3477 3478 3479 3480 3481 3482 3483 3484 3485 3486 3487 3488 3489 3490 3491 3492 3493 3494 3495 3496 3497 3498 3499 3500 3501 3502 3503 3504 3505 3506 3507 3508 3509 3510 3511 3512 3513 3514 3515 3516 3517 3518 3519 3520 3521 3522 3523 3524 3525 3526 3527 3528 3529 3530 3531 3532 3533 3534 3535 3536 3537 3538 3539 3540 3541 3542 3543 3544 3545 3546 3547 3548 3549 3550 3551 3552 3553 3554 3555 3556 3557 3558 3559 3560 3561 3562 3563 3564 3565 3566 3567 3568 3569 3570 3571 3572 3573 3574 3575 3576 3577 3578 3579 3580 3581 3582 3583 3584 3585 3586 3587 3588 3589 3590 3591 3592 3593 3594 3595 3596 3597 3598 3599 3600 3601 3602 3603 3604 3605 3606 3607 3608 3609 3610 3611 3612 3613 3614 3615 3616 3617 3618 3619 3620 3621 3622 3623 3624 3625 3626 3627 3628 3629 3630 3631 3632 3633 3634 3635 3636 3637 3638 3639 3640 3641 3642 3643 3644 3645 3646 3647 3648 3649 3650 3651 3652 3653 3654 3655 3656 3657 3658 3659 3660 3661 3662 3663 3664 3665 3666 3667 3668 3669 3670 3671 3672 3673 3674 3675 3676 3677 3678 3679 3680 3681 3682 3683 3684 3685 3686 3687 3688 3689 3690 3691 3692 3693 3694 3695 3696 3697 3698 3699 3700 3701 3702 3703 3704 3705 3706 3707 3708 3709 3710 
// SPDX-License-Identifier: GPL-2.0+ // // em28xx-cards.c - driver for Empia EM2800/EM2820/2840 USB // video capture devices // // Copyright (C) 2005 Ludovico Cavedon <cavedon@sssup.it> // Markus Rechberger <mrechberger@gmail.com> // Mauro Carvalho Chehab <mchehab@kernel.org> // Sascha Sommer <saschasommer@freenet.de> // Copyright (C) 2012 Frank Schäfer <fschaefer.oss@googlemail.com> #include "em28xx.h" #include <linux/init.h> #include <linux/module.h> #include <linux/slab.h> #include <linux/delay.h> #include <linux/i2c.h> #include <linux/usb.h> #include <media/tuner.h> #include <media/drv-intf/msp3400.h> #include <media/i2c/saa7115.h> #include <dt-bindings/media/tvp5150.h> #include <media/i2c/tvaudio.h> #include <media/tveeprom.h> #include <media/v4l2-common.h> #include <sound/ac97_codec.h> #define DRIVER_NAME "em28xx"
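/*
 * The parameters below are the driver's load-time knobs. As an
 * illustrative sketch (card numbers are indices into em28xx_boards[]
 * and depend on the kernel version, so N and M are placeholders here),
 * a device could be forced to a specific board with bulk transfers via:
 *
 *   modprobe em28xx card=N usb_xfer_mode=1
 *
 * card is an array parameter, so per-device values may be given as a
 * comma-separated list, e.g. card=N,M for the first two devices.
 */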
static int tuner = -1; module_param(tuner, int, 0444); MODULE_PARM_DESC(tuner, "tuner type"); static unsigned int disable_ir; module_param(disable_ir, int, 0444); MODULE_PARM_DESC(disable_ir, "disable infrared remote support"); static unsigned int disable_usb_speed_check; module_param(disable_usb_speed_check, int, 0444); MODULE_PARM_DESC(disable_usb_speed_check, "override min bandwidth requirement of 480M bps"); static unsigned int card[] = {[0 ... (EM28XX_MAXBOARDS - 1)] = -1U }; module_param_array(card, int, NULL, 0444); MODULE_PARM_DESC(card, "card type"); static int usb_xfer_mode = -1; module_param(usb_xfer_mode, int, 0444); MODULE_PARM_DESC(usb_xfer_mode, "USB transfer mode for frame data (-1 = auto, 0 = prefer isoc, 1 = prefer bulk)"); /* Bitmask marking allocated devices from 0 to EM28XX_MAXBOARDS - 1 */ static DECLARE_BITMAP(em28xx_devused, EM28XX_MAXBOARDS); struct em28xx_hash_table { unsigned long hash; unsigned int model; unsigned int tuner; }; static void em28xx_pre_card_setup(struct em28xx *dev); /* * Reset sequences for analog/digital modes */ /* Reset for most [analog] boards */ static const struct em28xx_reg_seq default_analog[] = { {EM2820_R08_GPIO_CTRL, 0x6d, ~EM_GPIO_4, 10}, { -1, -1, -1, -1}, }; /* Reset for most [digital] boards */ static const struct em28xx_reg_seq default_digital[] = { {EM2820_R08_GPIO_CTRL, 0x6e, ~EM_GPIO_4, 10}, { -1, -1, -1, -1}, }; /* Board: Zolid Hybrid TV Stick */ static struct em28xx_reg_seq zolid_tuner[] = { {EM2820_R08_GPIO_CTRL, 0xfd, 0xff, 100}, {EM2820_R08_GPIO_CTRL, 0xfe, 0xff, 100}, { -1, -1, -1, -1}, }; static struct em28xx_reg_seq zolid_digital[] = { {EM2820_R08_GPIO_CTRL, 0x6a, 0xff, 100}, {EM2820_R08_GPIO_CTRL, 0x7a, 0xff, 100}, {EM2880_R04_GPO, 0x04, 0xff, 100}, {EM2880_R04_GPO, 0x0c, 0xff, 100}, { -1, -1, -1, -1}, }; /* Board Hauppauge WinTV HVR 900 analog */ static const struct em28xx_reg_seq hauppauge_wintv_hvr_900_analog[] = { {EM2820_R08_GPIO_CTRL, 0x2d, ~EM_GPIO_4, 10}, { 0x05, 0xff, 0x10, 10}, { -1, -1, -1, -1}, }; /* Board Hauppauge WinTV HVR 900 digital */ static const struct em28xx_reg_seq hauppauge_wintv_hvr_900_digital[] = { {EM2820_R08_GPIO_CTRL, 0x2e, ~EM_GPIO_4, 10}, {EM2880_R04_GPO, 0x04, 0x0f, 10}, {EM2880_R04_GPO, 0x0c, 0x0f, 10}, { -1, -1, -1, -1}, }; /* Board Hauppauge WinTV HVR 900 (R2) digital */ static const struct em28xx_reg_seq hauppauge_wintv_hvr_900R2_digital[] = { {EM2820_R08_GPIO_CTRL, 0x2e, ~EM_GPIO_4, 10}, {EM2880_R04_GPO, 0x0c, 0x0f, 10}, { -1, -1, -1, -1}, }; /* Boards - EM2880 MSI DIGIVOX AD and EM2880_BOARD_MSI_DIGIVOX_AD_II */ static const struct em28xx_reg_seq em2880_msi_digivox_ad_analog[] = { {EM2820_R08_GPIO_CTRL, 0x69, ~EM_GPIO_4, 10}, { -1, -1, -1, -1}, }; /* Board - EM2882 Kworld 315U digital */ static const struct em28xx_reg_seq em2882_kworld_315u_digital[] = { {EM2820_R08_GPIO_CTRL, 0xff, 0xff, 10}, {EM2820_R08_GPIO_CTRL, 0xfe, 0xff, 10}, {EM2880_R04_GPO, 0x04, 0xff, 10}, {EM2880_R04_GPO, 0x0c, 0xff, 10}, {EM2820_R08_GPIO_CTRL, 0x7e, 0xff, 10}, { -1, -1, -1, -1}, }; static const struct em28xx_reg_seq em2882_kworld_315u_tuner_gpio[] = { {EM2880_R04_GPO, 0x08, 0xff, 10}, {EM2880_R04_GPO, 0x0c, 0xff, 10}, {EM2880_R04_GPO, 0x08, 0xff, 10}, {EM2880_R04_GPO, 0x0c, 0xff, 10}, { -1, -1, -1, -1}, }; static const struct em28xx_reg_seq kworld_330u_analog[] = { {EM2820_R08_GPIO_CTRL, 0x6d, ~EM_GPIO_4, 10}, {EM2880_R04_GPO, 0x00, 0xff, 10}, { -1, -1, -1, -1}, }; static const struct em28xx_reg_seq kworld_330u_digital[] = { {EM2820_R08_GPIO_CTRL, 0x6e, ~EM_GPIO_4, 10}, {EM2880_R04_GPO, 0x08, 0xff, 10}, { -1, -1, -1, -1}, };
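/*
 * Each em28xx_reg_seq entry above and below is {reg, val, mask, sleep},
 * with sleep in milliseconds and a reg of -1 terminating the table. A
 * minimal sketch of how such a table is consumed (the driver's real
 * helper, em28xx_gpio_set() in em28xx-core.c, likewise does a masked
 * read-modify-write per entry):
 */
#if 0
static int example_run_reg_seq(struct em28xx *dev,
			       const struct em28xx_reg_seq *seq)
{
	int rc = 0;

	for (; seq->reg != -1; seq++) {
		/* touch only the bits selected by seq->mask */
		rc = em28xx_write_reg_bits(dev, seq->reg, seq->val,
					   seq->mask);
		if (rc < 0)
			return rc;
		msleep(seq->sleep);
	}

	return rc;
}
#endif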
/* * Evga inDtube * GPIO0 - Enable digital power (s5h1409) - low to enable * GPIO1 - Enable analog power (tvp5150/emp202) - low to enable * GPIO4 - xc3028 reset * GPIO3 - s5h1409 reset */ static const struct em28xx_reg_seq evga_indtube_analog[] = { {EM2820_R08_GPIO_CTRL, 0x79, 0xff, 60}, { -1, -1, -1, -1}, }; static const struct em28xx_reg_seq evga_indtube_digital[] = { {EM2820_R08_GPIO_CTRL, 0x7a, 0xff, 1}, {EM2880_R04_GPO, 0x04, 0xff, 10}, {EM2880_R04_GPO, 0x0c, 0xff, 1}, { -1, -1, -1, -1}, }; /* * KWorld PlusTV 340U, UB435-Q and UB435-Q V2 (ATSC) GPIO map: * EM_GPIO_0 - currently unknown * EM_GPIO_1 - LED disable/enable (1 = off, 0 = on) * EM_GPIO_2 - currently unknown * EM_GPIO_3 - currently unknown * EM_GPIO_4 - TDA18271HD/C1 tuner (1 = active, 0 = in reset) * EM_GPIO_5 - LGDT3304 ATSC/QAM demod (1 = active, 0 = in reset) * EM_GPIO_6 - currently unknown * EM_GPIO_7 - currently unknown */ static const struct em28xx_reg_seq kworld_a340_digital[] = { {EM2820_R08_GPIO_CTRL, 0x6d, ~EM_GPIO_4, 10}, { -1, -1, -1, -1}, }; static const struct em28xx_reg_seq kworld_ub435q_v3_digital[] = { {EM2874_R80_GPIO_P0_CTRL, 0xff, 0xff, 100}, {EM2874_R80_GPIO_P0_CTRL, 0xfe, 0xff, 100}, {EM2874_R80_GPIO_P0_CTRL, 0xbe, 0xff, 100}, {EM2874_R80_GPIO_P0_CTRL, 0xfe, 0xff, 100}, { -1, -1, -1, -1}, }; /* Pinnacle Hybrid Pro eb1a:2881 */ static const struct em28xx_reg_seq pinnacle_hybrid_pro_analog[] = { {EM2820_R08_GPIO_CTRL, 0xfd, ~EM_GPIO_4, 10}, { -1, -1, -1, -1}, }; static const struct em28xx_reg_seq pinnacle_hybrid_pro_digital[] = { {EM2820_R08_GPIO_CTRL, 0x6e, ~EM_GPIO_4, 10}, {EM2880_R04_GPO, 0x04, 0xff, 100},/* zl10353 reset */ {EM2880_R04_GPO, 0x0c, 0xff, 1}, { -1, -1, -1, -1}, }; static const struct em28xx_reg_seq terratec_cinergy_USB_XS_FR_analog[] = { {EM2820_R08_GPIO_CTRL, 0x6d, ~EM_GPIO_4, 10}, {EM2880_R04_GPO, 0x00, 0xff, 10}, { -1, -1, -1, -1}, }; static const struct em28xx_reg_seq terratec_cinergy_USB_XS_FR_digital[] = { {EM2820_R08_GPIO_CTRL, 0x6e, ~EM_GPIO_4, 10}, {EM2880_R04_GPO, 0x08, 0xff, 10}, { -1, -1, -1, -1}, }; /* * PCTV HD Mini (80e) GPIOs * 0-5: not used * 6: demod reset, active low * 7: LED on, active high */ static const struct em28xx_reg_seq em2874_pctv_80e_digital[] = { {EM28XX_R06_I2C_CLK, 0x45, 0xff, 10}, /*400 KHz*/ {EM2874_R80_GPIO_P0_CTRL, 0x00, 0xff, 100},/*Demod reset*/ {EM2874_R80_GPIO_P0_CTRL, 0x40, 0xff, 10}, { -1, -1, -1, -1}, };
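/*
 * Reading em2874_pctv_80e_digital[] above against its GPIO map: with
 * GPIO 6 as an active-low demod reset and GPIO 7 driving the LED,
 * writing 0x00 pulls every line low (reset asserted, LED off), and the
 * follow-up value releases the reset by raising bit 6 only:
 *
 *   0x40 == BIT(6)  ->  GPIO 6 = 1 (demod out of reset), GPIO 7 = 0
 */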
/* * eb1a:2868 Reddo DVB-C USB TV Box * GPIO4 - CU1216L NIM * Other GPIOs seem to be don't-care. */ static const struct em28xx_reg_seq reddo_dvb_c_usb_box[] = { {EM2820_R08_GPIO_CTRL, 0xfe, 0xff, 10}, {EM2820_R08_GPIO_CTRL, 0xde, 0xff, 10}, {EM2820_R08_GPIO_CTRL, 0xfe, 0xff, 10}, {EM2820_R08_GPIO_CTRL, 0xff, 0xff, 10}, {EM2820_R08_GPIO_CTRL, 0x7f, 0xff, 10}, {EM2820_R08_GPIO_CTRL, 0x6f, 0xff, 10}, {EM2820_R08_GPIO_CTRL, 0xff, 0xff, 10}, { -1, -1, -1, -1}, }; /* Callback for most boards */ static const struct em28xx_reg_seq default_tuner_gpio[] = { {EM2820_R08_GPIO_CTRL, EM_GPIO_4, EM_GPIO_4, 10}, {EM2820_R08_GPIO_CTRL, 0, EM_GPIO_4, 10}, {EM2820_R08_GPIO_CTRL, EM_GPIO_4, EM_GPIO_4, 10}, { -1, -1, -1, -1}, }; /* Mute/unmute */ static const struct em28xx_reg_seq compro_unmute_tv_gpio[] = { {EM2820_R08_GPIO_CTRL, 5, 7, 10}, { -1, -1, -1, -1}, }; static const struct em28xx_reg_seq compro_unmute_svid_gpio[] = { {EM2820_R08_GPIO_CTRL, 4, 7, 10}, { -1, -1, -1, -1}, }; static const struct em28xx_reg_seq compro_mute_gpio[] = { {EM2820_R08_GPIO_CTRL, 6, 7, 10}, { -1, -1, -1, -1}, }; /* Terratec AV350 */ static const struct em28xx_reg_seq terratec_av350_mute_gpio[] = { {EM2820_R08_GPIO_CTRL, 0xff, 0x7f, 10}, { -1, -1, -1, -1}, }; static const struct em28xx_reg_seq terratec_av350_unmute_gpio[] = { {EM2820_R08_GPIO_CTRL, 0xff, 0xff, 10}, { -1, -1, -1, -1}, }; static const struct em28xx_reg_seq silvercrest_reg_seq[] = { {EM2820_R08_GPIO_CTRL, 0xff, 0xff, 10}, {EM2820_R08_GPIO_CTRL, 0x01, 0xf7, 10}, { -1, -1, -1, -1}, }; static const struct em28xx_reg_seq vc211a_enable[] = { {EM2820_R08_GPIO_CTRL, 0xff, 0x07, 10}, {EM2820_R08_GPIO_CTRL, 0xff, 0x0f, 10}, {EM2820_R08_GPIO_CTRL, 0xff, 0x0b, 10}, { -1, -1, -1, -1}, }; static const struct em28xx_reg_seq dikom_dk300_digital[] = { {EM2820_R08_GPIO_CTRL, 0x6e, ~EM_GPIO_4, 10}, {EM2880_R04_GPO, 0x08, 0xff, 10}, { -1, -1, -1, -1}, }; /* Reset for most [digital] boards */ static const struct em28xx_reg_seq leadership_digital[] = { {EM2874_R80_GPIO_P0_CTRL, 0x70, 0xff, 10}, { -1, -1, -1, -1}, }; static const struct em28xx_reg_seq leadership_reset[] = { {EM2874_R80_GPIO_P0_CTRL, 0xf0, 0xff, 10}, {EM2874_R80_GPIO_P0_CTRL, 0xb0, 0xff, 10}, {EM2874_R80_GPIO_P0_CTRL, 0xf0, 0xff, 10}, { -1, -1, -1, -1}, }; /* * 2013:024f PCTV nanoStick T2 290e * GPIO_6 - demod reset * GPIO_7 - LED */ static const struct em28xx_reg_seq pctv_290e[] = { {EM2874_R80_GPIO_P0_CTRL, 0x00, 0xff, 80}, {EM2874_R80_GPIO_P0_CTRL, 0x40, 0xff, 80}, /* GPIO_6 = 1 */ {EM2874_R80_GPIO_P0_CTRL, 0xc0, 0xff, 80}, /* GPIO_7 = 1 */ { -1, -1, -1, -1}, }; #if 0 static const struct em28xx_reg_seq terratec_h5_gpio[] = { {EM2820_R08_GPIO_CTRL, 0xff, 0xff, 10}, {EM2874_R80_GPIO_P0_CTRL, 0xf6, 0xff, 100}, {EM2874_R80_GPIO_P0_CTRL, 0xf2, 0xff, 50}, {EM2874_R80_GPIO_P0_CTRL, 0xf6, 0xff, 50}, { -1, -1, -1, -1}, }; static const struct em28xx_reg_seq terratec_h5_digital[] = { {EM2874_R80_GPIO_P0_CTRL, 0xf6, 0xff, 10}, {EM2874_R80_GPIO_P0_CTRL, 0xe6, 0xff, 100}, {EM2874_R80_GPIO_P0_CTRL, 0xa6, 0xff, 10}, { -1, -1, -1, -1}, }; #endif /* * 2013:024f PCTV DVB-S2 Stick 460e * GPIO_0 - POWER_ON * GPIO_1 - BOOST * GPIO_2 - VUV_LNB (red LED) * GPIO_3 - EXT_12V * GPIO_4 - INT_DEM (DEMOD GPIO_0) * GPIO_5 - INT_LNB * GPIO_6 - RESET_DEM * GPIO_7 - LED (green LED) */ static const struct em28xx_reg_seq pctv_460e[] = { {EM2874_R80_GPIO_P0_CTRL, 0x01, 0xff, 50}, { 0x0d, 0xff, 0xff, 50}, {EM2874_R80_GPIO_P0_CTRL, 0x41, 0xff, 50}, /* GPIO_6=1 */ { 0x0d, 0x42, 0xff, 50}, {EM2874_R80_GPIO_P0_CTRL, 0x61, 0xff, 50}, /* GPIO_5=1 */ { -1, -1, -1, -1}, };
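/*
 * The pctv_460e[] bring-up above follows its GPIO legend step by step:
 * 0x01 raises GPIO_0 (POWER_ON), 0x41 additionally raises GPIO_6
 * (RESET_DEM released), and 0x61 then raises GPIO_5 (INT_LNB) as well.
 * The interleaved writes to register 0x0d target a non-GPIO register
 * and appear to be chip-level setup; their meaning is not documented
 * here.
 */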
static const struct em28xx_reg_seq c3tech_digital_duo_digital[] = { {EM2874_R80_GPIO_P0_CTRL, 0xff, 0xff, 10}, {EM2874_R80_GPIO_P0_CTRL, 0xfd, 0xff, 10}, /* xc5000 reset */ {EM2874_R80_GPIO_P0_CTRL, 0xf9, 0xff, 35}, {EM2874_R80_GPIO_P0_CTRL, 0xfd, 0xff, 10}, {EM2874_R80_GPIO_P0_CTRL, 0xff, 0xff, 10}, {EM2874_R80_GPIO_P0_CTRL, 0xfe, 0xff, 10}, {EM2874_R80_GPIO_P0_CTRL, 0xbe, 0xff, 10}, {EM2874_R80_GPIO_P0_CTRL, 0xfe, 0xff, 20}, { -1, -1, -1, -1}, }; /* * 2013:0258 PCTV DVB-S2 Stick (461e) * GPIO 0 = POWER_ON * GPIO 1 = BOOST * GPIO 2 = VUV_LNB (red LED) * GPIO 3 = #EXT_12V * GPIO 4 = INT_DEM * GPIO 5 = INT_LNB * GPIO 6 = #RESET_DEM * GPIO 7 = P07_LED (green LED) */ static const struct em28xx_reg_seq pctv_461e[] = { {EM2874_R80_GPIO_P0_CTRL, 0x7f, 0xff, 0}, {0x0d, 0xff, 0xff, 0}, {EM2874_R80_GPIO_P0_CTRL, 0x3f, 0xff, 100}, /* reset demod */ {EM2874_R80_GPIO_P0_CTRL, 0x7f, 0xff, 200}, /* reset demod */ {0x0d, 0x42, 0xff, 0}, {EM2874_R80_GPIO_P0_CTRL, 0xeb, 0xff, 0}, {EM2874_R5F_TS_ENABLE, 0x84, 0x84, 0}, /* parallel? | null discard */ { -1, -1, -1, -1}, }; #if 0 static const struct em28xx_reg_seq hauppauge_930c_gpio[] = { {EM2874_R80_GPIO_P0_CTRL, 0x6f, 0xff, 10}, {EM2874_R80_GPIO_P0_CTRL, 0x4f, 0xff, 10}, /* xc5000 reset */ {EM2874_R80_GPIO_P0_CTRL, 0x6f, 0xff, 10}, {EM2874_R80_GPIO_P0_CTRL, 0x4f, 0xff, 10}, { -1, -1, -1, -1}, }; static const struct em28xx_reg_seq hauppauge_930c_digital[] = { {EM2874_R80_GPIO_P0_CTRL, 0xf6, 0xff, 10}, {EM2874_R80_GPIO_P0_CTRL, 0xe6, 0xff, 100}, {EM2874_R80_GPIO_P0_CTRL, 0xa6, 0xff, 10}, { -1, -1, -1, -1}, }; #endif /* * 1b80:e425 MaxMedia UB425-TC * 1b80:e1cc Delock 61959 * GPIO_6 - demod reset, 0=active * GPIO_7 - LED, 0=active */ static const struct em28xx_reg_seq maxmedia_ub425_tc[] = { {EM2874_R80_GPIO_P0_CTRL, 0x83, 0xff, 100}, {EM2874_R80_GPIO_P0_CTRL, 0xc3, 0xff, 100}, /* GPIO_6 = 1 */ {EM2874_R80_GPIO_P0_CTRL, 0x43, 0xff, 0}, /* GPIO_7 = 0 */ { -1, -1, -1, -1}, }; /* * 2304:0242 PCTV QuatroStick (510e) * GPIO_2: decoder reset, 0=active * GPIO_4: decoder suspend, 0=active * GPIO_6: demod reset, 0=active * GPIO_7: LED, 1=active */ static const struct em28xx_reg_seq pctv_510e[] = { {EM2874_R80_GPIO_P0_CTRL, 0x10, 0xff, 100}, {EM2874_R80_GPIO_P0_CTRL, 0x14, 0xff, 100}, /* GPIO_2 = 1 */ {EM2874_R80_GPIO_P0_CTRL, 0x54, 0xff, 40}, /* GPIO_6 = 1 */ { -1, -1, -1, -1}, }; /* * 2013:0251 PCTV QuatroStick nano (520e) * GPIO_2: decoder reset, 0=active * GPIO_4: decoder suspend, 0=active * GPIO_6: demod reset, 0=active * GPIO_7: LED, 1=active */ static const struct em28xx_reg_seq pctv_520e[] = { {EM2874_R80_GPIO_P0_CTRL, 0x10, 0xff, 100}, {EM2874_R80_GPIO_P0_CTRL, 0x14, 0xff, 100}, /* GPIO_2 = 1 */ {EM2874_R80_GPIO_P0_CTRL, 0x54, 0xff, 40}, /* GPIO_6 = 1 */ {EM2874_R80_GPIO_P0_CTRL, 0xd4, 0xff, 0}, /* GPIO_7 = 1 */ { -1, -1, -1, -1}, }; /* * 1ae7:9003/9004 SpeedLink Vicious And Devine Laplace webcam * reg 0x80/0x84: * GPIO_0: capturing LED, 0=on, 1=off * GPIO_2: AV mute button, 0=pressed, 1=unpressed * GPIO_3: illumination button, 0=pressed, 1=unpressed * GPIO_6: illumination/flash LED, 0=on, 1=off * reg 0x81/0x85: * GPIO_7: snapshot button, 0=pressed, 1=unpressed */ static const struct em28xx_reg_seq speedlink_vad_laplace_reg_seq[] = { {EM2820_R08_GPIO_CTRL, 0xf7, 0xff, 10}, {EM2874_R80_GPIO_P0_CTRL, 0xff, 0xb2, 10}, { -1, -1, -1, -1}, }; static const struct em28xx_reg_seq pctv_292e[] = { {EM2874_R80_GPIO_P0_CTRL, 0xff, 0xff, 0}, {0x0d, 0xff, 0xff, 950}, {EM2874_R80_GPIO_P0_CTRL, 0xbd, 0xff, 100}, {EM2874_R80_GPIO_P0_CTRL, 0xfd, 0xff, 410}, {EM2874_R80_GPIO_P0_CTRL, 0x7d, 0xff, 300}, {EM2874_R80_GPIO_P0_CTRL, 0x7c, 0xff, 
60}, {0x0d, 0x42, 0xff, 50}, {EM2874_R5F_TS_ENABLE, 0x85, 0xff, 0}, {-1, -1, -1, -1}, }; static const struct em28xx_reg_seq terratec_t2_stick_hd[] = { {EM2874_R80_GPIO_P0_CTRL, 0xff, 0xff, 0}, {0x0d, 0xff, 0xff, 600}, {EM2874_R80_GPIO_P0_CTRL, 0xfc, 0xff, 10}, {EM2874_R80_GPIO_P0_CTRL, 0xbc, 0xff, 100}, {EM2874_R80_GPIO_P0_CTRL, 0xfc, 0xff, 100}, {EM2874_R80_GPIO_P0_CTRL, 0x00, 0xff, 300}, {EM2874_R80_GPIO_P0_CTRL, 0xf8, 0xff, 100}, {EM2874_R80_GPIO_P0_CTRL, 0xfc, 0xff, 300}, {0x0d, 0x42, 0xff, 1000}, {EM2874_R5F_TS_ENABLE, 0x85, 0xff, 0}, {-1, -1, -1, -1}, }; static const struct em28xx_reg_seq plex_px_bcud[] = { {EM2874_R80_GPIO_P0_CTRL, 0xff, 0xff, 0}, {0x0d, 0xff, 0xff, 0}, {EM2874_R50_IR_CONFIG, 0x01, 0xff, 0}, {EM28XX_R06_I2C_CLK, 0x40, 0xff, 0}, {EM2874_R80_GPIO_P0_CTRL, 0xfd, 0xff, 100}, {EM28XX_R12_VINENABLE, 0x20, 0x20, 0}, {0x0d, 0x42, 0xff, 1000}, {EM2874_R80_GPIO_P0_CTRL, 0xfc, 0xff, 10}, {EM2874_R80_GPIO_P0_CTRL, 0xfd, 0xff, 10}, {0x73, 0xfd, 0xff, 100}, {-1, -1, -1, -1}, }; /* * 2040:0265 Hauppauge WinTV-dualHD DVB Isoc * 2040:8265 Hauppauge WinTV-dualHD DVB Bulk * 2040:026d Hauppauge WinTV-dualHD ATSC/QAM Isoc * 2040:826d Hauppauge WinTV-dualHD ATSC/QAM Bulk * reg 0x80/0x84: * GPIO_0: Yellow LED tuner 1, 0=on, 1=off * GPIO_1: Green LED tuner 1, 0=on, 1=off * GPIO_2: Yellow LED tuner 2, 0=on, 1=off * GPIO_3: Green LED tuner 2, 0=on, 1=off * GPIO_5: Reset #2, 0=active * GPIO_6: Reset #1, 0=active */ static const struct em28xx_reg_seq hauppauge_dualhd_dvb[] = { {EM2874_R80_GPIO_P0_CTRL, 0xff, 0xff, 0}, {0x0d, 0xff, 0xff, 200}, {0x50, 0x04, 0xff, 300}, {EM2874_R80_GPIO_P0_CTRL, 0xbf, 0xff, 100}, /* demod 1 reset */ {EM2874_R80_GPIO_P0_CTRL, 0xff, 0xff, 100}, {EM2874_R80_GPIO_P0_CTRL, 0xdf, 0xff, 100}, /* demod 2 reset */ {EM2874_R80_GPIO_P0_CTRL, 0xff, 0xff, 100}, {EM2874_R5F_TS_ENABLE, 0x00, 0xff, 50}, /* disable TS filters */ {EM2874_R5D_TS1_PKT_SIZE, 0x05, 0xff, 50}, {EM2874_R5E_TS2_PKT_SIZE, 0x05, 0xff, 50}, {-1, -1, -1, -1}, }; /* Hauppauge USB QuadHD */ static struct em28xx_reg_seq hauppauge_usb_quadhd_atsc_reg_seq[] = { {EM2874_R80_GPIO_P0_CTRL, 0xff, 0xff, 0}, {0x0d, 0xff, 0xff, 200}, {0x50, 0x04, 0xff, 300}, {EM2874_R80_GPIO_P0_CTRL, 0xb0, 0xf0, 100}, /* demod 1 reset */ {EM2874_R80_GPIO_P0_CTRL, 0xf0, 0xf0, 100}, {EM2874_R80_GPIO_P0_CTRL, 0xd0, 0xf0, 100}, /* demod 2 reset */ {EM2874_R80_GPIO_P0_CTRL, 0xf0, 0xf0, 100}, {EM2874_R5F_TS_ENABLE, 0x44, 0xff, 50}, {EM2874_R5D_TS1_PKT_SIZE, 0x05, 0xff, 50}, {EM2874_R5E_TS2_PKT_SIZE, 0x05, 0xff, 50}, {-1, -1, -1, -1}, }; /* * MyGica USB TV Box * GPIO_1,0: 00=Composite audio * 01=Tuner audio * 10=Mute audio * 11=FM radio? (if equipped) * GPIO_2-6: Unused * GPIO_7: ?? 
*/ static const struct em28xx_reg_seq mygica_utv3_composite_audio_gpio[] = { {EM2820_R08_GPIO_CTRL, 0xfc, 0xff, 0}, { -1, -1, -1, -1}, }; static const struct em28xx_reg_seq mygica_utv3_tuner_audio_gpio[] = { {EM2820_R08_GPIO_CTRL, 0xfd, 0xff, 0}, { -1, -1, -1, -1}, }; static const struct em28xx_reg_seq mygica_utv3_suspend_gpio[] = { {EM2820_R08_GPIO_CTRL, 0xfe, 0xff, 0}, { -1, -1, -1, -1}, }; /* * Button definitions */ static const struct em28xx_button std_snapshot_button[] = { { .role = EM28XX_BUTTON_SNAPSHOT, .reg_r = EM28XX_R0C_USBSUSP, .reg_clearing = EM28XX_R0C_USBSUSP, .mask = EM28XX_R0C_USBSUSP_SNAPSHOT, .inverted = 0, }, {-1, 0, 0, 0, 0}, }; static const struct em28xx_button speedlink_vad_laplace_buttons[] = { { .role = EM28XX_BUTTON_SNAPSHOT, .reg_r = EM2874_R85_GPIO_P1_STATE, .mask = 0x80, .inverted = 1, }, { .role = EM28XX_BUTTON_ILLUMINATION, .reg_r = EM2874_R84_GPIO_P0_STATE, .mask = 0x08, .inverted = 1, }, {-1, 0, 0, 0, 0}, }; /* * LED definitions */ static struct em28xx_led speedlink_vad_laplace_leds[] = { { .role = EM28XX_LED_ANALOG_CAPTURING, .gpio_reg = EM2874_R80_GPIO_P0_CTRL, .gpio_mask = 0x01, .inverted = 1, }, { .role = EM28XX_LED_ILLUMINATION, .gpio_reg = EM2874_R80_GPIO_P0_CTRL, .gpio_mask = 0x40, .inverted = 1, }, {-1, 0, 0, 0}, }; static struct em28xx_led kworld_ub435q_v3_leds[] = { { .role = EM28XX_LED_DIGITAL_CAPTURING, .gpio_reg = EM2874_R80_GPIO_P0_CTRL, .gpio_mask = 0x80, .inverted = 1, }, {-1, 0, 0, 0}, }; static struct em28xx_led pctv_80e_leds[] = { { .role = EM28XX_LED_DIGITAL_CAPTURING, .gpio_reg = EM2874_R80_GPIO_P0_CTRL, .gpio_mask = 0x80, .inverted = 0, }, {-1, 0, 0, 0}, }; static struct em28xx_led terratec_grabby_leds[] = { { .role = EM28XX_LED_ANALOG_CAPTURING, .gpio_reg = EM2820_R08_GPIO_CTRL, .gpio_mask = EM_GPIO_3, .inverted = 1, }, {-1, 0, 0, 0}, }; static struct em28xx_led hauppauge_dualhd_leds[] = { { .role = EM28XX_LED_DIGITAL_CAPTURING, .gpio_reg = EM2874_R80_GPIO_P0_CTRL, .gpio_mask = EM_GPIO_1, .inverted = 1, }, { .role = EM28XX_LED_DIGITAL_CAPTURING_TS2, .gpio_reg = EM2874_R80_GPIO_P0_CTRL, .gpio_mask = EM_GPIO_3, .inverted = 1, }, {-1, 0, 0, 0}, }; static struct em28xx_led hauppauge_usb_quadhd_leds[] = { { .role = EM28XX_LED_DIGITAL_CAPTURING, .gpio_reg = EM2874_R80_GPIO_P0_CTRL, .gpio_mask = EM_GPIO_2, .inverted = 1, }, { .role = EM28XX_LED_DIGITAL_CAPTURING_TS2, .gpio_reg = EM2874_R80_GPIO_P0_CTRL, .gpio_mask = EM_GPIO_0, .inverted = 1, }, {-1, 0, 0, 0}, }; /* * Board definitions */ const struct em28xx_board em28xx_boards[] = { [EM2750_BOARD_UNKNOWN] = { .name = "EM2710/EM2750/EM2751 webcam grabber", .xclk = EM28XX_XCLK_FREQUENCY_20MHZ, .tuner_type = TUNER_ABSENT, .is_webcam = 1, .input = { { .type = EM28XX_VMUX_COMPOSITE, .vmux = 0, .amux = EM28XX_AMUX_VIDEO, .gpio = silvercrest_reg_seq, } }, }, [EM2800_BOARD_UNKNOWN] = { .name = "Unknown EM2800 video grabber", .is_em2800 = 1, .tda9887_conf = TDA9887_PRESENT, .decoder = EM28XX_SAA711X, .tuner_type = TUNER_ABSENT, .input = { { .type = EM28XX_VMUX_COMPOSITE, .vmux = SAA7115_COMPOSITE0, .amux = EM28XX_AMUX_LINE_IN, }, { .type = EM28XX_VMUX_SVIDEO, .vmux = SAA7115_SVIDEO3, .amux = EM28XX_AMUX_LINE_IN, } }, }, [EM2820_BOARD_UNKNOWN] = { .name = "Unknown EM2750/28xx video grabber", .tuner_type = TUNER_ABSENT, .is_webcam = 1, /* To enable sensor probe */ }, [EM2882_BOARD_ZOLID_HYBRID_TV_STICK] = { .name = ":ZOLID HYBRID TV STICK", .tuner_type = TUNER_XC2028, .tuner_gpio = zolid_tuner, .decoder = EM28XX_TVP5150, .xclk = EM28XX_XCLK_FREQUENCY_12MHZ, .mts_firmware = 1, .has_dvb = 1, .dvb_gpio = 
zolid_digital, }, [EM2750_BOARD_DLCW_130] = { /* Beijing Huaqi Information Digital Technology Co., Ltd */ .name = "Huaqi DLCW-130", .valid = EM28XX_BOARD_NOT_VALIDATED, .xclk = EM28XX_XCLK_FREQUENCY_48MHZ, .tuner_type = TUNER_ABSENT, .is_webcam = 1, .input = { { .type = EM28XX_VMUX_COMPOSITE, .vmux = 0, .amux = EM28XX_AMUX_VIDEO, } }, }, [EM2820_BOARD_KWORLD_PVRTV2800RF] = { .name = "Kworld PVR TV 2800 RF", .tuner_type = TUNER_TEMIC_PAL, .tda9887_conf = TDA9887_PRESENT, .decoder = EM28XX_SAA711X, .input = { { .type = EM28XX_VMUX_COMPOSITE, .vmux = SAA7115_COMPOSITE0, .amux = EM28XX_AMUX_LINE_IN, }, { .type = EM28XX_VMUX_SVIDEO, .vmux = SAA7115_SVIDEO3, .amux = EM28XX_AMUX_LINE_IN, } }, }, [EM2820_BOARD_GADMEI_TVR200] = { .name = "Gadmei TVR200", .tuner_type = TUNER_LG_PAL_NEW_TAPC, .tda9887_conf = TDA9887_PRESENT, .decoder = EM28XX_SAA711X, .input = { { .type = EM28XX_VMUX_TELEVISION, .vmux = SAA7115_COMPOSITE2, .amux = EM28XX_AMUX_LINE_IN, }, { .type = EM28XX_VMUX_COMPOSITE, .vmux = SAA7115_COMPOSITE0, .amux = EM28XX_AMUX_LINE_IN, }, { .type = EM28XX_VMUX_SVIDEO, .vmux = SAA7115_SVIDEO3, .amux = EM28XX_AMUX_LINE_IN, } }, }, [EM2820_BOARD_TERRATEC_CINERGY_250] = { .name = "Terratec Cinergy 250 USB", .tuner_type = TUNER_LG_PAL_NEW_TAPC, .has_ir_i2c = 1, .tda9887_conf = TDA9887_PRESENT, .decoder = EM28XX_SAA711X, .input = { { .type = EM28XX_VMUX_TELEVISION, .vmux = SAA7115_COMPOSITE2, .amux = EM28XX_AMUX_VIDEO, }, { .type = EM28XX_VMUX_COMPOSITE, .vmux = SAA7115_COMPOSITE0, .amux = EM28XX_AMUX_LINE_IN, }, { .type = EM28XX_VMUX_SVIDEO, .vmux = SAA7115_SVIDEO3, .amux = EM28XX_AMUX_LINE_IN, } }, }, [EM2820_BOARD_PINNACLE_USB_2] = { .name = "Pinnacle PCTV USB 2", .tuner_type = TUNER_LG_PAL_NEW_TAPC, .has_ir_i2c = 1, .tda9887_conf = TDA9887_PRESENT, .decoder = EM28XX_SAA711X, .input = { { .type = EM28XX_VMUX_TELEVISION, .vmux = SAA7115_COMPOSITE2, .amux = EM28XX_AMUX_VIDEO, }, { .type = EM28XX_VMUX_COMPOSITE, .vmux = SAA7115_COMPOSITE0, .amux = EM28XX_AMUX_LINE_IN, }, { .type = EM28XX_VMUX_SVIDEO, .vmux = SAA7115_SVIDEO3, .amux = EM28XX_AMUX_LINE_IN, } }, }, [EM2820_BOARD_HAUPPAUGE_WINTV_USB_2] = { .name = "Hauppauge WinTV USB 2", .tuner_type = TUNER_PHILIPS_FM1236_MK3, .tda9887_conf = TDA9887_PRESENT | TDA9887_PORT1_ACTIVE | TDA9887_PORT2_ACTIVE, .decoder = EM28XX_TVP5150, .has_msp34xx = 1, .has_ir_i2c = 1, .input = { { .type = EM28XX_VMUX_TELEVISION, .vmux = TVP5150_COMPOSITE0, .amux = MSP_INPUT_DEFAULT, }, { .type = EM28XX_VMUX_SVIDEO, .vmux = TVP5150_SVIDEO, .amux = MSP_INPUT(MSP_IN_SCART1, MSP_IN_TUNER1, MSP_DSP_IN_SCART, MSP_DSP_IN_SCART), } }, }, [EM2820_BOARD_DLINK_USB_TV] = { .name = "D-Link DUB-T210 TV Tuner", .valid = EM28XX_BOARD_NOT_VALIDATED, .tuner_type = TUNER_LG_PAL_NEW_TAPC, .tda9887_conf = TDA9887_PRESENT, .decoder = EM28XX_SAA711X, .input = { { .type = EM28XX_VMUX_TELEVISION, .vmux = SAA7115_COMPOSITE2, .amux = EM28XX_AMUX_LINE_IN, }, { .type = EM28XX_VMUX_COMPOSITE, .vmux = SAA7115_COMPOSITE0, .amux = EM28XX_AMUX_LINE_IN, }, { .type = EM28XX_VMUX_SVIDEO, .vmux = SAA7115_SVIDEO3, .amux = EM28XX_AMUX_LINE_IN, } }, }, [EM2820_BOARD_HERCULES_SMART_TV_USB2] = { .name = "Hercules Smart TV USB 2.0", .valid = EM28XX_BOARD_NOT_VALIDATED, .tuner_type = TUNER_LG_PAL_NEW_TAPC, .tda9887_conf = TDA9887_PRESENT, .decoder = EM28XX_SAA711X, .input = { { .type = EM28XX_VMUX_TELEVISION, .vmux = SAA7115_COMPOSITE2, .amux = EM28XX_AMUX_LINE_IN, }, { .type = EM28XX_VMUX_COMPOSITE, .vmux = SAA7115_COMPOSITE0, .amux = EM28XX_AMUX_LINE_IN, }, { .type = EM28XX_VMUX_SVIDEO, .vmux = SAA7115_SVIDEO3, 
.amux = EM28XX_AMUX_LINE_IN, } }, }, [EM2820_BOARD_PINNACLE_USB_2_FM1216ME] = { .name = "Pinnacle PCTV USB 2 (Philips FM1216ME)", .valid = EM28XX_BOARD_NOT_VALIDATED, .tuner_type = TUNER_PHILIPS_FM1216ME_MK3, .tda9887_conf = TDA9887_PRESENT, .decoder = EM28XX_SAA711X, .input = { { .type = EM28XX_VMUX_TELEVISION, .vmux = SAA7115_COMPOSITE2, .amux = EM28XX_AMUX_VIDEO, }, { .type = EM28XX_VMUX_COMPOSITE, .vmux = SAA7115_COMPOSITE0, .amux = EM28XX_AMUX_LINE_IN, }, { .type = EM28XX_VMUX_SVIDEO, .vmux = SAA7115_SVIDEO3, .amux = EM28XX_AMUX_LINE_IN, } }, }, [EM2820_BOARD_GADMEI_UTV310] = { .name = "Gadmei UTV310", .valid = EM28XX_BOARD_NOT_VALIDATED, .tuner_type = TUNER_TNF_5335MF, .tda9887_conf = TDA9887_PRESENT, .decoder = EM28XX_SAA711X, .input = { { .type = EM28XX_VMUX_TELEVISION, .vmux = SAA7115_COMPOSITE1, .amux = EM28XX_AMUX_LINE_IN, }, { .type = EM28XX_VMUX_COMPOSITE, .vmux = SAA7115_COMPOSITE0, .amux = EM28XX_AMUX_LINE_IN, }, { .type = EM28XX_VMUX_SVIDEO, .vmux = SAA7115_SVIDEO3, .amux = EM28XX_AMUX_LINE_IN, } }, }, [EM2820_BOARD_LEADTEK_WINFAST_USBII_DELUXE] = { .name = "Leadtek Winfast USB II Deluxe", .valid = EM28XX_BOARD_NOT_VALIDATED, .tuner_type = TUNER_PHILIPS_FM1216ME_MK3, .has_ir_i2c = 1, .tvaudio_addr = 0x58, .tda9887_conf = TDA9887_PRESENT | TDA9887_PORT2_ACTIVE | TDA9887_QSS, .decoder = EM28XX_SAA711X, .adecoder = EM28XX_TVAUDIO, .input = { { .type = EM28XX_VMUX_TELEVISION, .vmux = SAA7115_COMPOSITE4, .amux = EM28XX_AMUX_AUX, }, { .type = EM28XX_VMUX_COMPOSITE, .vmux = SAA7115_COMPOSITE5, .amux = EM28XX_AMUX_LINE_IN, }, { .type = EM28XX_VMUX_SVIDEO, .vmux = SAA7115_SVIDEO3, .amux = EM28XX_AMUX_LINE_IN, } }, .radio = { .type = EM28XX_RADIO, .amux = EM28XX_AMUX_AUX, } }, [EM2820_BOARD_VIDEOLOGY_20K14XUSB] = { .name = "Videology 20K14XUSB USB2.0", .valid = EM28XX_BOARD_NOT_VALIDATED, .tuner_type = TUNER_ABSENT, .is_webcam = 1, .input = { { .type = EM28XX_VMUX_COMPOSITE, .vmux = 0, .amux = EM28XX_AMUX_VIDEO, } }, }, [EM2820_BOARD_SILVERCREST_WEBCAM] = { .name = "Silvercrest Webcam 1.3mpix", .tuner_type = TUNER_ABSENT, .is_webcam = 1, .input = { { .type = EM28XX_VMUX_COMPOSITE, .vmux = 0, .amux = EM28XX_AMUX_VIDEO, .gpio = silvercrest_reg_seq, } }, }, [EM2821_BOARD_SUPERCOMP_USB_2] = { .name = "Supercomp USB 2.0 TV", .valid = EM28XX_BOARD_NOT_VALIDATED, .tuner_type = TUNER_PHILIPS_FM1236_MK3, .tda9887_conf = TDA9887_PRESENT | TDA9887_PORT1_ACTIVE | TDA9887_PORT2_ACTIVE, .decoder = EM28XX_SAA711X, .input = { { .type = EM28XX_VMUX_TELEVISION, .vmux = SAA7115_COMPOSITE2, .amux = EM28XX_AMUX_LINE_IN, }, { .type = EM28XX_VMUX_COMPOSITE, .vmux = SAA7115_COMPOSITE0, .amux = EM28XX_AMUX_VIDEO, }, { .type = EM28XX_VMUX_SVIDEO, .vmux = SAA7115_SVIDEO3, .amux = EM28XX_AMUX_LINE_IN, } }, }, [EM2821_BOARD_USBGEAR_VD204] = { .name = "Usbgear VD204v9", .valid = EM28XX_BOARD_NOT_VALIDATED, .tuner_type = TUNER_ABSENT, /* Capture only device */ .decoder = EM28XX_SAA711X, .input = { { .type = EM28XX_VMUX_COMPOSITE, .vmux = SAA7115_COMPOSITE0, .amux = EM28XX_AMUX_LINE_IN, }, { .type = EM28XX_VMUX_SVIDEO, .vmux = SAA7115_SVIDEO3, .amux = EM28XX_AMUX_LINE_IN, } }, }, [EM2860_BOARD_NETGMBH_CAM] = { /* Beijing Huaqi Information Digital Technology Co., Ltd */ .name = "NetGMBH Cam", .valid = EM28XX_BOARD_NOT_VALIDATED, .tuner_type = TUNER_ABSENT, .is_webcam = 1, .input = { { .type = EM28XX_VMUX_COMPOSITE, .vmux = 0, .amux = EM28XX_AMUX_VIDEO, } }, }, [EM2860_BOARD_TYPHOON_DVD_MAKER] = { .name = "Typhoon DVD Maker", .decoder = EM28XX_SAA711X, .tuner_type = TUNER_ABSENT, /* Capture only device */ .input = { 
{ .type = EM28XX_VMUX_COMPOSITE, .vmux = SAA7115_COMPOSITE0, .amux = EM28XX_AMUX_LINE_IN, }, { .type = EM28XX_VMUX_SVIDEO, .vmux = SAA7115_SVIDEO3, .amux = EM28XX_AMUX_LINE_IN, } }, }, [EM2860_BOARD_GADMEI_UTV330] = { .name = "Gadmei UTV330", .valid = EM28XX_BOARD_NOT_VALIDATED, .tuner_type = TUNER_TNF_5335MF, .tda9887_conf = TDA9887_PRESENT, .decoder = EM28XX_SAA711X, .input = { { .type = EM28XX_VMUX_TELEVISION, .vmux = SAA7115_COMPOSITE2, .amux = EM28XX_AMUX_VIDEO, }, { .type = EM28XX_VMUX_COMPOSITE, .vmux = SAA7115_COMPOSITE0, .amux = EM28XX_AMUX_LINE_IN, }, { .type = EM28XX_VMUX_SVIDEO, .vmux = SAA7115_SVIDEO3, .amux = EM28XX_AMUX_LINE_IN, } }, }, [EM2861_BOARD_GADMEI_UTV330PLUS] = { .name = "Gadmei UTV330+", .tuner_type = TUNER_TNF_5335MF, .tda9887_conf = TDA9887_PRESENT, .ir_codes = RC_MAP_GADMEI_RM008Z, .decoder = EM28XX_SAA711X, .xclk = EM28XX_XCLK_FREQUENCY_12MHZ, .input = { { .type = EM28XX_VMUX_TELEVISION, .vmux = SAA7115_COMPOSITE2, .amux = EM28XX_AMUX_VIDEO, }, { .type = EM28XX_VMUX_COMPOSITE, .vmux = SAA7115_COMPOSITE0, .amux = EM28XX_AMUX_LINE_IN, }, { .type = EM28XX_VMUX_SVIDEO, .vmux = SAA7115_SVIDEO3, .amux = EM28XX_AMUX_LINE_IN, } }, }, [EM2860_BOARD_TERRATEC_HYBRID_XS] = { .name = "Terratec Cinergy A Hybrid XS", .valid = EM28XX_BOARD_NOT_VALIDATED, .tuner_type = TUNER_XC2028, .tuner_gpio = default_tuner_gpio, .decoder = EM28XX_TVP5150, .input = { { .type = EM28XX_VMUX_TELEVISION, .vmux = TVP5150_COMPOSITE0, .amux = EM28XX_AMUX_VIDEO, .gpio = hauppauge_wintv_hvr_900_analog, }, { .type = EM28XX_VMUX_COMPOSITE, .vmux = TVP5150_COMPOSITE1, .amux = EM28XX_AMUX_LINE_IN, .gpio = hauppauge_wintv_hvr_900_analog, }, { .type = EM28XX_VMUX_SVIDEO, .vmux = TVP5150_SVIDEO, .amux = EM28XX_AMUX_LINE_IN, .gpio = hauppauge_wintv_hvr_900_analog, } }, }, [EM2861_BOARD_KWORLD_PVRTV_300U] = { .name = "KWorld PVRTV 300U", .valid = EM28XX_BOARD_NOT_VALIDATED, .tuner_type = TUNER_XC2028, .tuner_gpio = default_tuner_gpio, .decoder = EM28XX_TVP5150, .input = { { .type = EM28XX_VMUX_TELEVISION, .vmux = TVP5150_COMPOSITE0, .amux = EM28XX_AMUX_VIDEO, }, { .type = EM28XX_VMUX_COMPOSITE, .vmux = TVP5150_COMPOSITE1, .amux = EM28XX_AMUX_LINE_IN, }, { .type = EM28XX_VMUX_SVIDEO, .vmux = TVP5150_SVIDEO, .amux = EM28XX_AMUX_LINE_IN, } }, }, [EM2861_BOARD_YAKUMO_MOVIE_MIXER] = { .name = "Yakumo MovieMixer", .tuner_type = TUNER_ABSENT, /* Capture only device */ .decoder = EM28XX_TVP5150, .input = { { .type = EM28XX_VMUX_TELEVISION, .vmux = TVP5150_COMPOSITE0, .amux = EM28XX_AMUX_VIDEO, }, { .type = EM28XX_VMUX_COMPOSITE, .vmux = TVP5150_COMPOSITE1, .amux = EM28XX_AMUX_LINE_IN, }, { .type = EM28XX_VMUX_SVIDEO, .vmux = TVP5150_SVIDEO, .amux = EM28XX_AMUX_LINE_IN, } }, }, [EM2860_BOARD_TVP5150_REFERENCE_DESIGN] = { .name = "EM2860/TVP5150 Reference Design", .tuner_type = TUNER_ABSENT, /* Capture only device */ .decoder = EM28XX_TVP5150, .input = { { .type = EM28XX_VMUX_COMPOSITE, .vmux = TVP5150_COMPOSITE1, .amux = EM28XX_AMUX_LINE_IN, }, { .type = EM28XX_VMUX_SVIDEO, .vmux = TVP5150_SVIDEO, .amux = EM28XX_AMUX_LINE_IN, } }, }, [EM2861_BOARD_PLEXTOR_PX_TV100U] = { .name = "Plextor ConvertX PX-TV100U", .tuner_type = TUNER_TNF_5335MF, .xclk = EM28XX_XCLK_I2S_MSB_TIMING | EM28XX_XCLK_FREQUENCY_12MHZ, .tda9887_conf = TDA9887_PRESENT, .decoder = EM28XX_TVP5150, .has_msp34xx = 1, .input = { { .type = EM28XX_VMUX_TELEVISION, .vmux = TVP5150_COMPOSITE0, .amux = EM28XX_AMUX_LINE_IN, .gpio = pinnacle_hybrid_pro_analog, }, { .type = EM28XX_VMUX_COMPOSITE, .vmux = TVP5150_COMPOSITE1, .amux = EM28XX_AMUX_LINE_IN, .gpio = 
pinnacle_hybrid_pro_analog, }, { .type = EM28XX_VMUX_SVIDEO, .vmux = TVP5150_SVIDEO, .amux = EM28XX_AMUX_LINE_IN, .gpio = pinnacle_hybrid_pro_analog, } }, }, /* Boards with em2870 are DVB-only */ [EM2870_BOARD_TERRATEC_XS] = { .name = "Terratec Cinergy T XS", .valid = EM28XX_BOARD_NOT_VALIDATED, .tuner_type = TUNER_XC2028, .tuner_gpio = default_tuner_gpio, }, [EM2870_BOARD_TERRATEC_XS_MT2060] = { .name = "Terratec Cinergy T XS (MT2060)", .xclk = EM28XX_XCLK_IR_RC5_MODE | EM28XX_XCLK_FREQUENCY_12MHZ, .i2c_speed = EM28XX_I2C_CLK_WAIT_ENABLE, .tuner_type = TUNER_ABSENT, /* MT2060 */ .has_dvb = 1, .tuner_gpio = default_tuner_gpio, }, [EM2870_BOARD_KWORLD_350U] = { .name = "Kworld 350 U DVB-T", .valid = EM28XX_BOARD_NOT_VALIDATED, .tuner_type = TUNER_XC2028, .tuner_gpio = default_tuner_gpio, }, [EM2870_BOARD_KWORLD_355U] = { .name = "Kworld 355 U DVB-T", .valid = EM28XX_BOARD_NOT_VALIDATED, .tuner_type = TUNER_ABSENT, .tuner_gpio = default_tuner_gpio, .has_dvb = 1, .dvb_gpio = default_digital, }, [EM2870_BOARD_PINNACLE_PCTV_DVB] = { .name = "Pinnacle PCTV DVB-T", .valid = EM28XX_BOARD_NOT_VALIDATED, .tuner_type = TUNER_ABSENT, /* MT2060 */ /* djh - I have serious doubts this is right... */ .xclk = EM28XX_XCLK_IR_RC5_MODE | EM28XX_XCLK_FREQUENCY_10MHZ, }, [EM2870_BOARD_COMPRO_VIDEOMATE] = { .name = "Compro, VideoMate U3", .valid = EM28XX_BOARD_NOT_VALIDATED, .tuner_type = TUNER_ABSENT, /* MT2060 */ }, [EM2880_BOARD_TERRATEC_HYBRID_XS_FR] = { .name = "Terratec Hybrid XS Secam", .has_msp34xx = 1, .tuner_type = TUNER_XC2028, .tuner_gpio = default_tuner_gpio, .decoder = EM28XX_TVP5150, .has_dvb = 1, .dvb_gpio = terratec_cinergy_USB_XS_FR_digital, .input = { { .type = EM28XX_VMUX_TELEVISION, .vmux = TVP5150_COMPOSITE0, .amux = EM28XX_AMUX_VIDEO, .gpio = terratec_cinergy_USB_XS_FR_analog, }, { .type = EM28XX_VMUX_COMPOSITE, .vmux = TVP5150_COMPOSITE1, .amux = EM28XX_AMUX_LINE_IN, .gpio = terratec_cinergy_USB_XS_FR_analog, }, { .type = EM28XX_VMUX_SVIDEO, .vmux = TVP5150_SVIDEO, .amux = EM28XX_AMUX_LINE_IN, .gpio = terratec_cinergy_USB_XS_FR_analog, } }, }, [EM2884_BOARD_TERRATEC_H5] = { .name = "Terratec Cinergy H5", .has_dvb = 1, #if 0 .tuner_type = TUNER_PHILIPS_TDA8290, .tuner_addr = 0x41, .dvb_gpio = terratec_h5_digital, /* FIXME: probably wrong */ .tuner_gpio = terratec_h5_gpio, #else .tuner_type = TUNER_ABSENT, #endif .def_i2c_bus = 1, .i2c_speed = EM28XX_I2C_CLK_WAIT_ENABLE | EM28XX_I2C_FREQ_400_KHZ, }, [EM2884_BOARD_TERRATEC_H6] = { .name = "Terratec Cinergy H6 rev. 
2", .has_dvb = 1, .ir_codes = RC_MAP_NEC_TERRATEC_CINERGY_XS, #if 0 .tuner_type = TUNER_PHILIPS_TDA8290, .tuner_addr = 0x41, .dvb_gpio = terratec_h5_digital, /* FIXME: probably wrong */ .tuner_gpio = terratec_h5_gpio, #else .tuner_type = TUNER_ABSENT, #endif .def_i2c_bus = 1, .i2c_speed = EM28XX_I2C_CLK_WAIT_ENABLE | EM28XX_I2C_FREQ_400_KHZ, }, [EM2884_BOARD_HAUPPAUGE_WINTV_HVR_930C] = { .name = "Hauppauge WinTV HVR 930C", .has_dvb = 1, #if 0 /* FIXME: Add analog support */ .tuner_type = TUNER_XC5000, .tuner_addr = 0x41, .dvb_gpio = hauppauge_930c_digital, .tuner_gpio = hauppauge_930c_gpio, #else .tuner_type = TUNER_ABSENT, #endif .ir_codes = RC_MAP_HAUPPAUGE, .def_i2c_bus = 1, .i2c_speed = EM28XX_I2C_CLK_WAIT_ENABLE | EM28XX_I2C_FREQ_400_KHZ, }, [EM2884_BOARD_C3TECH_DIGITAL_DUO] = { .name = "C3 Tech Digital Duo HDTV/SDTV USB", .has_dvb = 1, /* FIXME: Add analog support - need a saa7136 driver */ .tuner_type = TUNER_ABSENT, /* Digital-only TDA18271HD */ .ir_codes = RC_MAP_EMPTY, .def_i2c_bus = 1, .i2c_speed = EM28XX_I2C_CLK_WAIT_ENABLE, .dvb_gpio = c3tech_digital_duo_digital, }, [EM2884_BOARD_CINERGY_HTC_STICK] = { .name = "Terratec Cinergy HTC Stick", .has_dvb = 1, .ir_codes = RC_MAP_NEC_TERRATEC_CINERGY_XS, .tuner_type = TUNER_ABSENT, .def_i2c_bus = 1, .i2c_speed = EM28XX_I2C_CLK_WAIT_ENABLE | EM28XX_I2C_FREQ_400_KHZ, }, [EM2884_BOARD_ELGATO_EYETV_HYBRID_2008] = { .name = "Elgato EyeTV Hybrid 2008 INT", .has_dvb = 1, .ir_codes = RC_MAP_NEC_TERRATEC_CINERGY_XS, .tuner_type = TUNER_ABSENT, .def_i2c_bus = 1, .i2c_speed = EM28XX_I2C_CLK_WAIT_ENABLE | EM28XX_I2C_FREQ_400_KHZ, }, [EM2880_BOARD_HAUPPAUGE_WINTV_HVR_900] = { .name = "Hauppauge WinTV HVR 900", .tda9887_conf = TDA9887_PRESENT, .tuner_type = TUNER_XC2028, .tuner_gpio = default_tuner_gpio, .mts_firmware = 1, .has_dvb = 1, .dvb_gpio = hauppauge_wintv_hvr_900_digital, .ir_codes = RC_MAP_HAUPPAUGE, .decoder = EM28XX_TVP5150, .input = { { .type = EM28XX_VMUX_TELEVISION, .vmux = TVP5150_COMPOSITE0, .amux = EM28XX_AMUX_VIDEO, .gpio = hauppauge_wintv_hvr_900_analog, }, { .type = EM28XX_VMUX_COMPOSITE, .vmux = TVP5150_COMPOSITE1, .amux = EM28XX_AMUX_LINE_IN, .gpio = hauppauge_wintv_hvr_900_analog, }, { .type = EM28XX_VMUX_SVIDEO, .vmux = TVP5150_SVIDEO, .amux = EM28XX_AMUX_LINE_IN, .gpio = hauppauge_wintv_hvr_900_analog, } }, }, [EM2880_BOARD_HAUPPAUGE_WINTV_HVR_900_R2] = { .name = "Hauppauge WinTV HVR 900 (R2)", .tda9887_conf = TDA9887_PRESENT, .tuner_type = TUNER_XC2028, .tuner_gpio = default_tuner_gpio, .mts_firmware = 1, .has_dvb = 1, .dvb_gpio = hauppauge_wintv_hvr_900R2_digital, .ir_codes = RC_MAP_HAUPPAUGE, .decoder = EM28XX_TVP5150, .input = { { .type = EM28XX_VMUX_TELEVISION, .vmux = TVP5150_COMPOSITE0, .amux = EM28XX_AMUX_VIDEO, .gpio = hauppauge_wintv_hvr_900_analog, }, { .type = EM28XX_VMUX_COMPOSITE, .vmux = TVP5150_COMPOSITE1, .amux = EM28XX_AMUX_LINE_IN, .gpio = hauppauge_wintv_hvr_900_analog, }, { .type = EM28XX_VMUX_SVIDEO, .vmux = TVP5150_SVIDEO, .amux = EM28XX_AMUX_LINE_IN, .gpio = hauppauge_wintv_hvr_900_analog, } }, }, [EM2883_BOARD_HAUPPAUGE_WINTV_HVR_850] = { .name = "Hauppauge WinTV HVR 850", .tuner_type = TUNER_XC2028, .tuner_gpio = default_tuner_gpio, .mts_firmware = 1, .has_dvb = 1, .dvb_gpio = hauppauge_wintv_hvr_900_digital, .ir_codes = RC_MAP_HAUPPAUGE, .decoder = EM28XX_TVP5150, .input = { { .type = EM28XX_VMUX_TELEVISION, .vmux = TVP5150_COMPOSITE0, .amux = EM28XX_AMUX_VIDEO, .gpio = hauppauge_wintv_hvr_900_analog, }, { .type = EM28XX_VMUX_COMPOSITE, .vmux = TVP5150_COMPOSITE1, .amux = EM28XX_AMUX_LINE_IN, 
.gpio = hauppauge_wintv_hvr_900_analog, }, { .type = EM28XX_VMUX_SVIDEO, .vmux = TVP5150_SVIDEO, .amux = EM28XX_AMUX_LINE_IN, .gpio = hauppauge_wintv_hvr_900_analog, } }, }, [EM2883_BOARD_HAUPPAUGE_WINTV_HVR_950] = { .name = "Hauppauge WinTV HVR 950", .tuner_type = TUNER_XC2028, .tuner_gpio = default_tuner_gpio, .mts_firmware = 1, .has_dvb = 1, .dvb_gpio = hauppauge_wintv_hvr_900_digital, .ir_codes = RC_MAP_HAUPPAUGE, .decoder = EM28XX_TVP5150, .input = { { .type = EM28XX_VMUX_TELEVISION, .vmux = TVP5150_COMPOSITE0, .amux = EM28XX_AMUX_VIDEO, .gpio = hauppauge_wintv_hvr_900_analog, }, { .type = EM28XX_VMUX_COMPOSITE, .vmux = TVP5150_COMPOSITE1, .amux = EM28XX_AMUX_LINE_IN, .gpio = hauppauge_wintv_hvr_900_analog, }, { .type = EM28XX_VMUX_SVIDEO, .vmux = TVP5150_SVIDEO, .amux = EM28XX_AMUX_LINE_IN, .gpio = hauppauge_wintv_hvr_900_analog, } }, }, [EM2880_BOARD_PINNACLE_PCTV_HD_PRO] = { .name = "Pinnacle PCTV HD Pro Stick", .tuner_type = TUNER_XC2028, .tuner_gpio = default_tuner_gpio, .mts_firmware = 1, .has_dvb = 1, .dvb_gpio = hauppauge_wintv_hvr_900_digital, .ir_codes = RC_MAP_PINNACLE_PCTV_HD, .decoder = EM28XX_TVP5150, .input = { { .type = EM28XX_VMUX_TELEVISION, .vmux = TVP5150_COMPOSITE0, .amux = EM28XX_AMUX_VIDEO, .gpio = hauppauge_wintv_hvr_900_analog, }, { .type = EM28XX_VMUX_COMPOSITE, .vmux = TVP5150_COMPOSITE1, .amux = EM28XX_AMUX_LINE_IN, .gpio = hauppauge_wintv_hvr_900_analog, }, { .type = EM28XX_VMUX_SVIDEO, .vmux = TVP5150_SVIDEO, .amux = EM28XX_AMUX_LINE_IN, .gpio = hauppauge_wintv_hvr_900_analog, } }, }, [EM2880_BOARD_AMD_ATI_TV_WONDER_HD_600] = { .name = "AMD ATI TV Wonder HD 600", .tuner_type = TUNER_XC2028, .tuner_gpio = default_tuner_gpio, .mts_firmware = 1, .has_dvb = 1, .dvb_gpio = hauppauge_wintv_hvr_900_digital, .ir_codes = RC_MAP_ATI_TV_WONDER_HD_600, .decoder = EM28XX_TVP5150, .input = { { .type = EM28XX_VMUX_TELEVISION, .vmux = TVP5150_COMPOSITE0, .amux = EM28XX_AMUX_VIDEO, .gpio = hauppauge_wintv_hvr_900_analog, }, { .type = EM28XX_VMUX_COMPOSITE, .vmux = TVP5150_COMPOSITE1, .amux = EM28XX_AMUX_LINE_IN, .gpio = hauppauge_wintv_hvr_900_analog, }, { .type = EM28XX_VMUX_SVIDEO, .vmux = TVP5150_SVIDEO, .amux = EM28XX_AMUX_LINE_IN, .gpio = hauppauge_wintv_hvr_900_analog, } }, }, [EM2880_BOARD_TERRATEC_HYBRID_XS] = { .name = "Terratec Hybrid XS", .tuner_type = TUNER_XC2028, .tuner_gpio = default_tuner_gpio, .decoder = EM28XX_TVP5150, .has_dvb = 1, .dvb_gpio = default_digital, .ir_codes = RC_MAP_TERRATEC_CINERGY_XS, .xclk = EM28XX_XCLK_FREQUENCY_12MHZ, /* NEC IR */ .input = { { .type = EM28XX_VMUX_TELEVISION, .vmux = TVP5150_COMPOSITE0, .amux = EM28XX_AMUX_VIDEO, .gpio = default_analog, }, { .type = EM28XX_VMUX_COMPOSITE, .vmux = TVP5150_COMPOSITE1, .amux = EM28XX_AMUX_LINE_IN, .gpio = default_analog, }, { .type = EM28XX_VMUX_SVIDEO, .vmux = TVP5150_SVIDEO, .amux = EM28XX_AMUX_LINE_IN, .gpio = default_analog, } }, }, /* * Maybe there's a reason why Terratec sells the Hybrid XS as the * Prodigy XS with a different PID; keep them separate for now, as * we may need the distinction later on. */ [EM2880_BOARD_TERRATEC_PRODIGY_XS] = { .name = "Terratec Prodigy XS", .tuner_type = TUNER_XC2028, .tuner_gpio = default_tuner_gpio, .decoder = EM28XX_TVP5150, .input = { { .type = EM28XX_VMUX_TELEVISION, .vmux = TVP5150_COMPOSITE0, .amux = EM28XX_AMUX_VIDEO, .gpio = hauppauge_wintv_hvr_900_analog, }, { .type = EM28XX_VMUX_COMPOSITE, .vmux = TVP5150_COMPOSITE1, .amux = EM28XX_AMUX_LINE_IN, .gpio = hauppauge_wintv_hvr_900_analog, }, { .type = EM28XX_VMUX_SVIDEO, .vmux = TVP5150_SVIDEO, 
.amux = EM28XX_AMUX_LINE_IN, .gpio = hauppauge_wintv_hvr_900_analog, } }, }, [EM2820_BOARD_MSI_VOX_USB_2] = { .name = "MSI VOX USB 2.0", .tuner_type = TUNER_LG_PAL_NEW_TAPC, .tda9887_conf = TDA9887_PRESENT | TDA9887_PORT1_ACTIVE | TDA9887_PORT2_ACTIVE, .max_range_640_480 = 1, .decoder = EM28XX_SAA711X, .input = { { .type = EM28XX_VMUX_TELEVISION, .vmux = SAA7115_COMPOSITE4, .amux = EM28XX_AMUX_VIDEO, }, { .type = EM28XX_VMUX_COMPOSITE, .vmux = SAA7115_COMPOSITE0, .amux = EM28XX_AMUX_LINE_IN, }, { .type = EM28XX_VMUX_SVIDEO, .vmux = SAA7115_SVIDEO3, .amux = EM28XX_AMUX_LINE_IN, } }, }, [EM2800_BOARD_TERRATEC_CINERGY_200] = { .name = "Terratec Cinergy 200 USB", .is_em2800 = 1, .has_ir_i2c = 1, .tuner_type = TUNER_LG_TALN, .tda9887_conf = TDA9887_PRESENT, .decoder = EM28XX_SAA711X, .input = { { .type = EM28XX_VMUX_TELEVISION, .vmux = SAA7115_COMPOSITE2, .amux = EM28XX_AMUX_VIDEO, }, { .type = EM28XX_VMUX_COMPOSITE, .vmux = SAA7115_COMPOSITE0, .amux = EM28XX_AMUX_LINE_IN, }, { .type = EM28XX_VMUX_SVIDEO, .vmux = SAA7115_SVIDEO3, .amux = EM28XX_AMUX_LINE_IN, } }, }, [EM2800_BOARD_GRABBEEX_USB2800] = { .name = "eMPIA Technology, Inc. GrabBeeX+ Video Encoder", .is_em2800 = 1, .decoder = EM28XX_SAA711X, .tuner_type = TUNER_ABSENT, /* capture only board */ .input = { { .type = EM28XX_VMUX_COMPOSITE, .vmux = SAA7115_COMPOSITE0, .amux = EM28XX_AMUX_LINE_IN, }, { .type = EM28XX_VMUX_SVIDEO, .vmux = SAA7115_SVIDEO3, .amux = EM28XX_AMUX_LINE_IN, } }, }, [EM2800_BOARD_VC211A] = { .name = "Actionmaster/LinXcel/Digitus VC211A", .is_em2800 = 1, .tuner_type = TUNER_ABSENT, /* Capture-only board */ .decoder = EM28XX_SAA711X, .input = { { .type = EM28XX_VMUX_COMPOSITE, .vmux = SAA7115_COMPOSITE0, .amux = EM28XX_AMUX_LINE_IN, .gpio = vc211a_enable, }, { .type = EM28XX_VMUX_SVIDEO, .vmux = SAA7115_SVIDEO3, .amux = EM28XX_AMUX_LINE_IN, .gpio = vc211a_enable, } }, }, [EM2800_BOARD_LEADTEK_WINFAST_USBII] = { .name = "Leadtek Winfast USB II", .is_em2800 = 1, .tuner_type = TUNER_LG_PAL_NEW_TAPC, .tda9887_conf = TDA9887_PRESENT, .decoder = EM28XX_SAA711X, .input = { { .type = EM28XX_VMUX_TELEVISION, .vmux = SAA7115_COMPOSITE2, .amux = EM28XX_AMUX_VIDEO, }, { .type = EM28XX_VMUX_COMPOSITE, .vmux = SAA7115_COMPOSITE0, .amux = EM28XX_AMUX_LINE_IN, }, { .type = EM28XX_VMUX_SVIDEO, .vmux = SAA7115_SVIDEO3, .amux = EM28XX_AMUX_LINE_IN, } }, }, [EM2800_BOARD_KWORLD_USB2800] = { .name = "Kworld USB2800", .is_em2800 = 1, .tuner_type = TUNER_PHILIPS_FCV1236D, .tda9887_conf = TDA9887_PRESENT, .decoder = EM28XX_SAA711X, .input = { { .type = EM28XX_VMUX_TELEVISION, .vmux = SAA7115_COMPOSITE2, .amux = EM28XX_AMUX_VIDEO, }, { .type = EM28XX_VMUX_COMPOSITE, .vmux = SAA7115_COMPOSITE0, .amux = EM28XX_AMUX_LINE_IN, }, { .type = EM28XX_VMUX_SVIDEO, .vmux = SAA7115_SVIDEO3, .amux = EM28XX_AMUX_LINE_IN, } }, }, [EM2820_BOARD_PINNACLE_DVC_90] = { .name = "Pinnacle Dazzle DVC 90/100/101/107 / Kaiser Baas Video to DVD maker / Kworld DVD Maker 2 / Plextor ConvertX PX-AV100U", .tuner_type = TUNER_ABSENT, /* capture only board */ .decoder = EM28XX_SAA711X, .input = { { .type = EM28XX_VMUX_COMPOSITE, .vmux = SAA7115_COMPOSITE0, .amux = EM28XX_AMUX_LINE_IN, }, { .type = EM28XX_VMUX_SVIDEO, .vmux = SAA7115_SVIDEO3, .amux = EM28XX_AMUX_LINE_IN, } }, }, [EM2800_BOARD_VGEAR_POCKETTV] = { .name = "V-Gear PocketTV", .is_em2800 = 1, .tuner_type = TUNER_LG_PAL_NEW_TAPC, .tda9887_conf = TDA9887_PRESENT, .decoder = EM28XX_SAA711X, .input = { { .type = EM28XX_VMUX_TELEVISION, .vmux = SAA7115_COMPOSITE2, .amux = EM28XX_AMUX_VIDEO, }, { .type = 
EM28XX_VMUX_COMPOSITE, .vmux = SAA7115_COMPOSITE0, .amux = EM28XX_AMUX_LINE_IN, }, { .type = EM28XX_VMUX_SVIDEO, .vmux = SAA7115_SVIDEO3, .amux = EM28XX_AMUX_LINE_IN, } }, }, [EM2820_BOARD_PROLINK_PLAYTV_BOX4_USB2] = { .name = "Pixelview PlayTV Box 4 USB 2.0", .tda9887_conf = TDA9887_PRESENT, .tuner_type = TUNER_YMEC_TVF_5533MF, .decoder = EM28XX_SAA711X, .input = { { .type = EM28XX_VMUX_TELEVISION, .vmux = SAA7115_COMPOSITE2, .amux = EM28XX_AMUX_VIDEO, .aout = EM28XX_AOUT_MONO | /* I2S */ EM28XX_AOUT_MASTER, /* Line out pin */ }, { .type = EM28XX_VMUX_COMPOSITE, .vmux = SAA7115_COMPOSITE0, .amux = EM28XX_AMUX_LINE_IN, }, { .type = EM28XX_VMUX_SVIDEO, .vmux = SAA7115_SVIDEO3, .amux = EM28XX_AMUX_LINE_IN, } }, }, [EM2820_BOARD_PROLINK_PLAYTV_USB2] = { .name = "SIIG AVTuner-PVR / Pixelview Prolink PlayTV USB 2.0", .buttons = std_snapshot_button, .tda9887_conf = TDA9887_PRESENT, .tuner_type = TUNER_YMEC_TVF_5533MF, .tuner_addr = 0x60, .decoder = EM28XX_SAA711X, .input = { { .type = EM28XX_VMUX_TELEVISION, .vmux = SAA7115_COMPOSITE2, .amux = EM28XX_AMUX_VIDEO, .aout = EM28XX_AOUT_MONO | /* I2S */ EM28XX_AOUT_MASTER, /* Line out pin */ }, { .type = EM28XX_VMUX_COMPOSITE, .vmux = SAA7115_COMPOSITE0, .amux = EM28XX_AMUX_LINE_IN, }, { .type = EM28XX_VMUX_SVIDEO, .vmux = SAA7115_SVIDEO3, .amux = EM28XX_AMUX_LINE_IN, } }, }, [EM2860_BOARD_SAA711X_REFERENCE_DESIGN] = { .name = "EM2860/SAA711X Reference Design", .buttons = std_snapshot_button, .tuner_type = TUNER_ABSENT, .decoder = EM28XX_SAA711X, .input = { { .type = EM28XX_VMUX_SVIDEO, .vmux = SAA7115_SVIDEO3, }, { .type = EM28XX_VMUX_COMPOSITE, .vmux = SAA7115_COMPOSITE0, } }, }, [EM2874_BOARD_LEADERSHIP_ISDBT] = { .def_i2c_bus = 1, .i2c_speed = EM28XX_I2C_CLK_WAIT_ENABLE | EM28XX_I2C_FREQ_100_KHZ, .xclk = EM28XX_XCLK_FREQUENCY_10MHZ, .name = "EM2874 Leadership ISDBT", .tuner_type = TUNER_ABSENT, .tuner_gpio = leadership_reset, .dvb_gpio = leadership_digital, .has_dvb = 1, }, [EM2880_BOARD_MSI_DIGIVOX_AD] = { .name = "MSI DigiVox A/D", .valid = EM28XX_BOARD_NOT_VALIDATED, .tuner_type = TUNER_XC2028, .tuner_gpio = default_tuner_gpio, .decoder = EM28XX_TVP5150, .input = { { .type = EM28XX_VMUX_TELEVISION, .vmux = TVP5150_COMPOSITE0, .amux = EM28XX_AMUX_VIDEO, .gpio = em2880_msi_digivox_ad_analog, }, { .type = EM28XX_VMUX_COMPOSITE, .vmux = TVP5150_COMPOSITE1, .amux = EM28XX_AMUX_LINE_IN, .gpio = em2880_msi_digivox_ad_analog, }, { .type = EM28XX_VMUX_SVIDEO, .vmux = TVP5150_SVIDEO, .amux = EM28XX_AMUX_LINE_IN, .gpio = em2880_msi_digivox_ad_analog, } }, }, [EM2880_BOARD_MSI_DIGIVOX_AD_II] = { .name = "MSI DigiVox A/D II", .valid = EM28XX_BOARD_NOT_VALIDATED, .tuner_type = TUNER_XC2028, .tuner_gpio = default_tuner_gpio, .decoder = EM28XX_TVP5150, .input = { { .type = EM28XX_VMUX_TELEVISION, .vmux = TVP5150_COMPOSITE0, .amux = EM28XX_AMUX_VIDEO, .gpio = em2880_msi_digivox_ad_analog, }, { .type = EM28XX_VMUX_COMPOSITE, .vmux = TVP5150_COMPOSITE1, .amux = EM28XX_AMUX_LINE_IN, .gpio = em2880_msi_digivox_ad_analog, }, { .type = EM28XX_VMUX_SVIDEO, .vmux = TVP5150_SVIDEO, .amux = EM28XX_AMUX_LINE_IN, .gpio = em2880_msi_digivox_ad_analog, } }, }, [EM2880_BOARD_KWORLD_DVB_305U] = { .name = "KWorld DVB-T 305U", .tuner_type = TUNER_XC2028, .tuner_gpio = default_tuner_gpio, .decoder = EM28XX_TVP5150, .input = { { .type = EM28XX_VMUX_TELEVISION, .vmux = TVP5150_COMPOSITE0, .amux = EM28XX_AMUX_VIDEO, }, { .type = EM28XX_VMUX_COMPOSITE, .vmux = TVP5150_COMPOSITE1, .amux = EM28XX_AMUX_LINE_IN, }, { .type = EM28XX_VMUX_SVIDEO, .vmux = TVP5150_SVIDEO, .amux = 
EM28XX_AMUX_LINE_IN, } }, }, [EM2880_BOARD_KWORLD_DVB_310U] = { .name = "KWorld DVB-T 310U", .tuner_type = TUNER_XC2028, .tuner_gpio = default_tuner_gpio, .has_dvb = 1, .dvb_gpio = default_digital, .mts_firmware = 1, .decoder = EM28XX_TVP5150, .input = { { .type = EM28XX_VMUX_TELEVISION, .vmux = TVP5150_COMPOSITE0, .amux = EM28XX_AMUX_VIDEO, .gpio = default_analog, }, { .type = EM28XX_VMUX_COMPOSITE, .vmux = TVP5150_COMPOSITE1, .amux = EM28XX_AMUX_LINE_IN, .gpio = default_analog, }, { /* S-video has not been tested yet */ .type = EM28XX_VMUX_SVIDEO, .vmux = TVP5150_SVIDEO, .amux = EM28XX_AMUX_LINE_IN, .gpio = default_analog, } }, }, [EM2882_BOARD_KWORLD_ATSC_315U] = { .name = "KWorld ATSC 315U HDTV TV Box", .valid = EM28XX_BOARD_NOT_VALIDATED, .tuner_type = TUNER_THOMSON_DTT761X, .tuner_gpio = em2882_kworld_315u_tuner_gpio, .tda9887_conf = TDA9887_PRESENT, .decoder = EM28XX_SAA711X, .has_dvb = 1, .dvb_gpio = em2882_kworld_315u_digital, .ir_codes = RC_MAP_KWORLD_315U, .xclk = EM28XX_XCLK_FREQUENCY_12MHZ, .i2c_speed = EM28XX_I2C_CLK_WAIT_ENABLE, #if 0 /* FIXME: Analog mode - still not ready */ .input = { { .type = EM28XX_VMUX_TELEVISION, .vmux = SAA7115_COMPOSITE2, .amux = EM28XX_AMUX_VIDEO, .gpio = em2882_kworld_315u_analog, .aout = EM28XX_AOUT_PCM_IN | EM28XX_AOUT_PCM_STEREO, }, { .type = EM28XX_VMUX_COMPOSITE, .vmux = SAA7115_COMPOSITE0, .amux = EM28XX_AMUX_LINE_IN, .gpio = em2882_kworld_315u_analog1, .aout = EM28XX_AOUT_PCM_IN | EM28XX_AOUT_PCM_STEREO, }, { .type = EM28XX_VMUX_SVIDEO, .vmux = SAA7115_SVIDEO3, .amux = EM28XX_AMUX_LINE_IN, .gpio = em2882_kworld_315u_analog1, .aout = EM28XX_AOUT_PCM_IN | EM28XX_AOUT_PCM_STEREO, } }, #endif }, [EM2880_BOARD_EMPIRE_DUAL_TV] = { .name = "Empire dual TV", .tuner_type = TUNER_XC2028, .tuner_gpio = default_tuner_gpio, .has_dvb = 1, .dvb_gpio = default_digital, .mts_firmware = 1, .decoder = EM28XX_TVP5150, .input = { { .type = EM28XX_VMUX_TELEVISION, .vmux = TVP5150_COMPOSITE0, .amux = EM28XX_AMUX_VIDEO, .gpio = default_analog, }, { .type = EM28XX_VMUX_COMPOSITE, .vmux = TVP5150_COMPOSITE1, .amux = EM28XX_AMUX_LINE_IN, .gpio = default_analog, }, { .type = EM28XX_VMUX_SVIDEO, .vmux = TVP5150_SVIDEO, .amux = EM28XX_AMUX_LINE_IN, .gpio = default_analog, } }, }, [EM2881_BOARD_DNT_DA2_HYBRID] = { .name = "DNT DA2 Hybrid", .valid = EM28XX_BOARD_NOT_VALIDATED, .tuner_type = TUNER_XC2028, .tuner_gpio = default_tuner_gpio, .decoder = EM28XX_TVP5150, .input = { { .type = EM28XX_VMUX_TELEVISION, .vmux = TVP5150_COMPOSITE0, .amux = EM28XX_AMUX_VIDEO, .gpio = default_analog, }, { .type = EM28XX_VMUX_COMPOSITE, .vmux = TVP5150_COMPOSITE1, .amux = EM28XX_AMUX_LINE_IN, .gpio = default_analog, }, { .type = EM28XX_VMUX_SVIDEO, .vmux = TVP5150_SVIDEO, .amux = EM28XX_AMUX_LINE_IN, .gpio = default_analog, } }, }, [EM2881_BOARD_PINNACLE_HYBRID_PRO] = { .name = "Pinnacle Hybrid Pro", .tuner_type = TUNER_XC2028, .tuner_gpio = default_tuner_gpio, .decoder = EM28XX_TVP5150, .has_dvb = 1, .dvb_gpio = pinnacle_hybrid_pro_digital, .input = { { .type = EM28XX_VMUX_TELEVISION, .vmux = TVP5150_COMPOSITE0, .amux = EM28XX_AMUX_VIDEO, .gpio = pinnacle_hybrid_pro_analog, }, { .type = EM28XX_VMUX_COMPOSITE, .vmux = TVP5150_COMPOSITE1, .amux = EM28XX_AMUX_LINE_IN, .gpio = pinnacle_hybrid_pro_analog, }, { .type = EM28XX_VMUX_SVIDEO, .vmux = TVP5150_SVIDEO, .amux = EM28XX_AMUX_LINE_IN, .gpio = pinnacle_hybrid_pro_analog, } }, }, [EM2882_BOARD_PINNACLE_HYBRID_PRO_330E] = { .name = "Pinnacle Hybrid Pro (330e)", .tuner_type = TUNER_XC2028, .tuner_gpio = default_tuner_gpio, .mts_firmware = 
1, .has_dvb = 1, .dvb_gpio = hauppauge_wintv_hvr_900R2_digital, .ir_codes = RC_MAP_PINNACLE_PCTV_HD, .decoder = EM28XX_TVP5150, .input = { { .type = EM28XX_VMUX_TELEVISION, .vmux = TVP5150_COMPOSITE0, .amux = EM28XX_AMUX_VIDEO, .gpio = hauppauge_wintv_hvr_900_analog, }, { .type = EM28XX_VMUX_COMPOSITE, .vmux = TVP5150_COMPOSITE1, .amux = EM28XX_AMUX_LINE_IN, .gpio = hauppauge_wintv_hvr_900_analog, }, { .type = EM28XX_VMUX_SVIDEO, .vmux = TVP5150_SVIDEO, .amux = EM28XX_AMUX_LINE_IN, .gpio = hauppauge_wintv_hvr_900_analog, } }, }, [EM2882_BOARD_KWORLD_VS_DVBT] = { .name = "Kworld VS-DVB-T 323UR", .tuner_type = TUNER_XC2028, .tuner_gpio = default_tuner_gpio, .decoder = EM28XX_TVP5150, .mts_firmware = 1, .has_dvb = 1, .dvb_gpio = kworld_330u_digital, .xclk = EM28XX_XCLK_FREQUENCY_12MHZ, /* NEC IR */ .ir_codes = RC_MAP_KWORLD_315U, .input = { { .type = EM28XX_VMUX_TELEVISION, .vmux = TVP5150_COMPOSITE0, .amux = EM28XX_AMUX_VIDEO, }, { .type = EM28XX_VMUX_COMPOSITE, .vmux = TVP5150_COMPOSITE1, .amux = EM28XX_AMUX_LINE_IN, }, { .type = EM28XX_VMUX_SVIDEO, .vmux = TVP5150_SVIDEO, .amux = EM28XX_AMUX_LINE_IN, } }, }, [EM2882_BOARD_TERRATEC_HYBRID_XS] = { .name = "Terratec Cinergy Hybrid T USB XS (em2882)", .tuner_type = TUNER_XC2028, .tuner_gpio = default_tuner_gpio, .mts_firmware = 1, .decoder = EM28XX_TVP5150, .has_dvb = 1, .dvb_gpio = hauppauge_wintv_hvr_900_digital, .ir_codes = RC_MAP_TERRATEC_CINERGY_XS, .xclk = EM28XX_XCLK_FREQUENCY_12MHZ, .input = { { .type = EM28XX_VMUX_TELEVISION, .vmux = TVP5150_COMPOSITE0, .amux = EM28XX_AMUX_VIDEO, .gpio = hauppauge_wintv_hvr_900_analog, }, { .type = EM28XX_VMUX_COMPOSITE, .vmux = TVP5150_COMPOSITE1, .amux = EM28XX_AMUX_LINE_IN, .gpio = hauppauge_wintv_hvr_900_analog, }, { .type = EM28XX_VMUX_SVIDEO, .vmux = TVP5150_SVIDEO, .amux = EM28XX_AMUX_LINE_IN, .gpio = hauppauge_wintv_hvr_900_analog, } }, }, [EM2882_BOARD_DIKOM_DK300] = { .name = "Dikom DK300", .tuner_type = TUNER_XC2028, .tuner_gpio = default_tuner_gpio, .decoder = EM28XX_TVP5150, .mts_firmware = 1, .has_dvb = 1, .dvb_gpio = dikom_dk300_digital, .input = { { .type = EM28XX_VMUX_TELEVISION, .vmux = TVP5150_COMPOSITE0, .amux = EM28XX_AMUX_VIDEO, .gpio = default_analog, } }, }, [EM2883_BOARD_KWORLD_HYBRID_330U] = { .name = "Kworld PlusTV HD Hybrid 330", .tuner_type = TUNER_XC2028, .tuner_gpio = default_tuner_gpio, .decoder = EM28XX_TVP5150, .mts_firmware = 1, .has_dvb = 1, .dvb_gpio = kworld_330u_digital, .xclk = EM28XX_XCLK_FREQUENCY_12MHZ, .i2c_speed = EM28XX_I2C_CLK_WAIT_ENABLE | EM28XX_I2C_EEPROM_ON_BOARD | EM28XX_I2C_EEPROM_KEY_VALID, .input = { { .type = EM28XX_VMUX_TELEVISION, .vmux = TVP5150_COMPOSITE0, .amux = EM28XX_AMUX_VIDEO, .gpio = kworld_330u_analog, .aout = EM28XX_AOUT_PCM_IN | EM28XX_AOUT_PCM_STEREO, }, { .type = EM28XX_VMUX_COMPOSITE, .vmux = TVP5150_COMPOSITE1, .amux = EM28XX_AMUX_LINE_IN, .gpio = kworld_330u_analog, .aout = EM28XX_AOUT_PCM_IN | EM28XX_AOUT_PCM_STEREO, }, { .type = EM28XX_VMUX_SVIDEO, .vmux = TVP5150_SVIDEO, .amux = EM28XX_AMUX_LINE_IN, .gpio = kworld_330u_analog, } }, }, [EM2820_BOARD_COMPRO_VIDEOMATE_FORYOU] = { .name = "Compro VideoMate ForYou/Stereo", .tuner_type = TUNER_LG_PAL_NEW_TAPC, .tvaudio_addr = 0xb0, .tda9887_conf = TDA9887_PRESENT, .decoder = EM28XX_TVP5150, .adecoder = EM28XX_TVAUDIO, .mute_gpio = compro_mute_gpio, .input = { { .type = EM28XX_VMUX_TELEVISION, .vmux = TVP5150_COMPOSITE0, .amux = EM28XX_AMUX_VIDEO, .gpio = compro_unmute_tv_gpio, }, { .type = EM28XX_VMUX_SVIDEO, .vmux = TVP5150_SVIDEO, .amux = EM28XX_AMUX_LINE_IN, .gpio = 
compro_unmute_svid_gpio, } }, }, [EM2860_BOARD_KAIOMY_TVNPC_U2] = { .name = "Kaiomy TVnPC U2", .vchannels = 3, .tuner_type = TUNER_XC2028, .tuner_addr = 0x61, .mts_firmware = 1, .decoder = EM28XX_TVP5150, .tuner_gpio = default_tuner_gpio, .ir_codes = RC_MAP_KAIOMY, .input = { { .type = EM28XX_VMUX_TELEVISION, .vmux = TVP5150_COMPOSITE0, .amux = EM28XX_AMUX_VIDEO, }, { .type = EM28XX_VMUX_COMPOSITE, .vmux = TVP5150_COMPOSITE1, .amux = EM28XX_AMUX_LINE_IN, }, { .type = EM28XX_VMUX_SVIDEO, .vmux = TVP5150_SVIDEO, .amux = EM28XX_AMUX_LINE_IN, } }, .radio = { .type = EM28XX_RADIO, .amux = EM28XX_AMUX_LINE_IN, } }, [EM2860_BOARD_EASYCAP] = { .name = "Easy Cap Capture DC-60", .vchannels = 2, .tuner_type = TUNER_ABSENT, .decoder = EM28XX_SAA711X, .input = { { .type = EM28XX_VMUX_COMPOSITE, .vmux = SAA7115_COMPOSITE0, .amux = EM28XX_AMUX_LINE_IN, }, { .type = EM28XX_VMUX_SVIDEO, .vmux = SAA7115_SVIDEO3, .amux = EM28XX_AMUX_LINE_IN, } }, }, [EM2820_BOARD_IODATA_GVMVP_SZ] = { .name = "IO-DATA GV-MVP/SZ", .tuner_type = TUNER_PHILIPS_FM1236_MK3, .tuner_gpio = default_tuner_gpio, .tda9887_conf = TDA9887_PRESENT, .decoder = EM28XX_TVP5150, .input = { { .type = EM28XX_VMUX_TELEVISION, .vmux = TVP5150_COMPOSITE0, .amux = EM28XX_AMUX_VIDEO, }, { /* Composite has not been tested yet */ .type = EM28XX_VMUX_COMPOSITE, .vmux = TVP5150_COMPOSITE1, .amux = EM28XX_AMUX_VIDEO, }, { /* S-video has not been tested yet */ .type = EM28XX_VMUX_SVIDEO, .vmux = TVP5150_SVIDEO, .amux = EM28XX_AMUX_VIDEO, } }, }, [EM2860_BOARD_TERRATEC_GRABBY] = { .name = "Terratec Grabby", .vchannels = 2, .tuner_type = TUNER_ABSENT, .decoder = EM28XX_SAA711X, .xclk = EM28XX_XCLK_FREQUENCY_12MHZ, .input = { { .type = EM28XX_VMUX_COMPOSITE, .vmux = SAA7115_COMPOSITE0, .amux = EM28XX_AMUX_LINE_IN, }, { .type = EM28XX_VMUX_SVIDEO, .vmux = SAA7115_SVIDEO3, .amux = EM28XX_AMUX_LINE_IN, } }, .buttons = std_snapshot_button, .leds = terratec_grabby_leds, }, [EM2860_BOARD_TERRATEC_AV350] = { .name = "Terratec AV350", .vchannels = 2, .tuner_type = TUNER_ABSENT, .decoder = EM28XX_TVP5150, .xclk = EM28XX_XCLK_FREQUENCY_12MHZ, .mute_gpio = terratec_av350_mute_gpio, .input = { { .type = EM28XX_VMUX_COMPOSITE, .vmux = TVP5150_COMPOSITE1, .amux = EM28XX_AMUX_LINE_IN, .gpio = terratec_av350_unmute_gpio, }, { .type = EM28XX_VMUX_SVIDEO, .vmux = TVP5150_SVIDEO, .amux = EM28XX_AMUX_LINE_IN, .gpio = terratec_av350_unmute_gpio, } }, }, [EM2860_BOARD_ELGATO_VIDEO_CAPTURE] = { .name = "Elgato Video Capture", .decoder = EM28XX_SAA711X, .tuner_type = TUNER_ABSENT, /* Capture only device */ .input = { { .type = EM28XX_VMUX_COMPOSITE, .vmux = SAA7115_COMPOSITE0, .amux = EM28XX_AMUX_LINE_IN, }, { .type = EM28XX_VMUX_SVIDEO, .vmux = SAA7115_SVIDEO3, .amux = EM28XX_AMUX_LINE_IN, } }, }, [EM2882_BOARD_EVGA_INDTUBE] = { .name = "Evga inDtube", .tuner_type = TUNER_XC2028, .tuner_gpio = default_tuner_gpio, .decoder = EM28XX_TVP5150, .xclk = EM28XX_XCLK_FREQUENCY_12MHZ, /* NEC IR */ .mts_firmware = 1, .has_dvb = 1, .dvb_gpio = evga_indtube_digital, .ir_codes = RC_MAP_EVGA_INDTUBE, .input = { { .type = EM28XX_VMUX_TELEVISION, .vmux = TVP5150_COMPOSITE0, .amux = EM28XX_AMUX_VIDEO, .gpio = evga_indtube_analog, }, { .type = EM28XX_VMUX_COMPOSITE, .vmux = TVP5150_COMPOSITE1, .amux = EM28XX_AMUX_LINE_IN, .gpio = evga_indtube_analog, }, { .type = EM28XX_VMUX_SVIDEO, .vmux = TVP5150_SVIDEO, .amux = EM28XX_AMUX_LINE_IN, .gpio = evga_indtube_analog, } }, }, /* * eb1a:2868 Empia EM2870 + Philips CU1216L NIM * (Philips TDA10023 + Infineon TUA6034) */ [EM2870_BOARD_REDDO_DVB_C_USB_BOX] = 
{ .name = "Reddo DVB-C USB TV Box", .tuner_type = TUNER_ABSENT, .tuner_gpio = reddo_dvb_c_usb_box, .has_dvb = 1, }, /* * 1b80:a340 - Empia EM2870, NXP TDA18271HD and LG DT3304, sold * initially as the KWorld PlusTV 340U, then as the UB435-Q. * Early variants have a TDA18271HD/C1, later ones a TDA18271HD/C2 */ [EM2870_BOARD_KWORLD_A340] = { .name = "KWorld PlusTV 340U or UB435-Q (ATSC)", .tuner_type = TUNER_ABSENT, /* Digital-only TDA18271HD */ .has_dvb = 1, .dvb_gpio = kworld_a340_digital, .tuner_gpio = default_tuner_gpio, }, /* * 2013:024f PCTV nanoStick T2 290e. * Empia EM28174, Sony CXD2820R and NXP TDA18271HD/C2 */ [EM28174_BOARD_PCTV_290E] = { .name = "PCTV nanoStick T2 290e", .def_i2c_bus = 1, .i2c_speed = EM28XX_I2C_CLK_WAIT_ENABLE | EM28XX_I2C_FREQ_100_KHZ, .tuner_type = TUNER_ABSENT, .tuner_gpio = pctv_290e, .has_dvb = 1, .ir_codes = RC_MAP_PINNACLE_PCTV_HD, }, /* * 2013:024f PCTV DVB-S2 Stick 460e * Empia EM28174, NXP TDA10071, Conexant CX24118A and Allegro A8293 */ [EM28174_BOARD_PCTV_460E] = { .def_i2c_bus = 1, .i2c_speed = EM28XX_I2C_CLK_WAIT_ENABLE | EM28XX_I2C_FREQ_400_KHZ, .name = "PCTV DVB-S2 Stick (460e)", .tuner_type = TUNER_ABSENT, .tuner_gpio = pctv_460e, .has_dvb = 1, .ir_codes = RC_MAP_PINNACLE_PCTV_HD, }, /* * eb1a:5006 Honestech VIDBOX NW03 * Empia EM2860, Philips SAA7113, Empia EMP202, No Tuner */ [EM2860_BOARD_HT_VIDBOX_NW03] = { .name = "Honestech Vidbox NW03", .tuner_type = TUNER_ABSENT, .decoder = EM28XX_SAA711X, .input = { { .type = EM28XX_VMUX_COMPOSITE, .vmux = SAA7115_COMPOSITE0, .amux = EM28XX_AMUX_LINE_IN, }, { .type = EM28XX_VMUX_SVIDEO, .vmux = SAA7115_SVIDEO3, /* S-VIDEO needs check */ .amux = EM28XX_AMUX_LINE_IN, } }, }, /* * 1b80:e425 MaxMedia UB425-TC * Empia EM2874B + Micronas DRX 3913KA2 + NXP TDA18271HDC2 */ [EM2874_BOARD_MAXMEDIA_UB425_TC] = { .name = "MaxMedia UB425-TC", .tuner_type = TUNER_ABSENT, .tuner_gpio = maxmedia_ub425_tc, .has_dvb = 1, .ir_codes = RC_MAP_REDDO, .def_i2c_bus = 1, .i2c_speed = EM28XX_I2C_CLK_WAIT_ENABLE | EM28XX_I2C_FREQ_400_KHZ, }, /* * 2304:0242 PCTV QuatroStick (510e) * Empia EM2884 + Micronas DRX 3926K + NXP TDA18271HDC2 */ [EM2884_BOARD_PCTV_510E] = { .name = "PCTV QuatroStick (510e)", .tuner_type = TUNER_ABSENT, .tuner_gpio = pctv_510e, .has_dvb = 1, .ir_codes = RC_MAP_PINNACLE_PCTV_HD, .def_i2c_bus = 1, .i2c_speed = EM28XX_I2C_CLK_WAIT_ENABLE | EM28XX_I2C_FREQ_400_KHZ, }, /* * 2013:0251 PCTV QuatroStick nano (520e) * Empia EM2884 + Micronas DRX 3926K + NXP TDA18271HDC2 */ [EM2884_BOARD_PCTV_520E] = { .name = "PCTV QuatroStick nano (520e)", .tuner_type = TUNER_ABSENT, .tuner_gpio = pctv_520e, .has_dvb = 1, .ir_codes = RC_MAP_PINNACLE_PCTV_HD, .def_i2c_bus = 1, .i2c_speed = EM28XX_I2C_CLK_WAIT_ENABLE | EM28XX_I2C_FREQ_400_KHZ, }, [EM2884_BOARD_TERRATEC_HTC_USB_XS] = { .name = "Terratec Cinergy HTC USB XS", .has_dvb = 1, .ir_codes = RC_MAP_NEC_TERRATEC_CINERGY_XS, .tuner_type = TUNER_ABSENT, .def_i2c_bus = 1, .i2c_speed = EM28XX_I2C_CLK_WAIT_ENABLE | EM28XX_I2C_FREQ_400_KHZ, }, /* * 1b80:e1cc Delock 61959 * Empia EM2874B + Micronas DRX 3913KA2 + NXP TDA18271HDC2 * mostly the same as MaxMedia UB-425-TC but different remote */ [EM2874_BOARD_DELOCK_61959] = { .name = "Delock 61959", .tuner_type = TUNER_ABSENT, .tuner_gpio = maxmedia_ub425_tc, .has_dvb = 1, .ir_codes = RC_MAP_DELOCK_61959, .def_i2c_bus = 1, .i2c_speed = EM28XX_I2C_CLK_WAIT_ENABLE | EM28XX_I2C_FREQ_400_KHZ, }, /* * 1b80:e346 KWorld USB ATSC TV Stick UB435-Q V2 * Empia EM2874B + LG DT3305 + NXP TDA18271HDC2 */ [EM2874_BOARD_KWORLD_UB435Q_V2] = { .name = 
"KWorld USB ATSC TV Stick UB435-Q V2", .tuner_type = TUNER_ABSENT, .has_dvb = 1, .dvb_gpio = kworld_a340_digital, .tuner_gpio = default_tuner_gpio, .def_i2c_bus = 1, }, /* * 1b80:e34c KWorld USB ATSC TV Stick UB435-Q V3 * Empia EM2874B + LG DT3305 + NXP TDA18271HDC2 */ [EM2874_BOARD_KWORLD_UB435Q_V3] = { .name = "KWorld USB ATSC TV Stick UB435-Q V3", .tuner_type = TUNER_ABSENT, .has_dvb = 1, .tuner_gpio = kworld_ub435q_v3_digital, .def_i2c_bus = 1, .i2c_speed = EM28XX_I2C_CLK_WAIT_ENABLE | EM28XX_I2C_FREQ_100_KHZ, .leds = kworld_ub435q_v3_leds, }, [EM2874_BOARD_PCTV_HD_MINI_80E] = { .name = "Pinnacle PCTV HD Mini", .tuner_type = TUNER_ABSENT, .has_dvb = 1, .dvb_gpio = em2874_pctv_80e_digital, .decoder = EM28XX_NODECODER, .ir_codes = RC_MAP_PINNACLE_PCTV_HD, .leds = pctv_80e_leds, }, /* * 1ae7:9003/9004 SpeedLink Vicious And Devine Laplace webcam * Empia EM2765 + OmniVision OV2640 */ [EM2765_BOARD_SPEEDLINK_VAD_LAPLACE] = { .name = "SpeedLink Vicious And Devine Laplace webcam", .xclk = EM28XX_XCLK_FREQUENCY_24MHZ, .i2c_speed = EM28XX_I2C_CLK_WAIT_ENABLE | EM28XX_I2C_FREQ_100_KHZ, .def_i2c_bus = 1, .tuner_type = TUNER_ABSENT, .is_webcam = 1, .input = { { .type = EM28XX_VMUX_COMPOSITE, .amux = EM28XX_AMUX_VIDEO, .gpio = speedlink_vad_laplace_reg_seq, } }, .buttons = speedlink_vad_laplace_buttons, .leds = speedlink_vad_laplace_leds, }, /* * 2013:0258 PCTV DVB-S2 Stick (461e) * Empia EM28178, Montage M88DS3103, Montage M88TS2022, Allegro A8293 */ [EM28178_BOARD_PCTV_461E] = { .def_i2c_bus = 1, .i2c_speed = EM28XX_I2C_CLK_WAIT_ENABLE | EM28XX_I2C_FREQ_400_KHZ, .name = "PCTV DVB-S2 Stick (461e)", .tuner_type = TUNER_ABSENT, .tuner_gpio = pctv_461e, .has_dvb = 1, .ir_codes = RC_MAP_PINNACLE_PCTV_HD, }, /* * 2013:0259 PCTV DVB-S2 Stick (461e_v2) * Empia EM28178, Montage M88DS3103b, Montage M88TS2022, Allegro A8293 */ [EM28178_BOARD_PCTV_461E_V2] = { .def_i2c_bus = 1, .i2c_speed = EM28XX_I2C_CLK_WAIT_ENABLE | EM28XX_I2C_FREQ_400_KHZ, .name = "PCTV DVB-S2 Stick (461e v2)", .tuner_type = TUNER_ABSENT, .tuner_gpio = pctv_461e, .has_dvb = 1, .ir_codes = RC_MAP_PINNACLE_PCTV_HD, }, /* * 2013:025f PCTV tripleStick (292e). * Empia EM28178, Silicon Labs Si2168, Silicon Labs Si2157 */ [EM28178_BOARD_PCTV_292E] = { .name = "PCTV tripleStick (292e)", .def_i2c_bus = 1, .i2c_speed = EM28XX_I2C_CLK_WAIT_ENABLE | EM28XX_I2C_FREQ_400_KHZ, .tuner_type = TUNER_ABSENT, .tuner_gpio = pctv_292e, .has_dvb = 1, .ir_codes = RC_MAP_PINNACLE_PCTV_HD, }, [EM2861_BOARD_LEADTEK_VC100] = { .name = "Leadtek VC100", .tuner_type = TUNER_ABSENT, /* Capture only device */ .decoder = EM28XX_TVP5150, .input = { { .type = EM28XX_VMUX_COMPOSITE, .vmux = TVP5150_COMPOSITE1, .amux = EM28XX_AMUX_LINE_IN, }, { .type = EM28XX_VMUX_SVIDEO, .vmux = TVP5150_SVIDEO, .amux = EM28XX_AMUX_LINE_IN, } }, }, /* * eb1a:8179 Terratec Cinergy T2 Stick HD. * Empia EM28178, Silicon Labs Si2168, Silicon Labs Si2146 */ [EM28178_BOARD_TERRATEC_T2_STICK_HD] = { .name = "Terratec Cinergy T2 Stick HD", .def_i2c_bus = 1, .i2c_speed = EM28XX_I2C_CLK_WAIT_ENABLE | EM28XX_I2C_FREQ_400_KHZ, .tuner_type = TUNER_ABSENT, .tuner_gpio = terratec_t2_stick_hd, .has_dvb = 1, .ir_codes = RC_MAP_TERRATEC_SLIM_2, }, /* * 3275:0085 PLEX PX-BCUD. 
* Empia EM28178, TOSHIBA TC90532XBG, Sharp QM1D1C0042 */ [EM28178_BOARD_PLEX_PX_BCUD] = { .name = "PLEX PX-BCUD", .xclk = EM28XX_XCLK_FREQUENCY_4_3MHZ, .def_i2c_bus = 1, .i2c_speed = EM28XX_I2C_CLK_WAIT_ENABLE, .tuner_type = TUNER_ABSENT, .tuner_gpio = plex_px_bcud, .has_dvb = 1, }, /* * 2040:0265 Hauppauge WinTV-dualHD (DVB version) Isoc. * 2040:8265 Hauppauge WinTV-dualHD (DVB version) Bulk. * Empia EM28274, 2x Silicon Labs Si2168, 2x Silicon Labs Si2157 */ [EM28174_BOARD_HAUPPAUGE_WINTV_DUALHD_DVB] = { .name = "Hauppauge WinTV-dualHD DVB", .def_i2c_bus = 1, .i2c_speed = EM28XX_I2C_CLK_WAIT_ENABLE | EM28XX_I2C_FREQ_400_KHZ, .tuner_type = TUNER_SI2157, .tuner_gpio = hauppauge_dualhd_dvb, .has_dvb = 1, .has_dual_ts = 1, .ir_codes = RC_MAP_HAUPPAUGE, .leds = hauppauge_dualhd_leds, .input = { { .type = EM28XX_VMUX_COMPOSITE, .vmux = TVP5150_COMPOSITE1, .amux = EM28XX_AMUX_LINE_IN, } }, }, /* * 2040:026d Hauppauge WinTV-dualHD (model 01595 - ATSC/QAM) Isoc. * 2040:826d Hauppauge WinTV-dualHD (model 01595 - ATSC/QAM) Bulk. * Empia EM28274, 2x LG LGDT3306A, 2x Silicon Labs Si2157 */ [EM28174_BOARD_HAUPPAUGE_WINTV_DUALHD_01595] = { .name = "Hauppauge WinTV-dualHD 01595 ATSC/QAM", .def_i2c_bus = 1, .i2c_speed = EM28XX_I2C_CLK_WAIT_ENABLE | EM28XX_I2C_FREQ_400_KHZ, .tuner_type = TUNER_ABSENT, .tuner_gpio = hauppauge_dualhd_dvb, .has_dvb = 1, .has_dual_ts = 1, .ir_codes = RC_MAP_HAUPPAUGE, .leds = hauppauge_dualhd_leds, }, /* * 1b80:e349 Magix USB Videowandler-2 * (same chips as Honestech VIDBOX NW03) * Empia EM2860, Philips SAA7113, Empia EMP202, No Tuner */ [EM2861_BOARD_MAGIX_VIDEOWANDLER2] = { .name = "Magix USB Videowandler-2", .tuner_type = TUNER_ABSENT, .decoder = EM28XX_SAA711X, .input = { { .type = EM28XX_VMUX_COMPOSITE, .vmux = SAA7115_COMPOSITE0, .amux = EM28XX_AMUX_LINE_IN, }, { .type = EM28XX_VMUX_SVIDEO, .amux = EM28XX_AMUX_LINE_IN, } }, }, /* * 1f4d:1abe MyGica iGrabber * (same as several other EM2860 devices) * Empia EM2860, Philips SAA7113, Empia EMP202, No Tuner */ [EM2860_BOARD_MYGICA_IGRABBER] = { .name = "MyGica iGrabber", .vchannels = 2, .tuner_type = TUNER_ABSENT, .decoder = EM28XX_SAA711X, .input = { { .type = EM28XX_VMUX_COMPOSITE, .vmux = SAA7115_COMPOSITE0, .amux = EM28XX_AMUX_LINE_IN, }, { .type = EM28XX_VMUX_SVIDEO, .vmux = SAA7115_SVIDEO3, .amux = EM28XX_AMUX_LINE_IN, } }, }, /* 2040:826d Hauppauge USB QuadHD * Empia 28274, Max Linear 692 ATSC combo demod/tuner */ [EM2874_BOARD_HAUPPAUGE_USB_QUADHD] = { .name = "Hauppauge USB QuadHD ATSC", .def_i2c_bus = 1, .has_dual_ts = 1, .has_dvb = 1, .i2c_speed = EM28XX_I2C_CLK_WAIT_ENABLE | EM28XX_I2C_FREQ_100_KHZ, .tuner_type = TUNER_ABSENT, .tuner_gpio = hauppauge_usb_quadhd_atsc_reg_seq, .leds = hauppauge_usb_quadhd_leds, }, /* * eb1a:2860 MyGica UTV3 Analog USB2.0 TV Box * Empia EM2860, Philips SAA7113, NXP TDA9801T demod, * Tena TNF931D-DFDR1 tuner (contains NXP TDA6509A), * ST HCF4052 demux (switches audio to line out), * no audio over USB */ [EM2860_BOARD_MYGICA_UTV3] = { .name = "MyGica UTV3 Analog USB2.0 TV Box", .xclk = EM28XX_XCLK_IR_RC5_MODE | EM28XX_XCLK_FREQUENCY_12MHZ, .tuner_type = TUNER_TENA_TNF_931D_DFDR1, .ir_codes = RC_MAP_MYGICA_UTV3, .decoder = EM28XX_SAA711X, .suspend_gpio = mygica_utv3_suspend_gpio, .input = { { .type = EM28XX_VMUX_COMPOSITE, .vmux = SAA7115_COMPOSITE0, .amux = EM28XX_AMUX_VIDEO, .gpio = mygica_utv3_composite_audio_gpio, }, { .type = EM28XX_VMUX_TELEVISION, .vmux = SAA7115_COMPOSITE2, .amux = EM28XX_AMUX_VIDEO, .gpio = mygica_utv3_tuner_audio_gpio, } }, }, }; 
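/*
 * How the table above is consumed (editor's sketch, not new driver logic):
 * the USB probe path copies the matched id_table entry's .driver_info into
 * dev->model, and dev->model is then used as an index into em28xx_boards[]:
 *
 *	dev->model = id->driver_info;
 *	dev->board = em28xx_boards[dev->model];
 *
 * Boards behind generic/shared USB IDs start out as one of the
 * *_BOARD_UNKNOWN entries and are refined later via the eeprom/i2c hash
 * hints further down in this file.
 */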
EXPORT_SYMBOL_GPL(em28xx_boards); static const unsigned int em28xx_bcount = ARRAY_SIZE(em28xx_boards); /* table of devices that work with this driver */ struct usb_device_id em28xx_id_table[] = { { USB_DEVICE(0xeb1a, 0x2750), .driver_info = EM2750_BOARD_UNKNOWN }, { USB_DEVICE(0xeb1a, 0x2751), .driver_info = EM2750_BOARD_UNKNOWN }, { USB_DEVICE(0xeb1a, 0x2800), .driver_info = EM2800_BOARD_UNKNOWN }, { USB_DEVICE(0xeb1a, 0x2710), .driver_info = EM2820_BOARD_UNKNOWN }, { USB_DEVICE(0xeb1a, 0x2820), .driver_info = EM2820_BOARD_UNKNOWN }, { USB_DEVICE(0xeb1a, 0x2821), .driver_info = EM2820_BOARD_UNKNOWN }, { USB_DEVICE(0xeb1a, 0x2860), .driver_info = EM2820_BOARD_UNKNOWN }, { USB_DEVICE(0xeb1a, 0x2861), .driver_info = EM2820_BOARD_UNKNOWN }, { USB_DEVICE(0xeb1a, 0x2862), .driver_info = EM2820_BOARD_UNKNOWN }, { USB_DEVICE(0xeb1a, 0x2863), .driver_info = EM2820_BOARD_UNKNOWN }, { USB_DEVICE(0xeb1a, 0x2870), .driver_info = EM2820_BOARD_UNKNOWN }, { USB_DEVICE(0xeb1a, 0x2881), .driver_info = EM2820_BOARD_UNKNOWN }, { USB_DEVICE(0xeb1a, 0x2883), /* used by :Zolid Hybrid Tv Stick */ .driver_info = EM2820_BOARD_UNKNOWN }, { USB_DEVICE(0xeb1a, 0x2868), .driver_info = EM2820_BOARD_UNKNOWN }, { USB_DEVICE(0xeb1a, 0x2875), .driver_info = EM2820_BOARD_UNKNOWN }, { USB_DEVICE(0xeb1a, 0x2885), /* MSI Digivox Trio */ .driver_info = EM2884_BOARD_TERRATEC_H5 }, { USB_DEVICE(0xeb1a, 0xe300), .driver_info = EM2861_BOARD_KWORLD_PVRTV_300U }, { USB_DEVICE(0xeb1a, 0xe303), .driver_info = EM2860_BOARD_KAIOMY_TVNPC_U2 }, { USB_DEVICE(0xeb1a, 0xe305), .driver_info = EM2880_BOARD_KWORLD_DVB_305U }, { USB_DEVICE(0xeb1a, 0xe310), .driver_info = EM2880_BOARD_MSI_DIGIVOX_AD }, { USB_DEVICE(0xeb1a, 0xa313), .driver_info = EM2882_BOARD_KWORLD_ATSC_315U }, { USB_DEVICE(0xeb1a, 0xa316), .driver_info = EM2883_BOARD_KWORLD_HYBRID_330U }, { USB_DEVICE(0xeb1a, 0xe320), .driver_info = EM2880_BOARD_MSI_DIGIVOX_AD_II }, { USB_DEVICE(0xeb1a, 0xe323), .driver_info = EM2882_BOARD_KWORLD_VS_DVBT }, { USB_DEVICE(0xeb1a, 0xe350), .driver_info = EM2870_BOARD_KWORLD_350U }, { USB_DEVICE(0xeb1a, 0xe355), .driver_info = EM2870_BOARD_KWORLD_355U }, { USB_DEVICE(0xeb1a, 0x2801), .driver_info = EM2800_BOARD_GRABBEEX_USB2800 }, { USB_DEVICE(0xeb1a, 0xe357), .driver_info = EM2870_BOARD_KWORLD_355U }, { USB_DEVICE(0xeb1a, 0xe359), .driver_info = EM2870_BOARD_KWORLD_355U }, { USB_DEVICE(0x1b80, 0xe302), /* Kaiser Baas Video to DVD maker */ .driver_info = EM2820_BOARD_PINNACLE_DVC_90 }, { USB_DEVICE(0x1b80, 0xe304), /* Kworld DVD Maker 2 */ .driver_info = EM2820_BOARD_PINNACLE_DVC_90 }, { USB_DEVICE(0x0ccd, 0x0036), .driver_info = EM2820_BOARD_TERRATEC_CINERGY_250 }, { USB_DEVICE(0x0ccd, 0x004c), .driver_info = EM2880_BOARD_TERRATEC_HYBRID_XS_FR }, { USB_DEVICE(0x0ccd, 0x004f), .driver_info = EM2860_BOARD_TERRATEC_HYBRID_XS }, { USB_DEVICE(0x0ccd, 0x005e), .driver_info = EM2882_BOARD_TERRATEC_HYBRID_XS }, { USB_DEVICE(0x0ccd, 0x0042), .driver_info = EM2882_BOARD_TERRATEC_HYBRID_XS }, { USB_DEVICE(0x0ccd, 0x0043), .driver_info = EM2870_BOARD_TERRATEC_XS_MT2060 }, { USB_DEVICE(0x0ccd, 0x008e), /* Cinergy HTC USB XS Rev. 1 */ .driver_info = EM2884_BOARD_TERRATEC_HTC_USB_XS }, { USB_DEVICE(0x0ccd, 0x00ac), /* Cinergy HTC USB XS Rev. 2 */ .driver_info = EM2884_BOARD_TERRATEC_HTC_USB_XS }, { USB_DEVICE(0x0ccd, 0x10a2), /* H5 Rev. 1 */ .driver_info = EM2884_BOARD_TERRATEC_H5 }, { USB_DEVICE(0x0ccd, 0x10ad), /* H5 Rev. 2 */ .driver_info = EM2884_BOARD_TERRATEC_H5 }, { USB_DEVICE(0x0ccd, 0x10b6), /* H5 Rev. 
3 */ .driver_info = EM2884_BOARD_TERRATEC_H5 }, { USB_DEVICE(0x0ccd, 0x10b2), /* H6 */ .driver_info = EM2884_BOARD_TERRATEC_H6 }, { USB_DEVICE(0x0ccd, 0x0084), .driver_info = EM2860_BOARD_TERRATEC_AV350 }, { USB_DEVICE(0x0ccd, 0x0096), .driver_info = EM2860_BOARD_TERRATEC_GRABBY }, { USB_DEVICE(0x0ccd, 0x10AF), .driver_info = EM2860_BOARD_TERRATEC_GRABBY }, { USB_DEVICE(0x0ccd, 0x00b2), .driver_info = EM2884_BOARD_CINERGY_HTC_STICK }, { USB_DEVICE(0x0fd9, 0x0018), .driver_info = EM2884_BOARD_ELGATO_EYETV_HYBRID_2008 }, { USB_DEVICE(0x0fd9, 0x0033), .driver_info = EM2860_BOARD_ELGATO_VIDEO_CAPTURE }, { USB_DEVICE(0x185b, 0x2870), .driver_info = EM2870_BOARD_COMPRO_VIDEOMATE }, { USB_DEVICE(0x185b, 0x2041), .driver_info = EM2820_BOARD_COMPRO_VIDEOMATE_FORYOU }, { USB_DEVICE(0x2040, 0x4200), .driver_info = EM2820_BOARD_HAUPPAUGE_WINTV_USB_2 }, { USB_DEVICE(0x2040, 0x4201), .driver_info = EM2820_BOARD_HAUPPAUGE_WINTV_USB_2 }, { USB_DEVICE(0x2040, 0x6500), .driver_info = EM2880_BOARD_HAUPPAUGE_WINTV_HVR_900 }, { USB_DEVICE(0x2040, 0x6502), .driver_info = EM2880_BOARD_HAUPPAUGE_WINTV_HVR_900_R2 }, { USB_DEVICE(0x2040, 0x6513), /* HCW HVR-980 */ .driver_info = EM2883_BOARD_HAUPPAUGE_WINTV_HVR_950 }, { USB_DEVICE(0x2040, 0x6517), /* HP HVR-950 */ .driver_info = EM2883_BOARD_HAUPPAUGE_WINTV_HVR_950 }, { USB_DEVICE(0x2040, 0x651b), /* RP HVR-950 */ .driver_info = EM2883_BOARD_HAUPPAUGE_WINTV_HVR_950 }, { USB_DEVICE(0x2040, 0x651f), .driver_info = EM2883_BOARD_HAUPPAUGE_WINTV_HVR_850 }, { USB_DEVICE(0x2040, 0x0265), .driver_info = EM28174_BOARD_HAUPPAUGE_WINTV_DUALHD_DVB }, { USB_DEVICE(0x2040, 0x8265), .driver_info = EM28174_BOARD_HAUPPAUGE_WINTV_DUALHD_DVB }, { USB_DEVICE(0x2040, 0x026d), .driver_info = EM28174_BOARD_HAUPPAUGE_WINTV_DUALHD_01595 }, { USB_DEVICE(0x2040, 0x826d), .driver_info = EM28174_BOARD_HAUPPAUGE_WINTV_DUALHD_01595 }, { USB_DEVICE(0x2040, 0x846d), .driver_info = EM2874_BOARD_HAUPPAUGE_USB_QUADHD }, { USB_DEVICE(0x0438, 0xb002), .driver_info = EM2880_BOARD_AMD_ATI_TV_WONDER_HD_600 }, { USB_DEVICE(0x2001, 0xf112), .driver_info = EM2820_BOARD_DLINK_USB_TV }, { USB_DEVICE(0x2304, 0x0207), .driver_info = EM2820_BOARD_PINNACLE_DVC_90 }, { USB_DEVICE(0x2304, 0x0208), .driver_info = EM2820_BOARD_PINNACLE_USB_2 }, { USB_DEVICE(0x2304, 0x021a), .driver_info = EM2820_BOARD_PINNACLE_DVC_90 }, { USB_DEVICE(0x2304, 0x0226), .driver_info = EM2882_BOARD_PINNACLE_HYBRID_PRO_330E }, { USB_DEVICE(0x2304, 0x0227), .driver_info = EM2880_BOARD_PINNACLE_PCTV_HD_PRO }, { USB_DEVICE(0x2304, 0x023f), .driver_info = EM2874_BOARD_PCTV_HD_MINI_80E }, { USB_DEVICE(0x0413, 0x6023), .driver_info = EM2800_BOARD_LEADTEK_WINFAST_USBII }, { USB_DEVICE(0x093b, 0xa003), .driver_info = EM2820_BOARD_PINNACLE_DVC_90 }, { USB_DEVICE(0x093b, 0xa005), .driver_info = EM2861_BOARD_PLEXTOR_PX_TV100U }, { USB_DEVICE(0x04bb, 0x0515), .driver_info = EM2820_BOARD_IODATA_GVMVP_SZ }, { USB_DEVICE(0xeb1a, 0x50a6), .driver_info = EM2860_BOARD_GADMEI_UTV330 }, { USB_DEVICE(0x1b80, 0xa340), .driver_info = EM2870_BOARD_KWORLD_A340 }, { USB_DEVICE(0x1b80, 0xe346), .driver_info = EM2874_BOARD_KWORLD_UB435Q_V2 }, { USB_DEVICE(0x1b80, 0xe34c), .driver_info = EM2874_BOARD_KWORLD_UB435Q_V3 }, { USB_DEVICE(0x2013, 0x024f), .driver_info = EM28174_BOARD_PCTV_290E }, { USB_DEVICE(0x2013, 0x024c), .driver_info = EM28174_BOARD_PCTV_460E }, { USB_DEVICE(0x2040, 0x1605), .driver_info = EM2884_BOARD_HAUPPAUGE_WINTV_HVR_930C }, { USB_DEVICE(0x1b80, 0xe755), .driver_info = EM2884_BOARD_C3TECH_DIGITAL_DUO }, { USB_DEVICE(0xeb1a, 0x5006), .driver_info = 
EM2860_BOARD_HT_VIDBOX_NW03 }, { USB_DEVICE(0x1b80, 0xe309), /* Sveon STV40 */ .driver_info = EM2860_BOARD_EASYCAP }, { USB_DEVICE(0x1b80, 0xe425), .driver_info = EM2874_BOARD_MAXMEDIA_UB425_TC }, { USB_DEVICE(0x1f4d, 0x1abe), .driver_info = EM2860_BOARD_MYGICA_IGRABBER }, { USB_DEVICE(0x2304, 0x0242), .driver_info = EM2884_BOARD_PCTV_510E }, { USB_DEVICE(0x2013, 0x0251), .driver_info = EM2884_BOARD_PCTV_520E }, { USB_DEVICE(0x1b80, 0xe1cc), .driver_info = EM2874_BOARD_DELOCK_61959 }, { USB_DEVICE(0x1ae7, 0x9003), .driver_info = EM2765_BOARD_SPEEDLINK_VAD_LAPLACE }, { USB_DEVICE(0x1ae7, 0x9004), .driver_info = EM2765_BOARD_SPEEDLINK_VAD_LAPLACE }, { USB_DEVICE(0x2013, 0x0258), .driver_info = EM28178_BOARD_PCTV_461E }, { USB_DEVICE(0x2013, 0x8258), /* Bulk transport 461e */ .driver_info = EM28178_BOARD_PCTV_461E }, { USB_DEVICE(0x2013, 0x0461), .driver_info = EM28178_BOARD_PCTV_461E_V2 }, { USB_DEVICE(0x2013, 0x8461), /* Bulk transport 461e v2 */ .driver_info = EM28178_BOARD_PCTV_461E_V2 }, { USB_DEVICE(0x2013, 0x0259), .driver_info = EM28178_BOARD_PCTV_461E_V2 }, { USB_DEVICE(0x2013, 0x025f), .driver_info = EM28178_BOARD_PCTV_292E }, { USB_DEVICE(0x2013, 0x0264), /* Hauppauge WinTV-soloHD 292e SE */ .driver_info = EM28178_BOARD_PCTV_292E }, { USB_DEVICE(0x2040, 0x0264), /* Hauppauge WinTV-soloHD Isoc */ .driver_info = EM28178_BOARD_PCTV_292E }, { USB_DEVICE(0x2040, 0x8264), /* Hauppauge OEM Generic WinTV-soloHD Bulk */ .driver_info = EM28178_BOARD_PCTV_292E }, { USB_DEVICE(0x2040, 0x8268), /* Hauppauge Retail WinTV-soloHD Bulk */ .driver_info = EM28178_BOARD_PCTV_292E }, { USB_DEVICE(0x0413, 0x6f07), .driver_info = EM2861_BOARD_LEADTEK_VC100 }, { USB_DEVICE(0xeb1a, 0x8179), .driver_info = EM28178_BOARD_TERRATEC_T2_STICK_HD }, { USB_DEVICE(0x3275, 0x0085), .driver_info = EM28178_BOARD_PLEX_PX_BCUD }, { USB_DEVICE(0xeb1a, 0x5051), /* Ion Video 2 PC MKII / Startech svid2usb23 / Raygo R12-41373 */ .driver_info = EM2860_BOARD_TVP5150_REFERENCE_DESIGN }, { USB_DEVICE(0x1b80, 0xe349), /* Magix USB Videowandler-2 */ .driver_info = EM2861_BOARD_MAGIX_VIDEOWANDLER2 }, { }, }; MODULE_DEVICE_TABLE(usb, em28xx_id_table); /* * EEPROM hash table for devices with generic USB IDs */ static const struct em28xx_hash_table em28xx_eeprom_hash[] = { /* P/N: SA 60002070465 Tuner: TVF7533-MF */ {0x6ce05a8f, EM2820_BOARD_PROLINK_PLAYTV_USB2, TUNER_YMEC_TVF_5533MF}, {0x72cc5a8b, EM2820_BOARD_PROLINK_PLAYTV_BOX4_USB2, TUNER_YMEC_TVF_5533MF}, {0x966a0441, EM2880_BOARD_KWORLD_DVB_310U, TUNER_XC2028}, {0x166a0441, EM2880_BOARD_EMPIRE_DUAL_TV, TUNER_XC2028}, {0xcee44a99, EM2882_BOARD_EVGA_INDTUBE, TUNER_XC2028}, {0xb8846b20, EM2881_BOARD_PINNACLE_HYBRID_PRO, TUNER_XC2028}, {0x63f653bd, EM2870_BOARD_REDDO_DVB_C_USB_BOX, TUNER_ABSENT}, {0x4e913442, EM2882_BOARD_DIKOM_DK300, TUNER_XC2028}, {0x85dd871e, EM2882_BOARD_ZOLID_HYBRID_TV_STICK, TUNER_XC2028}, {0x8f597549, EM2860_BOARD_MYGICA_UTV3, TUNER_TENA_TNF_931D_DFDR1}, }; /* I2C devicelist hash table for devices with generic USB IDs */ static const struct em28xx_hash_table em28xx_i2c_hash[] = { {0xb06a32c3, EM2800_BOARD_TERRATEC_CINERGY_200, TUNER_LG_PAL_NEW_TAPC}, {0xf51200e3, EM2800_BOARD_VGEAR_POCKETTV, TUNER_LG_PAL_NEW_TAPC}, {0x1ba50080, EM2860_BOARD_SAA711X_REFERENCE_DESIGN, TUNER_ABSENT}, {0x77800080, EM2860_BOARD_TVP5150_REFERENCE_DESIGN, TUNER_ABSENT}, {0xc51200e3, EM2820_BOARD_GADMEI_TVR200, TUNER_LG_PAL_NEW_TAPC}, {0x4ba50080, EM2861_BOARD_GADMEI_UTV330PLUS, TUNER_TNF_5335MF}, {0x6b800080, EM2874_BOARD_LEADERSHIP_ISDBT, TUNER_ABSENT}, {0x27e10080, 
EM2882_BOARD_ZOLID_HYBRID_TV_STICK, TUNER_XC2028},
	{0x840d0484, EM2860_BOARD_MYGICA_UTV3, TUNER_TENA_TNF_931D_DFDR1},
};

/* NOTE: introduce a separate hash table for devices with 16 bit eeproms */

int em28xx_tuner_callback(void *ptr, int component, int command, int arg)
{
	struct em28xx_i2c_bus *i2c_bus = ptr;
	struct em28xx *dev = i2c_bus->dev;
	int rc = 0;

	if (dev->tuner_type != TUNER_XC2028 && dev->tuner_type != TUNER_XC5000)
		return 0;

	if (command != XC2028_TUNER_RESET && command != XC5000_TUNER_RESET)
		return 0;

	rc = em28xx_gpio_set(dev, dev->board.tuner_gpio);
	return rc;
}
EXPORT_SYMBOL_GPL(em28xx_tuner_callback);

static inline void em28xx_set_xclk_i2c_speed(struct em28xx *dev)
{
	const struct em28xx_board *board = &em28xx_boards[dev->model];
	u8 xclk = board->xclk, i2c_speed = board->i2c_speed;

	/*
	 * These are the default values for the majority of boards.
	 * Use them unless the board entry specifies otherwise.
	 */
	if (!xclk)
		xclk = EM28XX_XCLK_IR_RC5_MODE | EM28XX_XCLK_FREQUENCY_12MHZ;

	em28xx_write_reg(dev, EM28XX_R0F_XCLK, xclk);

	if (!i2c_speed)
		i2c_speed = EM28XX_I2C_CLK_WAIT_ENABLE | EM28XX_I2C_FREQ_100_KHZ;

	dev->i2c_speed = i2c_speed & 0x03;

	if (!dev->board.is_em2800)
		em28xx_write_reg(dev, EM28XX_R06_I2C_CLK, i2c_speed);

	msleep(50);
}

static inline void em28xx_set_model(struct em28xx *dev)
{
	dev->board = em28xx_boards[dev->model];
	dev->has_msp34xx = dev->board.has_msp34xx;
	dev->is_webcam = dev->board.is_webcam;

	em28xx_set_xclk_i2c_speed(dev);

	/* Should be initialized early, for I2C to work */
	dev->def_i2c_bus = dev->board.def_i2c_bus;
}

/*
 * Wait until AC97_RESET reports the expected value reliably before
 * proceeding. We also check that two unrelated register accesses don't
 * return the same value, to avoid returning prematurely.
 * This procedure helps to ensure that AC97 register accesses are reliable.
 */
static int em28xx_wait_until_ac97_features_equals(struct em28xx *dev,
						  int expected_feat)
{
	unsigned long timeout = jiffies + msecs_to_jiffies(2000);
	int feat, powerdown;

	while (time_is_after_jiffies(timeout)) {
		feat = em28xx_read_ac97(dev, AC97_RESET);
		if (feat < 0)
			return feat;

		powerdown = em28xx_read_ac97(dev, AC97_POWERDOWN);
		if (powerdown < 0)
			return powerdown;

		if (feat == expected_feat && feat != powerdown)
			return 0;

		msleep(50);
	}

	dev_warn(&dev->intf->dev, "AC97 register access is not reliable!\n");
	return -ETIMEDOUT;
}

/*
 * Since em28xx_pre_card_setup() requires a proper dev->model,
 * this won't work for boards with generic USB IDs.
 */
static void em28xx_pre_card_setup(struct em28xx *dev)
{
	/*
	 * Set the initial XCLK and I2C clock values based on the board
	 * definition.
	 */
	em28xx_set_xclk_i2c_speed(dev);

	/* request some modules */
	switch (dev->model) {
	case EM2861_BOARD_PLEXTOR_PX_TV100U:
		/* Sets the msp34xx I2S speed */
		dev->i2s_speed = 2048000;
		break;
	case EM2861_BOARD_KWORLD_PVRTV_300U:
	case EM2880_BOARD_KWORLD_DVB_305U:
		em28xx_write_reg(dev, EM2820_R08_GPIO_CTRL, 0x6d);
		usleep_range(10000, 11000);
		em28xx_write_reg(dev, EM2820_R08_GPIO_CTRL, 0x7d);
		usleep_range(10000, 11000);
		break;
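	/*
	 * (A note on the sequences below: each bring-up follows the same
	 * write-then-settle pattern - poke EM2820_R08_GPIO_CTRL or
	 * EM2880_R04_GPO, then wait roughly 10-70 ms for the external chip
	 * to react. The values and delays are board-specific and presumably
	 * came from vendor drivers or bus traces, so they should not be
	 * "simplified" without the hardware at hand.)
	 */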
	case EM2870_BOARD_COMPRO_VIDEOMATE:
		/*
		 * TODO: someone can do some cleanup here...
		 * not everything's needed
		 */
		em28xx_write_reg(dev, EM2880_R04_GPO, 0x00);
		usleep_range(10000, 11000);
		em28xx_write_reg(dev, EM2880_R04_GPO, 0x01);
		usleep_range(10000, 11000);
		em28xx_write_reg(dev, EM2820_R08_GPIO_CTRL, 0xfd);
		msleep(70);
		em28xx_write_reg(dev, EM2820_R08_GPIO_CTRL, 0xfc);
		msleep(70);
		em28xx_write_reg(dev, EM2820_R08_GPIO_CTRL, 0xdc);
		msleep(70);
		em28xx_write_reg(dev, EM2820_R08_GPIO_CTRL, 0xfc);
		msleep(70);
		break;
	case EM2870_BOARD_TERRATEC_XS_MT2060:
		/*
		 * This device needs some gpio writes to get the DVB-T
		 * demod working.
		 */
		em28xx_write_reg(dev, EM2820_R08_GPIO_CTRL, 0xfe);
		msleep(70);
		em28xx_write_reg(dev, EM2820_R08_GPIO_CTRL, 0xde);
		msleep(70);
		em28xx_write_reg(dev, EM2820_R08_GPIO_CTRL, 0xfe);
		msleep(70);
		break;
	case EM2870_BOARD_PINNACLE_PCTV_DVB:
		/*
		 * This device needs some gpio writes to get the DVB-T
		 * demod working.
		 */
		em28xx_write_reg(dev, EM2820_R08_GPIO_CTRL, 0xfe);
		msleep(70);
		em28xx_write_reg(dev, EM2820_R08_GPIO_CTRL, 0xde);
		msleep(70);
		em28xx_write_reg(dev, EM2820_R08_GPIO_CTRL, 0xfe);
		msleep(70);
		break;
	case EM2820_BOARD_GADMEI_UTV310:
	case EM2820_BOARD_MSI_VOX_USB_2:
		/* enables audio for those devices */
		em28xx_write_reg(dev, EM2820_R08_GPIO_CTRL, 0xfd);
		break;
	case EM2882_BOARD_KWORLD_ATSC_315U:
		em28xx_write_reg(dev, EM2820_R08_GPIO_CTRL, 0xff);
		usleep_range(10000, 11000);
		em28xx_write_reg(dev, EM2820_R08_GPIO_CTRL, 0xfe);
		usleep_range(10000, 11000);
		em28xx_write_reg(dev, EM2880_R04_GPO, 0x00);
		usleep_range(10000, 11000);
		em28xx_write_reg(dev, EM2880_R04_GPO, 0x08);
		usleep_range(10000, 11000);
		break;
	case EM2860_BOARD_KAIOMY_TVNPC_U2:
		em28xx_write_regs(dev, EM28XX_R0F_XCLK, "\x07", 1);
		em28xx_write_regs(dev, EM28XX_R06_I2C_CLK, "\x40", 1);
		em28xx_write_regs(dev, 0x0d, "\x42", 1);
		em28xx_write_regs(dev, 0x08, "\xfd", 1);
		usleep_range(10000, 11000);
		em28xx_write_regs(dev, 0x08, "\xff", 1);
		usleep_range(10000, 11000);
		em28xx_write_regs(dev, 0x08, "\x7f", 1);
		usleep_range(10000, 11000);
		em28xx_write_regs(dev, 0x08, "\x6b", 1);
		break;
	case EM2860_BOARD_EASYCAP:
		em28xx_write_regs(dev, 0x08, "\xf8", 1);
		break;
	case EM2820_BOARD_IODATA_GVMVP_SZ:
		em28xx_write_reg(dev, EM2820_R08_GPIO_CTRL, 0xff);
		msleep(70);
		em28xx_write_reg(dev, EM2820_R08_GPIO_CTRL, 0xf7);
		usleep_range(10000, 11000);
		em28xx_write_reg(dev, EM2820_R08_GPIO_CTRL, 0xfe);
		msleep(70);
		em28xx_write_reg(dev, EM2820_R08_GPIO_CTRL, 0xfd);
		msleep(70);
		break;
	case EM2860_BOARD_TERRATEC_GRABBY:
		/*
		 * HACK?: Ensure AC97 register reading is reliable before
		 * proceeding. In practice, this will wait about 1.6 seconds.
		 */
		em28xx_wait_until_ac97_features_equals(dev, 0x6a90);
		break;
	}

	em28xx_gpio_set(dev, dev->board.tuner_gpio);
	em28xx_set_mode(dev, EM28XX_ANALOG_MODE);

	/* Unlock device */
	em28xx_set_mode(dev, EM28XX_SUSPEND);
}

static int em28xx_hint_board(struct em28xx *dev)
{
	int i;

	if (dev->is_webcam) {
		if (dev->em28xx_sensor == EM28XX_MT9V011) {
			dev->model = EM2820_BOARD_SILVERCREST_WEBCAM;
		} else if (dev->em28xx_sensor == EM28XX_MT9M001 ||
			   dev->em28xx_sensor == EM28XX_MT9M111) {
			dev->model = EM2750_BOARD_UNKNOWN;
		}
		/* FIXME: IMPROVE! */
		return 0;
	}

	/*
	 * HINT method: EEPROM
	 *
	 * This method works only for boards with an eeprom.
	 * It uses a hash of all eeprom bytes. The hash should be
	 * unique for a vendor/tuner pair.
	 * There is a high chance that tuners for different
	 * video standards produce different hashes.
	 */
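	/*
	 * (Sketch of the mechanism: dev->hash is a 32-bit digest computed
	 * elsewhere over the raw eeprom bytes, so two sticks with identical
	 * eeprom contents - typically the same vendor/tuner build - resolve
	 * to the same em28xx_eeprom_hash[] entry in the loop below.)
	 */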
	for (i = 0; i < ARRAY_SIZE(em28xx_eeprom_hash); i++) {
		if (dev->hash == em28xx_eeprom_hash[i].hash) {
			dev->model = em28xx_eeprom_hash[i].model;
			dev->tuner_type = em28xx_eeprom_hash[i].tuner;
			dev_err(&dev->intf->dev,
				"Your board has no unique USB ID.\n"
				"A hint was successfully made, based on the eeprom hash.\n"
				"This method is not 100%% foolproof.\n"
				"If the board was misdetected, please email this log to:\n"
				"\tV4L Mailing List <linux-media@vger.kernel.org>\n"
				"Board detected as %s\n",
				em28xx_boards[dev->model].name);
			return 0;
		}
	}

	/*
	 * HINT method: I2C attached devices
	 *
	 * This method works for all boards.
	 * It uses a hash of the i2c scanned devices.
	 * Devices with the same i2c attached chips will
	 * be considered equal.
	 * This method is less precise than the eeprom one.
	 */

	/* user did not request i2c scanning => do it now */
	if (!dev->i2c_hash)
		em28xx_do_i2c_scan(dev, dev->def_i2c_bus);

	for (i = 0; i < ARRAY_SIZE(em28xx_i2c_hash); i++) {
		if (dev->i2c_hash == em28xx_i2c_hash[i].hash) {
			dev->model = em28xx_i2c_hash[i].model;
			dev->tuner_type = em28xx_i2c_hash[i].tuner;
			dev_err(&dev->intf->dev,
				"Your board has no unique USB ID.\n"
				"A hint was successfully made, based on the i2c devicelist hash.\n"
				"This method is not 100%% foolproof.\n"
				"If the board was misdetected, please email this log to:\n"
				"\tV4L Mailing List <linux-media@vger.kernel.org>\n"
				"Board detected as %s\n",
				em28xx_boards[dev->model].name);
			return 0;
		}
	}

	dev_err(&dev->intf->dev,
		"Your board has no unique USB ID and thus needs a hint to be detected.\n"
		"You may try to use the card=<n> insmod option to work around that.\n"
		"Please send an email with this log to:\n"
		"\tV4L Mailing List <linux-media@vger.kernel.org>\n"
		"Board eeprom hash is 0x%08lx\n"
		"Board i2c devicelist hash is 0x%08lx\n",
		dev->hash, dev->i2c_hash);

	dev_err(&dev->intf->dev,
		"Here is a list of valid choices for the card=<n> insmod option:\n");
	for (i = 0; i < em28xx_bcount; i++) {
		dev_err(&dev->intf->dev,
			" card=%d -> %s\n", i, em28xx_boards[i].name);
	}
	return -1;
}

static void em28xx_card_setup(struct em28xx *dev)
{
	int i, j, idx;
	bool duplicate_entry;

	/*
	 * If the device can be a webcam, seek for a sensor.
	 * If a sensor is not found, then it isn't a webcam.
	 */
	if (dev->is_webcam) {
		em28xx_detect_sensor(dev);
		if (dev->em28xx_sensor == EM28XX_NOSENSOR)
			/* NOTE: error/unknown sensor/no sensor */
			dev->is_webcam = 0;
	}

	switch (dev->model) {
	case EM2750_BOARD_UNKNOWN:
	case EM2820_BOARD_UNKNOWN:
	case EM2800_BOARD_UNKNOWN:
		/*
		 * The K-WORLD DVB-T 310U is detected as an MSI Digivox AD.
		 *
		 * This occurs because they share identical USB vendor and
		 * product IDs.
		 *
		 * What we do here is look up the EEPROM hash of the K-WORLD
		 * and if it is found then we decide that we do not have
		 * a DIGIVOX and reset the device to the K-WORLD instead.
		 *
		 * This solution is only valid if they do not share eeprom
		 * hash identities, which has not been determined as yet.
		 */
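		/*
		 * (If both hints guess wrong, detection can be forced with
		 * the card=<n> module parameter, using an index from the
		 * list this driver prints on failure, e.g.:
		 *
		 *	modprobe em28xx card=10
		 *
		 * The index 10 is only an example value.)
		 */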
		if (em28xx_hint_board(dev) < 0) {
			dev_err(&dev->intf->dev, "Board not discovered\n");
		} else {
			em28xx_set_model(dev);
			em28xx_pre_card_setup(dev);
		}
		break;
	default:
		em28xx_set_model(dev);
	}

	dev_info(&dev->intf->dev, "Identified as %s (card=%d)\n",
		 dev->board.name, dev->model);

	dev->tuner_type = em28xx_boards[dev->model].tuner_type;

	/* request some modules */
	switch (dev->model) {
	case EM2820_BOARD_HAUPPAUGE_WINTV_USB_2:
	case EM2880_BOARD_HAUPPAUGE_WINTV_HVR_900:
	case EM2880_BOARD_HAUPPAUGE_WINTV_HVR_900_R2:
	case EM2883_BOARD_HAUPPAUGE_WINTV_HVR_850:
	case EM2883_BOARD_HAUPPAUGE_WINTV_HVR_950:
	case EM2884_BOARD_HAUPPAUGE_WINTV_HVR_930C:
	case EM28174_BOARD_HAUPPAUGE_WINTV_DUALHD_DVB:
	case EM28174_BOARD_HAUPPAUGE_WINTV_DUALHD_01595: {
		struct tveeprom tv;

		if (!dev->eedata)
			break;
#if defined(CONFIG_MODULES) && defined(MODULE)
		request_module("tveeprom");
#endif
		/* Call TVeeprom first */
		tveeprom_hauppauge_analog(&tv, dev->eedata);

		dev->tuner_type = tv.tuner_type;

		if (tv.audio_processor == TVEEPROM_AUDPROC_MSP) {
			dev->i2s_speed = 2048000;
			dev->has_msp34xx = 1;
		}
		break;
	}
	case EM2882_BOARD_KWORLD_ATSC_315U:
		em28xx_write_reg(dev, 0x0d, 0x42);
		usleep_range(10000, 11000);
		em28xx_write_reg(dev, EM2820_R08_GPIO_CTRL, 0xfd);
		usleep_range(10000, 11000);
		break;
	case EM2820_BOARD_KWORLD_PVRTV2800RF:
		/* GPIO enables sound on KWORLD PVR TV 2800RF */
		em28xx_write_reg(dev, EM2820_R08_GPIO_CTRL, 0xf9);
		break;
	case EM2820_BOARD_UNKNOWN:
	case EM2800_BOARD_UNKNOWN:
		/*
		 * The K-WORLD DVB-T 310U is detected as an MSI Digivox AD.
		 *
		 * This occurs because they share identical USB vendor and
		 * product IDs.
		 *
		 * What we do here is look up the EEPROM hash of the K-WORLD
		 * and if it is found then we decide that we do not have
		 * a DIGIVOX and reset the device to the K-WORLD instead.
		 *
		 * This solution is only valid if they do not share eeprom
		 * hash identities, which has not been determined as yet.
		 */
	case EM2880_BOARD_MSI_DIGIVOX_AD:
		if (!em28xx_hint_board(dev))
			em28xx_set_model(dev);

		/*
		 * In cases where we had to use a board hint, the call to
		 * em28xx_set_mode() in em28xx_pre_card_setup() was a no-op,
		 * so make the call now so the analog GPIOs are set properly
		 * before probing the i2c bus.
		 */
		em28xx_gpio_set(dev, dev->board.tuner_gpio);
		em28xx_set_mode(dev, EM28XX_ANALOG_MODE);
		break;

	/*
	 * The Dikom DK300 is detected as a Kworld VS-DVB-T 323UR.
	 *
	 * This occurs because they share identical USB vendor and
	 * product IDs.
	 *
	 * What we do here is look up the EEPROM hash of the Dikom
	 * and if it is found then we decide that we do not have
	 * a Kworld and reset the device to the Dikom instead.
	 *
	 * This solution is only valid if they do not share eeprom
	 * hash identities, which has not been determined as yet.
	 */
	case EM2882_BOARD_KWORLD_VS_DVBT:
		if (!em28xx_hint_board(dev))
			em28xx_set_model(dev);

		/*
		 * In cases where we had to use a board hint, the call to
		 * em28xx_set_mode() in em28xx_pre_card_setup() was a no-op,
		 * so make the call now so the analog GPIOs are set properly
		 * before probing the i2c bus.
		 */
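		/*
		 * (em28xx_gpio_set() walks the board's em28xx_reg_seq array,
		 * writing each register/value pair and honouring its
		 * per-entry delay until the terminating entry is reached -
		 * the same mechanism behind the .gpio fields in the board
		 * table above.)
		 */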
		em28xx_gpio_set(dev, dev->board.tuner_gpio);
		em28xx_set_mode(dev, EM28XX_ANALOG_MODE);
		break;
	}

	if (dev->board.valid == EM28XX_BOARD_NOT_VALIDATED) {
		dev_err(&dev->intf->dev,
			"\n\n"
			"The support for this board hasn't been validated yet.\n"
			"Please send a report of having this working\n"
			"to the V4L mailing list (and/or to other addresses)\n\n");
	}

	/* Free eeprom data memory */
	kfree(dev->eedata);
	dev->eedata = NULL;

	/* Allow overriding the tuner type via a module parameter */
	if (tuner >= 0)
		dev->tuner_type = tuner;

	/*
	 * Dynamically generate a list of valid audio inputs for this
	 * specific board, mapping them via enum em28xx_amux.
	 */
	idx = 0;
	for (i = 0; i < MAX_EM28XX_INPUT; i++) {
		if (!INPUT(i)->type)
			continue;

		/* Skip already mapped audio inputs */
		duplicate_entry = false;
		for (j = 0; j < idx; j++) {
			if (INPUT(i)->amux == dev->amux_map[j]) {
				duplicate_entry = true;
				break;
			}
		}
		if (duplicate_entry)
			continue;

		dev->amux_map[idx++] = INPUT(i)->amux;
	}
	for (; idx < MAX_EM28XX_INPUT; idx++)
		dev->amux_map[idx] = EM28XX_AMUX_UNUSED;
}

void em28xx_setup_xc3028(struct em28xx *dev, struct xc2028_ctrl *ctl)
{
	memset(ctl, 0, sizeof(*ctl));

	ctl->fname = XC2028_DEFAULT_FIRMWARE;
	ctl->max_len = 64;
	ctl->mts = em28xx_boards[dev->model].mts_firmware;

	switch (dev->model) {
	case EM2880_BOARD_EMPIRE_DUAL_TV:
	case EM2880_BOARD_HAUPPAUGE_WINTV_HVR_900:
	case EM2882_BOARD_TERRATEC_HYBRID_XS:
	case EM2880_BOARD_TERRATEC_HYBRID_XS:
	case EM2880_BOARD_TERRATEC_HYBRID_XS_FR:
	case EM2881_BOARD_PINNACLE_HYBRID_PRO:
	case EM2882_BOARD_ZOLID_HYBRID_TV_STICK:
		ctl->demod = XC3028_FE_ZARLINK456;
		break;
	case EM2880_BOARD_HAUPPAUGE_WINTV_HVR_900_R2:
	case EM2882_BOARD_PINNACLE_HYBRID_PRO_330E:
		ctl->demod = XC3028_FE_DEFAULT;
		break;
	case EM2880_BOARD_AMD_ATI_TV_WONDER_HD_600:
		ctl->demod = XC3028_FE_DEFAULT;
		ctl->fname = XC3028L_DEFAULT_FIRMWARE;
		break;
	case EM2883_BOARD_HAUPPAUGE_WINTV_HVR_850:
	case EM2883_BOARD_HAUPPAUGE_WINTV_HVR_950:
	case EM2880_BOARD_PINNACLE_PCTV_HD_PRO:
		/* FIXME: Better to specify the needed IF */
		ctl->demod = XC3028_FE_DEFAULT;
		break;
	case EM2883_BOARD_KWORLD_HYBRID_330U:
	case EM2882_BOARD_DIKOM_DK300:
	case EM2882_BOARD_KWORLD_VS_DVBT:
		ctl->demod = XC3028_FE_CHINA;
		ctl->fname = XC2028_DEFAULT_FIRMWARE;
		break;
	case EM2882_BOARD_EVGA_INDTUBE:
		ctl->demod = XC3028_FE_CHINA;
		ctl->fname = XC3028L_DEFAULT_FIRMWARE;
		break;
	default:
		ctl->demod = XC3028_FE_OREN538;
	}
}
EXPORT_SYMBOL_GPL(em28xx_setup_xc3028);

static void request_module_async(struct work_struct *work)
{
	struct em28xx *dev = container_of(work,
					  struct em28xx, request_module_wk);

	/*
	 * The em28xx extensions can be modules or builtin. If the
	 * modules are already loaded or are built in, those extensions
	 * can be initialised right now. Otherwise, the module init
	 * code will do it.
	 */

	/*
	 * Devices with an audio-only intf also have a V4L/DVB/RC
	 * intf. Don't register extensions twice on those devices.
	 */
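	/*
	 * (request_module() blocks on the user space module loader, which
	 * is why these requests run from a workqueue - see
	 * request_modules() below - instead of directly in the USB probe
	 * path.)
	 */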
	if (dev->is_audio_only) {
#if defined(CONFIG_MODULES) && defined(MODULE)
		request_module("em28xx-alsa");
#endif
		return;
	}

	em28xx_init_extension(dev);

#if defined(CONFIG_MODULES) && defined(MODULE)
	if (dev->has_video)
		request_module("em28xx-v4l");
	if (dev->usb_audio_type == EM28XX_USB_AUDIO_CLASS)
		request_module("snd-usb-audio");
	else if (dev->usb_audio_type == EM28XX_USB_AUDIO_VENDOR)
		request_module("em28xx-alsa");
	if (dev->board.has_dvb)
		request_module("em28xx-dvb");
	if (dev->board.buttons ||
	    ((dev->board.ir_codes || dev->board.has_ir_i2c) && !disable_ir))
		request_module("em28xx-rc");
#endif /* CONFIG_MODULES */
}

static void request_modules(struct em28xx *dev)
{
	INIT_WORK(&dev->request_module_wk, request_module_async);
	schedule_work(&dev->request_module_wk);
}

static void flush_request_modules(struct em28xx *dev)
{
	flush_work(&dev->request_module_wk);
}

static int em28xx_media_device_init(struct em28xx *dev,
				    struct usb_device *udev)
{
#ifdef CONFIG_MEDIA_CONTROLLER
	struct media_device *mdev;

	mdev = kzalloc(sizeof(*mdev), GFP_KERNEL);
	if (!mdev)
		return -ENOMEM;

	if (udev->product)
		media_device_usb_init(mdev, udev, udev->product);
	else if (udev->manufacturer)
		media_device_usb_init(mdev, udev, udev->manufacturer);
	else
		media_device_usb_init(mdev, udev, dev_name(&dev->intf->dev));

	dev->media_dev = mdev;
#endif
	return 0;
}

static void em28xx_unregister_media_device(struct em28xx *dev)
{
#ifdef CONFIG_MEDIA_CONTROLLER
	if (dev->media_dev) {
		media_device_unregister(dev->media_dev);
		media_device_cleanup(dev->media_dev);
		kfree(dev->media_dev);
		dev->media_dev = NULL;
	}
#endif
}

/*
 * em28xx_release_resources()
 * unregisters the v4l2, i2c and usb devices
 * called when the device gets disconnected or at module unload
 */
static void em28xx_release_resources(struct em28xx *dev)
{
	struct usb_device *udev = interface_to_usbdev(dev->intf);

	/* FIXME: I2C IR should be disconnected */
	mutex_lock(&dev->lock);

	em28xx_unregister_media_device(dev);

	if (dev->def_i2c_bus)
		em28xx_i2c_unregister(dev, 1);
	em28xx_i2c_unregister(dev, 0);

	if (dev->ts == PRIMARY_TS)
		usb_put_dev(udev);

	/* Mark device as unused */
	clear_bit(dev->devno, em28xx_devused);

	mutex_unlock(&dev->lock);
}

/**
 * em28xx_free_device() - Free em28xx device
 *
 * @ref: struct kref for em28xx device
 *
 * This is called when all extensions and the em28xx core have
 * unregistered the device.
 */
void em28xx_free_device(struct kref *ref)
{
	struct em28xx *dev = kref_to_dev(ref);

	dev_info(&dev->intf->dev, "Freeing device\n");

	if (!dev->disconnected)
		em28xx_release_resources(dev);

	if (dev->ts == PRIMARY_TS)
		kfree(dev->alt_max_pkt_size_isoc);

	kfree(dev);
}
EXPORT_SYMBOL_GPL(em28xx_free_device);

/*
 * em28xx_init_dev()
 * allocates and inits the device structs, registers i2c bus and v4l device
 */
static int em28xx_init_dev(struct em28xx *dev, struct usb_device *udev,
			   struct usb_interface *intf, int minor)
{
	int retval;
	const char *chip_name = NULL;

	dev->intf = intf;
	mutex_init(&dev->ctrl_urb_lock);
	spin_lock_init(&dev->slock);

	dev->em28xx_write_regs = em28xx_write_regs;
	dev->em28xx_read_reg = em28xx_read_reg;
	dev->em28xx_read_reg_req_len = em28xx_read_reg_req_len;
	dev->em28xx_write_regs_req = em28xx_write_regs_req;
	dev->em28xx_read_reg_req = em28xx_read_reg_req;
	dev->board.is_em2800 = em28xx_boards[dev->model].is_em2800;

	em28xx_set_model(dev);

	dev->wait_after_write = 5;

	/* Based on the Chip ID, set the device configuration */
	retval = em28xx_read_reg(dev, EM28XX_R0A_CHIPID);
	if (retval > 0) {
		dev->chip_id = retval;

		switch (dev->chip_id) {
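		/*
		 * (The chip ID register distinguishes generations that share
		 * USB IDs. Note that the newer parts - em2765, em2874,
		 * em28174, em28178, em2884 - also flag 16-bit eeprom
		 * addressing here, which is presumably what the "separate
		 * hash table" NOTE above em28xx_tuner_callback() refers to.)
		 */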
		case CHIP_ID_EM2800:
			chip_name = "em2800";
			break;
		case CHIP_ID_EM2710:
			chip_name = "em2710";
			break;
		case CHIP_ID_EM2750:
			chip_name = "em2750";
			break;
		case CHIP_ID_EM2765:
			chip_name = "em2765";
			dev->wait_after_write = 0;
			dev->is_em25xx = 1;
			dev->eeprom_addrwidth_16bit = 1;
			break;
		case CHIP_ID_EM2820:
			chip_name = "em2710/2820";
			if (le16_to_cpu(udev->descriptor.idVendor) == 0xeb1a) {
				__le16 idProd = udev->descriptor.idProduct;

				if (le16_to_cpu(idProd) == 0x2710)
					chip_name = "em2710";
				else if (le16_to_cpu(idProd) == 0x2820)
					chip_name = "em2820";
			}
			/* NOTE: the em2820 is used in webcams, too! */
			break;
		case CHIP_ID_EM2840:
			chip_name = "em2840";
			break;
		case CHIP_ID_EM2860:
			chip_name = "em2860";
			break;
		case CHIP_ID_EM2870:
			chip_name = "em2870";
			dev->wait_after_write = 0;
			break;
		case CHIP_ID_EM2874:
			chip_name = "em2874";
			dev->wait_after_write = 0;
			dev->eeprom_addrwidth_16bit = 1;
			break;
		case CHIP_ID_EM28174:
			chip_name = "em28174";
			dev->wait_after_write = 0;
			dev->eeprom_addrwidth_16bit = 1;
			break;
		case CHIP_ID_EM28178:
			chip_name = "em28178";
			dev->wait_after_write = 0;
			dev->eeprom_addrwidth_16bit = 1;
			break;
		case CHIP_ID_EM2883:
			chip_name = "em2882/3";
			dev->wait_after_write = 0;
			break;
		case CHIP_ID_EM2884:
			chip_name = "em2884";
			dev->wait_after_write = 0;
			dev->eeprom_addrwidth_16bit = 1;
			break;
		}
	}

	if (!chip_name)
		dev_info(&dev->intf->dev,
			 "unknown em28xx chip ID (%d)\n", dev->chip_id);
	else
		dev_info(&dev->intf->dev, "chip ID is %s\n", chip_name);

	em28xx_media_device_init(dev, udev);

	if (dev->is_audio_only) {
		retval = em28xx_audio_setup(dev);
		if (retval) {
			retval = -ENODEV;
			goto err_deinit_media;
		}
		em28xx_init_extension(dev);

		return 0;
	}

	em28xx_pre_card_setup(dev);

	rt_mutex_init(&dev->i2c_bus_lock);

	/* register i2c bus 0 */
	if (dev->board.is_em2800)
		retval = em28xx_i2c_register(dev, 0, EM28XX_I2C_ALGO_EM2800);
	else
		retval = em28xx_i2c_register(dev, 0, EM28XX_I2C_ALGO_EM28XX);
	if (retval < 0) {
		dev_err(&dev->intf->dev,
			"%s: em28xx_i2c_register bus 0 - error [%d]!\n",
			__func__, retval);
		goto err_deinit_media;
	}

	/* register i2c bus 1 */
	if (dev->def_i2c_bus) {
		if (dev->is_em25xx)
			retval = em28xx_i2c_register(dev, 1,
						     EM28XX_I2C_ALGO_EM25XX_BUS_B);
		else
			retval = em28xx_i2c_register(dev, 1,
						     EM28XX_I2C_ALGO_EM28XX);
		if (retval < 0) {
			dev_err(&dev->intf->dev,
				"%s: em28xx_i2c_register bus 1 - error [%d]!\n",
				__func__, retval);
			goto err_unreg_i2c;
		}
	}

	/* Do board specific init and eeprom reading */
	em28xx_card_setup(dev);

	return 0;

err_unreg_i2c:
	em28xx_i2c_unregister(dev, 0);
err_deinit_media:
	em28xx_unregister_media_device(dev);
	return retval;
}

static int em28xx_duplicate_dev(struct em28xx *dev)
{
	int nr;
	struct em28xx *sec_dev = kmemdup(dev, sizeof(*sec_dev), GFP_KERNEL);

	if (!sec_dev) {
		dev->dev_next = NULL;
		return -ENOMEM;
	}
	/* Find the next free device slot and mark it as used */
	do {
		nr = find_first_zero_bit(em28xx_devused, EM28XX_MAXBOARDS);
		if (nr >= EM28XX_MAXBOARDS) {
			/* No free device slots */
			dev_warn(&dev->intf->dev,
				 "Supports only %i em28xx boards.\n",
				 EM28XX_MAXBOARDS);
			kfree(sec_dev);
			dev->dev_next = NULL;
			return -ENOMEM;
		}
	} while (test_and_set_bit(nr, em28xx_devused));
	sec_dev->devno = nr;
	snprintf(sec_dev->name, 28, "em28xx #%d", nr);
	sec_dev->dev_next = NULL;
	dev->dev_next = sec_dev;
	return 0;
}

/* high bandwidth multiplier, as encoded in high-speed endpoint descriptors */
#define hb_mult(wMaxPacketSize) (1 + (((wMaxPacketSize) >> 11) & 0x03))

static void em28xx_check_usb_descriptor(struct em28xx *dev,
					struct usb_device *udev,
					struct usb_interface *intf,
					int alt, int ep,
					bool *has_vendor_audio,
					bool *has_video, bool *has_dvb)
{
	const
struct usb_endpoint_descriptor *e; int sizedescr, size; /* * NOTE: * * Old logic with support for isoc transfers only was: * 0x82 isoc => analog * 0x83 isoc => audio * 0x84 isoc => digital * * New logic with support for bulk transfers * 0x82 isoc => analog * 0x82 bulk => analog * 0x83 isoc* => audio * 0x84 isoc => digital * 0x84 bulk => analog or digital** * 0x85 isoc => digital TS2 * 0x85 bulk => digital TS2 * (*: audio should always be isoc) * (**: analog, if ep 0x82 is isoc, otherwise digital) * * The new logic preserves backwards compatibility and * reflects the endpoint configurations we have seen * so far. But there might be devices for which this * logic is not sufficient... */ e = &intf->altsetting[alt].endpoint[ep].desc; if (!usb_endpoint_dir_in(e)) return; sizedescr = le16_to_cpu(e->wMaxPacketSize); size = sizedescr & 0x7ff; if (udev->speed == USB_SPEED_HIGH) size = size * hb_mult(sizedescr); /* Only inspect input endpoints */ switch (e->bEndpointAddress) { case 0x82: *has_video = true; if (usb_endpoint_xfer_isoc(e)) { dev->analog_ep_isoc = e->bEndpointAddress; dev->alt_max_pkt_size_isoc[alt] = size; } else if (usb_endpoint_xfer_bulk(e)) { dev->analog_ep_bulk = e->bEndpointAddress; } return; case 0x83: if (usb_endpoint_xfer_isoc(e)) *has_vendor_audio = true; else dev_err(&intf->dev, "error: skipping audio endpoint 0x83, because it uses bulk transfers !\n"); return; case 0x84: if (*has_video && (usb_endpoint_xfer_bulk(e))) { dev->analog_ep_bulk = e->bEndpointAddress; } else { if (usb_endpoint_xfer_isoc(e)) { if (size > dev->dvb_max_pkt_size_isoc) { /* * 2) some manufacturers (e.g. Terratec) * disable endpoints by setting * wMaxPacketSize to 0 bytes for all * alt settings. So far, we've seen * this for DVB isoc endpoints only. */ *has_dvb = true; dev->dvb_ep_isoc = e->bEndpointAddress; dev->dvb_max_pkt_size_isoc = size; dev->dvb_alt_isoc = alt; } } else { *has_dvb = true; dev->dvb_ep_bulk = e->bEndpointAddress; } } return; case 0x85: if (usb_endpoint_xfer_isoc(e)) { if (size > dev->dvb_max_pkt_size_isoc_ts2) { dev->dvb_ep_isoc_ts2 = e->bEndpointAddress; dev->dvb_max_pkt_size_isoc_ts2 = size; dev->dvb_alt_isoc = alt; } } else { dev->dvb_ep_bulk_ts2 = e->bEndpointAddress; } return; } } /* * em28xx_usb_probe() * checks for supported devices */ static int em28xx_usb_probe(struct usb_interface *intf, const struct usb_device_id *id) { struct usb_device *udev; struct em28xx *dev = NULL; int retval; bool has_vendor_audio = false, has_video = false, has_dvb = false; int i, nr, try_bulk; const int ifnum = intf->altsetting[0].desc.bInterfaceNumber; char *speed; udev = usb_get_dev(interface_to_usbdev(intf)); /* Check to see next free device and mark as used */ do { nr = find_first_zero_bit(em28xx_devused, EM28XX_MAXBOARDS); if (nr >= EM28XX_MAXBOARDS) { /* No free device slots */ dev_err(&intf->dev, "Driver supports up to %i em28xx boards.\n", EM28XX_MAXBOARDS); retval = -ENOMEM; goto err_no_slot; } } while (test_and_set_bit(nr, em28xx_devused)); /* Don't register audio interfaces */ if (intf->altsetting[0].desc.bInterfaceClass == USB_CLASS_AUDIO) { dev_info(&intf->dev, "audio device (%04x:%04x): interface %i, class %i\n", le16_to_cpu(udev->descriptor.idVendor), le16_to_cpu(udev->descriptor.idProduct), ifnum, intf->altsetting[0].desc.bInterfaceClass); retval = -ENODEV; goto err; } /* allocate memory for our device state and initialize it */ dev = kzalloc(sizeof(*dev), GFP_KERNEL); if (!dev) { retval = -ENOMEM; goto err; } /* compute alternate max packet sizes */ dev->alt_max_pkt_size_isoc = 
kcalloc(intf->num_altsetting, sizeof(dev->alt_max_pkt_size_isoc[0]), GFP_KERNEL); if (!dev->alt_max_pkt_size_isoc) { kfree(dev); retval = -ENOMEM; goto err; } /* Get endpoints */ for (i = 0; i < intf->num_altsetting; i++) { int ep; for (ep = 0; ep < intf->altsetting[i].desc.bNumEndpoints; ep++) em28xx_check_usb_descriptor(dev, udev, intf, i, ep, &has_vendor_audio, &has_video, &has_dvb); } if (!(has_vendor_audio || has_video || has_dvb)) { retval = -ENODEV; goto err_free; } switch (udev->speed) { case USB_SPEED_LOW: speed = "1.5"; break; case USB_SPEED_UNKNOWN: case USB_SPEED_FULL: speed = "12"; break; case USB_SPEED_HIGH: speed = "480"; break; default: speed = "unknown"; } dev_info(&intf->dev, "New device %s %s @ %s Mbps (%04x:%04x, interface %d, class %d)\n", udev->manufacturer ? udev->manufacturer : "", udev->product ? udev->product : "", speed, le16_to_cpu(udev->descriptor.idVendor), le16_to_cpu(udev->descriptor.idProduct), ifnum, intf->altsetting->desc.bInterfaceNumber); /* * Make sure we have 480 Mbps of bandwidth, otherwise things like * video stream wouldn't likely work, since 12 Mbps is generally * not enough even for most Digital TV streams. */ if (udev->speed != USB_SPEED_HIGH && disable_usb_speed_check == 0) { dev_err(&intf->dev, "Device initialization failed.\n"); dev_err(&intf->dev, "Device must be connected to a high-speed USB 2.0 port.\n"); retval = -ENODEV; goto err_free; } kref_init(&dev->ref); dev->devno = nr; dev->model = id->driver_info; dev->alt = -1; dev->is_audio_only = has_vendor_audio && !(has_video || has_dvb); dev->has_video = has_video; dev->ifnum = ifnum; dev->ts = PRIMARY_TS; snprintf(dev->name, 28, "em28xx"); dev->dev_next = NULL; if (has_vendor_audio) { dev_info(&intf->dev, "Audio interface %i found (Vendor Class)\n", ifnum); dev->usb_audio_type = EM28XX_USB_AUDIO_VENDOR; } /* Checks if audio is provided by a USB Audio Class intf */ for (i = 0; i < udev->config->desc.bNumInterfaces; i++) { struct usb_interface *uif = udev->config->interface[i]; if (uif->altsetting[0].desc.bInterfaceClass == USB_CLASS_AUDIO) { if (has_vendor_audio) dev_err(&intf->dev, "em28xx: device seems to have vendor AND usb audio class interfaces !\n" "\t\tThe vendor interface will be ignored. Please contact the developers <linux-media@vger.kernel.org>\n"); dev->usb_audio_type = EM28XX_USB_AUDIO_CLASS; break; } } if (has_video) dev_info(&intf->dev, "Video interface %i found:%s%s\n", ifnum, dev->analog_ep_bulk ? " bulk" : "", dev->analog_ep_isoc ? " isoc" : ""); if (has_dvb) dev_info(&intf->dev, "DVB interface %i found:%s%s\n", ifnum, dev->dvb_ep_bulk ? " bulk" : "", dev->dvb_ep_isoc ? 
" isoc" : ""); dev->num_alt = intf->num_altsetting; if ((unsigned int)card[nr] < em28xx_bcount) dev->model = card[nr]; /* save our data pointer in this intf device */ usb_set_intfdata(intf, dev); /* allocate device struct and check if the device is a webcam */ mutex_init(&dev->lock); retval = em28xx_init_dev(dev, udev, intf, nr); if (retval) goto err_free; if (usb_xfer_mode < 0) { if (dev->is_webcam) try_bulk = 1; else try_bulk = 0; } else { try_bulk = usb_xfer_mode > 0; } /* Disable V4L2 if the device doesn't have a decoder or image sensor */ if (has_video && dev->board.decoder == EM28XX_NODECODER && dev->em28xx_sensor == EM28XX_NOSENSOR) { dev_err(&intf->dev, "Currently, V4L2 is not supported on this model\n"); has_video = false; dev->has_video = false; } if (dev->board.has_dual_ts && (dev->tuner_type != TUNER_ABSENT || INPUT(0)->type)) { /* * The logic with sets alternate is not ready for dual-tuners * which analog modes. */ dev_err(&intf->dev, "We currently don't support analog TV or stream capture on dual tuners.\n"); has_video = false; } /* Select USB transfer types to use */ if (has_video) { if (!dev->analog_ep_isoc || (try_bulk && dev->analog_ep_bulk)) dev->analog_xfer_bulk = 1; dev_info(&intf->dev, "analog set to %s mode.\n", dev->analog_xfer_bulk ? "bulk" : "isoc"); } if (has_dvb) { if (!dev->dvb_ep_isoc || (try_bulk && dev->dvb_ep_bulk)) dev->dvb_xfer_bulk = 1; dev_info(&intf->dev, "dvb set to %s mode.\n", dev->dvb_xfer_bulk ? "bulk" : "isoc"); } if (dev->board.has_dual_ts && em28xx_duplicate_dev(dev) == 0) { kref_init(&dev->dev_next->ref); dev->dev_next->ts = SECONDARY_TS; dev->dev_next->alt = -1; dev->dev_next->is_audio_only = has_vendor_audio && !(has_video || has_dvb); dev->dev_next->has_video = false; dev->dev_next->ifnum = ifnum; dev->dev_next->model = id->driver_info; mutex_init(&dev->dev_next->lock); retval = em28xx_init_dev(dev->dev_next, udev, intf, dev->dev_next->devno); if (retval) goto err_free; dev->dev_next->board.ir_codes = NULL; /* No IR for 2nd tuner */ dev->dev_next->board.has_ir_i2c = 0; /* No IR for 2nd tuner */ if (usb_xfer_mode < 0) { if (dev->dev_next->is_webcam) try_bulk = 1; else try_bulk = 0; } else { try_bulk = usb_xfer_mode > 0; } /* Select USB transfer types to use */ if (has_dvb) { if (!dev->dvb_ep_isoc_ts2 || (try_bulk && dev->dvb_ep_bulk_ts2)) dev->dev_next->dvb_xfer_bulk = 1; dev_info(&dev->intf->dev, "dvb ts2 set to %s mode.\n", dev->dev_next->dvb_xfer_bulk ? "bulk" : "isoc"); } dev->dev_next->dvb_ep_isoc = dev->dvb_ep_isoc_ts2; dev->dev_next->dvb_ep_bulk = dev->dvb_ep_bulk_ts2; dev->dev_next->dvb_max_pkt_size_isoc = dev->dvb_max_pkt_size_isoc_ts2; dev->dev_next->dvb_alt_isoc = dev->dvb_alt_isoc; /* Configure hardware to support TS2*/ if (dev->dvb_xfer_bulk) { /* The ep4 and ep5 are configured for BULK */ em28xx_write_reg(dev, 0x0b, 0x96); mdelay(100); em28xx_write_reg(dev, 0x0b, 0x80); mdelay(100); } else { /* The ep4 and ep5 are configured for ISO */ em28xx_write_reg(dev, 0x0b, 0x96); mdelay(100); em28xx_write_reg(dev, 0x0b, 0x82); mdelay(100); } } request_modules(dev); /* * Do it at the end, to reduce dynamic configuration changes during * the device init. Yet, as request_modules() can be async, the * topology will likely change after the load of the em28xx subdrivers. */ #ifdef CONFIG_MEDIA_CONTROLLER /* * No need to check the return value, the device will still be * usable without media controller API. 
*/ retval = media_device_register(dev->media_dev); #endif return 0; err_free: kfree(dev->alt_max_pkt_size_isoc); kfree(dev); err: clear_bit(nr, em28xx_devused); err_no_slot: usb_put_dev(udev); return retval; } /* * em28xx_usb_disconnect() * called when the device gets disconnected * video device will be unregistered on v4l2_close in case it is still open */ static void em28xx_usb_disconnect(struct usb_interface *intf) { struct em28xx *dev; dev = usb_get_intfdata(intf); usb_set_intfdata(intf, NULL); if (!dev) return; if (dev->dev_next) { dev->dev_next->disconnected = 1; dev_info(&dev->intf->dev, "Disconnecting %s\n", dev->dev_next->name); } dev->disconnected = 1; dev_info(&dev->intf->dev, "Disconnecting %s\n", dev->name); flush_request_modules(dev); em28xx_close_extension(dev); if (dev->dev_next) em28xx_release_resources(dev->dev_next); em28xx_release_resources(dev); if (dev->dev_next) { kref_put(&dev->dev_next->ref, em28xx_free_device); dev->dev_next = NULL; } kref_put(&dev->ref, em28xx_free_device); } static int em28xx_usb_suspend(struct usb_interface *intf, pm_message_t message) { struct em28xx *dev; dev = usb_get_intfdata(intf); if (!dev) return 0; em28xx_suspend_extension(dev); return 0; } static int em28xx_usb_resume(struct usb_interface *intf) { struct em28xx *dev; dev = usb_get_intfdata(intf); if (!dev) return 0; em28xx_resume_extension(dev); return 0; } static struct usb_driver em28xx_usb_driver = { .name = "em28xx", .probe = em28xx_usb_probe, .disconnect = em28xx_usb_disconnect, .suspend = em28xx_usb_suspend, .resume = em28xx_usb_resume, .reset_resume = em28xx_usb_resume, .id_table = em28xx_id_table, }; module_usb_driver(em28xx_usb_driver); |
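/*
 * Editor's note: a minimal standalone sketch (not part of the driver) of the
 * wMaxPacketSize decoding done by em28xx_check_usb_descriptor() through the
 * hb_mult() macro above. For high-speed isochronous endpoints, bits 10..0
 * carry the base packet size and bits 12..11 encode the additional
 * transactions per microframe. The descriptor value below is hypothetical.
 */
#include <stdio.h>

#define HB_MULT(wMaxPacketSize) (1 + (((wMaxPacketSize) >> 11) & 0x03))

int main(void)
{
	unsigned short sizedescr = 0x1400;     /* hypothetical descriptor value */
	unsigned int size = sizedescr & 0x7ff; /* base packet size: 1024 bytes */

	/* at USB high speed, scale by the transactions-per-microframe field */
	size *= HB_MULT(sizedescr); /* 1024 * 3 = 3072 bytes per microframe */
	printf("effective isoc payload: %u bytes per microframe\n", size);
	return 0;
}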
// SPDX-License-Identifier: GPL-2.0-or-later /* * net/core/netclassid_cgroup.c Classid Cgroupfs Handling * * Authors: Thomas Graf <tgraf@suug.ch> */ #include <linux/slab.h> #include <linux/cgroup.h> #include <linux/fdtable.h> #include <linux/sched/task.h> #include <net/cls_cgroup.h> #include <net/sock.h> static inline struct cgroup_cls_state *css_cls_state(struct cgroup_subsys_state *css) { return css ? container_of(css, struct cgroup_cls_state, css) : NULL; } struct cgroup_cls_state *task_cls_state(struct task_struct *p) { return css_cls_state(task_css_check(p, net_cls_cgrp_id, rcu_read_lock_bh_held())); } EXPORT_SYMBOL_GPL(task_cls_state); static struct cgroup_subsys_state * cgrp_css_alloc(struct cgroup_subsys_state *parent_css) { struct cgroup_cls_state *cs; cs = kzalloc(sizeof(*cs), GFP_KERNEL); if (!cs) return ERR_PTR(-ENOMEM); return &cs->css; } static int cgrp_css_online(struct cgroup_subsys_state *css) { struct cgroup_cls_state *cs = css_cls_state(css); struct cgroup_cls_state *parent = css_cls_state(css->parent); if (parent) cs->classid = parent->classid; return 0; } static void cgrp_css_free(struct cgroup_subsys_state *css) { kfree(css_cls_state(css)); } /* * To avoid stalling socket creation for tasks with a large number of threads * and open sockets, release file_lock every 1000 iterated descriptors. * New sockets will already have been created with the new classid. */ struct update_classid_context { u32 classid; unsigned int batch; }; #define UPDATE_CLASSID_BATCH 1000 static int update_classid_sock(const void *v, struct file *file, unsigned int n) { struct update_classid_context *ctx = (void *)v; struct socket *sock = sock_from_file(file); if (sock) sock_cgroup_set_classid(&sock->sk->sk_cgrp_data, ctx->classid); if (--ctx->batch == 0) { ctx->batch = UPDATE_CLASSID_BATCH; return n + 1; } return 0; } static void update_classid_task(struct task_struct *p, u32 classid) { struct update_classid_context ctx = { .classid = classid, .batch = UPDATE_CLASSID_BATCH }; unsigned int fd = 0; /* Only update via the group leader; when a task has many threads, * this avoids a useless per-thread traversal.
*/ if (p != p->group_leader) return; do { task_lock(p); fd = iterate_fd(p->files, fd, update_classid_sock, &ctx); task_unlock(p); cond_resched(); } while (fd); } static void cgrp_attach(struct cgroup_taskset *tset) { struct cgroup_subsys_state *css; struct task_struct *p; cgroup_taskset_for_each(p, css, tset) { update_classid_task(p, css_cls_state(css)->classid); } } static u64 read_classid(struct cgroup_subsys_state *css, struct cftype *cft) { return css_cls_state(css)->classid; } static int write_classid(struct cgroup_subsys_state *css, struct cftype *cft, u64 value) { struct cgroup_cls_state *cs = css_cls_state(css); struct css_task_iter it; struct task_struct *p; cs->classid = (u32)value; css_task_iter_start(css, 0, &it); while ((p = css_task_iter_next(&it))) update_classid_task(p, cs->classid); css_task_iter_end(&it); return 0; } static struct cftype ss_files[] = { { .name = "classid", .read_u64 = read_classid, .write_u64 = write_classid, }, { } /* terminate */ }; struct cgroup_subsys net_cls_cgrp_subsys = { .css_alloc = cgrp_css_alloc, .css_online = cgrp_css_online, .css_free = cgrp_css_free, .attach = cgrp_attach, .legacy_cftypes = ss_files, }; |
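/*
 * Editor's note: a minimal userspace model (simplified types, not kernel
 * code) of the batching contract used by update_classid_sock() and
 * update_classid_task() above. The per-fd callback returns 0 to continue or
 * n + 1 to stop, and the non-zero return value becomes the resume offset for
 * the next iterate_fd() call, letting the caller drop the lock every
 * UPDATE_CLASSID_BATCH descriptors.
 */
#include <stdio.h>

#define BATCH 1000

/* model of iterate_fd(): walk fds from 'fd', stop when cb returns non-zero */
static unsigned int model_iterate_fd(unsigned int nfds, unsigned int fd,
				     unsigned int (*cb)(unsigned int n,
							unsigned int *batch),
				     unsigned int *batch)
{
	unsigned int ret = 0;

	for (; fd < nfds && !ret; fd++)
		ret = cb(fd, batch);
	return ret; /* 0 when done, or n + 1 to resume at descriptor n + 1 */
}

static unsigned int set_classid_cb(unsigned int n, unsigned int *batch)
{
	/* ...the kernel callback tags the socket's classid here... */
	if (--(*batch) == 0) {
		*batch = BATCH;
		return n + 1; /* yield: resume from the next descriptor */
	}
	return 0;
}

int main(void)
{
	unsigned int batch = BATCH, fd = 0;

	do {
		/* in the kernel, task_lock()/task_unlock() bracket this call */
		fd = model_iterate_fd(2500, fd, set_classid_cb, &batch);
	} while (fd);
	printf("done\n");
	return 0;
}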
// SPDX-License-Identifier: GPL-2.0-only /* * TCP Westwood+: end-to-end bandwidth estimation for TCP * * Angelo Dell'Aera: author of the first version of TCP Westwood+ in Linux 2.4 * * Support at http://c3lab.poliba.it/index.php/Westwood * Main references in literature: * * - Mascolo S, Casetti, M. Gerla et al. * "TCP Westwood: bandwidth estimation for TCP" Proc. ACM Mobicom 2001 * * - A. Grieco, S. Mascolo * "Performance evaluation of New Reno, Vegas, Westwood+ TCP" ACM Computer * Comm. Review, 2004 * * - A. Dell'Aera, L. Grieco, S. Mascolo. * "Linux 2.4 Implementation of Westwood+ TCP with Rate-Halving : * A Performance Evaluation Over the Internet" (ICC 2004), Paris, June 2004 * * Westwood+ employs end-to-end bandwidth measurement to set cwnd and * ssthresh after packet loss. The probing phase is the same as in the * original Reno. */ #include <linux/mm.h> #include <linux/module.h> #include <linux/skbuff.h> #include <linux/inet_diag.h> #include <net/tcp.h> /* TCP Westwood structure */ struct westwood { u32 bw_ns_est; /* first bandwidth estimation..not too smoothed 8) */ u32 bw_est; /* bandwidth estimate */ u32 rtt_win_sx; /* here starts a new evaluation... */ u32 bk; u32 snd_una; /* used for evaluating the number of acked bytes */ u32 cumul_ack; u32 accounted; u32 rtt; u32 rtt_min; /* minimum observed RTT */ u8 first_ack; /* flag indicating that this is the first ack */ u8 reset_rtt_min; /* Reset RTT min to next RTT sample*/ }; /* TCP Westwood functions and constants */ #define TCP_WESTWOOD_RTT_MIN (HZ/20) /* 50ms */ #define TCP_WESTWOOD_INIT_RTT (20*HZ) /* maybe too conservative?! */ /* * @tcp_westwood_init * This function initializes the fields used in TCP Westwood+. It is called * after the initial SYN, so the sequence numbers are correct, but for new * passive connections we have no information about RTTmin at this time, so we * simply set it to TCP_WESTWOOD_INIT_RTT. This value was deliberately chosen * to be overly conservative, since that way we're sure it will be updated in * a consistent way as soon as possible. That will reasonably happen within * the first RTT period of the connection lifetime.
*/ static void tcp_westwood_init(struct sock *sk) { struct westwood *w = inet_csk_ca(sk); w->bk = 0; w->bw_ns_est = 0; w->bw_est = 0; w->accounted = 0; w->cumul_ack = 0; w->reset_rtt_min = 1; w->rtt_min = w->rtt = TCP_WESTWOOD_INIT_RTT; w->rtt_win_sx = tcp_jiffies32; w->snd_una = tcp_sk(sk)->snd_una; w->first_ack = 1; } /* * @westwood_do_filter * Low-pass filter. Implemented using constant coefficients. */ static inline u32 westwood_do_filter(u32 a, u32 b) { return ((7 * a) + b) >> 3; } static void westwood_filter(struct westwood *w, u32 delta) { /* If the filter is empty fill it with the first sample of bandwidth */ if (w->bw_ns_est == 0 && w->bw_est == 0) { w->bw_ns_est = w->bk / delta; w->bw_est = w->bw_ns_est; } else { w->bw_ns_est = westwood_do_filter(w->bw_ns_est, w->bk / delta); w->bw_est = westwood_do_filter(w->bw_est, w->bw_ns_est); } } /* * @westwood_pkts_acked * Called after processing a group of packets, but all Westwood needs is the * last sample of srtt. */ static void tcp_westwood_pkts_acked(struct sock *sk, const struct ack_sample *sample) { struct westwood *w = inet_csk_ca(sk); if (sample->rtt_us > 0) w->rtt = usecs_to_jiffies(sample->rtt_us); } /* * @westwood_update_window * Updates the RTT evaluation window if it is the right moment to do so; * if it is, it calls the filter to evaluate bandwidth. */ static void westwood_update_window(struct sock *sk) { struct westwood *w = inet_csk_ca(sk); s32 delta = tcp_jiffies32 - w->rtt_win_sx; /* Initialize w->snd_una with the first acked sequence number in order * to fix mismatch between tp->snd_una and w->snd_una for the first * bandwidth sample */ if (w->first_ack) { w->snd_una = tcp_sk(sk)->snd_una; w->first_ack = 0; } /* * See if an RTT window has passed. * Be careful: if RTT is less than 50ms we don't filter but keep * 'building the sample', since bandwidth estimation over very small * time intervals is unreliable. * Obviously on a LAN we reasonably will always have * right_bound = left_bound + WESTWOOD_RTT_MIN */ if (w->rtt && delta > max_t(u32, w->rtt, TCP_WESTWOOD_RTT_MIN)) { westwood_filter(w, delta); w->bk = 0; w->rtt_win_sx = tcp_jiffies32; } } static inline void update_rtt_min(struct westwood *w) { if (w->reset_rtt_min) { w->rtt_min = w->rtt; w->reset_rtt_min = 0; } else w->rtt_min = min(w->rtt, w->rtt_min); } /* * @westwood_fast_bw * It is called when we are in the fast path. In particular it is called when * header prediction is successful. In that case the update is straightforward * and doesn't need any particular care. */ static inline void westwood_fast_bw(struct sock *sk) { const struct tcp_sock *tp = tcp_sk(sk); struct westwood *w = inet_csk_ca(sk); westwood_update_window(sk); w->bk += tp->snd_una - w->snd_una; w->snd_una = tp->snd_una; update_rtt_min(w); } /* * @westwood_acked_count * This function evaluates cumul_ack, used to update bk in the case of * delayed or partial acks. */ static inline u32 westwood_acked_count(struct sock *sk) { const struct tcp_sock *tp = tcp_sk(sk); struct westwood *w = inet_csk_ca(sk); w->cumul_ack = tp->snd_una - w->snd_una; /* If cumul_ack is 0 this is a dupack since it's not moving * tp->snd_una.
*/ if (!w->cumul_ack) { w->accounted += tp->mss_cache; w->cumul_ack = tp->mss_cache; } if (w->cumul_ack > tp->mss_cache) { /* Partial or delayed ack */ if (w->accounted >= w->cumul_ack) { w->accounted -= w->cumul_ack; w->cumul_ack = tp->mss_cache; } else { w->cumul_ack -= w->accounted; w->accounted = 0; } } w->snd_una = tp->snd_una; return w->cumul_ack; } /* * TCP Westwood * Here the limit is evaluated as bandwidth estimate * RTTmin (to obtain it * in packets we divide by mss_cache). The result is clamped to be at least 2, * so this never returns 0. */ static u32 tcp_westwood_bw_rttmin(const struct sock *sk) { const struct tcp_sock *tp = tcp_sk(sk); const struct westwood *w = inet_csk_ca(sk); return max_t(u32, (w->bw_est * w->rtt_min) / tp->mss_cache, 2); } static void tcp_westwood_ack(struct sock *sk, u32 ack_flags) { if (ack_flags & CA_ACK_SLOWPATH) { struct westwood *w = inet_csk_ca(sk); westwood_update_window(sk); w->bk += westwood_acked_count(sk); update_rtt_min(w); return; } westwood_fast_bw(sk); } static void tcp_westwood_event(struct sock *sk, enum tcp_ca_event event) { struct tcp_sock *tp = tcp_sk(sk); struct westwood *w = inet_csk_ca(sk); switch (event) { case CA_EVENT_COMPLETE_CWR: tp->snd_ssthresh = tcp_westwood_bw_rttmin(sk); tcp_snd_cwnd_set(tp, tp->snd_ssthresh); break; case CA_EVENT_LOSS: tp->snd_ssthresh = tcp_westwood_bw_rttmin(sk); /* Update RTT_min when next ack arrives */ w->reset_rtt_min = 1; break; default: /* don't care */ break; } } /* Extract info for TCP socket info provided via netlink. */ static size_t tcp_westwood_info(struct sock *sk, u32 ext, int *attr, union tcp_cc_info *info) { const struct westwood *ca = inet_csk_ca(sk); if (ext & (1 << (INET_DIAG_VEGASINFO - 1))) { info->vegas.tcpv_enabled = 1; info->vegas.tcpv_rttcnt = 0; info->vegas.tcpv_rtt = jiffies_to_usecs(ca->rtt); info->vegas.tcpv_minrtt = jiffies_to_usecs(ca->rtt_min); *attr = INET_DIAG_VEGASINFO; return sizeof(struct tcpvegas_info); } return 0; } static struct tcp_congestion_ops tcp_westwood __read_mostly = { .init = tcp_westwood_init, .ssthresh = tcp_reno_ssthresh, .cong_avoid = tcp_reno_cong_avoid, .undo_cwnd = tcp_reno_undo_cwnd, .cwnd_event = tcp_westwood_event, .in_ack_event = tcp_westwood_ack, .get_info = tcp_westwood_info, .pkts_acked = tcp_westwood_pkts_acked, .owner = THIS_MODULE, .name = "westwood" }; static int __init tcp_westwood_register(void) { BUILD_BUG_ON(sizeof(struct westwood) > ICSK_CA_PRIV_SIZE); return tcp_register_congestion_control(&tcp_westwood); } static void __exit tcp_westwood_unregister(void) { tcp_unregister_congestion_control(&tcp_westwood); } module_init(tcp_westwood_register); module_exit(tcp_westwood_unregister); MODULE_AUTHOR("Stephen Hemminger, Angelo Dell'Aera"); MODULE_LICENSE("GPL"); MODULE_DESCRIPTION("TCP Westwood+");
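/*
 * Editor's note: a standalone numeric sketch (not kernel code) of the
 * westwood_do_filter() low-pass filter above. Each new sample contributes
 * with gain 1/8, so a step change in measured bandwidth is absorbed
 * gradually rather than instantly. The starting estimate and the step value
 * below are hypothetical.
 */
#include <stdio.h>

/* same constant-coefficient filter as westwood_do_filter(): (7*old + new)/8 */
static unsigned int do_filter(unsigned int a, unsigned int b)
{
	return ((7 * a) + b) >> 3;
}

int main(void)
{
	unsigned int bw = 1000; /* hypothetical current bandwidth estimate */
	int i;

	/* feed a step change to 2000 and watch the estimate converge */
	for (i = 0; i < 16; i++) {
		bw = do_filter(bw, 2000);
		printf("round %2d: bw_est = %u\n", i, bw);
	}
	return 0;
}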
// SPDX-License-Identifier: GPL-2.0-or-later /* * mmap.c * * Code to deal with the mess that is clustered mmap. * * Copyright (C) 2002, 2004 Oracle. All rights reserved. */ #include <linux/fs.h> #include <linux/types.h> #include <linux/highmem.h> #include <linux/pagemap.h> #include <linux/uio.h> #include <linux/signal.h> #include <linux/rbtree.h> #include <cluster/masklog.h> #include "ocfs2.h" #include "aops.h" #include "dlmglue.h" #include "file.h" #include "inode.h" #include "mmap.h" #include "super.h" #include "ocfs2_trace.h" static vm_fault_t ocfs2_fault(struct vm_fault *vmf) { struct vm_area_struct *vma = vmf->vma; sigset_t oldset; vm_fault_t ret; ocfs2_block_signals(&oldset); ret = filemap_fault(vmf); ocfs2_unblock_signals(&oldset); trace_ocfs2_fault(OCFS2_I(vma->vm_file->f_mapping->host)->ip_blkno, vma, vmf->page, vmf->pgoff); return ret; } static vm_fault_t __ocfs2_page_mkwrite(struct file *file, struct buffer_head *di_bh, struct folio *folio) { int err; vm_fault_t ret = VM_FAULT_NOPAGE; struct inode *inode = file_inode(file); struct address_space *mapping = inode->i_mapping; loff_t pos = folio_pos(folio); unsigned int len = PAGE_SIZE; pgoff_t last_index; struct folio *locked_folio = NULL; void *fsdata; loff_t size = i_size_read(inode); last_index = (size - 1) >> PAGE_SHIFT; /* * There are cases that lead to the page no longer belonging to the * mapping. * 1) pagecache truncates locally due to memory pressure. * 2) pagecache truncates when another node takes an EX lock against * the inode lock; see ocfs2_data_convert_worker. * * The i_size check doesn't catch the case where a node truncated and * then re-extended the file. We'll re-check the page mapping after * taking the page lock inside of ocfs2_write_begin_nolock(). * * Let the VM retry in these cases. */ if ((folio->mapping != inode->i_mapping) || !folio_test_uptodate(folio) || (pos >= size)) goto out; /* * Call ocfs2_write_begin() and ocfs2_write_end() to take * advantage of the allocation code there. We pass a write * length of the whole page (chopped to i_size) to make sure * the whole thing is allocated. * * Since we know the page is up to date, we don't have to * worry about ocfs2_write_begin() skipping some buffer reads * because the "write" would invalidate their data.
*/ if (folio->index == last_index) len = ((size - 1) & ~PAGE_MASK) + 1; err = ocfs2_write_begin_nolock(mapping, pos, len, OCFS2_WRITE_MMAP, &locked_folio, &fsdata, di_bh, folio); if (err) { if (err != -ENOSPC) mlog_errno(err); ret = vmf_error(err); goto out; } if (!locked_folio) { ret = VM_FAULT_NOPAGE; goto out; } err = ocfs2_write_end_nolock(mapping, pos, len, len, fsdata); BUG_ON(err != len); ret = VM_FAULT_LOCKED; out: return ret; } static vm_fault_t ocfs2_page_mkwrite(struct vm_fault *vmf) { struct folio *folio = page_folio(vmf->page); struct inode *inode = file_inode(vmf->vma->vm_file); struct buffer_head *di_bh = NULL; sigset_t oldset; int err; vm_fault_t ret; sb_start_pagefault(inode->i_sb); ocfs2_block_signals(&oldset); /* * The cluster locks taken will block a truncate from another * node. Taking the data lock will also ensure that we don't * attempt page truncation as part of a downconvert. */ err = ocfs2_inode_lock(inode, &di_bh, 1); if (err < 0) { mlog_errno(err); ret = vmf_error(err); goto out; } /* * The alloc sem should be enough to serialize with * ocfs2_truncate_file() changing i_size as well as any thread * modifying the inode btree. */ down_write(&OCFS2_I(inode)->ip_alloc_sem); ret = __ocfs2_page_mkwrite(vmf->vma->vm_file, di_bh, folio); up_write(&OCFS2_I(inode)->ip_alloc_sem); brelse(di_bh); ocfs2_inode_unlock(inode, 1); out: ocfs2_unblock_signals(&oldset); sb_end_pagefault(inode->i_sb); return ret; } static const struct vm_operations_struct ocfs2_file_vm_ops = { .fault = ocfs2_fault, .page_mkwrite = ocfs2_page_mkwrite, }; int ocfs2_mmap(struct file *file, struct vm_area_struct *vma) { int ret = 0, lock_level = 0; ret = ocfs2_inode_lock_atime(file_inode(file), file->f_path.mnt, &lock_level, 1); if (ret < 0) { mlog_errno(ret); goto out; } ocfs2_inode_unlock(file_inode(file), lock_level); out: vma->vm_ops = &ocfs2_file_vm_ops; return 0; } |
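/*
 * Editor's note: a small standalone sketch (userspace, PAGE_SIZE assumed to
 * be 4096) of the last-folio length computation in __ocfs2_page_mkwrite()
 * above: for the folio containing i_size, only the bytes up to i_size are
 * part of the write. The i_size values below are hypothetical.
 */
#include <stdio.h>

#define MY_PAGE_SIZE 4096UL
#define MY_PAGE_MASK (~(MY_PAGE_SIZE - 1))

int main(void)
{
	unsigned long sizes[] = { 1, 4096, 4097, 10000 }; /* hypothetical i_size */
	unsigned int i;

	for (i = 0; i < 4; i++) {
		unsigned long size = sizes[i];
		/* same expression as the driver uses for the last page */
		unsigned long len = ((size - 1) & ~MY_PAGE_MASK) + 1;

		printf("i_size %5lu -> last-page len %4lu\n", size, len);
	}
	return 0; /* prints 1, 4096, 1, 1808 respectively */
}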
/* SPDX-License-Identifier: GPL-2.0-or-later */ /* * include/net/switchdev.h - Switch device API * Copyright (c) 2014-2015 Jiri Pirko <jiri@resnulli.us> * Copyright (c) 2014-2015 Scott Feldman <sfeldma@gmail.com> */ #ifndef _LINUX_SWITCHDEV_H_ #define _LINUX_SWITCHDEV_H_ #include <linux/netdevice.h> #include <linux/notifier.h> #include <linux/list.h> #include <net/ip_fib.h> #define SWITCHDEV_F_NO_RECURSE BIT(0) #define SWITCHDEV_F_SKIP_EOPNOTSUPP BIT(1) #define SWITCHDEV_F_DEFER BIT(2) enum switchdev_attr_id { SWITCHDEV_ATTR_ID_UNDEFINED, SWITCHDEV_ATTR_ID_PORT_STP_STATE, SWITCHDEV_ATTR_ID_PORT_MST_STATE, SWITCHDEV_ATTR_ID_PORT_BRIDGE_FLAGS, SWITCHDEV_ATTR_ID_PORT_PRE_BRIDGE_FLAGS, SWITCHDEV_ATTR_ID_PORT_MROUTER, SWITCHDEV_ATTR_ID_BRIDGE_AGEING_TIME, SWITCHDEV_ATTR_ID_BRIDGE_VLAN_FILTERING, SWITCHDEV_ATTR_ID_BRIDGE_VLAN_PROTOCOL, SWITCHDEV_ATTR_ID_BRIDGE_MC_DISABLED, SWITCHDEV_ATTR_ID_BRIDGE_MROUTER, SWITCHDEV_ATTR_ID_BRIDGE_MST, SWITCHDEV_ATTR_ID_MRP_PORT_ROLE, SWITCHDEV_ATTR_ID_VLAN_MSTI, }; struct switchdev_mst_state { u16 msti; u8 state; }; struct switchdev_brport_flags { unsigned long val; unsigned long mask; }; struct switchdev_vlan_msti { u16 vid; u16 msti; }; struct switchdev_attr { struct net_device *orig_dev; enum switchdev_attr_id id; u32 flags; void *complete_priv; void (*complete)(struct net_device *dev, int err, void *priv); union { u8 stp_state; /* PORT_STP_STATE */ struct switchdev_mst_state mst_state; /* PORT_MST_STATE */ struct switchdev_brport_flags brport_flags; /* PORT_BRIDGE_FLAGS */ bool
mrouter; /* PORT_MROUTER */ clock_t ageing_time; /* BRIDGE_AGEING_TIME */ bool vlan_filtering; /* BRIDGE_VLAN_FILTERING */ u16 vlan_protocol; /* BRIDGE_VLAN_PROTOCOL */ bool mst; /* BRIDGE_MST */ bool mc_disabled; /* MC_DISABLED */ u8 mrp_port_role; /* MRP_PORT_ROLE */ struct switchdev_vlan_msti vlan_msti; /* VLAN_MSTI */ } u; }; enum switchdev_obj_id { SWITCHDEV_OBJ_ID_UNDEFINED, SWITCHDEV_OBJ_ID_PORT_VLAN, SWITCHDEV_OBJ_ID_PORT_MDB, SWITCHDEV_OBJ_ID_HOST_MDB, SWITCHDEV_OBJ_ID_MRP, SWITCHDEV_OBJ_ID_RING_TEST_MRP, SWITCHDEV_OBJ_ID_RING_ROLE_MRP, SWITCHDEV_OBJ_ID_RING_STATE_MRP, SWITCHDEV_OBJ_ID_IN_TEST_MRP, SWITCHDEV_OBJ_ID_IN_ROLE_MRP, SWITCHDEV_OBJ_ID_IN_STATE_MRP, }; struct switchdev_obj { struct list_head list; struct net_device *orig_dev; enum switchdev_obj_id id; u32 flags; void *complete_priv; void (*complete)(struct net_device *dev, int err, void *priv); }; /* SWITCHDEV_OBJ_ID_PORT_VLAN */ struct switchdev_obj_port_vlan { struct switchdev_obj obj; u16 flags; u16 vid; /* If set, the notifier signifies a change of one of the following * flags for a VLAN that already exists: * - BRIDGE_VLAN_INFO_PVID * - BRIDGE_VLAN_INFO_UNTAGGED * Entries with BRIDGE_VLAN_INFO_BRENTRY unset are not notified at all. */ bool changed; }; #define SWITCHDEV_OBJ_PORT_VLAN(OBJ) \ container_of((OBJ), struct switchdev_obj_port_vlan, obj) /* SWITCHDEV_OBJ_ID_PORT_MDB */ struct switchdev_obj_port_mdb { struct switchdev_obj obj; unsigned char addr[ETH_ALEN]; u16 vid; }; #define SWITCHDEV_OBJ_PORT_MDB(OBJ) \ container_of((OBJ), struct switchdev_obj_port_mdb, obj) /* SWITCHDEV_OBJ_ID_MRP */ struct switchdev_obj_mrp { struct switchdev_obj obj; struct net_device *p_port; struct net_device *s_port; u32 ring_id; u16 prio; }; #define SWITCHDEV_OBJ_MRP(OBJ) \ container_of((OBJ), struct switchdev_obj_mrp, obj) /* SWITCHDEV_OBJ_ID_RING_TEST_MRP */ struct switchdev_obj_ring_test_mrp { struct switchdev_obj obj; /* The value is in microseconds and a value of 0 means stop */ u32 interval; u8 max_miss; u32 ring_id; u32 period; bool monitor; }; #define SWITCHDEV_OBJ_RING_TEST_MRP(OBJ) \ container_of((OBJ), struct switchdev_obj_ring_test_mrp, obj) /* SWITCHDEV_OBJ_ID_RING_ROLE_MRP */ struct switchdev_obj_ring_role_mrp { struct switchdev_obj obj; u8 ring_role; u32 ring_id; u8 sw_backup; }; #define SWITCHDEV_OBJ_RING_ROLE_MRP(OBJ) \ container_of((OBJ), struct switchdev_obj_ring_role_mrp, obj) struct switchdev_obj_ring_state_mrp { struct switchdev_obj obj; u8 ring_state; u32 ring_id; }; #define SWITCHDEV_OBJ_RING_STATE_MRP(OBJ) \ container_of((OBJ), struct switchdev_obj_ring_state_mrp, obj) /* SWITCHDEV_OBJ_ID_IN_TEST_MRP */ struct switchdev_obj_in_test_mrp { struct switchdev_obj obj; /* The value is in microseconds and a value of 0 means stop */ u32 interval; u32 in_id; u32 period; u8 max_miss; }; #define SWITCHDEV_OBJ_IN_TEST_MRP(OBJ) \ container_of((OBJ), struct switchdev_obj_in_test_mrp, obj) /* SWITCHDEV_OBJ_ID_IN_ROLE_MRP */ struct switchdev_obj_in_role_mrp { struct switchdev_obj obj; struct net_device *i_port; u32 ring_id; u16 in_id; u8 in_role; u8 sw_backup; }; #define SWITCHDEV_OBJ_IN_ROLE_MRP(OBJ) \ container_of((OBJ), struct switchdev_obj_in_role_mrp, obj) struct switchdev_obj_in_state_mrp { struct switchdev_obj obj; u32 in_id; u8 in_state; }; #define SWITCHDEV_OBJ_IN_STATE_MRP(OBJ) \ container_of((OBJ), struct switchdev_obj_in_state_mrp, obj) struct switchdev_brport { struct net_device *dev; const void *ctx; struct notifier_block *atomic_nb; struct notifier_block *blocking_nb; bool tx_fwd_offload; }; enum
switchdev_notifier_type { SWITCHDEV_FDB_ADD_TO_BRIDGE = 1, SWITCHDEV_FDB_DEL_TO_BRIDGE, SWITCHDEV_FDB_ADD_TO_DEVICE, SWITCHDEV_FDB_DEL_TO_DEVICE, SWITCHDEV_FDB_OFFLOADED, SWITCHDEV_FDB_FLUSH_TO_BRIDGE, SWITCHDEV_PORT_OBJ_ADD, /* Blocking. */ SWITCHDEV_PORT_OBJ_DEL, /* Blocking. */ SWITCHDEV_PORT_ATTR_SET, /* May be blocking . */ SWITCHDEV_VXLAN_FDB_ADD_TO_BRIDGE, SWITCHDEV_VXLAN_FDB_DEL_TO_BRIDGE, SWITCHDEV_VXLAN_FDB_ADD_TO_DEVICE, SWITCHDEV_VXLAN_FDB_DEL_TO_DEVICE, SWITCHDEV_VXLAN_FDB_OFFLOADED, SWITCHDEV_BRPORT_OFFLOADED, SWITCHDEV_BRPORT_UNOFFLOADED, SWITCHDEV_BRPORT_REPLAY, }; struct switchdev_notifier_info { struct net_device *dev; struct netlink_ext_ack *extack; const void *ctx; }; /* Remember to update br_switchdev_fdb_populate() when adding * new members to this structure */ struct switchdev_notifier_fdb_info { struct switchdev_notifier_info info; /* must be first */ const unsigned char *addr; u16 vid; u8 added_by_user:1, is_local:1, locked:1, offloaded:1; }; struct switchdev_notifier_port_obj_info { struct switchdev_notifier_info info; /* must be first */ const struct switchdev_obj *obj; bool handled; }; struct switchdev_notifier_port_attr_info { struct switchdev_notifier_info info; /* must be first */ const struct switchdev_attr *attr; bool handled; }; struct switchdev_notifier_brport_info { struct switchdev_notifier_info info; /* must be first */ const struct switchdev_brport brport; }; static inline struct net_device * switchdev_notifier_info_to_dev(const struct switchdev_notifier_info *info) { return info->dev; } static inline struct netlink_ext_ack * switchdev_notifier_info_to_extack(const struct switchdev_notifier_info *info) { return info->extack; } static inline bool switchdev_fdb_is_dynamically_learned(const struct switchdev_notifier_fdb_info *fdb_info) { return !fdb_info->added_by_user && !fdb_info->is_local; } #ifdef CONFIG_NET_SWITCHDEV int switchdev_bridge_port_offload(struct net_device *brport_dev, struct net_device *dev, const void *ctx, struct notifier_block *atomic_nb, struct notifier_block *blocking_nb, bool tx_fwd_offload, struct netlink_ext_ack *extack); void switchdev_bridge_port_unoffload(struct net_device *brport_dev, const void *ctx, struct notifier_block *atomic_nb, struct notifier_block *blocking_nb); int switchdev_bridge_port_replay(struct net_device *brport_dev, struct net_device *dev, const void *ctx, struct notifier_block *atomic_nb, struct notifier_block *blocking_nb, struct netlink_ext_ack *extack); void switchdev_deferred_process(void); int switchdev_port_attr_set(struct net_device *dev, const struct switchdev_attr *attr, struct netlink_ext_ack *extack); bool switchdev_port_obj_act_is_deferred(struct net_device *dev, enum switchdev_notifier_type nt, const struct switchdev_obj *obj); int switchdev_port_obj_add(struct net_device *dev, const struct switchdev_obj *obj, struct netlink_ext_ack *extack); int switchdev_port_obj_del(struct net_device *dev, const struct switchdev_obj *obj); int register_switchdev_notifier(struct notifier_block *nb); int unregister_switchdev_notifier(struct notifier_block *nb); int call_switchdev_notifiers(unsigned long val, struct net_device *dev, struct switchdev_notifier_info *info, struct netlink_ext_ack *extack); int register_switchdev_blocking_notifier(struct notifier_block *nb); int unregister_switchdev_blocking_notifier(struct notifier_block *nb); int call_switchdev_blocking_notifiers(unsigned long val, struct net_device *dev, struct switchdev_notifier_info *info, struct netlink_ext_ack *extack); int 
switchdev_handle_fdb_event_to_device(struct net_device *dev, unsigned long event, const struct switchdev_notifier_fdb_info *fdb_info, bool (*check_cb)(const struct net_device *dev), bool (*foreign_dev_check_cb)(const struct net_device *dev, const struct net_device *foreign_dev), int (*mod_cb)(struct net_device *dev, struct net_device *orig_dev, unsigned long event, const void *ctx, const struct switchdev_notifier_fdb_info *fdb_info)); int switchdev_handle_port_obj_add(struct net_device *dev, struct switchdev_notifier_port_obj_info *port_obj_info, bool (*check_cb)(const struct net_device *dev), int (*add_cb)(struct net_device *dev, const void *ctx, const struct switchdev_obj *obj, struct netlink_ext_ack *extack)); int switchdev_handle_port_obj_add_foreign(struct net_device *dev, struct switchdev_notifier_port_obj_info *port_obj_info, bool (*check_cb)(const struct net_device *dev), bool (*foreign_dev_check_cb)(const struct net_device *dev, const struct net_device *foreign_dev), int (*add_cb)(struct net_device *dev, const void *ctx, const struct switchdev_obj *obj, struct netlink_ext_ack *extack)); int switchdev_handle_port_obj_del(struct net_device *dev, struct switchdev_notifier_port_obj_info *port_obj_info, bool (*check_cb)(const struct net_device *dev), int (*del_cb)(struct net_device *dev, const void *ctx, const struct switchdev_obj *obj)); int switchdev_handle_port_obj_del_foreign(struct net_device *dev, struct switchdev_notifier_port_obj_info *port_obj_info, bool (*check_cb)(const struct net_device *dev), bool (*foreign_dev_check_cb)(const struct net_device *dev, const struct net_device *foreign_dev), int (*del_cb)(struct net_device *dev, const void *ctx, const struct switchdev_obj *obj)); int switchdev_handle_port_attr_set(struct net_device *dev, struct switchdev_notifier_port_attr_info *port_attr_info, bool (*check_cb)(const struct net_device *dev), int (*set_cb)(struct net_device *dev, const void *ctx, const struct switchdev_attr *attr, struct netlink_ext_ack *extack)); #else static inline int switchdev_bridge_port_offload(struct net_device *brport_dev, struct net_device *dev, const void *ctx, struct notifier_block *atomic_nb, struct notifier_block *blocking_nb, bool tx_fwd_offload, struct netlink_ext_ack *extack) { return -EOPNOTSUPP; } static inline void switchdev_bridge_port_unoffload(struct net_device *brport_dev, const void *ctx, struct notifier_block *atomic_nb, struct notifier_block *blocking_nb) { } static inline void switchdev_deferred_process(void) { } static inline int switchdev_port_attr_set(struct net_device *dev, const struct switchdev_attr *attr, struct netlink_ext_ack *extack) { return -EOPNOTSUPP; } static inline int switchdev_port_obj_add(struct net_device *dev, const struct switchdev_obj *obj, struct netlink_ext_ack *extack) { return -EOPNOTSUPP; } static inline int switchdev_port_obj_del(struct net_device *dev, const struct switchdev_obj *obj) { return -EOPNOTSUPP; } static inline int register_switchdev_notifier(struct notifier_block *nb) { return 0; } static inline int unregister_switchdev_notifier(struct notifier_block *nb) { return 0; } static inline int call_switchdev_notifiers(unsigned long val, struct net_device *dev, struct switchdev_notifier_info *info, struct netlink_ext_ack *extack) { return NOTIFY_DONE; } static inline int register_switchdev_blocking_notifier(struct notifier_block *nb) { return 0; } static inline int unregister_switchdev_blocking_notifier(struct notifier_block *nb) { return 0; } static inline int 
call_switchdev_blocking_notifiers(unsigned long val, struct net_device *dev, struct switchdev_notifier_info *info, struct netlink_ext_ack *extack) { return NOTIFY_DONE; } static inline int switchdev_handle_fdb_event_to_device(struct net_device *dev, unsigned long event, const struct switchdev_notifier_fdb_info *fdb_info, bool (*check_cb)(const struct net_device *dev), bool (*foreign_dev_check_cb)(const struct net_device *dev, const struct net_device *foreign_dev), int (*mod_cb)(struct net_device *dev, struct net_device *orig_dev, unsigned long event, const void *ctx, const struct switchdev_notifier_fdb_info *fdb_info)) { return 0; } static inline int switchdev_handle_port_obj_add(struct net_device *dev, struct switchdev_notifier_port_obj_info *port_obj_info, bool (*check_cb)(const struct net_device *dev), int (*add_cb)(struct net_device *dev, const void *ctx, const struct switchdev_obj *obj, struct netlink_ext_ack *extack)) { return 0; } static inline int switchdev_handle_port_obj_add_foreign(struct net_device *dev, struct switchdev_notifier_port_obj_info *port_obj_info, bool (*check_cb)(const struct net_device *dev), bool (*foreign_dev_check_cb)(const struct net_device *dev, const struct net_device *foreign_dev), int (*add_cb)(struct net_device *dev, const void *ctx, const struct switchdev_obj *obj, struct netlink_ext_ack *extack)) { return 0; } static inline int switchdev_handle_port_obj_del(struct net_device *dev, struct switchdev_notifier_port_obj_info *port_obj_info, bool (*check_cb)(const struct net_device *dev), int (*del_cb)(struct net_device *dev, const void *ctx, const struct switchdev_obj *obj)) { return 0; } static inline int switchdev_handle_port_obj_del_foreign(struct net_device *dev, struct switchdev_notifier_port_obj_info *port_obj_info, bool (*check_cb)(const struct net_device *dev), bool (*foreign_dev_check_cb)(const struct net_device *dev, const struct net_device *foreign_dev), int (*del_cb)(struct net_device *dev, const void *ctx, const struct switchdev_obj *obj)) { return 0; } static inline int switchdev_handle_port_attr_set(struct net_device *dev, struct switchdev_notifier_port_attr_info *port_attr_info, bool (*check_cb)(const struct net_device *dev), int (*set_cb)(struct net_device *dev, const void *ctx, const struct switchdev_attr *attr, struct netlink_ext_ack *extack)) { return 0; } #endif #endif /* _LINUX_SWITCHDEV_H_ */ |
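/*
 * Editor's note: a hedged sketch (hypothetical driver; my_netdev_ops,
 * my_set_stp_state and the my_* function names are invented) of how a switch
 * driver typically consumes the SWITCHDEV_PORT_ATTR_SET event using the
 * switchdev_handle_port_attr_set() helper declared above: check_cb filters
 * for the driver's own netdevs, set_cb applies the attribute.
 */
static bool my_dev_check(const struct net_device *dev)
{
	/* assumed: the driver's netdev_ops singleton identifies its ports */
	return dev->netdev_ops == &my_netdev_ops;
}

static int my_port_attr_set(struct net_device *dev, const void *ctx,
			    const struct switchdev_attr *attr,
			    struct netlink_ext_ack *extack)
{
	switch (attr->id) {
	case SWITCHDEV_ATTR_ID_PORT_STP_STATE:
		return my_set_stp_state(dev, attr->u.stp_state); /* invented */
	default:
		/* unhandled attrs; SWITCHDEV_F_SKIP_EOPNOTSUPP callers ignore this */
		return -EOPNOTSUPP;
	}
}

static int my_switchdev_blocking_event(struct notifier_block *nb,
				       unsigned long event, void *ptr)
{
	struct net_device *dev = switchdev_notifier_info_to_dev(ptr);
	int err;

	switch (event) {
	case SWITCHDEV_PORT_ATTR_SET:
		err = switchdev_handle_port_attr_set(dev, ptr, my_dev_check,
						     my_port_attr_set);
		return notifier_from_errno(err);
	}
	return NOTIFY_DONE;
}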
// SPDX-License-Identifier: GPL-2.0 #include <linux/mm.h> #include <linux/gfp.h> #include <linux/hugetlb.h> #include <asm/pgalloc.h> #include <asm/tlb.h> #include <asm/fixmap.h> #include <asm/mtrr.h> #ifdef CONFIG_DYNAMIC_PHYSICAL_MASK phys_addr_t physical_mask __ro_after_init = (1ULL << __PHYSICAL_MASK_SHIFT) - 1; EXPORT_SYMBOL(physical_mask); #endif pgtable_t pte_alloc_one(struct mm_struct *mm) { return __pte_alloc_one(mm, GFP_PGTABLE_USER); } void ___pte_free_tlb(struct mmu_gather *tlb, struct page *pte) { paravirt_release_pte(page_to_pfn(pte)); tlb_remove_ptdesc(tlb, page_ptdesc(pte)); } #if CONFIG_PGTABLE_LEVELS > 2 void ___pmd_free_tlb(struct mmu_gather *tlb, pmd_t *pmd) { paravirt_release_pmd(__pa(pmd) >> PAGE_SHIFT); /* * NOTE! For PAE, any changes to the top page-directory-pointer-table * entries need a full cr3 reload to flush. */ #ifdef CONFIG_X86_PAE tlb->need_flush_all = 1; #endif tlb_remove_ptdesc(tlb, virt_to_ptdesc(pmd)); } #if CONFIG_PGTABLE_LEVELS > 3 void ___pud_free_tlb(struct mmu_gather *tlb, pud_t *pud) { paravirt_release_pud(__pa(pud) >> PAGE_SHIFT); tlb_remove_ptdesc(tlb, virt_to_ptdesc(pud)); } #if CONFIG_PGTABLE_LEVELS > 4 void ___p4d_free_tlb(struct mmu_gather *tlb, p4d_t *p4d) { paravirt_release_p4d(__pa(p4d) >> PAGE_SHIFT); tlb_remove_ptdesc(tlb, virt_to_ptdesc(p4d)); } #endif /* CONFIG_PGTABLE_LEVELS > 4 */ #endif /* CONFIG_PGTABLE_LEVELS > 3 */ #endif /* CONFIG_PGTABLE_LEVELS > 2 */ static inline void pgd_list_add(pgd_t *pgd) { struct ptdesc *ptdesc = virt_to_ptdesc(pgd); list_add(&ptdesc->pt_list, &pgd_list); } static inline void pgd_list_del(pgd_t *pgd) { struct ptdesc *ptdesc = virt_to_ptdesc(pgd); list_del(&ptdesc->pt_list); } #define UNSHARED_PTRS_PER_PGD \ (SHARED_KERNEL_PMD ? KERNEL_PGD_BOUNDARY : PTRS_PER_PGD) #define MAX_UNSHARED_PTRS_PER_PGD \ MAX_T(size_t, KERNEL_PGD_BOUNDARY, PTRS_PER_PGD) static void pgd_set_mm(pgd_t *pgd, struct mm_struct *mm) { virt_to_ptdesc(pgd)->pt_mm = mm; } struct mm_struct *pgd_page_get_mm(struct page *page) { return page_ptdesc(page)->pt_mm; } static void pgd_ctor(struct mm_struct *mm, pgd_t *pgd) { /* If the pgd points to a shared pagetable level (either the ptes in non-PAE, or shared PMD in PAE), then just copy the references from swapper_pg_dir. */ if (CONFIG_PGTABLE_LEVELS == 2 || (CONFIG_PGTABLE_LEVELS == 3 && SHARED_KERNEL_PMD) || CONFIG_PGTABLE_LEVELS >= 4) { clone_pgd_range(pgd + KERNEL_PGD_BOUNDARY, swapper_pg_dir + KERNEL_PGD_BOUNDARY, KERNEL_PGD_PTRS); } /* list required to sync kernel mapping updates */ if (!SHARED_KERNEL_PMD) { pgd_set_mm(pgd, mm); pgd_list_add(pgd); } } static void pgd_dtor(pgd_t *pgd) { if (SHARED_KERNEL_PMD) return; spin_lock(&pgd_lock); pgd_list_del(pgd); spin_unlock(&pgd_lock); } /* * List of all pgd's needed for non-PAE so it can invalidate entries * in both cached and uncached pgd's; not needed for PAE since the * kernel pmd is shared. If PAE were not to share the pmd a similar * tactic would be needed. This is essentially codepath-based locking * against pageattr.c; it is the unique case in which a valid change * of kernel pagetables can't be lazily synchronized by vmalloc faults. * vmalloc faults work because attached pagetables are never freed.
* -- nyc */ #ifdef CONFIG_X86_PAE /* * In PAE mode, we need to do a cr3 reload (=tlb flush) when * updating the top-level pagetable entries to guarantee the * processor notices the update. Since this is expensive, and * all 4 top-level entries are used almost immediately in a * new process's life, we just pre-populate them here. * * Also, if we're in a paravirt environment where the kernel pmd is * not shared between pagetables (!SHARED_KERNEL_PMD), we allocate * and initialize the kernel pmds here. */ #define PREALLOCATED_PMDS UNSHARED_PTRS_PER_PGD #define MAX_PREALLOCATED_PMDS MAX_UNSHARED_PTRS_PER_PGD /* * We allocate separate PMDs for the kernel part of the user page-table * when PTI is enabled. We need them to map the per-process LDT into the * user-space page-table. */ #define PREALLOCATED_USER_PMDS (boot_cpu_has(X86_FEATURE_PTI) ? \ KERNEL_PGD_PTRS : 0) #define MAX_PREALLOCATED_USER_PMDS KERNEL_PGD_PTRS void pud_populate(struct mm_struct *mm, pud_t *pudp, pmd_t *pmd) { paravirt_alloc_pmd(mm, __pa(pmd) >> PAGE_SHIFT); /* Note: almost everything apart from _PAGE_PRESENT is reserved at the pmd (PDPT) level. */ set_pud(pudp, __pud(__pa(pmd) | _PAGE_PRESENT)); /* * According to Intel App note "TLBs, Paging-Structure Caches, * and Their Invalidation", April 2007, document 317080-001, * section 8.1: in PAE mode we explicitly have to flush the * TLB via cr3 if the top-level pgd is changed... */ flush_tlb_mm(mm); } #else /* !CONFIG_X86_PAE */ /* No need to prepopulate any pagetable entries in non-PAE modes. */ #define PREALLOCATED_PMDS 0 #define MAX_PREALLOCATED_PMDS 0 #define PREALLOCATED_USER_PMDS 0 #define MAX_PREALLOCATED_USER_PMDS 0 #endif /* CONFIG_X86_PAE */ static void free_pmds(struct mm_struct *mm, pmd_t *pmds[], int count) { int i; struct ptdesc *ptdesc; for (i = 0; i < count; i++) if (pmds[i]) { ptdesc = virt_to_ptdesc(pmds[i]); pagetable_dtor(ptdesc); pagetable_free(ptdesc); mm_dec_nr_pmds(mm); } } static int preallocate_pmds(struct mm_struct *mm, pmd_t *pmds[], int count) { int i; bool failed = false; gfp_t gfp = GFP_PGTABLE_USER; if (mm == &init_mm) gfp &= ~__GFP_ACCOUNT; gfp &= ~__GFP_HIGHMEM; for (i = 0; i < count; i++) { pmd_t *pmd = NULL; struct ptdesc *ptdesc = pagetable_alloc(gfp, 0); if (!ptdesc) failed = true; if (ptdesc && !pagetable_pmd_ctor(ptdesc)) { pagetable_free(ptdesc); ptdesc = NULL; failed = true; } if (ptdesc) { mm_inc_nr_pmds(mm); pmd = ptdesc_address(ptdesc); } pmds[i] = pmd; } if (failed) { free_pmds(mm, pmds, count); return -ENOMEM; } return 0; } /* * Mop up any pmd pages which may still be attached to the pgd. * Normally they will be freed by munmap/exit_mmap, but any pmd we * preallocate which never got a corresponding vma will need to be * freed manually.
*/ static void mop_up_one_pmd(struct mm_struct *mm, pgd_t *pgdp) { pgd_t pgd = *pgdp; if (pgd_val(pgd) != 0) { pmd_t *pmd = (pmd_t *)pgd_page_vaddr(pgd); pgd_clear(pgdp); paravirt_release_pmd(pgd_val(pgd) >> PAGE_SHIFT); pmd_free(mm, pmd); mm_dec_nr_pmds(mm); } } static void pgd_mop_up_pmds(struct mm_struct *mm, pgd_t *pgdp) { int i; for (i = 0; i < PREALLOCATED_PMDS; i++) mop_up_one_pmd(mm, &pgdp[i]); #ifdef CONFIG_MITIGATION_PAGE_TABLE_ISOLATION if (!boot_cpu_has(X86_FEATURE_PTI)) return; pgdp = kernel_to_user_pgdp(pgdp); for (i = 0; i < PREALLOCATED_USER_PMDS; i++) mop_up_one_pmd(mm, &pgdp[i + KERNEL_PGD_BOUNDARY]); #endif } static void pgd_prepopulate_pmd(struct mm_struct *mm, pgd_t *pgd, pmd_t *pmds[]) { p4d_t *p4d; pud_t *pud; int i; p4d = p4d_offset(pgd, 0); pud = pud_offset(p4d, 0); for (i = 0; i < PREALLOCATED_PMDS; i++, pud++) { pmd_t *pmd = pmds[i]; if (i >= KERNEL_PGD_BOUNDARY) memcpy(pmd, (pmd_t *)pgd_page_vaddr(swapper_pg_dir[i]), sizeof(pmd_t) * PTRS_PER_PMD); pud_populate(mm, pud, pmd); } } #ifdef CONFIG_MITIGATION_PAGE_TABLE_ISOLATION static void pgd_prepopulate_user_pmd(struct mm_struct *mm, pgd_t *k_pgd, pmd_t *pmds[]) { pgd_t *s_pgd = kernel_to_user_pgdp(swapper_pg_dir); pgd_t *u_pgd = kernel_to_user_pgdp(k_pgd); p4d_t *u_p4d; pud_t *u_pud; int i; u_p4d = p4d_offset(u_pgd, 0); u_pud = pud_offset(u_p4d, 0); s_pgd += KERNEL_PGD_BOUNDARY; u_pud += KERNEL_PGD_BOUNDARY; for (i = 0; i < PREALLOCATED_USER_PMDS; i++, u_pud++, s_pgd++) { pmd_t *pmd = pmds[i]; memcpy(pmd, (pmd_t *)pgd_page_vaddr(*s_pgd), sizeof(pmd_t) * PTRS_PER_PMD); pud_populate(mm, u_pud, pmd); } } #else static void pgd_prepopulate_user_pmd(struct mm_struct *mm, pgd_t *k_pgd, pmd_t *pmds[]) { } #endif /* * Xen paravirt assumes that the pgd table is one page; 64-bit kernels also * assume that the pgd is one page. * * But a kernel with PAE paging that is not running as a Xen domain * only needs to allocate 32 bytes for the pgd instead of one page. */ #ifdef CONFIG_X86_PAE #include <linux/slab.h> #define PGD_SIZE (PTRS_PER_PGD * sizeof(pgd_t)) #define PGD_ALIGN 32 static struct kmem_cache *pgd_cache; void __init pgtable_cache_init(void) { /* * When a PAE kernel runs as a Xen domain, it does not use a shared * kernel pmd, and this requires a whole page for the pgd. */ if (!SHARED_KERNEL_PMD) return; /* * When a PAE kernel is not running as a Xen domain, it uses a shared * kernel pmd, which does not require a whole page for the pgd; * 32 bytes suffice. During boot we create a 32-byte slab for pgd * table allocation. */ pgd_cache = kmem_cache_create("pgd_cache", PGD_SIZE, PGD_ALIGN, SLAB_PANIC, NULL); } static inline pgd_t *_pgd_alloc(struct mm_struct *mm) { /* * Without SHARED_KERNEL_PMD, the PAE kernel is running as a Xen * domain and we allocate one page for the pgd. */ if (!SHARED_KERNEL_PMD) return __pgd_alloc(mm, PGD_ALLOCATION_ORDER); /* * Otherwise the PAE kernel is not running as a Xen domain and we can * allocate the pgd from the 32-byte slab to save memory.
*/ return kmem_cache_alloc(pgd_cache, GFP_PGTABLE_USER); } static inline void _pgd_free(struct mm_struct *mm, pgd_t *pgd) { if (!SHARED_KERNEL_PMD) __pgd_free(mm, pgd); else kmem_cache_free(pgd_cache, pgd); } #else static inline pgd_t *_pgd_alloc(struct mm_struct *mm) { return __pgd_alloc(mm, PGD_ALLOCATION_ORDER); } static inline void _pgd_free(struct mm_struct *mm, pgd_t *pgd) { __pgd_free(mm, pgd); } #endif /* CONFIG_X86_PAE */ pgd_t *pgd_alloc(struct mm_struct *mm) { pgd_t *pgd; pmd_t *u_pmds[MAX_PREALLOCATED_USER_PMDS]; pmd_t *pmds[MAX_PREALLOCATED_PMDS]; pgd = _pgd_alloc(mm); if (pgd == NULL) goto out; mm->pgd = pgd; if (sizeof(pmds) != 0 && preallocate_pmds(mm, pmds, PREALLOCATED_PMDS) != 0) goto out_free_pgd; if (sizeof(u_pmds) != 0 && preallocate_pmds(mm, u_pmds, PREALLOCATED_USER_PMDS) != 0) goto out_free_pmds; if (paravirt_pgd_alloc(mm) != 0) goto out_free_user_pmds; /* * Make sure that pre-populating the pmds is atomic with * respect to anything walking the pgd_list, so that they * never see a partially populated pgd. */ spin_lock(&pgd_lock); pgd_ctor(mm, pgd); if (sizeof(pmds) != 0) pgd_prepopulate_pmd(mm, pgd, pmds); if (sizeof(u_pmds) != 0) pgd_prepopulate_user_pmd(mm, pgd, u_pmds); spin_unlock(&pgd_lock); return pgd; out_free_user_pmds: if (sizeof(u_pmds) != 0) free_pmds(mm, u_pmds, PREALLOCATED_USER_PMDS); out_free_pmds: if (sizeof(pmds) != 0) free_pmds(mm, pmds, PREALLOCATED_PMDS); out_free_pgd: _pgd_free(mm, pgd); out: return NULL; } void pgd_free(struct mm_struct *mm, pgd_t *pgd) { pgd_mop_up_pmds(mm, pgd); pgd_dtor(pgd); paravirt_pgd_free(mm, pgd); _pgd_free(mm, pgd); } /* * Used to set accessed or dirty bits in the page table entries * on other architectures. On x86, the accessed and dirty bits * are tracked by hardware. However, do_wp_page calls this function * to also make the pte writeable at the same time the dirty bit is * set. In that case we do actually need to write the PTE. */ int ptep_set_access_flags(struct vm_area_struct *vma, unsigned long address, pte_t *ptep, pte_t entry, int dirty) { int changed = !pte_same(*ptep, entry); if (changed && dirty) set_pte(ptep, entry); return changed; } #ifdef CONFIG_TRANSPARENT_HUGEPAGE int pmdp_set_access_flags(struct vm_area_struct *vma, unsigned long address, pmd_t *pmdp, pmd_t entry, int dirty) { int changed = !pmd_same(*pmdp, entry); VM_BUG_ON(address & ~HPAGE_PMD_MASK); if (changed && dirty) { set_pmd(pmdp, entry); /* * We had a write-protection fault here and changed the pmd * to be more permissive. No need to flush the TLB for that, * #PF is architecturally guaranteed to do that and in the * worst-case we'll generate a spurious fault. */ } return changed; } int pudp_set_access_flags(struct vm_area_struct *vma, unsigned long address, pud_t *pudp, pud_t entry, int dirty) { int changed = !pud_same(*pudp, entry); VM_BUG_ON(address & ~HPAGE_PUD_MASK); if (changed && dirty) { set_pud(pudp, entry); /* * We had a write-protection fault here and changed the pud * to be more permissive. No need to flush the TLB for that, * #PF is architecturally guaranteed to do that and in the * worst-case we'll generate a spurious fault.
*/ } return changed; } #endif int ptep_test_and_clear_young(struct vm_area_struct *vma, unsigned long addr, pte_t *ptep) { int ret = 0; if (pte_young(*ptep)) ret = test_and_clear_bit(_PAGE_BIT_ACCESSED, (unsigned long *) &ptep->pte); return ret; } #if defined(CONFIG_TRANSPARENT_HUGEPAGE) || defined(CONFIG_ARCH_HAS_NONLEAF_PMD_YOUNG) int pmdp_test_and_clear_young(struct vm_area_struct *vma, unsigned long addr, pmd_t *pmdp) { int ret = 0; if (pmd_young(*pmdp)) ret = test_and_clear_bit(_PAGE_BIT_ACCESSED, (unsigned long *)pmdp); return ret; } #endif #ifdef CONFIG_TRANSPARENT_HUGEPAGE int pudp_test_and_clear_young(struct vm_area_struct *vma, unsigned long addr, pud_t *pudp) { int ret = 0; if (pud_young(*pudp)) ret = test_and_clear_bit(_PAGE_BIT_ACCESSED, (unsigned long *)pudp); return ret; } #endif int ptep_clear_flush_young(struct vm_area_struct *vma, unsigned long address, pte_t *ptep) { /* * On x86 CPUs, clearing the accessed bit without a TLB flush * doesn't cause data corruption. [ It could cause incorrect * page aging and the (mistaken) reclaim of hot pages, but the * chance of that should be relatively low. ] * * So as a performance optimization don't flush the TLB when * clearing the accessed bit, it will eventually be flushed by * a context switch or a VM operation anyway. [ In the rare * event of it not getting flushed for a long time the delay * shouldn't really matter because there's no real memory * pressure for swapout to react to. ] */ return ptep_test_and_clear_young(vma, address, ptep); } #ifdef CONFIG_TRANSPARENT_HUGEPAGE int pmdp_clear_flush_young(struct vm_area_struct *vma, unsigned long address, pmd_t *pmdp) { int young; VM_BUG_ON(address & ~HPAGE_PMD_MASK); young = pmdp_test_and_clear_young(vma, address, pmdp); if (young) flush_tlb_range(vma, address, address + HPAGE_PMD_SIZE); return young; } pmd_t pmdp_invalidate_ad(struct vm_area_struct *vma, unsigned long address, pmd_t *pmdp) { VM_WARN_ON_ONCE(!pmd_present(*pmdp)); /* * No flush is necessary. Once an invalid PTE is established, the PTE's * access and dirty bits cannot be updated. */ return pmdp_establish(vma, address, pmdp, pmd_mkinvalid(*pmdp)); } #endif #if defined(CONFIG_TRANSPARENT_HUGEPAGE) && \ defined(CONFIG_HAVE_ARCH_TRANSPARENT_HUGEPAGE_PUD) pud_t pudp_invalidate(struct vm_area_struct *vma, unsigned long address, pud_t *pudp) { VM_WARN_ON_ONCE(!pud_present(*pudp)); pud_t old = pudp_establish(vma, address, pudp, pud_mkinvalid(*pudp)); flush_pud_tlb_range(vma, address, address + HPAGE_PUD_SIZE); return old; } #endif /** * reserve_top_address - reserves a hole in the top of kernel address space * @reserve - size of hole to reserve * * Can be used to relocate the fixmap area and poke a hole in the top * of kernel address space to make room for a hypervisor. */ void __init reserve_top_address(unsigned long reserve) { #ifdef CONFIG_X86_32 BUG_ON(fixmaps_set > 0); __FIXADDR_TOP = round_down(-reserve, 1 << PMD_SHIFT) - PAGE_SIZE; printk(KERN_INFO "Reserving virtual address space above 0x%08lx (rounded to 0x%08lx)\n", -reserve, __FIXADDR_TOP + PAGE_SIZE); #endif } int fixmaps_set; void __native_set_fixmap(enum fixed_addresses idx, pte_t pte) { unsigned long address = __fix_to_virt(idx); #ifdef CONFIG_X86_64 /* * Ensure that the static initial page tables are covering the * fixmap completely. 
*/ BUILD_BUG_ON(__end_of_permanent_fixed_addresses > (FIXMAP_PMD_NUM * PTRS_PER_PTE)); #endif if (idx >= __end_of_fixed_addresses) { BUG(); return; } set_pte_vaddr(address, pte); fixmaps_set++; } void native_set_fixmap(unsigned /* enum fixed_addresses */ idx, phys_addr_t phys, pgprot_t flags) { /* Sanitize 'prot' against any unsupported bits: */ pgprot_val(flags) &= __default_kernel_pte_mask; __native_set_fixmap(idx, pfn_pte(phys >> PAGE_SHIFT, flags)); } #ifdef CONFIG_HAVE_ARCH_HUGE_VMAP #ifdef CONFIG_X86_5LEVEL /** * p4d_set_huge - setup kernel P4D mapping * * No 512GB pages yet -- always return 0 */ int p4d_set_huge(p4d_t *p4d, phys_addr_t addr, pgprot_t prot) { return 0; } /** * p4d_clear_huge - clear kernel P4D mapping when it is set * * No 512GB pages yet -- always return 0 */ void p4d_clear_huge(p4d_t *p4d) { } #endif /** * pud_set_huge - setup kernel PUD mapping * * MTRRs can override PAT memory types with 4KiB granularity. Therefore, this * function sets up a huge page only if the complete range has the same MTRR * caching mode. * * Callers should try to decrease page size (1GB -> 2MB -> 4K) if the bigger * page mapping attempt fails. * * Returns 1 on success and 0 on failure. */ int pud_set_huge(pud_t *pud, phys_addr_t addr, pgprot_t prot) { u8 uniform; mtrr_type_lookup(addr, addr + PUD_SIZE, &uniform); if (!uniform) return 0; /* Bail out if we are on a populated non-leaf entry: */ if (pud_present(*pud) && !pud_leaf(*pud)) return 0; set_pte((pte_t *)pud, pfn_pte( (u64)addr >> PAGE_SHIFT, __pgprot(protval_4k_2_large(pgprot_val(prot)) | _PAGE_PSE))); return 1; } /** * pmd_set_huge - setup kernel PMD mapping * * See the comment above pud_set_huge(). * * Returns 1 on success and 0 on failure. */ int pmd_set_huge(pmd_t *pmd, phys_addr_t addr, pgprot_t prot) { u8 uniform; mtrr_type_lookup(addr, addr + PMD_SIZE, &uniform); if (!uniform) { pr_warn_once("%s: Cannot satisfy [mem %#010llx-%#010llx] with a huge-page mapping due to MTRR override.\n", __func__, addr, addr + PMD_SIZE); return 0; } /* Bail out if we are on a populated non-leaf entry: */ if (pmd_present(*pmd) && !pmd_leaf(*pmd)) return 0; set_pte((pte_t *)pmd, pfn_pte( (u64)addr >> PAGE_SHIFT, __pgprot(protval_4k_2_large(pgprot_val(prot)) | _PAGE_PSE))); return 1; } /** * pud_clear_huge - clear kernel PUD mapping when it is set * * Returns 1 on success and 0 on failure (no PUD map is found). */ int pud_clear_huge(pud_t *pud) { if (pud_leaf(*pud)) { pud_clear(pud); return 1; } return 0; } /** * pmd_clear_huge - clear kernel PMD mapping when it is set * * Returns 1 on success and 0 on failure (no PMD map is found). */ int pmd_clear_huge(pmd_t *pmd) { if (pmd_leaf(*pmd)) { pmd_clear(pmd); return 1; } return 0; } #ifdef CONFIG_X86_64 /** * pud_free_pmd_page - Clear pud entry and free pmd page. * @pud: Pointer to a PUD. * @addr: Virtual address associated with pud. * * Context: The pud range has been unmapped and TLB purged. * Return: 1 if clearing the entry succeeded. 0 otherwise. * * NOTE: Callers must allow a single page allocation.
*/ int pud_free_pmd_page(pud_t *pud, unsigned long addr) { pmd_t *pmd, *pmd_sv; pte_t *pte; int i; pmd = pud_pgtable(*pud); pmd_sv = (pmd_t *)__get_free_page(GFP_KERNEL); if (!pmd_sv) return 0; for (i = 0; i < PTRS_PER_PMD; i++) { pmd_sv[i] = pmd[i]; if (!pmd_none(pmd[i])) pmd_clear(&pmd[i]); } pud_clear(pud); /* INVLPG to clear all paging-structure caches */ flush_tlb_kernel_range(addr, addr + PAGE_SIZE-1); for (i = 0; i < PTRS_PER_PMD; i++) { if (!pmd_none(pmd_sv[i])) { pte = (pte_t *)pmd_page_vaddr(pmd_sv[i]); free_page((unsigned long)pte); } } free_page((unsigned long)pmd_sv); pagetable_dtor(virt_to_ptdesc(pmd)); free_page((unsigned long)pmd); return 1; } /** * pmd_free_pte_page - Clear pmd entry and free pte page. * @pmd: Pointer to a PMD. * @addr: Virtual address associated with pmd. * * Context: The pmd range has been unmapped and TLB purged. * Return: 1 if clearing the entry succeeded. 0 otherwise. */ int pmd_free_pte_page(pmd_t *pmd, unsigned long addr) { pte_t *pte; pte = (pte_t *)pmd_page_vaddr(*pmd); pmd_clear(pmd); /* INVLPG to clear all paging-structure caches */ flush_tlb_kernel_range(addr, addr + PAGE_SIZE-1); free_page((unsigned long)pte); return 1; } #else /* !CONFIG_X86_64 */ /* * Disable free page handling on x86-PAE. This assures that ioremap() * does not update sync'd pmd entries. See vmalloc_sync_one(). */ int pmd_free_pte_page(pmd_t *pmd, unsigned long addr) { return pmd_none(*pmd); } #endif /* CONFIG_X86_64 */ #endif /* CONFIG_HAVE_ARCH_HUGE_VMAP */ pte_t pte_mkwrite(pte_t pte, struct vm_area_struct *vma) { if (vma->vm_flags & VM_SHADOW_STACK) return pte_mkwrite_shstk(pte); pte = pte_mkwrite_novma(pte); return pte_clear_saveddirty(pte); } pmd_t pmd_mkwrite(pmd_t pmd, struct vm_area_struct *vma) { if (vma->vm_flags & VM_SHADOW_STACK) return pmd_mkwrite_shstk(pmd); pmd = pmd_mkwrite_novma(pmd); return pmd_clear_saveddirty(pmd); } void arch_check_zapped_pte(struct vm_area_struct *vma, pte_t pte) { /* * Hardware before shadow stack can (rarely) set Dirty=1 * on a Write=0 PTE. So the below condition * only indicates a software bug when shadow stack is * supported by the HW. This checking is covered in * pte_shstk(). */ VM_WARN_ON_ONCE(!(vma->vm_flags & VM_SHADOW_STACK) && pte_shstk(pte)); } void arch_check_zapped_pmd(struct vm_area_struct *vma, pmd_t pmd) { /* See note in arch_check_zapped_pte() */ VM_WARN_ON_ONCE(!(vma->vm_flags & VM_SHADOW_STACK) && pmd_shstk(pmd)); } void arch_check_zapped_pud(struct vm_area_struct *vma, pud_t pud) { /* See note in arch_check_zapped_pte() */ VM_WARN_ON_ONCE(!(vma->vm_flags & VM_SHADOW_STACK) && pud_shstk(pud)); } |
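/*
 * Illustrative sketch (not kernel code): ptep_test_and_clear_young() above
 * boils down to an atomic test-and-clear of the accessed bit in a 64-bit
 * page-table entry. This user-space model mirrors only that bit
 * manipulation; the bit position matches x86's _PAGE_BIT_ACCESSED, but the
 * "pte" here is ordinary memory, not a live page table, and all model_*
 * names are invented for the example.
 */
#include <stdatomic.h>
#include <stdint.h>
#include <stdio.h>

#define MODEL_PAGE_BIT_ACCESSED	5	/* same position as x86 _PAGE_BIT_ACCESSED */

static int model_test_and_clear_young(_Atomic uint64_t *pte)
{
	/* Atomically clear the accessed bit and report its old value. */
	uint64_t old = atomic_fetch_and(pte,
			~(UINT64_C(1) << MODEL_PAGE_BIT_ACCESSED));

	return (old >> MODEL_PAGE_BIT_ACCESSED) & 1;
}

int main(void)
{
	/* Present + accessed, as hardware would leave a touched mapping. */
	_Atomic uint64_t pte = (UINT64_C(1) << MODEL_PAGE_BIT_ACCESSED) | 1;

	printf("first clear:  young=%d\n", model_test_and_clear_young(&pte));
	printf("second clear: young=%d\n", model_test_and_clear_young(&pte));
	return 0;
}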
// SPDX-License-Identifier: GPL-2.0-or-later /* * Copyright (c) 2022-2024 Oracle. All Rights Reserved. * Author: Darrick J.
Wong <djwong@kernel.org> */ #include "xfs.h" #include "xfs_fs.h" #include "xfs_shared.h" #include "xfs_format.h" #include "xfs_trans_resv.h" #include "xfs_bit.h" #include "xfs_sb.h" #include "xfs_mount.h" #include "xfs_btree.h" #include "xfs_alloc_btree.h" #include "xfs_rmap_btree.h" #include "xfs_alloc.h" #include "xfs_ialloc.h" #include "xfs_rmap.h" #include "xfs_ag.h" #include "xfs_ag_resv.h" #include "xfs_health.h" #include "xfs_error.h" #include "xfs_bmap.h" #include "xfs_defer.h" #include "xfs_log_format.h" #include "xfs_trans.h" #include "xfs_trace.h" #include "xfs_inode.h" #include "xfs_icache.h" #include "xfs_buf_item.h" #include "xfs_rtgroup.h" #include "xfs_rtbitmap.h" #include "xfs_metafile.h" #include "xfs_metadir.h" #include "xfs_rtrmap_btree.h" #include "xfs_rtrefcount_btree.h" /* Find the first usable fsblock in this rtgroup. */ static inline uint32_t xfs_rtgroup_min_block( struct xfs_mount *mp, xfs_rgnumber_t rgno) { if (xfs_has_rtsb(mp) && rgno == 0) return mp->m_sb.sb_rextsize; return 0; } /* Precompute this group's geometry */ void xfs_rtgroup_calc_geometry( struct xfs_mount *mp, struct xfs_rtgroup *rtg, xfs_rgnumber_t rgno, xfs_rgnumber_t rgcount, xfs_rtbxlen_t rextents) { rtg->rtg_extents = __xfs_rtgroup_extents(mp, rgno, rgcount, rextents); rtg_group(rtg)->xg_block_count = rtg->rtg_extents * mp->m_sb.sb_rextsize; rtg_group(rtg)->xg_min_gbno = xfs_rtgroup_min_block(mp, rgno); } int xfs_rtgroup_alloc( struct xfs_mount *mp, xfs_rgnumber_t rgno, xfs_rgnumber_t rgcount, xfs_rtbxlen_t rextents) { struct xfs_rtgroup *rtg; int error; rtg = kzalloc(sizeof(struct xfs_rtgroup), GFP_KERNEL); if (!rtg) return -ENOMEM; xfs_rtgroup_calc_geometry(mp, rtg, rgno, rgcount, rextents); error = xfs_group_insert(mp, rtg_group(rtg), rgno, XG_TYPE_RTG); if (error) goto out_free_rtg; return 0; out_free_rtg: kfree(rtg); return error; } void xfs_rtgroup_free( struct xfs_mount *mp, xfs_rgnumber_t rgno) { xfs_group_free(mp, rgno, XG_TYPE_RTG, NULL); } /* Free a range of incore rtgroup objects. */ void xfs_free_rtgroups( struct xfs_mount *mp, xfs_rgnumber_t first_rgno, xfs_rgnumber_t end_rgno) { xfs_rgnumber_t rgno; for (rgno = first_rgno; rgno < end_rgno; rgno++) xfs_rtgroup_free(mp, rgno); } /* Initialize some range of incore rtgroup objects. */ int xfs_initialize_rtgroups( struct xfs_mount *mp, xfs_rgnumber_t first_rgno, xfs_rgnumber_t end_rgno, xfs_rtbxlen_t rextents) { xfs_rgnumber_t index; int error; if (first_rgno >= end_rgno) return 0; for (index = first_rgno; index < end_rgno; index++) { error = xfs_rtgroup_alloc(mp, index, end_rgno, rextents); if (error) goto out_unwind_new_rtgs; } return 0; out_unwind_new_rtgs: xfs_free_rtgroups(mp, first_rgno, index); return error; } /* Compute the number of rt extents in this realtime group. */ xfs_rtxnum_t __xfs_rtgroup_extents( struct xfs_mount *mp, xfs_rgnumber_t rgno, xfs_rgnumber_t rgcount, xfs_rtbxlen_t rextents) { ASSERT(rgno < rgcount); if (rgno == rgcount - 1) return rextents - ((xfs_rtxnum_t)rgno * mp->m_sb.sb_rgextents); ASSERT(xfs_has_rtgroups(mp)); return mp->m_sb.sb_rgextents; } xfs_rtxnum_t xfs_rtgroup_extents( struct xfs_mount *mp, xfs_rgnumber_t rgno) { return __xfs_rtgroup_extents(mp, rgno, mp->m_sb.sb_rgcount, mp->m_sb.sb_rextents); } /* * Update the rt extent count of the previous tail rtgroup if it changed during * recovery (i.e. recovery of a growfs). 
*/ int xfs_update_last_rtgroup_size( struct xfs_mount *mp, xfs_rgnumber_t prev_rgcount) { struct xfs_rtgroup *rtg; ASSERT(prev_rgcount > 0); rtg = xfs_rtgroup_grab(mp, prev_rgcount - 1); if (!rtg) return -EFSCORRUPTED; rtg->rtg_extents = __xfs_rtgroup_extents(mp, prev_rgcount - 1, mp->m_sb.sb_rgcount, mp->m_sb.sb_rextents); rtg_group(rtg)->xg_block_count = rtg->rtg_extents * mp->m_sb.sb_rextsize; xfs_rtgroup_rele(rtg); return 0; } /* Lock metadata inodes associated with this rt group. */ void xfs_rtgroup_lock( struct xfs_rtgroup *rtg, unsigned int rtglock_flags) { ASSERT(!(rtglock_flags & ~XFS_RTGLOCK_ALL_FLAGS)); ASSERT(!(rtglock_flags & XFS_RTGLOCK_BITMAP_SHARED) || !(rtglock_flags & XFS_RTGLOCK_BITMAP)); if (!xfs_has_zoned(rtg_mount(rtg))) { if (rtglock_flags & XFS_RTGLOCK_BITMAP) { /* * Lock both realtime free space metadata inodes for a * freespace update. */ xfs_ilock(rtg_bitmap(rtg), XFS_ILOCK_EXCL); xfs_ilock(rtg_summary(rtg), XFS_ILOCK_EXCL); } else if (rtglock_flags & XFS_RTGLOCK_BITMAP_SHARED) { xfs_ilock(rtg_bitmap(rtg), XFS_ILOCK_SHARED); } } if ((rtglock_flags & XFS_RTGLOCK_RMAP) && rtg_rmap(rtg)) xfs_ilock(rtg_rmap(rtg), XFS_ILOCK_EXCL); if ((rtglock_flags & XFS_RTGLOCK_REFCOUNT) && rtg_refcount(rtg)) xfs_ilock(rtg_refcount(rtg), XFS_ILOCK_EXCL); } /* Unlock metadata inodes associated with this rt group. */ void xfs_rtgroup_unlock( struct xfs_rtgroup *rtg, unsigned int rtglock_flags) { ASSERT(!(rtglock_flags & ~XFS_RTGLOCK_ALL_FLAGS)); ASSERT(!(rtglock_flags & XFS_RTGLOCK_BITMAP_SHARED) || !(rtglock_flags & XFS_RTGLOCK_BITMAP)); if ((rtglock_flags & XFS_RTGLOCK_REFCOUNT) && rtg_refcount(rtg)) xfs_iunlock(rtg_refcount(rtg), XFS_ILOCK_EXCL); if ((rtglock_flags & XFS_RTGLOCK_RMAP) && rtg_rmap(rtg)) xfs_iunlock(rtg_rmap(rtg), XFS_ILOCK_EXCL); if (!xfs_has_zoned(rtg_mount(rtg))) { if (rtglock_flags & XFS_RTGLOCK_BITMAP) { xfs_iunlock(rtg_summary(rtg), XFS_ILOCK_EXCL); xfs_iunlock(rtg_bitmap(rtg), XFS_ILOCK_EXCL); } else if (rtglock_flags & XFS_RTGLOCK_BITMAP_SHARED) { xfs_iunlock(rtg_bitmap(rtg), XFS_ILOCK_SHARED); } } } /* * Join realtime group metadata inodes to the transaction. The ILOCKs will be * released on transaction commit. */ void xfs_rtgroup_trans_join( struct xfs_trans *tp, struct xfs_rtgroup *rtg, unsigned int rtglock_flags) { ASSERT(!(rtglock_flags & ~XFS_RTGLOCK_ALL_FLAGS)); ASSERT(!(rtglock_flags & XFS_RTGLOCK_BITMAP_SHARED)); if (!xfs_has_zoned(rtg_mount(rtg)) && (rtglock_flags & XFS_RTGLOCK_BITMAP)) { xfs_trans_ijoin(tp, rtg_bitmap(rtg), XFS_ILOCK_EXCL); xfs_trans_ijoin(tp, rtg_summary(rtg), XFS_ILOCK_EXCL); } if ((rtglock_flags & XFS_RTGLOCK_RMAP) && rtg_rmap(rtg)) xfs_trans_ijoin(tp, rtg_rmap(rtg), XFS_ILOCK_EXCL); if ((rtglock_flags & XFS_RTGLOCK_REFCOUNT) && rtg_refcount(rtg)) xfs_trans_ijoin(tp, rtg_refcount(rtg), XFS_ILOCK_EXCL); } /* Retrieve rt group geometry. */ int xfs_rtgroup_get_geometry( struct xfs_rtgroup *rtg, struct xfs_rtgroup_geometry *rgeo) { /* Fill out form. 
*/ memset(rgeo, 0, sizeof(*rgeo)); rgeo->rg_number = rtg_rgno(rtg); rgeo->rg_length = rtg_blocks(rtg); xfs_rtgroup_geom_health(rtg, rgeo); return 0; } #ifdef CONFIG_PROVE_LOCKING static struct lock_class_key xfs_rtginode_lock_class; static int xfs_rtginode_ilock_cmp_fn( const struct lockdep_map *m1, const struct lockdep_map *m2) { const struct xfs_inode *ip1 = container_of(m1, struct xfs_inode, i_lock.dep_map); const struct xfs_inode *ip2 = container_of(m2, struct xfs_inode, i_lock.dep_map); if (ip1->i_projid < ip2->i_projid) return -1; if (ip1->i_projid > ip2->i_projid) return 1; return 0; } static inline void xfs_rtginode_ilock_print_fn( const struct lockdep_map *m) { const struct xfs_inode *ip = container_of(m, struct xfs_inode, i_lock.dep_map); printk(KERN_CONT " rgno=%u metatype=%s", ip->i_projid, xfs_metafile_type_str(ip->i_metatype)); } /* * Most of the time each of the RTG inode locks are only taken one at a time. * But when committing deferred ops, more than one of a kind can be taken. * However, deferred rt ops will be committed in rgno order so there is no * potential for deadlocks. The code here is needed to tell lockdep about this * order. */ static inline void xfs_rtginode_lockdep_setup( struct xfs_inode *ip, xfs_rgnumber_t rgno, enum xfs_rtg_inodes type) { lockdep_set_class_and_subclass(&ip->i_lock, &xfs_rtginode_lock_class, type); lock_set_cmp_fn(&ip->i_lock, xfs_rtginode_ilock_cmp_fn, xfs_rtginode_ilock_print_fn); } #else #define xfs_rtginode_lockdep_setup(ip, rgno, type) do { } while (0) #endif /* CONFIG_PROVE_LOCKING */ struct xfs_rtginode_ops { const char *name; /* short name */ enum xfs_metafile_type metafile_type; unsigned int sick; /* rtgroup sickness flag */ unsigned int fmt_mask; /* all valid data fork formats */ /* Does the fs have this feature? */ bool (*enabled)(const struct xfs_mount *mp); /* Create this rtgroup metadata inode and initialize it. */ int (*create)(struct xfs_rtgroup *rtg, struct xfs_inode *ip, struct xfs_trans *tp, bool init); }; static const struct xfs_rtginode_ops xfs_rtginode_ops[XFS_RTGI_MAX] = { [XFS_RTGI_BITMAP] = { .name = "bitmap", .metafile_type = XFS_METAFILE_RTBITMAP, .sick = XFS_SICK_RG_BITMAP, .fmt_mask = (1U << XFS_DINODE_FMT_EXTENTS) | (1U << XFS_DINODE_FMT_BTREE), .enabled = xfs_has_nonzoned, .create = xfs_rtbitmap_create, }, [XFS_RTGI_SUMMARY] = { .name = "summary", .metafile_type = XFS_METAFILE_RTSUMMARY, .sick = XFS_SICK_RG_SUMMARY, .fmt_mask = (1U << XFS_DINODE_FMT_EXTENTS) | (1U << XFS_DINODE_FMT_BTREE), .enabled = xfs_has_nonzoned, .create = xfs_rtsummary_create, }, [XFS_RTGI_RMAP] = { .name = "rmap", .metafile_type = XFS_METAFILE_RTRMAP, .sick = XFS_SICK_RG_RMAPBT, .fmt_mask = 1U << XFS_DINODE_FMT_META_BTREE, /* * growfs must create the rtrmap inodes before adding a * realtime volume to the filesystem, so we cannot use the * rtrmapbt predicate here. */ .enabled = xfs_has_rmapbt, .create = xfs_rtrmapbt_create, }, [XFS_RTGI_REFCOUNT] = { .name = "refcount", .metafile_type = XFS_METAFILE_RTREFCOUNT, .sick = XFS_SICK_RG_REFCNTBT, .fmt_mask = 1U << XFS_DINODE_FMT_META_BTREE, /* same comment about growfs and rmap inodes applies here */ .enabled = xfs_has_reflink, .create = xfs_rtrefcountbt_create, }, }; /* Return the shortname of this rtgroup inode. */ const char * xfs_rtginode_name( enum xfs_rtg_inodes type) { return xfs_rtginode_ops[type].name; } /* Return the metafile type of this rtgroup inode. 
*/ enum xfs_metafile_type xfs_rtginode_metafile_type( enum xfs_rtg_inodes type) { return xfs_rtginode_ops[type].metafile_type; } /* Should this rtgroup inode be present? */ bool xfs_rtginode_enabled( struct xfs_rtgroup *rtg, enum xfs_rtg_inodes type) { const struct xfs_rtginode_ops *ops = &xfs_rtginode_ops[type]; if (!ops->enabled) return true; return ops->enabled(rtg_mount(rtg)); } /* Mark an rtgroup inode sick */ void xfs_rtginode_mark_sick( struct xfs_rtgroup *rtg, enum xfs_rtg_inodes type) { const struct xfs_rtginode_ops *ops = &xfs_rtginode_ops[type]; xfs_group_mark_sick(rtg_group(rtg), ops->sick); } /* Load an existing rtgroup inode into the rtgroup structure. */ int xfs_rtginode_load( struct xfs_rtgroup *rtg, enum xfs_rtg_inodes type, struct xfs_trans *tp) { struct xfs_mount *mp = tp->t_mountp; struct xfs_inode *ip; const struct xfs_rtginode_ops *ops = &xfs_rtginode_ops[type]; int error; if (!xfs_rtginode_enabled(rtg, type)) return 0; if (!xfs_has_rtgroups(mp)) { xfs_ino_t ino; switch (type) { case XFS_RTGI_BITMAP: ino = mp->m_sb.sb_rbmino; break; case XFS_RTGI_SUMMARY: ino = mp->m_sb.sb_rsumino; break; default: /* None of the other types exist on !rtgroups */ return 0; } error = xfs_trans_metafile_iget(tp, ino, ops->metafile_type, &ip); } else { const char *path; if (!mp->m_rtdirip) { xfs_fs_mark_sick(mp, XFS_SICK_FS_METADIR); return -EFSCORRUPTED; } path = xfs_rtginode_path(rtg_rgno(rtg), type); if (!path) return -ENOMEM; error = xfs_metadir_load(tp, mp->m_rtdirip, path, ops->metafile_type, &ip); kfree(path); } if (error) { if (xfs_metadata_is_sick(error)) xfs_rtginode_mark_sick(rtg, type); return error; } if (XFS_IS_CORRUPT(mp, !((1U << ip->i_df.if_format) & ops->fmt_mask))) { xfs_irele(ip); xfs_rtginode_mark_sick(rtg, type); return -EFSCORRUPTED; } if (XFS_IS_CORRUPT(mp, ip->i_projid != rtg_rgno(rtg))) { xfs_irele(ip); xfs_rtginode_mark_sick(rtg, type); return -EFSCORRUPTED; } xfs_rtginode_lockdep_setup(ip, rtg_rgno(rtg), type); rtg->rtg_inodes[type] = ip; return 0; } /* Release an rtgroup metadata inode. */ void xfs_rtginode_irele( struct xfs_inode **ipp) { if (*ipp) xfs_irele(*ipp); *ipp = NULL; } /* Create an rtgroup metadata inode and initialize it. */ int xfs_rtginode_create( struct xfs_rtgroup *rtg, enum xfs_rtg_inodes type, bool init) { const struct xfs_rtginode_ops *ops = &xfs_rtginode_ops[type]; struct xfs_mount *mp = rtg_mount(rtg); struct xfs_metadir_update upd = { .dp = mp->m_rtdirip, .metafile_type = ops->metafile_type, }; int error; if (!xfs_rtginode_enabled(rtg, type)) return 0; if (!mp->m_rtdirip) { xfs_fs_mark_sick(mp, XFS_SICK_FS_METADIR); return -EFSCORRUPTED; } upd.path = xfs_rtginode_path(rtg_rgno(rtg), type); if (!upd.path) return -ENOMEM; error = xfs_metadir_start_create(&upd); if (error) goto out_path; error = xfs_metadir_create(&upd, S_IFREG); if (error) goto out_cancel; xfs_rtginode_lockdep_setup(upd.ip, rtg_rgno(rtg), type); upd.ip->i_projid = rtg_rgno(rtg); error = ops->create(rtg, upd.ip, upd.tp, init); if (error) goto out_cancel; error = xfs_metadir_commit(&upd); if (error) goto out_path; kfree(upd.path); xfs_finish_inode_setup(upd.ip); rtg->rtg_inodes[type] = upd.ip; return 0; out_cancel: xfs_metadir_cancel(&upd, error); /* Have to finish setting up the inode to ensure it's deleted. */ if (upd.ip) { xfs_finish_inode_setup(upd.ip); xfs_irele(upd.ip); } out_path: kfree(upd.path); return error; } /* Create the parent directory for all rtgroup inodes and load it.
*/ int xfs_rtginode_mkdir_parent( struct xfs_mount *mp) { if (!mp->m_metadirip) { xfs_fs_mark_sick(mp, XFS_SICK_FS_METADIR); return -EFSCORRUPTED; } return xfs_metadir_mkdir(mp->m_metadirip, "rtgroups", &mp->m_rtdirip); } /* Load the parent directory of all rtgroup inodes. */ int xfs_rtginode_load_parent( struct xfs_trans *tp) { struct xfs_mount *mp = tp->t_mountp; if (!mp->m_metadirip) { xfs_fs_mark_sick(mp, XFS_SICK_FS_METADIR); return -EFSCORRUPTED; } return xfs_metadir_load(tp, mp->m_metadirip, "rtgroups", XFS_METAFILE_DIR, &mp->m_rtdirip); } /* Check superblock fields for a read or a write. */ static xfs_failaddr_t xfs_rtsb_verify_common( struct xfs_buf *bp) { struct xfs_rtsb *rsb = bp->b_addr; if (!xfs_verify_magic(bp, rsb->rsb_magicnum)) return __this_address; if (rsb->rsb_pad) return __this_address; /* Everything to the end of the fs block must be zero */ if (memchr_inv(rsb + 1, 0, BBTOB(bp->b_length) - sizeof(*rsb))) return __this_address; return NULL; } /* Check superblock fields for a read or revalidation. */ static inline xfs_failaddr_t xfs_rtsb_verify_all( struct xfs_buf *bp) { struct xfs_rtsb *rsb = bp->b_addr; struct xfs_mount *mp = bp->b_mount; xfs_failaddr_t fa; fa = xfs_rtsb_verify_common(bp); if (fa) return fa; if (memcmp(&rsb->rsb_fname, &mp->m_sb.sb_fname, XFSLABEL_MAX)) return __this_address; if (!uuid_equal(&rsb->rsb_uuid, &mp->m_sb.sb_uuid)) return __this_address; if (!uuid_equal(&rsb->rsb_meta_uuid, &mp->m_sb.sb_meta_uuid)) return __this_address; return NULL; } static void xfs_rtsb_read_verify( struct xfs_buf *bp) { xfs_failaddr_t fa; if (!xfs_buf_verify_cksum(bp, XFS_RTSB_CRC_OFF)) { xfs_verifier_error(bp, -EFSBADCRC, __this_address); return; } fa = xfs_rtsb_verify_all(bp); if (fa) xfs_verifier_error(bp, -EFSCORRUPTED, fa); } static void xfs_rtsb_write_verify( struct xfs_buf *bp) { xfs_failaddr_t fa; fa = xfs_rtsb_verify_common(bp); if (fa) { xfs_verifier_error(bp, -EFSCORRUPTED, fa); return; } xfs_buf_update_cksum(bp, XFS_RTSB_CRC_OFF); } const struct xfs_buf_ops xfs_rtsb_buf_ops = { .name = "xfs_rtsb", .magic = { 0, cpu_to_be32(XFS_RTSB_MAGIC) }, .verify_read = xfs_rtsb_read_verify, .verify_write = xfs_rtsb_write_verify, .verify_struct = xfs_rtsb_verify_all, }; /* Update a realtime superblock from the primary fs super */ void xfs_update_rtsb( struct xfs_buf *rtsb_bp, const struct xfs_buf *sb_bp) { const struct xfs_dsb *dsb = sb_bp->b_addr; struct xfs_rtsb *rsb = rtsb_bp->b_addr; const uuid_t *meta_uuid; rsb->rsb_magicnum = cpu_to_be32(XFS_RTSB_MAGIC); rsb->rsb_pad = 0; memcpy(&rsb->rsb_fname, &dsb->sb_fname, XFSLABEL_MAX); memcpy(&rsb->rsb_uuid, &dsb->sb_uuid, sizeof(rsb->rsb_uuid)); /* * The metadata uuid is the fs uuid if the metauuid feature is not * enabled. */ if (dsb->sb_features_incompat & cpu_to_be32(XFS_SB_FEAT_INCOMPAT_META_UUID)) meta_uuid = &dsb->sb_meta_uuid; else meta_uuid = &dsb->sb_uuid; memcpy(&rsb->rsb_meta_uuid, meta_uuid, sizeof(rsb->rsb_meta_uuid)); } /* * Update the realtime superblock from a filesystem superblock and log it to * the given transaction. */ struct xfs_buf * xfs_log_rtsb( struct xfs_trans *tp, const struct xfs_buf *sb_bp) { struct xfs_buf *rtsb_bp; if (!xfs_has_rtsb(tp->t_mountp)) return NULL; rtsb_bp = xfs_trans_getrtsb(tp); if (!rtsb_bp) { /* * It's possible for the rtgroups feature to be enabled but * there is no incore rt superblock buffer if the rt geometry * was specified at mkfs time but the rt section has not yet * been attached. In this case, rblocks must be zero. 
*/ ASSERT(tp->t_mountp->m_sb.sb_rblocks == 0); return NULL; } xfs_update_rtsb(rtsb_bp, sb_bp); xfs_trans_ordered_buf(tp, rtsb_bp); return rtsb_bp; } |
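/*
 * Illustrative sketch (not XFS code): the geometry rule implemented by
 * __xfs_rtgroup_extents() above is simple enough to model in user space.
 * Every realtime group holds sb_rgextents extents except the last one,
 * which absorbs whatever remains. The geometry numbers below are invented
 * purely for demonstration, and the model_* names are not XFS symbols.
 */
#include <stdint.h>
#include <stdio.h>

static uint64_t model_rtgroup_extents(uint32_t rgno, uint32_t rgcount,
				      uint64_t rextents, uint64_t rgextents)
{
	if (rgno == rgcount - 1)	/* tail group takes what is left */
		return rextents - (uint64_t)rgno * rgextents;
	return rgextents;		/* interior groups are full-sized */
}

int main(void)
{
	uint64_t rextents = 1000, rgextents = 300;	/* hypothetical geometry */
	uint32_t rgcount = 4, rgno;			/* 300 + 300 + 300 + 100 */

	for (rgno = 0; rgno < rgcount; rgno++)
		printf("rtgroup %u: %llu extents\n", rgno,
		       (unsigned long long)model_rtgroup_extents(rgno, rgcount,
								 rextents, rgextents));
	return 0;
}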
/* SPDX-License-Identifier: GPL-2.0 */ #ifndef _ASM_X86_MSR_H #define _ASM_X86_MSR_H #include "msr-index.h" #ifndef __ASSEMBLER__ #include <asm/asm.h> #include <asm/errno.h> #include <asm/cpumask.h> #include <uapi/asm/msr.h> #include <asm/shared/msr.h> #include <linux/percpu.h> struct msr_info { u32 msr_no; struct msr reg; struct msr __percpu *msrs; int err; }; struct msr_regs_info { u32 *regs; int err; }; struct saved_msr { bool valid; struct msr_info info; }; struct saved_msrs { unsigned int num; struct saved_msr *array; }; /* * Both i386 and x86_64 return the 64-bit value in edx:eax, but gcc's "A" * constraint has different meanings. For i386, "A" means exactly * edx:eax, while for x86_64 it doesn't mean rdx:rax or edx:eax. Instead, * it means rax *or* rdx. */ #ifdef CONFIG_X86_64 /* Using 64-bit values saves one instruction clearing the high half of low */ #define DECLARE_ARGS(val, low, high) unsigned long low, high #define EAX_EDX_VAL(val, low, high) ((low) | (high) << 32) #define EAX_EDX_RET(val, low, high) "=a" (low), "=d" (high) #else #define DECLARE_ARGS(val, low, high) unsigned long long val #define EAX_EDX_VAL(val, low, high) (val) #define EAX_EDX_RET(val, low, high) "=A" (val) #endif /* * Be very careful with includes. This header is prone to include loops.
*/ #include <asm/atomic.h> #include <linux/tracepoint-defs.h> #ifdef CONFIG_TRACEPOINTS DECLARE_TRACEPOINT(read_msr); DECLARE_TRACEPOINT(write_msr); DECLARE_TRACEPOINT(rdpmc); extern void do_trace_write_msr(unsigned int msr, u64 val, int failed); extern void do_trace_read_msr(unsigned int msr, u64 val, int failed); extern void do_trace_rdpmc(unsigned int msr, u64 val, int failed); #else static inline void do_trace_write_msr(unsigned int msr, u64 val, int failed) {} static inline void do_trace_read_msr(unsigned int msr, u64 val, int failed) {} static inline void do_trace_rdpmc(unsigned int msr, u64 val, int failed) {} #endif /* * __rdmsr() and __wrmsr() are the two primitives which are the bare minimum MSR * accessors and should not have any tracing or other functionality piggybacking * on them - those are *purely* for accessing MSRs and nothing more. So don't even * think of extending them - you will be slapped with a stinking trout or a frozen * shark will reach you, wherever you are! You've been warned. */ static __always_inline unsigned long long __rdmsr(unsigned int msr) { DECLARE_ARGS(val, low, high); asm volatile("1: rdmsr\n" "2:\n" _ASM_EXTABLE_TYPE(1b, 2b, EX_TYPE_RDMSR) : EAX_EDX_RET(val, low, high) : "c" (msr)); return EAX_EDX_VAL(val, low, high); } static __always_inline void __wrmsr(unsigned int msr, u32 low, u32 high) { asm volatile("1: wrmsr\n" "2:\n" _ASM_EXTABLE_TYPE(1b, 2b, EX_TYPE_WRMSR) : : "c" (msr), "a"(low), "d" (high) : "memory"); } #define native_rdmsr(msr, val1, val2) \ do { \ u64 __val = __rdmsr((msr)); \ (void)((val1) = (u32)__val); \ (void)((val2) = (u32)(__val >> 32)); \ } while (0) #define native_wrmsr(msr, low, high) \ __wrmsr(msr, low, high) #define native_wrmsrl(msr, val) \ __wrmsr((msr), (u32)((u64)(val)), \ (u32)((u64)(val) >> 32)) static inline unsigned long long native_read_msr(unsigned int msr) { unsigned long long val; val = __rdmsr(msr); if (tracepoint_enabled(read_msr)) do_trace_read_msr(msr, val, 0); return val; } static inline unsigned long long native_read_msr_safe(unsigned int msr, int *err) { DECLARE_ARGS(val, low, high); asm volatile("1: rdmsr ; xor %[err],%[err]\n" "2:\n\t" _ASM_EXTABLE_TYPE_REG(1b, 2b, EX_TYPE_RDMSR_SAFE, %[err]) : [err] "=r" (*err), EAX_EDX_RET(val, low, high) : "c" (msr)); if (tracepoint_enabled(read_msr)) do_trace_read_msr(msr, EAX_EDX_VAL(val, low, high), *err); return EAX_EDX_VAL(val, low, high); } /* Can be uninlined because referenced by paravirt */ static inline void notrace native_write_msr(unsigned int msr, u32 low, u32 high) { __wrmsr(msr, low, high); if (tracepoint_enabled(write_msr)) do_trace_write_msr(msr, ((u64)high << 32 | low), 0); } /* Can be uninlined because referenced by paravirt */ static inline int notrace native_write_msr_safe(unsigned int msr, u32 low, u32 high) { int err; asm volatile("1: wrmsr ; xor %[err],%[err]\n" "2:\n\t" _ASM_EXTABLE_TYPE_REG(1b, 2b, EX_TYPE_WRMSR_SAFE, %[err]) : [err] "=a" (err) : "c" (msr), "0" (low), "d" (high) : "memory"); if (tracepoint_enabled(write_msr)) do_trace_write_msr(msr, ((u64)high << 32 | low), err); return err; } extern int rdmsr_safe_regs(u32 regs[8]); extern int wrmsr_safe_regs(u32 regs[8]); /** * rdtsc() - returns the current TSC without ordering constraints * * rdtsc() returns the result of RDTSC as a 64-bit integer. The * only ordering constraint it supplies is the ordering implied by * "asm volatile": it will put the RDTSC in the place you expect. 
The * CPU can and will speculatively execute that RDTSC, though, so the * results can be non-monotonic if compared on different CPUs. */ static __always_inline unsigned long long rdtsc(void) { DECLARE_ARGS(val, low, high); asm volatile("rdtsc" : EAX_EDX_RET(val, low, high)); return EAX_EDX_VAL(val, low, high); } /** * rdtsc_ordered() - read the current TSC in program order * * rdtsc_ordered() returns the result of RDTSC as a 64-bit integer. * It is ordered like a load to a global in-memory counter. It should * be impossible to observe non-monotonic rdtsc() behavior * across multiple CPUs as long as the TSC is synced. */ static __always_inline unsigned long long rdtsc_ordered(void) { DECLARE_ARGS(val, low, high); /* * The RDTSC instruction is not ordered relative to memory * access. The Intel SDM and the AMD APM are both vague on this * point, but empirically an RDTSC instruction can be * speculatively executed before prior loads. An RDTSC * immediately after an appropriate barrier appears to be * ordered as a normal load, that is, it provides the same * ordering guarantees as reading from a global memory location * that some other imaginary CPU is updating continuously with a * time stamp. * * Thus, use the preferred barrier on the respective CPU, aiming for * RDTSCP as the default. */ asm volatile(ALTERNATIVE_2("rdtsc", "lfence; rdtsc", X86_FEATURE_LFENCE_RDTSC, "rdtscp", X86_FEATURE_RDTSCP) : EAX_EDX_RET(val, low, high) /* RDTSCP clobbers ECX with MSR_TSC_AUX. */ :: "ecx"); return EAX_EDX_VAL(val, low, high); } static inline unsigned long long native_read_pmc(int counter) { DECLARE_ARGS(val, low, high); asm volatile("rdpmc" : EAX_EDX_RET(val, low, high) : "c" (counter)); if (tracepoint_enabled(rdpmc)) do_trace_rdpmc(counter, EAX_EDX_VAL(val, low, high), 0); return EAX_EDX_VAL(val, low, high); } #ifdef CONFIG_PARAVIRT_XXL #include <asm/paravirt.h> #else #include <linux/errno.h> /* * Access to machine-specific registers (available on 586 and better only) * Note: the rd* operations modify the parameters directly (without using * pointer indirection), this allows gcc to optimize better */ #define rdmsr(msr, low, high) \ do { \ u64 __val = native_read_msr((msr)); \ (void)((low) = (u32)__val); \ (void)((high) = (u32)(__val >> 32)); \ } while (0) static inline void wrmsr(unsigned int msr, u32 low, u32 high) { native_write_msr(msr, low, high); } #define rdmsrl(msr, val) \ ((val) = native_read_msr((msr))) static inline void wrmsrl(unsigned int msr, u64 val) { native_write_msr(msr, (u32)(val & 0xffffffffULL), (u32)(val >> 32)); } /* wrmsr with exception handling */ static inline int wrmsr_safe(unsigned int msr, u32 low, u32 high) { return native_write_msr_safe(msr, low, high); } /* rdmsr with exception handling */ #define rdmsr_safe(msr, low, high) \ ({ \ int __err; \ u64 __val = native_read_msr_safe((msr), &__err); \ (*low) = (u32)__val; \ (*high) = (u32)(__val >> 32); \ __err; \ }) static inline int rdmsrl_safe(unsigned int msr, unsigned long long *p) { int err; *p = native_read_msr_safe(msr, &err); return err; } #define rdpmc(counter, low, high) \ do { \ u64 _l = native_read_pmc((counter)); \ (low) = (u32)_l; \ (high) = (u32)(_l >> 32); \ } while (0) #define rdpmcl(counter, val) ((val) = native_read_pmc(counter)) #endif /* !CONFIG_PARAVIRT_XXL */ /* Instruction opcode for WRMSRNS supported in binutils >= 2.40 */ #define WRMSRNS _ASM_BYTES(0x0f,0x01,0xc6) /* Non-serializing WRMSR, when available. Falls back to a serializing WRMSR.
*/ static __always_inline void wrmsrns(u32 msr, u64 val) { /* * WRMSR is 2 bytes. WRMSRNS is 3 bytes. Pad WRMSR with a redundant * DS prefix to avoid a trailing NOP. */ asm volatile("1: " ALTERNATIVE("ds wrmsr", WRMSRNS, X86_FEATURE_WRMSRNS) "2: " _ASM_EXTABLE_TYPE(1b, 2b, EX_TYPE_WRMSR) : : "c" (msr), "a" ((u32)val), "d" ((u32)(val >> 32))); } /* * 64-bit version of wrmsr_safe(): */ static inline int wrmsrl_safe(u32 msr, u64 val) { return wrmsr_safe(msr, (u32)val, (u32)(val >> 32)); } struct msr __percpu *msrs_alloc(void); void msrs_free(struct msr __percpu *msrs); int msr_set_bit(u32 msr, u8 bit); int msr_clear_bit(u32 msr, u8 bit); #ifdef CONFIG_SMP int rdmsr_on_cpu(unsigned int cpu, u32 msr_no, u32 *l, u32 *h); int wrmsr_on_cpu(unsigned int cpu, u32 msr_no, u32 l, u32 h); int rdmsrl_on_cpu(unsigned int cpu, u32 msr_no, u64 *q); int wrmsrl_on_cpu(unsigned int cpu, u32 msr_no, u64 q); void rdmsr_on_cpus(const struct cpumask *mask, u32 msr_no, struct msr __percpu *msrs); void wrmsr_on_cpus(const struct cpumask *mask, u32 msr_no, struct msr __percpu *msrs); int rdmsr_safe_on_cpu(unsigned int cpu, u32 msr_no, u32 *l, u32 *h); int wrmsr_safe_on_cpu(unsigned int cpu, u32 msr_no, u32 l, u32 h); int rdmsrl_safe_on_cpu(unsigned int cpu, u32 msr_no, u64 *q); int wrmsrl_safe_on_cpu(unsigned int cpu, u32 msr_no, u64 q); int rdmsr_safe_regs_on_cpu(unsigned int cpu, u32 regs[8]); int wrmsr_safe_regs_on_cpu(unsigned int cpu, u32 regs[8]); #else /* CONFIG_SMP */ static inline int rdmsr_on_cpu(unsigned int cpu, u32 msr_no, u32 *l, u32 *h) { rdmsr(msr_no, *l, *h); return 0; } static inline int wrmsr_on_cpu(unsigned int cpu, u32 msr_no, u32 l, u32 h) { wrmsr(msr_no, l, h); return 0; } static inline int rdmsrl_on_cpu(unsigned int cpu, u32 msr_no, u64 *q) { rdmsrl(msr_no, *q); return 0; } static inline int wrmsrl_on_cpu(unsigned int cpu, u32 msr_no, u64 q) { wrmsrl(msr_no, q); return 0; } static inline void rdmsr_on_cpus(const struct cpumask *m, u32 msr_no, struct msr __percpu *msrs) { rdmsr_on_cpu(0, msr_no, raw_cpu_ptr(&msrs->l), raw_cpu_ptr(&msrs->h)); } static inline void wrmsr_on_cpus(const struct cpumask *m, u32 msr_no, struct msr __percpu *msrs) { wrmsr_on_cpu(0, msr_no, raw_cpu_read(msrs->l), raw_cpu_read(msrs->h)); } static inline int rdmsr_safe_on_cpu(unsigned int cpu, u32 msr_no, u32 *l, u32 *h) { return rdmsr_safe(msr_no, l, h); } static inline int wrmsr_safe_on_cpu(unsigned int cpu, u32 msr_no, u32 l, u32 h) { return wrmsr_safe(msr_no, l, h); } static inline int rdmsrl_safe_on_cpu(unsigned int cpu, u32 msr_no, u64 *q) { return rdmsrl_safe(msr_no, q); } static inline int wrmsrl_safe_on_cpu(unsigned int cpu, u32 msr_no, u64 q) { return wrmsrl_safe(msr_no, q); } static inline int rdmsr_safe_regs_on_cpu(unsigned int cpu, u32 regs[8]) { return rdmsr_safe_regs(regs); } static inline int wrmsr_safe_regs_on_cpu(unsigned int cpu, u32 regs[8]) { return wrmsr_safe_regs(regs); } #endif /* CONFIG_SMP */ #endif /* __ASSEMBLER__ */ #endif /* _ASM_X86_MSR_H */ |
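/*
 * Illustrative sketch (not part of the header above): user space never sees
 * the edx:eax split that the rdmsr/wrmsr macros manage; the kernel's msr
 * driver exposes each MSR as an 8-byte value whose file offset is the MSR
 * number. Assumes the "msr" module is loaded and the caller has the
 * privileges the driver demands (root, and on many configurations
 * CAP_SYS_RAWIO); MSR 0x10 (IA32_TIME_STAMP_COUNTER) is used only as a
 * familiar read-only example.
 */
#include <fcntl.h>
#include <stdint.h>
#include <stdio.h>
#include <unistd.h>

int main(void)
{
	uint64_t val;
	int fd = open("/dev/cpu/0/msr", O_RDONLY);

	if (fd < 0) {
		perror("open /dev/cpu/0/msr (is the msr module loaded?)");
		return 1;
	}
	/* The file offset selects the MSR; the read is always 8 bytes. */
	if (pread(fd, &val, sizeof(val), 0x10) != (ssize_t)sizeof(val)) {
		perror("pread");
		close(fd);
		return 1;
	}
	printf("TSC via MSR 0x10: 0x%016llx\n", (unsigned long long)val);
	close(fd);
	return 0;
}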
/* SPDX-License-Identifier: GPL-2.0 */ #undef TRACE_SYSTEM #define TRACE_SYSTEM udp #if !defined(_TRACE_UDP_H) || defined(TRACE_HEADER_MULTI_READ) #define _TRACE_UDP_H #include <linux/udp.h> #include <linux/tracepoint.h> #include <trace/events/net_probe_common.h> TRACE_EVENT(udp_fail_queue_rcv_skb, TP_PROTO(int rc, struct sock *sk, struct sk_buff *skb), TP_ARGS(rc, sk, skb), TP_STRUCT__entry( __field(int, rc) __field(__u16, sport) __field(__u16, dport) __field(__u16, family) __array(__u8, saddr, sizeof(struct sockaddr_in6)) __array(__u8, daddr, sizeof(struct sockaddr_in6)) ), TP_fast_assign( const struct udphdr *uh = (const struct udphdr *)udp_hdr(skb); __entry->rc = rc; /* for filtering use */ __entry->sport = ntohs(uh->source); __entry->dport = ntohs(uh->dest); __entry->family = sk->sk_family; memset(__entry->saddr, 0, sizeof(struct sockaddr_in6)); memset(__entry->daddr, 0, sizeof(struct sockaddr_in6)); TP_STORE_ADDR_PORTS_SKB(skb, uh, __entry->saddr, __entry->daddr); ), TP_printk("rc=%d family=%s src=%pISpc dest=%pISpc", __entry->rc, show_family_name(__entry->family), __entry->saddr, __entry->daddr) ); #endif /* _TRACE_UDP_H */ /* This part must be outside protection */ #include <trace/define_trace.h> |
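/*
 * Illustrative sketch (not part of the header above): one way to consume the
 * udp_fail_queue_rcv_skb event defined above is through tracefs. Assumes
 * tracefs is mounted at /sys/kernel/tracing (older setups use
 * /sys/kernel/debug/tracing) and that the program runs as root; every line
 * read from trace_pipe ends with the TP_printk() output shown above.
 */
#include <stdio.h>
#include <string.h>

#define TRACEFS "/sys/kernel/tracing"

int main(void)
{
	char line[4096];
	FILE *enable, *pipe;

	enable = fopen(TRACEFS "/events/udp/udp_fail_queue_rcv_skb/enable", "w");
	if (!enable) {
		perror("enable event");
		return 1;
	}
	fputs("1", enable);	/* arm the tracepoint */
	fclose(enable);

	pipe = fopen(TRACEFS "/trace_pipe", "r");	/* blocking event stream */
	if (!pipe) {
		perror("trace_pipe");
		return 1;
	}
	while (fgets(line, sizeof(line), pipe))
		if (strstr(line, "udp_fail_queue_rcv_skb"))
			fputs(line, stdout);	/* loops until interrupted */
	fclose(pipe);
	return 0;
}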
// SPDX-License-Identifier: GPL-2.0 #include <linux/ptrace.h> #include <linux/sched.h> #include <linux/sched/task_stack.h> #include <linux/export.h> #include <asm/syscall.h> static int collect_syscall(struct task_struct *target, struct syscall_info *info) { unsigned long args[6] = { }; struct pt_regs *regs; if (!try_get_task_stack(target)) { /* Task has no stack, so the task isn't in a syscall. */ memset(info, 0, sizeof(*info)); info->data.nr = -1; return 0; } regs = task_pt_regs(target); if (unlikely(!regs)) { put_task_stack(target); return -EAGAIN; } info->sp = user_stack_pointer(regs); info->data.instruction_pointer = instruction_pointer(regs); info->data.nr = syscall_get_nr(target, regs); if (info->data.nr != -1L) syscall_get_arguments(target, regs, args); info->data.args[0] = args[0]; info->data.args[1] = args[1]; info->data.args[2] = args[2]; info->data.args[3] = args[3]; info->data.args[4] = args[4]; info->data.args[5] = args[5]; put_task_stack(target); return 0; } /** * task_current_syscall - Discover what a blocked task is doing. * @target: thread to examine * @info: structure with the following fields: * .sp - filled with user stack pointer * .data.nr - filled with system call number or -1 * .data.args - filled with the six system call arguments * .data.instruction_pointer - filled with user PC * * If @target is blocked in a system call, returns zero with @info.data.nr * set to the call's number and @info.data.args filled in with its * arguments. Registers not used for system call arguments may not be available * and it is not kosher to use &struct user_regset calls while the system * call is still in progress. Note we may get this result if @target * has finished its system call but not yet returned to user mode, such * as when it's stopped for signal handling or syscall exit tracing. * * If @target is blocked in the kernel during a fault or exception, * returns zero with @info.data.nr set to -1 and does not fill in * @info.data.args. If so, it's now safe to examine @target using * &struct user_regset get() calls as long as we're sure @target won't return * to user mode. * * Returns -%EAGAIN if @target does not remain blocked. */ int task_current_syscall(struct task_struct *target, struct syscall_info *info) { unsigned long ncsw; unsigned int state; if (target == current) return collect_syscall(target, info); state = READ_ONCE(target->__state); if (unlikely(!state)) return -EAGAIN; ncsw = wait_task_inactive(target, state); if (unlikely(!ncsw) || unlikely(collect_syscall(target, info)) || unlikely(wait_task_inactive(target, state) != ncsw)) return -EAGAIN; return 0; } |
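/*
 * Illustrative sketch (not kernel code): task_current_syscall() above is the
 * engine behind /proc/<pid>/syscall, so its three outcomes are directly
 * observable from user space: "running" for a task that would not stay
 * blocked, "-1 <sp> <pc>" for a task blocked outside any syscall, or the
 * syscall number followed by six arguments, the stack pointer, and the
 * program counter.
 */
#include <stdio.h>

int main(int argc, char **argv)
{
	char path[64], line[256];
	FILE *f;

	/* Read another task's entry if a pid is given, else our own. */
	snprintf(path, sizeof(path), "/proc/%s/syscall",
		 argc > 1 ? argv[1] : "self");
	f = fopen(path, "r");
	if (!f) {
		perror(path);
		return 1;
	}
	if (fgets(line, sizeof(line), f))
		printf("%s: %s", path, line);
	fclose(f);
	return 0;
}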
// SPDX-License-Identifier: GPL-2.0-or-later /* * USB ATI Remote support * * Copyright (c) 2011, 2012 Anssi Hannula <anssi.hannula@iki.fi> * Version 2.2.0 Copyright (c) 2004 Torrey Hoffman <thoffman@arnor.net> * Version 2.1.1 Copyright (c) 2002 Vladimir Dergachev * * This 2.2.0 version is a rewrite / cleanup of the 2.1.1 driver, including * porting to the 2.6 kernel interfaces, along with other modification * to better match the style of the existing usb/input drivers. However, the * protocol and hardware handling is essentially unchanged from 2.1.1. * * The 2.1.1 driver was derived from the usbati_remote and usbkbd drivers by * Vojtech Pavlik. * * Changes: * * Feb 2004: Torrey Hoffman <thoffman@arnor.net> * Version 2.2.0 * Jun 2004: Torrey Hoffman <thoffman@arnor.net> * Version 2.2.1 * Added key repeat support contributed by: * Vincent Vanackere <vanackere@lif.univ-mrs.fr> * Added support for the "Lola" remote contributed by: * Seth Cohn <sethcohn@yahoo.com> * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * Hardware & software notes * * These remote controls are distributed by ATI as part of their * "All-In-Wonder" video card packages. The receiver self-identifies as a * "USB Receiver" with manufacturer "X10 Wireless Technology Inc". * * The "Lola" remote is available from X10. See: * http://www.x10.com/products/lola_sg1.htm * The Lola is similar to the ATI remote but has no mouse support, and slightly * different keys. * * It is possible to use multiple receivers and remotes on multiple computers * simultaneously by configuring them to use specific channels. * * The RF protocol used by the remote supports 16 distinct channels, 1 to 16. * Actually, it may even support more, at least in some revisions of the * hardware. * * Each remote can be configured to transmit on one channel as follows: * - Press and hold the "hand icon" button. * - When the red LED starts to blink, let go of the "hand icon" button. * - When it stops blinking, input the channel code as two digits, from 01 * to 16, and press the hand icon again. * * The timing can be a little tricky. Try loading the module with debug=1 * to have the kernel print out messages about the remote control number * and mask. Note: debugging prints remote numbers as zero-based hexadecimal. * * The driver has a "channel_mask" parameter. This bitmask specifies which * channels will be ignored by the module. To mask out channels, just add * all the 2^channel_number values together. * * For instance, set channel_mask = 2^4 = 16 (binary 10000) to make ati_remote * ignore signals coming from remote controls transmitting on channel 4, but * accept all other channels. * * Or, set channel_mask = 65533 (0xFFFD), and all channels except 1 will be * ignored. * * The default is 0 (respond to all channels). Bit 0 and bits 17-32 of this * parameter are unused.
*/ #include <linux/kernel.h> #include <linux/errno.h> #include <linux/init.h> #include <linux/slab.h> #include <linux/module.h> #include <linux/mutex.h> #include <linux/usb/input.h> #include <linux/wait.h> #include <linux/jiffies.h> #include <media/rc-core.h> /* * Module and Version Information, Module Parameters */ #define ATI_REMOTE_VENDOR_ID 0x0bc7 #define LOLA_REMOTE_PRODUCT_ID 0x0002 #define LOLA2_REMOTE_PRODUCT_ID 0x0003 #define ATI_REMOTE_PRODUCT_ID 0x0004 #define NVIDIA_REMOTE_PRODUCT_ID 0x0005 #define MEDION_REMOTE_PRODUCT_ID 0x0006 #define FIREFLY_REMOTE_PRODUCT_ID 0x0008 #define DRIVER_VERSION "2.2.1" #define DRIVER_AUTHOR "Torrey Hoffman <thoffman@arnor.net>" #define DRIVER_DESC "ATI/X10 RF USB Remote Control" #define NAME_BUFSIZE 80 /* size of product name, path buffers */ #define DATA_BUFSIZE 63 /* size of URB data buffers */ /* * Duplicate event filtering time. * Sequential, identical KIND_FILTERED inputs with less than * FILTER_TIME milliseconds between them are considered as repeat * events. The hardware generates 5 events for the first keypress * and we have to take this into account for an accurate repeat * behaviour. */ #define FILTER_TIME 60 /* msec */ #define REPEAT_DELAY 500 /* msec */ static unsigned long channel_mask; module_param(channel_mask, ulong, 0644); MODULE_PARM_DESC(channel_mask, "Bitmask of remote control channels to ignore"); static int debug; module_param(debug, int, 0644); MODULE_PARM_DESC(debug, "Enable extra debug messages and information"); static int repeat_filter = FILTER_TIME; module_param(repeat_filter, int, 0644); MODULE_PARM_DESC(repeat_filter, "Repeat filter time, default = 60 msec"); static int repeat_delay = REPEAT_DELAY; module_param(repeat_delay, int, 0644); MODULE_PARM_DESC(repeat_delay, "Delay before sending repeats, default = 500 msec"); static bool mouse = true; module_param(mouse, bool, 0444); MODULE_PARM_DESC(mouse, "Enable mouse device, default = yes"); #define dbginfo(dev, format, arg...) \ do { if (debug) dev_info(dev , format , ## arg); } while (0) struct ati_receiver_type { /* either default_keymap or get_default_keymap should be set */ const char *default_keymap; const char *(*get_default_keymap)(struct usb_interface *interface); }; static const char *get_medion_keymap(struct usb_interface *interface) { struct usb_device *udev = interface_to_usbdev(interface); /* * There are many different Medion remotes shipped with a receiver * with the same usb id, but the receivers have subtle differences * in the USB descriptors allowing us to detect them. 
*/ if (udev->manufacturer && udev->product) { if (udev->actconfig->desc.bmAttributes & USB_CONFIG_ATT_WAKEUP) { if (!strcmp(udev->manufacturer, "X10 Wireless Technology Inc") && !strcmp(udev->product, "USB Receiver")) return RC_MAP_MEDION_X10_DIGITAINER; if (!strcmp(udev->manufacturer, "X10 WTI") && !strcmp(udev->product, "RF receiver")) return RC_MAP_MEDION_X10_OR2X; } else { if (!strcmp(udev->manufacturer, "X10 Wireless Technology Inc") && !strcmp(udev->product, "USB Receiver")) return RC_MAP_MEDION_X10; } } dev_info(&interface->dev, "Unknown Medion X10 receiver, using default ati_remote Medion keymap\n"); return RC_MAP_MEDION_X10; } static const struct ati_receiver_type type_ati = { .default_keymap = RC_MAP_ATI_X10 }; static const struct ati_receiver_type type_medion = { .get_default_keymap = get_medion_keymap }; static const struct ati_receiver_type type_firefly = { .default_keymap = RC_MAP_SNAPSTREAM_FIREFLY }; static const struct usb_device_id ati_remote_table[] = { { USB_DEVICE(ATI_REMOTE_VENDOR_ID, LOLA_REMOTE_PRODUCT_ID), .driver_info = (unsigned long)&type_ati }, { USB_DEVICE(ATI_REMOTE_VENDOR_ID, LOLA2_REMOTE_PRODUCT_ID), .driver_info = (unsigned long)&type_ati }, { USB_DEVICE(ATI_REMOTE_VENDOR_ID, ATI_REMOTE_PRODUCT_ID), .driver_info = (unsigned long)&type_ati }, { USB_DEVICE(ATI_REMOTE_VENDOR_ID, NVIDIA_REMOTE_PRODUCT_ID), .driver_info = (unsigned long)&type_ati }, { USB_DEVICE(ATI_REMOTE_VENDOR_ID, MEDION_REMOTE_PRODUCT_ID), .driver_info = (unsigned long)&type_medion }, { USB_DEVICE(ATI_REMOTE_VENDOR_ID, FIREFLY_REMOTE_PRODUCT_ID), .driver_info = (unsigned long)&type_firefly }, {} /* Terminating entry */ }; MODULE_DEVICE_TABLE(usb, ati_remote_table); /* Get hi and low bytes of a 16-bits int */ #define HI(a) ((unsigned char)((a) >> 8)) #define LO(a) ((unsigned char)((a) & 0xff)) #define SEND_FLAG_IN_PROGRESS 1 #define SEND_FLAG_COMPLETE 2 /* Device initialization strings */ static char init1[] = { 0x01, 0x00, 0x20, 0x14 }; static char init2[] = { 0x01, 0x00, 0x20, 0x14, 0x20, 0x20, 0x20 }; struct ati_remote { struct input_dev *idev; struct rc_dev *rdev; struct usb_device *udev; struct usb_interface *interface; struct urb *irq_urb; struct urb *out_urb; struct usb_endpoint_descriptor *endpoint_in; struct usb_endpoint_descriptor *endpoint_out; unsigned char *inbuf; unsigned char *outbuf; dma_addr_t inbuf_dma; dma_addr_t outbuf_dma; unsigned char old_data; /* Detect duplicate events */ unsigned long old_jiffies; unsigned long acc_jiffies; /* handle acceleration */ unsigned long first_jiffies; unsigned int repeat_count; char rc_name[NAME_BUFSIZE]; char rc_phys[NAME_BUFSIZE]; char mouse_name[NAME_BUFSIZE + 6]; char mouse_phys[NAME_BUFSIZE]; wait_queue_head_t wait; int send_flags; int users; /* 0-2, users are rc and input */ struct mutex open_mutex; }; /* "Kinds" of messages sent from the hardware to the driver. */ #define KIND_END 0 #define KIND_LITERAL 1 /* Simply pass to input system as EV_KEY */ #define KIND_FILTERED 2 /* Add artificial key-up events, drop keyrepeats */ #define KIND_ACCEL 3 /* Translate to EV_REL mouse-move events */ /* Translation table from hardware messages to input events. */ static const struct { unsigned char kind; unsigned char data; /* Raw key code from remote */ unsigned short code; /* Input layer translation */ } ati_remote_tbl[] = { /* Directional control pad axes. 
Code is xxyy */ {KIND_ACCEL, 0x70, 0xff00}, /* left */ {KIND_ACCEL, 0x71, 0x0100}, /* right */ {KIND_ACCEL, 0x72, 0x00ff}, /* up */ {KIND_ACCEL, 0x73, 0x0001}, /* down */ /* Directional control pad diagonals */ {KIND_ACCEL, 0x74, 0xffff}, /* left up */ {KIND_ACCEL, 0x75, 0x01ff}, /* right up */ {KIND_ACCEL, 0x77, 0xff01}, /* left down */ {KIND_ACCEL, 0x76, 0x0101}, /* right down */ /* "Mouse button" buttons. The code below uses the fact that the * lsbit of the raw code is a down/up indicator. */ {KIND_LITERAL, 0x78, BTN_LEFT}, /* left btn down */ {KIND_LITERAL, 0x79, BTN_LEFT}, /* left btn up */ {KIND_LITERAL, 0x7c, BTN_RIGHT},/* right btn down */ {KIND_LITERAL, 0x7d, BTN_RIGHT},/* right btn up */ /* Artificial "double-click" events are generated by the hardware. * They are mapped to the "side" and "extra" mouse buttons here. */ {KIND_FILTERED, 0x7a, BTN_SIDE}, /* left dblclick */ {KIND_FILTERED, 0x7e, BTN_EXTRA},/* right dblclick */ /* Non-mouse events are handled by rc-core */ {KIND_END, 0x00, 0} }; /* * ati_remote_dump_input */ static void ati_remote_dump(struct device *dev, unsigned char *data, unsigned int len) { if (len == 1) { if (data[0] != (unsigned char)0xff && data[0] != 0x00) dev_warn(dev, "Weird byte 0x%02x\n", data[0]); } else if (len == 4) dev_warn(dev, "Weird key %4ph\n", data); else dev_warn(dev, "Weird data, len=%d %6ph ...\n", len, data); } /* * ati_remote_open */ static int ati_remote_open(struct ati_remote *ati_remote) { int err = 0; mutex_lock(&ati_remote->open_mutex); if (ati_remote->users++ != 0) goto out; /* one was already active */ /* On first open, submit the read urb which was set up previously. */ ati_remote->irq_urb->dev = ati_remote->udev; if (usb_submit_urb(ati_remote->irq_urb, GFP_KERNEL)) { dev_err(&ati_remote->interface->dev, "%s: usb_submit_urb failed!\n", __func__); err = -EIO; } out: mutex_unlock(&ati_remote->open_mutex); return err; } /* * ati_remote_close */ static void ati_remote_close(struct ati_remote *ati_remote) { mutex_lock(&ati_remote->open_mutex); if (--ati_remote->users == 0) usb_kill_urb(ati_remote->irq_urb); mutex_unlock(&ati_remote->open_mutex); } static int ati_remote_input_open(struct input_dev *inputdev) { struct ati_remote *ati_remote = input_get_drvdata(inputdev); return ati_remote_open(ati_remote); } static void ati_remote_input_close(struct input_dev *inputdev) { struct ati_remote *ati_remote = input_get_drvdata(inputdev); ati_remote_close(ati_remote); } static int ati_remote_rc_open(struct rc_dev *rdev) { struct ati_remote *ati_remote = rdev->priv; return ati_remote_open(ati_remote); } static void ati_remote_rc_close(struct rc_dev *rdev) { struct ati_remote *ati_remote = rdev->priv; ati_remote_close(ati_remote); } /* * ati_remote_irq_out */ static void ati_remote_irq_out(struct urb *urb) { struct ati_remote *ati_remote = urb->context; if (urb->status) { dev_dbg(&ati_remote->interface->dev, "%s: status %d\n", __func__, urb->status); return; } ati_remote->send_flags |= SEND_FLAG_COMPLETE; wmb(); wake_up(&ati_remote->wait); } /* * ati_remote_sendpacket * * Used to send device initialization strings */ static int ati_remote_sendpacket(struct ati_remote *ati_remote, u16 cmd, unsigned char *data) { int retval = 0; /* Set up out_urb */ memcpy(ati_remote->out_urb->transfer_buffer + 1, data, LO(cmd)); ((char *) ati_remote->out_urb->transfer_buffer)[0] = HI(cmd); ati_remote->out_urb->transfer_buffer_length = LO(cmd) + 1; ati_remote->out_urb->dev = ati_remote->udev; ati_remote->send_flags = SEND_FLAG_IN_PROGRESS; retval = 
usb_submit_urb(ati_remote->out_urb, GFP_ATOMIC); if (retval) { dev_dbg(&ati_remote->interface->dev, "sendpacket: usb_submit_urb failed: %d\n", retval); return retval; } wait_event_timeout(ati_remote->wait, ((ati_remote->out_urb->status != -EINPROGRESS) || (ati_remote->send_flags & SEND_FLAG_COMPLETE)), HZ); usb_kill_urb(ati_remote->out_urb); return retval; } struct accel_times { const char value; unsigned int msecs; }; static const struct accel_times accel[] = { { 1, 125 }, { 2, 250 }, { 4, 500 }, { 6, 1000 }, { 9, 1500 }, { 13, 2000 }, { 20, 0 }, }; /* * ati_remote_compute_accel * * Implements acceleration curve for directional control pad * If elapsed time since last event is > 1/4 second, user "stopped", * so reset acceleration. Otherwise, user is probably holding the control * pad down, so we increase acceleration, ramping up over two seconds to * a maximum speed. */ static int ati_remote_compute_accel(struct ati_remote *ati_remote) { unsigned long now = jiffies, reset_time; int i; reset_time = msecs_to_jiffies(250); if (time_after(now, ati_remote->old_jiffies + reset_time)) { ati_remote->acc_jiffies = now; return 1; } for (i = 0; i < ARRAY_SIZE(accel) - 1; i++) { unsigned long timeout = msecs_to_jiffies(accel[i].msecs); if (time_before(now, ati_remote->acc_jiffies + timeout)) return accel[i].value; } return accel[i].value; } /* * ati_remote_report_input */ static void ati_remote_input_report(struct urb *urb) { struct ati_remote *ati_remote = urb->context; unsigned char *data= ati_remote->inbuf; struct input_dev *dev = ati_remote->idev; int index = -1; int remote_num; unsigned char scancode; u32 wheel_keycode = KEY_RESERVED; int i; /* * data[0] = 0x14 * data[1] = data[2] + data[3] + 0xd5 (a checksum byte) * data[2] = the key code (with toggle bit in MSB with some models) * data[3] = channel << 4 (the low 4 bits must be zero) */ /* Deal with strange looking inputs */ if ( urb->actual_length != 4 || data[0] != 0x14 || data[1] != (unsigned char)(data[2] + data[3] + 0xD5) || (data[3] & 0x0f) != 0x00) { ati_remote_dump(&urb->dev->dev, data, urb->actual_length); return; } if (data[1] != ((data[2] + data[3] + 0xd5) & 0xff)) { dbginfo(&ati_remote->interface->dev, "wrong checksum in input: %4ph\n", data); return; } /* Mask unwanted remote channels. */ /* note: remote_num is 0-based, channel 1 on remote == 0 here */ remote_num = (data[3] >> 4) & 0x0f; if (channel_mask & (1 << (remote_num + 1))) { dbginfo(&ati_remote->interface->dev, "Masked input from channel 0x%02x: data %02x, mask= 0x%02lx\n", remote_num, data[2], channel_mask); return; } /* * MSB is a toggle code, though only used by some devices * (e.g. SnapStream Firefly) */ scancode = data[2] & 0x7f; dbginfo(&ati_remote->interface->dev, "channel 0x%02x; key data %02x, scancode %02x\n", remote_num, data[2], scancode); if (scancode >= 0x70) { /* * This is either a mouse or scrollwheel event, depending on * the remote/keymap. * Get the keycode assigned to scancode 0x78/0x70. If it is * set, assume this is a scrollwheel up/down event. */ wheel_keycode = rc_g_keycode_from_table(ati_remote->rdev, scancode & 0x78); if (wheel_keycode == KEY_RESERVED) { /* scrollwheel was not mapped, assume mouse */ /* Look up event code index in the mouse translation * table. */ for (i = 0; ati_remote_tbl[i].kind != KIND_END; i++) { if (scancode == ati_remote_tbl[i].data) { index = i; break; } } } } if (index >= 0 && ati_remote_tbl[index].kind == KIND_LITERAL) { /* * The lsbit of the raw key code is a down/up flag. 
* Invert it to match the input layer's conventions. */ input_event(dev, EV_KEY, ati_remote_tbl[index].code, !(data[2] & 1)); ati_remote->old_jiffies = jiffies; } else if (index < 0 || ati_remote_tbl[index].kind == KIND_FILTERED) { unsigned long now = jiffies; /* Filter duplicate events which happen "too close" together. */ if (ati_remote->old_data == data[2] && time_before(now, ati_remote->old_jiffies + msecs_to_jiffies(repeat_filter))) { ati_remote->repeat_count++; } else { ati_remote->repeat_count = 0; ati_remote->first_jiffies = now; } ati_remote->old_jiffies = now; /* Ensure we skip at least the 4 first duplicate events * (generated by a single keypress), and continue skipping * until repeat_delay msecs have passed. */ if (ati_remote->repeat_count > 0 && (ati_remote->repeat_count < 5 || time_before(now, ati_remote->first_jiffies + msecs_to_jiffies(repeat_delay)))) return; if (index >= 0) { input_event(dev, EV_KEY, ati_remote_tbl[index].code, 1); input_event(dev, EV_KEY, ati_remote_tbl[index].code, 0); } else { /* Not a mouse event, hand it to rc-core. */ int count = 1; if (wheel_keycode != KEY_RESERVED) { /* * This is a scrollwheel event, send the * scroll up (0x78) / down (0x70) scancode * repeatedly as many times as indicated by * rest of the scancode. */ count = (scancode & 0x07) + 1; scancode &= 0x78; } while (count--) { /* * We don't use the rc-core repeat handling yet as * it would cause ghost repeats which would be a * regression for this driver. */ rc_keydown_notimeout(ati_remote->rdev, RC_PROTO_OTHER, scancode, data[2]); rc_keyup(ati_remote->rdev); } goto nosync; } } else if (ati_remote_tbl[index].kind == KIND_ACCEL) { signed char dx = ati_remote_tbl[index].code >> 8; signed char dy = ati_remote_tbl[index].code & 255; /* * Other event kinds are from the directional control pad, and * have an acceleration factor applied to them. Without this * acceleration, the control pad is mostly unusable. 
*/ int acc = ati_remote_compute_accel(ati_remote); if (dx) input_report_rel(dev, REL_X, dx * acc); if (dy) input_report_rel(dev, REL_Y, dy * acc); ati_remote->old_jiffies = jiffies; } else { dev_dbg(&ati_remote->interface->dev, "ati_remote kind=%d\n", ati_remote_tbl[index].kind); return; } input_sync(dev); nosync: ati_remote->old_data = data[2]; } /* * ati_remote_irq_in */ static void ati_remote_irq_in(struct urb *urb) { struct ati_remote *ati_remote = urb->context; int retval; switch (urb->status) { case 0: /* success */ ati_remote_input_report(urb); break; case -ECONNRESET: /* unlink */ case -ENOENT: case -ESHUTDOWN: dev_dbg(&ati_remote->interface->dev, "%s: urb error status, unlink?\n", __func__); return; default: /* error */ dev_dbg(&ati_remote->interface->dev, "%s: Nonzero urb status %d\n", __func__, urb->status); } retval = usb_submit_urb(urb, GFP_ATOMIC); if (retval) dev_err(&ati_remote->interface->dev, "%s: usb_submit_urb()=%d\n", __func__, retval); } /* * ati_remote_alloc_buffers */ static int ati_remote_alloc_buffers(struct usb_device *udev, struct ati_remote *ati_remote) { ati_remote->inbuf = usb_alloc_coherent(udev, DATA_BUFSIZE, GFP_ATOMIC, &ati_remote->inbuf_dma); if (!ati_remote->inbuf) return -1; ati_remote->outbuf = usb_alloc_coherent(udev, DATA_BUFSIZE, GFP_ATOMIC, &ati_remote->outbuf_dma); if (!ati_remote->outbuf) return -1; ati_remote->irq_urb = usb_alloc_urb(0, GFP_KERNEL); if (!ati_remote->irq_urb) return -1; ati_remote->out_urb = usb_alloc_urb(0, GFP_KERNEL); if (!ati_remote->out_urb) return -1; return 0; } /* * ati_remote_free_buffers */ static void ati_remote_free_buffers(struct ati_remote *ati_remote) { usb_free_urb(ati_remote->irq_urb); usb_free_urb(ati_remote->out_urb); usb_free_coherent(ati_remote->udev, DATA_BUFSIZE, ati_remote->inbuf, ati_remote->inbuf_dma); usb_free_coherent(ati_remote->udev, DATA_BUFSIZE, ati_remote->outbuf, ati_remote->outbuf_dma); } static void ati_remote_input_init(struct ati_remote *ati_remote) { struct input_dev *idev = ati_remote->idev; int i; idev->evbit[0] = BIT_MASK(EV_KEY) | BIT_MASK(EV_REL); idev->keybit[BIT_WORD(BTN_MOUSE)] = BIT_MASK(BTN_LEFT) | BIT_MASK(BTN_RIGHT) | BIT_MASK(BTN_SIDE) | BIT_MASK(BTN_EXTRA); idev->relbit[0] = BIT_MASK(REL_X) | BIT_MASK(REL_Y); for (i = 0; ati_remote_tbl[i].kind != KIND_END; i++) if (ati_remote_tbl[i].kind == KIND_LITERAL || ati_remote_tbl[i].kind == KIND_FILTERED) __set_bit(ati_remote_tbl[i].code, idev->keybit); input_set_drvdata(idev, ati_remote); idev->open = ati_remote_input_open; idev->close = ati_remote_input_close; idev->name = ati_remote->mouse_name; idev->phys = ati_remote->mouse_phys; usb_to_input_id(ati_remote->udev, &idev->id); idev->dev.parent = &ati_remote->interface->dev; } static void ati_remote_rc_init(struct ati_remote *ati_remote) { struct rc_dev *rdev = ati_remote->rdev; rdev->priv = ati_remote; rdev->allowed_protocols = RC_PROTO_BIT_OTHER; rdev->driver_name = "ati_remote"; rdev->open = ati_remote_rc_open; rdev->close = ati_remote_rc_close; rdev->device_name = ati_remote->rc_name; rdev->input_phys = ati_remote->rc_phys; usb_to_input_id(ati_remote->udev, &rdev->input_id); rdev->dev.parent = &ati_remote->interface->dev; } static int ati_remote_initialize(struct ati_remote *ati_remote) { struct usb_device *udev = ati_remote->udev; int pipe, maxp; init_waitqueue_head(&ati_remote->wait); /* Set up irq_urb */ pipe = usb_rcvintpipe(udev, ati_remote->endpoint_in->bEndpointAddress); maxp = usb_maxpacket(udev, pipe); maxp = (maxp > DATA_BUFSIZE) ? 
DATA_BUFSIZE : maxp; usb_fill_int_urb(ati_remote->irq_urb, udev, pipe, ati_remote->inbuf, maxp, ati_remote_irq_in, ati_remote, ati_remote->endpoint_in->bInterval); ati_remote->irq_urb->transfer_dma = ati_remote->inbuf_dma; ati_remote->irq_urb->transfer_flags |= URB_NO_TRANSFER_DMA_MAP; /* Set up out_urb */ pipe = usb_sndintpipe(udev, ati_remote->endpoint_out->bEndpointAddress); maxp = usb_maxpacket(udev, pipe); maxp = (maxp > DATA_BUFSIZE) ? DATA_BUFSIZE : maxp; usb_fill_int_urb(ati_remote->out_urb, udev, pipe, ati_remote->outbuf, maxp, ati_remote_irq_out, ati_remote, ati_remote->endpoint_out->bInterval); ati_remote->out_urb->transfer_dma = ati_remote->outbuf_dma; ati_remote->out_urb->transfer_flags |= URB_NO_TRANSFER_DMA_MAP; /* send initialization strings */ if ((ati_remote_sendpacket(ati_remote, 0x8004, init1)) || (ati_remote_sendpacket(ati_remote, 0x8007, init2))) { dev_err(&ati_remote->interface->dev, "Initializing ati_remote hardware failed.\n"); return -EIO; } return 0; } /* * ati_remote_probe */ static int ati_remote_probe(struct usb_interface *interface, const struct usb_device_id *id) { struct usb_device *udev = interface_to_usbdev(interface); struct usb_host_interface *iface_host = interface->cur_altsetting; struct usb_endpoint_descriptor *endpoint_in, *endpoint_out; struct ati_receiver_type *type = (struct ati_receiver_type *)id->driver_info; struct ati_remote *ati_remote; struct input_dev *input_dev; struct device *device = &interface->dev; struct rc_dev *rc_dev; int err = -ENOMEM; if (iface_host->desc.bNumEndpoints != 2) { dev_err(device, "%s: Unexpected desc.bNumEndpoints\n", __func__); return -ENODEV; } endpoint_in = &iface_host->endpoint[0].desc; endpoint_out = &iface_host->endpoint[1].desc; if (!usb_endpoint_is_int_in(endpoint_in)) { dev_err(device, "%s: Unexpected endpoint_in\n", __func__); return -ENODEV; } if (le16_to_cpu(endpoint_in->wMaxPacketSize) == 0) { dev_err(device, "%s: endpoint_in message size==0?\n", __func__); return -ENODEV; } if (!usb_endpoint_is_int_out(endpoint_out)) { dev_err(device, "%s: Unexpected endpoint_out\n", __func__); return -ENODEV; } ati_remote = kzalloc(sizeof (struct ati_remote), GFP_KERNEL); rc_dev = rc_allocate_device(RC_DRIVER_SCANCODE); if (!ati_remote || !rc_dev) goto exit_free_dev_rdev; /* Allocate URB buffers, URBs */ if (ati_remote_alloc_buffers(udev, ati_remote)) goto exit_free_buffers; ati_remote->endpoint_in = endpoint_in; ati_remote->endpoint_out = endpoint_out; ati_remote->udev = udev; ati_remote->rdev = rc_dev; ati_remote->interface = interface; usb_make_path(udev, ati_remote->rc_phys, sizeof(ati_remote->rc_phys)); strscpy(ati_remote->mouse_phys, ati_remote->rc_phys, sizeof(ati_remote->mouse_phys)); strlcat(ati_remote->rc_phys, "/input0", sizeof(ati_remote->rc_phys)); strlcat(ati_remote->mouse_phys, "/input1", sizeof(ati_remote->mouse_phys)); snprintf(ati_remote->rc_name, sizeof(ati_remote->rc_name), "%s%s%s", udev->manufacturer ?: "", udev->manufacturer && udev->product ? 
" " : "", udev->product ?: ""); if (!strlen(ati_remote->rc_name)) snprintf(ati_remote->rc_name, sizeof(ati_remote->rc_name), DRIVER_DESC "(%04x,%04x)", le16_to_cpu(ati_remote->udev->descriptor.idVendor), le16_to_cpu(ati_remote->udev->descriptor.idProduct)); snprintf(ati_remote->mouse_name, sizeof(ati_remote->mouse_name), "%s mouse", ati_remote->rc_name); rc_dev->map_name = RC_MAP_ATI_X10; /* default map */ /* set default keymap according to receiver model */ if (type) { if (type->default_keymap) rc_dev->map_name = type->default_keymap; else if (type->get_default_keymap) rc_dev->map_name = type->get_default_keymap(interface); } ati_remote_rc_init(ati_remote); mutex_init(&ati_remote->open_mutex); /* Device Hardware Initialization - fills in ati_remote->idev from udev. */ err = ati_remote_initialize(ati_remote); if (err) goto exit_kill_urbs; /* Set up and register rc device */ err = rc_register_device(ati_remote->rdev); if (err) goto exit_kill_urbs; /* Set up and register mouse input device */ if (mouse) { input_dev = input_allocate_device(); if (!input_dev) { err = -ENOMEM; goto exit_unregister_device; } ati_remote->idev = input_dev; ati_remote_input_init(ati_remote); err = input_register_device(input_dev); if (err) goto exit_free_input_device; } usb_set_intfdata(interface, ati_remote); return 0; exit_free_input_device: input_free_device(input_dev); exit_unregister_device: rc_unregister_device(rc_dev); rc_dev = NULL; exit_kill_urbs: usb_kill_urb(ati_remote->irq_urb); usb_kill_urb(ati_remote->out_urb); exit_free_buffers: ati_remote_free_buffers(ati_remote); exit_free_dev_rdev: rc_free_device(rc_dev); kfree(ati_remote); return err; } /* * ati_remote_disconnect */ static void ati_remote_disconnect(struct usb_interface *interface) { struct ati_remote *ati_remote; ati_remote = usb_get_intfdata(interface); usb_set_intfdata(interface, NULL); if (!ati_remote) { dev_warn(&interface->dev, "%s - null device?\n", __func__); return; } usb_kill_urb(ati_remote->irq_urb); usb_kill_urb(ati_remote->out_urb); if (ati_remote->idev) input_unregister_device(ati_remote->idev); rc_unregister_device(ati_remote->rdev); ati_remote_free_buffers(ati_remote); kfree(ati_remote); } /* usb specific object to register with the usb subsystem */ static struct usb_driver ati_remote_driver = { .name = "ati_remote", .probe = ati_remote_probe, .disconnect = ati_remote_disconnect, .id_table = ati_remote_table, }; module_usb_driver(ati_remote_driver); MODULE_AUTHOR(DRIVER_AUTHOR); MODULE_DESCRIPTION(DRIVER_DESC); MODULE_LICENSE("GPL"); |
// SPDX-License-Identifier: GPL-2.0-only #include <linux/user-return-notifier.h> #include <linux/percpu.h> #include <linux/sched.h> #include <linux/export.h> static DEFINE_PER_CPU(struct hlist_head, return_notifier_list); /* * Request a notification when the current cpu returns to userspace. Must be * called in atomic context. The notifier will also be called in atomic * context. */ void user_return_notifier_register(struct user_return_notifier *urn) { set_tsk_thread_flag(current, TIF_USER_RETURN_NOTIFY); hlist_add_head(&urn->link, this_cpu_ptr(&return_notifier_list)); } EXPORT_SYMBOL_GPL(user_return_notifier_register); /* * Removes a registered user return notifier. Must be called from atomic * context, and from the same cpu that registration occurred on. */ void user_return_notifier_unregister(struct user_return_notifier *urn) { hlist_del(&urn->link); if (hlist_empty(this_cpu_ptr(&return_notifier_list))) clear_tsk_thread_flag(current, TIF_USER_RETURN_NOTIFY); } EXPORT_SYMBOL_GPL(user_return_notifier_unregister); /* Calls registered user return notifiers */ void fire_user_return_notifiers(void) { struct user_return_notifier *urn; struct hlist_node *tmp2; struct hlist_head *head; head = &get_cpu_var(return_notifier_list); hlist_for_each_entry_safe(urn, tmp2, head, link) urn->on_user_return(urn); put_cpu_var(return_notifier_list); } |
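The API in this file is easiest to see with a usage sketch. The following hypothetical kernel-style fragment (all names invented; the pattern loosely follows how x86 KVM defers MSR restoration) registers a one-shot notifier from atomic context and unregisters it from its own callback:

#include <linux/user-return-notifier.h>
#include <linux/percpu.h>
#include <linux/kernel.h>

/* Hypothetical per-CPU state guarded by a user-return notifier. */
struct my_cpu_state {
	struct user_return_notifier urn;
	bool registered;
};
static DEFINE_PER_CPU(struct my_cpu_state, my_state);

/* Runs on the owning CPU, in atomic context, just before that CPU returns
 * to userspace; a one-shot notifier must unregister itself here. */
static void my_on_user_return(struct user_return_notifier *urn)
{
	struct my_cpu_state *s = container_of(urn, struct my_cpu_state, urn);

	s->registered = false;
	user_return_notifier_unregister(urn);
	/* ... restore whatever per-CPU state userspace expects ... */
}

/* Called with preemption disabled (atomic context), e.g. after this CPU
 * has temporarily borrowed a register or MSR from userspace. */
static void my_defer_restore(void)
{
	struct my_cpu_state *s = this_cpu_ptr(&my_state);

	if (s->registered)
		return;
	s->urn.on_user_return = my_on_user_return;
	user_return_notifier_register(&s->urn);
	s->registered = true;
}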
// SPDX-License-Identifier: GPL-2.0+ /* * linux/fs/jbd2/recovery.c * * Written by Stephen C. Tweedie <sct@redhat.com>, 1999 * * Copyright 1999-2000 Red Hat Software --- All Rights Reserved * * Journal recovery routines for the generic filesystem journaling code; * part of the ext2fs journaling system. */ #ifndef __KERNEL__ #include "jfs_user.h" #else #include <linux/time.h> #include <linux/fs.h> #include <linux/jbd2.h> #include <linux/errno.h> #include <linux/crc32.h> #include <linux/blkdev.h> #include <linux/string_choices.h> #endif /* * Maintain information about the progress of the recovery job, so that * the different passes can carry information between them. */ struct recovery_info { tid_t start_transaction; tid_t end_transaction; unsigned long head_block; int nr_replays; int nr_revokes; int nr_revoke_hits; }; static int do_one_pass(journal_t *journal, struct recovery_info *info, enum passtype pass); static int scan_revoke_records(journal_t *, enum passtype, struct buffer_head *, tid_t, struct recovery_info *); #ifdef __KERNEL__ /* Release readahead buffers after use */ static void journal_brelse_array(struct buffer_head *b[], int n) { while (--n >= 0) brelse (b[n]); } /* * When reading from the journal, we are going through the block device * layer directly and so there is no readahead being done for us. We * need to implement any readahead ourselves if we want it to happen at * all. Recovery is basically one long sequential read, so make sure we * do the IO in reasonably large chunks. * * This is not so critical that we need to be enormously clever about * the readahead size, though. 128K is a purely arbitrary, good-enough * fixed value. */ #define MAXBUF 8 static void do_readahead(journal_t *journal, unsigned int start) { unsigned int max, nbufs, next; unsigned long long blocknr; struct buffer_head *bh; struct buffer_head * bufs[MAXBUF]; /* Do up to 128K of readahead */ max = start + (128 * 1024 / journal->j_blocksize); if (max > journal->j_total_len) max = journal->j_total_len; /* Do the readahead itself. We'll submit MAXBUF buffer_heads at * a time to the block device IO layer. 
*/ nbufs = 0; for (next = start; next < max; next++) { int err = jbd2_journal_bmap(journal, next, &blocknr); if (err) { printk(KERN_ERR "JBD2: bad block at offset %u\n", next); goto failed; } bh = __getblk(journal->j_dev, blocknr, journal->j_blocksize); if (!bh) goto failed; if (!buffer_uptodate(bh) && !buffer_locked(bh)) { bufs[nbufs++] = bh; if (nbufs == MAXBUF) { bh_readahead_batch(nbufs, bufs, 0); journal_brelse_array(bufs, nbufs); nbufs = 0; } } else brelse(bh); } if (nbufs) bh_readahead_batch(nbufs, bufs, 0); failed: if (nbufs) journal_brelse_array(bufs, nbufs); } #endif /* __KERNEL__ */ /* * Read a block from the journal */ static int jread(struct buffer_head **bhp, journal_t *journal, unsigned int offset) { int err; unsigned long long blocknr; struct buffer_head *bh; *bhp = NULL; if (offset >= journal->j_total_len) { printk(KERN_ERR "JBD2: corrupted journal superblock\n"); return -EFSCORRUPTED; } err = jbd2_journal_bmap(journal, offset, &blocknr); if (err) { printk(KERN_ERR "JBD2: bad block at offset %u\n", offset); return err; } bh = __getblk(journal->j_dev, blocknr, journal->j_blocksize); if (!bh) return -ENOMEM; if (!buffer_uptodate(bh)) { /* * If this is a brand new buffer, start readahead. * Otherwise, we assume we are already reading it. */ bool need_readahead = !buffer_req(bh); bh_read_nowait(bh, 0); if (need_readahead) do_readahead(journal, offset); wait_on_buffer(bh); } if (!buffer_uptodate(bh)) { printk(KERN_ERR "JBD2: Failed to read block at offset %u\n", offset); brelse(bh); return -EIO; } *bhp = bh; return 0; } static int jbd2_descriptor_block_csum_verify(journal_t *j, void *buf) { struct jbd2_journal_block_tail *tail; __be32 provided; __u32 calculated; if (!jbd2_journal_has_csum_v2or3(j)) return 1; tail = (struct jbd2_journal_block_tail *)((char *)buf + j->j_blocksize - sizeof(struct jbd2_journal_block_tail)); provided = tail->t_checksum; tail->t_checksum = 0; calculated = jbd2_chksum(j, j->j_csum_seed, buf, j->j_blocksize); tail->t_checksum = provided; return provided == cpu_to_be32(calculated); } /* * Count the number of in-use tags in a journal descriptor block. */ static int count_tags(journal_t *journal, struct buffer_head *bh) { char * tagp; journal_block_tag_t tag; int nr = 0, size = journal->j_blocksize; int tag_bytes = journal_tag_bytes(journal); if (jbd2_journal_has_csum_v2or3(journal)) size -= sizeof(struct jbd2_journal_block_tail); tagp = &bh->b_data[sizeof(journal_header_t)]; while ((tagp - bh->b_data + tag_bytes) <= size) { memcpy(&tag, tagp, sizeof(tag)); nr++; tagp += tag_bytes; if (!(tag.t_flags & cpu_to_be16(JBD2_FLAG_SAME_UUID))) tagp += 16; if (tag.t_flags & cpu_to_be16(JBD2_FLAG_LAST_TAG)) break; } return nr; } /* Make sure we wrap around the log correctly! 
*/ #define wrap(journal, var) \ do { \ if (var >= (journal)->j_last) \ var -= ((journal)->j_last - (journal)->j_first); \ } while (0) static int fc_do_one_pass(journal_t *journal, struct recovery_info *info, enum passtype pass) { unsigned int expected_commit_id = info->end_transaction; unsigned long next_fc_block; struct buffer_head *bh; int err = 0; next_fc_block = journal->j_fc_first; if (!journal->j_fc_replay_callback) return 0; while (next_fc_block <= journal->j_fc_last) { jbd2_debug(3, "Fast commit replay: next block %ld\n", next_fc_block); err = jread(&bh, journal, next_fc_block); if (err) { jbd2_debug(3, "Fast commit replay: read error\n"); break; } err = journal->j_fc_replay_callback(journal, bh, pass, next_fc_block - journal->j_fc_first, expected_commit_id); brelse(bh); next_fc_block++; if (err < 0 || err == JBD2_FC_REPLAY_STOP) break; err = 0; } if (err) jbd2_debug(3, "Fast commit replay failed, err = %d\n", err); return err; } /** * jbd2_journal_recover - recovers a on-disk journal * @journal: the journal to recover * * The primary function for recovering the log contents when mounting a * journaled device. * * Recovery is done in three passes. In the first pass, we look for the * end of the log. In the second, we assemble the list of revoke * blocks. In the third and final pass, we replay any un-revoked blocks * in the log. */ int jbd2_journal_recover(journal_t *journal) { int err, err2; struct recovery_info info; memset(&info, 0, sizeof(info)); /* * The journal superblock's s_start field (the current log head) * is always zero if, and only if, the journal was cleanly * unmounted. We use its in-memory version j_tail here because * jbd2_journal_wipe() could have updated it without updating journal * superblock. */ if (!journal->j_tail) { journal_superblock_t *sb = journal->j_superblock; jbd2_debug(1, "No recovery required, last transaction %d, head block %u\n", be32_to_cpu(sb->s_sequence), be32_to_cpu(sb->s_head)); journal->j_transaction_sequence = be32_to_cpu(sb->s_sequence) + 1; journal->j_head = be32_to_cpu(sb->s_head); return 0; } err = do_one_pass(journal, &info, PASS_SCAN); if (!err) err = do_one_pass(journal, &info, PASS_REVOKE); if (!err) err = do_one_pass(journal, &info, PASS_REPLAY); jbd2_debug(1, "JBD2: recovery, exit status %d, " "recovered transactions %u to %u\n", err, info.start_transaction, info.end_transaction); jbd2_debug(1, "JBD2: Replayed %d and revoked %d/%d blocks\n", info.nr_replays, info.nr_revoke_hits, info.nr_revokes); /* Restart the log at the next transaction ID, thus invalidating * any existing commit records in the log. 
*/ journal->j_transaction_sequence = ++info.end_transaction; journal->j_head = info.head_block; jbd2_debug(1, "JBD2: last transaction %d, head block %lu\n", journal->j_transaction_sequence, journal->j_head); jbd2_journal_clear_revoke(journal); /* Free revoke table allocated for replay */ if (journal->j_revoke != journal->j_revoke_table[0] && journal->j_revoke != journal->j_revoke_table[1]) { jbd2_journal_destroy_revoke_table(journal->j_revoke); journal->j_revoke = journal->j_revoke_table[1]; } err2 = sync_blockdev(journal->j_fs_dev); if (!err) err = err2; err2 = jbd2_check_fs_dev_write_error(journal); if (!err) err = err2; /* Make sure all replayed data is on permanent storage */ if (journal->j_flags & JBD2_BARRIER) { err2 = blkdev_issue_flush(journal->j_fs_dev); if (!err) err = err2; } return err; } /** * jbd2_journal_skip_recovery - Start journal and wipe existing records * @journal: journal to startup * * Locate any valid recovery information from the journal and set up the * journal structures in memory to ignore it (presumably because the * caller has evidence that it is out of date). * This function doesn't appear to be exported. * * We perform one pass over the journal to allow us to tell the user how * much recovery information is being erased, and to let us initialise * the journal transaction sequence numbers to the next unused ID. */ int jbd2_journal_skip_recovery(journal_t *journal) { int err; struct recovery_info info; memset (&info, 0, sizeof(info)); err = do_one_pass(journal, &info, PASS_SCAN); if (err) { printk(KERN_ERR "JBD2: error %d scanning journal\n", err); ++journal->j_transaction_sequence; journal->j_head = journal->j_first; } else { #ifdef CONFIG_JBD2_DEBUG int dropped = info.end_transaction - be32_to_cpu(journal->j_superblock->s_sequence); jbd2_debug(1, "JBD2: ignoring %d transaction%s from the journal.\n", dropped, str_plural(dropped)); #endif journal->j_transaction_sequence = ++info.end_transaction; journal->j_head = info.head_block; } journal->j_tail = 0; return err; } static inline unsigned long long read_tag_block(journal_t *journal, journal_block_tag_t *tag) { unsigned long long block = be32_to_cpu(tag->t_blocknr); if (jbd2_has_feature_64bit(journal)) block |= (u64)be32_to_cpu(tag->t_blocknr_high) << 32; return block; } /* * calc_chksums calculates the checksums for the blocks described in the * descriptor block. */ static int calc_chksums(journal_t *journal, struct buffer_head *bh, unsigned long *next_log_block, __u32 *crc32_sum) { int i, num_blks, err; unsigned long io_block; struct buffer_head *obh; num_blks = count_tags(journal, bh); /* Calculate checksum of the descriptor block. 
*/ *crc32_sum = crc32_be(*crc32_sum, (void *)bh->b_data, bh->b_size); for (i = 0; i < num_blks; i++) { io_block = (*next_log_block)++; wrap(journal, *next_log_block); err = jread(&obh, journal, io_block); if (err) { printk(KERN_ERR "JBD2: IO error %d recovering block " "%lu in log\n", err, io_block); return 1; } else { *crc32_sum = crc32_be(*crc32_sum, (void *)obh->b_data, obh->b_size); } put_bh(obh); } return 0; } static int jbd2_commit_block_csum_verify(journal_t *j, void *buf) { struct commit_header *h; __be32 provided; __u32 calculated; if (!jbd2_journal_has_csum_v2or3(j)) return 1; h = buf; provided = h->h_chksum[0]; h->h_chksum[0] = 0; calculated = jbd2_chksum(j, j->j_csum_seed, buf, j->j_blocksize); h->h_chksum[0] = provided; return provided == cpu_to_be32(calculated); } static bool jbd2_commit_block_csum_verify_partial(journal_t *j, void *buf) { struct commit_header *h; __be32 provided; __u32 calculated; void *tmpbuf; tmpbuf = kzalloc(j->j_blocksize, GFP_KERNEL); if (!tmpbuf) return false; memcpy(tmpbuf, buf, sizeof(struct commit_header)); h = tmpbuf; provided = h->h_chksum[0]; h->h_chksum[0] = 0; calculated = jbd2_chksum(j, j->j_csum_seed, tmpbuf, j->j_blocksize); kfree(tmpbuf); return provided == cpu_to_be32(calculated); } static int jbd2_block_tag_csum_verify(journal_t *j, journal_block_tag_t *tag, journal_block_tag3_t *tag3, void *buf, __u32 sequence) { __u32 csum32; __be32 seq; if (!jbd2_journal_has_csum_v2or3(j)) return 1; seq = cpu_to_be32(sequence); csum32 = jbd2_chksum(j, j->j_csum_seed, (__u8 *)&seq, sizeof(seq)); csum32 = jbd2_chksum(j, csum32, buf, j->j_blocksize); if (jbd2_has_feature_csum3(j)) return tag3->t_checksum == cpu_to_be32(csum32); else return tag->t_checksum == cpu_to_be16(csum32); } static __always_inline int jbd2_do_replay(journal_t *journal, struct recovery_info *info, struct buffer_head *bh, unsigned long *next_log_block, unsigned int next_commit_ID) { char *tagp; int flags; int ret = 0; int tag_bytes = journal_tag_bytes(journal); int descr_csum_size = 0; unsigned long io_block; journal_block_tag_t tag; struct buffer_head *obh; struct buffer_head *nbh; if (jbd2_journal_has_csum_v2or3(journal)) descr_csum_size = sizeof(struct jbd2_journal_block_tail); tagp = &bh->b_data[sizeof(journal_header_t)]; while (tagp - bh->b_data + tag_bytes <= journal->j_blocksize - descr_csum_size) { int err; memcpy(&tag, tagp, sizeof(tag)); flags = be16_to_cpu(tag.t_flags); io_block = (*next_log_block)++; wrap(journal, *next_log_block); err = jread(&obh, journal, io_block); if (err) { /* Recover what we can, but report failure at the end. */ ret = err; pr_err("JBD2: IO error %d recovering block %lu in log\n", err, io_block); } else { unsigned long long blocknr; J_ASSERT(obh != NULL); blocknr = read_tag_block(journal, &tag); /* If the block has been revoked, then we're all done here. 
*/ if (jbd2_journal_test_revoke(journal, blocknr, next_commit_ID)) { brelse(obh); ++info->nr_revoke_hits; goto skip_write; } /* Look for block corruption */ if (!jbd2_block_tag_csum_verify(journal, &tag, (journal_block_tag3_t *)tagp, obh->b_data, next_commit_ID)) { brelse(obh); ret = -EFSBADCRC; pr_err("JBD2: Invalid checksum recovering data block %llu in journal block %lu\n", blocknr, io_block); goto skip_write; } /* Find a buffer for the new data being restored */ nbh = __getblk(journal->j_fs_dev, blocknr, journal->j_blocksize); if (nbh == NULL) { pr_err("JBD2: Out of memory during recovery.\n"); brelse(obh); return -ENOMEM; } lock_buffer(nbh); memcpy(nbh->b_data, obh->b_data, journal->j_blocksize); if (flags & JBD2_FLAG_ESCAPE) { *((__be32 *)nbh->b_data) = cpu_to_be32(JBD2_MAGIC_NUMBER); } BUFFER_TRACE(nbh, "marking dirty"); set_buffer_uptodate(nbh); mark_buffer_dirty(nbh); BUFFER_TRACE(nbh, "marking uptodate"); ++info->nr_replays; unlock_buffer(nbh); brelse(obh); brelse(nbh); } skip_write: tagp += tag_bytes; if (!(flags & JBD2_FLAG_SAME_UUID)) tagp += 16; if (flags & JBD2_FLAG_LAST_TAG) break; } return ret; } static int do_one_pass(journal_t *journal, struct recovery_info *info, enum passtype pass) { unsigned int first_commit_ID, next_commit_ID; unsigned long next_log_block, head_block; int err, success = 0; journal_superblock_t * sb; journal_header_t * tmp; struct buffer_head *bh = NULL; unsigned int sequence; int blocktype; __u32 crc32_sum = ~0; /* Transactional Checksums */ bool need_check_commit_time = false; __u64 last_trans_commit_time = 0, commit_time; /* * First thing is to establish what we expect to find in the log * (in terms of transaction IDs), and where (in terms of log * block offsets): query the superblock. */ sb = journal->j_superblock; next_commit_ID = be32_to_cpu(sb->s_sequence); next_log_block = be32_to_cpu(sb->s_start); head_block = next_log_block; first_commit_ID = next_commit_ID; if (pass == PASS_SCAN) info->start_transaction = first_commit_ID; else if (pass == PASS_REVOKE) { /* * Would the default revoke table have too long hash chains * during replay? */ if (info->nr_revokes > JOURNAL_REVOKE_DEFAULT_HASH * 16) { unsigned int hash_size; /* * Aim for average chain length of 8, limit at 1M * entries to avoid problems with malicious * filesystems. */ hash_size = min(roundup_pow_of_two(info->nr_revokes / 8), 1U << 20); journal->j_revoke = jbd2_journal_init_revoke_table(hash_size); if (!journal->j_revoke) { printk(KERN_ERR "JBD2: failed to allocate revoke table for replay with %u entries. " "Journal replay may be slow.\n", hash_size); journal->j_revoke = journal->j_revoke_table[1]; } } } jbd2_debug(1, "Starting recovery pass %d\n", pass); /* * Now we walk through the log, transaction by transaction, * making sure that each transaction has a commit block in the * expected place. Each complete transaction gets replayed back * into the main filesystem. */ while (1) { cond_resched(); /* If we already know where to stop the log traversal, * check right now that we haven't gone past the end of * the log. */ if (pass != PASS_SCAN) if (tid_geq(next_commit_ID, info->end_transaction)) break; jbd2_debug(2, "Scanning for sequence ID %u at %lu/%lu\n", next_commit_ID, next_log_block, journal->j_last); /* Skip over each chunk of the transaction looking * either the next descriptor block or the final commit * record. 
*/ jbd2_debug(3, "JBD2: checking block %ld\n", next_log_block); brelse(bh); bh = NULL; err = jread(&bh, journal, next_log_block); if (err) goto failed; next_log_block++; wrap(journal, next_log_block); /* What kind of buffer is it? * * If it is a descriptor block, check that it has the * expected sequence number. Otherwise, we're all done * here. */ tmp = (journal_header_t *)bh->b_data; if (tmp->h_magic != cpu_to_be32(JBD2_MAGIC_NUMBER)) break; blocktype = be32_to_cpu(tmp->h_blocktype); sequence = be32_to_cpu(tmp->h_sequence); jbd2_debug(3, "Found magic %d, sequence %d\n", blocktype, sequence); if (sequence != next_commit_ID) break; /* OK, we have a valid descriptor block which matches * all of the sequence number checks. What are we going * to do with it? That depends on the pass... */ switch(blocktype) { case JBD2_DESCRIPTOR_BLOCK: /* Verify checksum first */ if (!jbd2_descriptor_block_csum_verify(journal, bh->b_data)) { /* * PASS_SCAN can see stale blocks due to lazy * journal init. Don't error out on those yet. */ if (pass != PASS_SCAN) { pr_err("JBD2: Invalid checksum recovering block %lu in log\n", next_log_block); err = -EFSBADCRC; goto failed; } need_check_commit_time = true; jbd2_debug(1, "invalid descriptor block found in %lu\n", next_log_block); } /* If it is a valid descriptor block, replay it * in pass REPLAY; if journal_checksums enabled, then * calculate checksums in PASS_SCAN, otherwise, * just skip over the blocks it describes. */ if (pass != PASS_REPLAY) { if (pass == PASS_SCAN && jbd2_has_feature_checksum(journal) && !info->end_transaction) { if (calc_chksums(journal, bh, &next_log_block, &crc32_sum)) break; continue; } next_log_block += count_tags(journal, bh); wrap(journal, next_log_block); continue; } /* * A descriptor block: we can now write all of the * data blocks. Yay, useful work is finally getting * done here! */ err = jbd2_do_replay(journal, info, bh, &next_log_block, next_commit_ID); if (err) { if (err == -ENOMEM) goto failed; success = err; } continue; case JBD2_COMMIT_BLOCK: if (pass != PASS_SCAN) { next_commit_ID++; continue; } /* How to differentiate between interrupted commit * and journal corruption ? * * {nth transaction} * Checksum Verification Failed * | * ____________________ * | | * async_commit sync_commit * | | * | GO TO NEXT "Journal Corruption" * | TRANSACTION * | * {(n+1)th transanction} * | * _______|______________ * | | * Commit block found Commit block not found * | | * "Journal Corruption" | * _____________|_________ * | | * nth trans corrupt OR nth trans * and (n+1)th interrupted interrupted * before commit block * could reach the disk. * (Cannot find the difference in above * mentioned conditions. Hence assume * "Interrupted Commit".) */ commit_time = be64_to_cpu( ((struct commit_header *)bh->b_data)->h_commit_sec); /* * If need_check_commit_time is set, it means we are in * PASS_SCAN and csum verify failed before. If * commit_time is increasing, it's the same journal, * otherwise it is stale journal block, just end this * recovery. */ if (need_check_commit_time) { if (commit_time >= last_trans_commit_time) { pr_err("JBD2: Invalid checksum found in transaction %u\n", next_commit_ID); err = -EFSBADCRC; goto failed; } ignore_crc_mismatch: /* * It likely does not belong to same journal, * just end this recovery with success. 
*/ jbd2_debug(1, "JBD2: Invalid checksum ignored in transaction %u, likely stale data\n", next_commit_ID); goto done; } /* * Found an expected commit block: if checksums * are present, verify them in PASS_SCAN; else not * much to do other than move on to the next sequence * number. */ if (jbd2_has_feature_checksum(journal)) { struct commit_header *cbh = (struct commit_header *)bh->b_data; unsigned found_chksum = be32_to_cpu(cbh->h_chksum[0]); if (info->end_transaction) { journal->j_failed_commit = info->end_transaction; break; } /* Neither checksum match nor unused? */ if (!((crc32_sum == found_chksum && cbh->h_chksum_type == JBD2_CRC32_CHKSUM && cbh->h_chksum_size == JBD2_CRC32_CHKSUM_SIZE) || (cbh->h_chksum_type == 0 && cbh->h_chksum_size == 0 && found_chksum == 0))) goto chksum_error; crc32_sum = ~0; goto chksum_ok; } if (jbd2_commit_block_csum_verify(journal, bh->b_data)) goto chksum_ok; if (jbd2_commit_block_csum_verify_partial(journal, bh->b_data)) { pr_notice("JBD2: Find incomplete commit block in transaction %u block %lu\n", next_commit_ID, next_log_block); goto chksum_ok; } chksum_error: if (commit_time < last_trans_commit_time) goto ignore_crc_mismatch; info->end_transaction = next_commit_ID; info->head_block = head_block; if (!jbd2_has_feature_async_commit(journal)) { journal->j_failed_commit = next_commit_ID; break; } chksum_ok: last_trans_commit_time = commit_time; head_block = next_log_block; next_commit_ID++; continue; case JBD2_REVOKE_BLOCK: /* * If we aren't in the SCAN or REVOKE pass, then we can * just skip over this block. */ if (pass != PASS_REVOKE && pass != PASS_SCAN) continue; /* * Check revoke block crc in pass_scan, if csum verify * failed, check commit block time later. */ if (pass == PASS_SCAN && !jbd2_descriptor_block_csum_verify(journal, bh->b_data)) { jbd2_debug(1, "JBD2: invalid revoke block found in %lu\n", next_log_block); need_check_commit_time = true; } err = scan_revoke_records(journal, pass, bh, next_commit_ID, info); if (err) goto failed; continue; default: jbd2_debug(3, "Unrecognised magic %d, end of scan.\n", blocktype); goto done; } } done: brelse(bh); /* * We broke out of the log scan loop: either we came to the * known end of the log or we found an unexpected block in the * log. If the latter happened, then we know that the "current" * transaction marks the end of the valid log. */ if (pass == PASS_SCAN) { if (!info->end_transaction) info->end_transaction = next_commit_ID; if (!info->head_block) info->head_block = head_block; } else { /* It's really bad news if different passes end up at * different places (but possible due to IO errors). */ if (info->end_transaction != next_commit_ID) { printk(KERN_ERR "JBD2: recovery pass %d ended at " "transaction %u, expected %u\n", pass, next_commit_ID, info->end_transaction); if (!success) success = -EIO; } } if (jbd2_has_feature_fast_commit(journal) && pass != PASS_REVOKE) { err = fc_do_one_pass(journal, info, pass); if (err) success = err; } return success; failed: brelse(bh); return err; } /* Scan a revoke record, marking all blocks mentioned as revoked. 
*/ static int scan_revoke_records(journal_t *journal, enum passtype pass, struct buffer_head *bh, tid_t sequence, struct recovery_info *info) { jbd2_journal_revoke_header_t *header; int offset, max; unsigned csum_size = 0; __u32 rcount; int record_len = 4; header = (jbd2_journal_revoke_header_t *) bh->b_data; offset = sizeof(jbd2_journal_revoke_header_t); rcount = be32_to_cpu(header->r_count); if (jbd2_journal_has_csum_v2or3(journal)) csum_size = sizeof(struct jbd2_journal_block_tail); if (rcount > journal->j_blocksize - csum_size) return -EINVAL; max = rcount; if (jbd2_has_feature_64bit(journal)) record_len = 8; if (pass == PASS_SCAN) { info->nr_revokes += (max - offset) / record_len; return 0; } while (offset + record_len <= max) { unsigned long long blocknr; int err; if (record_len == 4) blocknr = be32_to_cpu(* ((__be32 *) (bh->b_data+offset))); else blocknr = be64_to_cpu(* ((__be64 *) (bh->b_data+offset))); offset += record_len; err = jbd2_journal_set_revoke(journal, blocknr, sequence); if (err) return err; } return 0; } |
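The wrap() macro defined earlier in this file is the heart of the circular-log arithmetic used by do_one_pass(). A stand-alone sketch with made-up j_first/j_last values (illustrative only, not kernel code) shows a block pointer wrapping from the end of the log back to j_first:

#include <stdio.h>

/* Same shape as the kernel macro, restated over a plain struct. */
struct journal { unsigned long j_first, j_last; };

#define wrap(journal, var)						\
	do {								\
		if (var >= (journal)->j_last)				\
			var -= ((journal)->j_last - (journal)->j_first);\
	} while (0)

int main(void)
{
	/* Made-up journal window: usable log blocks are 1..1023. */
	struct journal j = { .j_first = 1, .j_last = 1024 };
	unsigned long block = 1022;
	int i;

	/* Advancing past j_last wraps back to j_first, as in do_one_pass():
	 * prints 1022, 1023, 1, 2. */
	for (i = 0; i < 4; i++) {
		printf("block %lu\n", block);
		block++;
		wrap(&j, block);
	}
	return 0;
}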
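scan_revoke_records() itself is a plain offset walk: a header carries a byte count r_count covering the header plus the packed big-endian block numbers that follow, each record_len bytes (4, or 8 with the 64bit feature). A hypothetical userspace sketch over a fabricated block; the struct here merely mimics the size of the 16-byte on-disk revoke header and is not the real jbd2 layout:

#include <stdio.h>
#include <stdint.h>
#include <string.h>
#include <arpa/inet.h>

/* Fabricated stand-in for jbd2_journal_revoke_header_t: the real header
 * embeds a journal_header_t; only its size and r_count matter here. */
struct revoke_header { uint32_t pad[3]; uint32_t r_count; };

int main(void)
{
	unsigned char block[64] = { 0 };
	struct revoke_header *h = (struct revoke_header *)block;
	int offset = sizeof(*h), record_len = 4, max;

	/* Two revoked blocks, 123 and 456, appended after the header. */
	uint32_t r1 = htonl(123), r2 = htonl(456);
	memcpy(block + offset, &r1, 4);
	memcpy(block + offset + 4, &r2, 4);
	h->r_count = htonl(offset + 8);	/* header + two 4-byte records */

	max = ntohl(h->r_count);
	while (offset + record_len <= max) {
		uint32_t be;

		memcpy(&be, block + offset, 4);
		printf("revoked block %u\n", ntohl(be));
		offset += record_len;
	}
	return 0;
}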
/* SPDX-License-Identifier: GPL-2.0-only */ /* * SM3 secure hash, as specified by OSCCA GM/T 0004-2012 SM3 and described * at https://datatracker.ietf.org/doc/html/draft-sca-cfrg-sm3-02 * * Copyright (C) 2017 ARM Limited or its affiliates. * Copyright (C) 2017 Gilad Ben-Yossef <gilad@benyossef.com> * Copyright (C) 2021 Tianjia Zhang <tianjia.zhang@linux.alibaba.com> */ #include <linux/module.h> #include <linux/unaligned.h> #include <crypto/sm3.h> static const u32 ____cacheline_aligned K[64] = { 0x79cc4519, 0xf3988a32, 0xe7311465, 0xce6228cb, 0x9cc45197, 0x3988a32f, 0x7311465e, 0xe6228cbc, 0xcc451979, 0x988a32f3, 0x311465e7, 0x6228cbce, 0xc451979c, 0x88a32f39, 0x11465e73, 0x228cbce6, 0x9d8a7a87, 0x3b14f50f, 0x7629ea1e, 0xec53d43c, 0xd8a7a879, 0xb14f50f3, 0x629ea1e7, 0xc53d43ce, 0x8a7a879d, 0x14f50f3b, 0x29ea1e76, 0x53d43cec, 0xa7a879d8, 0x4f50f3b1, 0x9ea1e762, 0x3d43cec5, 0x7a879d8a, 0xf50f3b14, 0xea1e7629, 0xd43cec53, 0xa879d8a7, 0x50f3b14f, 0xa1e7629e, 0x43cec53d, 0x879d8a7a, 0x0f3b14f5, 0x1e7629ea, 0x3cec53d4, 0x79d8a7a8, 0xf3b14f50, 0xe7629ea1, 0xcec53d43, 0x9d8a7a87, 0x3b14f50f, 0x7629ea1e, 0xec53d43c, 0xd8a7a879, 0xb14f50f3, 0x629ea1e7, 0xc53d43ce, 0x8a7a879d, 0x14f50f3b, 0x29ea1e76, 0x53d43cec, 0xa7a879d8, 0x4f50f3b1, 0x9ea1e762, 0x3d43cec5 }; /* * Transform the message X which consists of 16 32-bit-words. See * GM/T 004-2012 for details. 
*/ #define R(i, a, b, c, d, e, f, g, h, t, w1, w2) \ do { \ ss1 = rol32((rol32((a), 12) + (e) + (t)), 7); \ ss2 = ss1 ^ rol32((a), 12); \ d += FF ## i(a, b, c) + ss2 + ((w1) ^ (w2)); \ h += GG ## i(e, f, g) + ss1 + (w1); \ b = rol32((b), 9); \ f = rol32((f), 19); \ h = P0((h)); \ } while (0) #define R1(a, b, c, d, e, f, g, h, t, w1, w2) \ R(1, a, b, c, d, e, f, g, h, t, w1, w2) #define R2(a, b, c, d, e, f, g, h, t, w1, w2) \ R(2, a, b, c, d, e, f, g, h, t, w1, w2) #define FF1(x, y, z) (x ^ y ^ z) #define FF2(x, y, z) ((x & y) | (x & z) | (y & z)) #define GG1(x, y, z) FF1(x, y, z) #define GG2(x, y, z) ((x & y) | (~x & z)) /* Message expansion */ #define P0(x) ((x) ^ rol32((x), 9) ^ rol32((x), 17)) #define P1(x) ((x) ^ rol32((x), 15) ^ rol32((x), 23)) #define I(i) (W[i] = get_unaligned_be32(data + i * 4)) #define W1(i) (W[i & 0x0f]) #define W2(i) (W[i & 0x0f] = \ P1(W[i & 0x0f] \ ^ W[(i-9) & 0x0f] \ ^ rol32(W[(i-3) & 0x0f], 15)) \ ^ rol32(W[(i-13) & 0x0f], 7) \ ^ W[(i-6) & 0x0f]) static void sm3_transform(struct sm3_state *sctx, u8 const *data, u32 W[16]) { u32 a, b, c, d, e, f, g, h, ss1, ss2; a = sctx->state[0]; b = sctx->state[1]; c = sctx->state[2]; d = sctx->state[3]; e = sctx->state[4]; f = sctx->state[5]; g = sctx->state[6]; h = sctx->state[7]; R1(a, b, c, d, e, f, g, h, K[0], I(0), I(4)); R1(d, a, b, c, h, e, f, g, K[1], I(1), I(5)); R1(c, d, a, b, g, h, e, f, K[2], I(2), I(6)); R1(b, c, d, a, f, g, h, e, K[3], I(3), I(7)); R1(a, b, c, d, e, f, g, h, K[4], W1(4), I(8)); R1(d, a, b, c, h, e, f, g, K[5], W1(5), I(9)); R1(c, d, a, b, g, h, e, f, K[6], W1(6), I(10)); R1(b, c, d, a, f, g, h, e, K[7], W1(7), I(11)); R1(a, b, c, d, e, f, g, h, K[8], W1(8), I(12)); R1(d, a, b, c, h, e, f, g, K[9], W1(9), I(13)); R1(c, d, a, b, g, h, e, f, K[10], W1(10), I(14)); R1(b, c, d, a, f, g, h, e, K[11], W1(11), I(15)); R1(a, b, c, d, e, f, g, h, K[12], W1(12), W2(16)); R1(d, a, b, c, h, e, f, g, K[13], W1(13), W2(17)); R1(c, d, a, b, g, h, e, f, K[14], W1(14), W2(18)); R1(b, c, d, a, f, g, h, e, K[15], W1(15), W2(19)); R2(a, b, c, d, e, f, g, h, K[16], W1(16), W2(20)); R2(d, a, b, c, h, e, f, g, K[17], W1(17), W2(21)); R2(c, d, a, b, g, h, e, f, K[18], W1(18), W2(22)); R2(b, c, d, a, f, g, h, e, K[19], W1(19), W2(23)); R2(a, b, c, d, e, f, g, h, K[20], W1(20), W2(24)); R2(d, a, b, c, h, e, f, g, K[21], W1(21), W2(25)); R2(c, d, a, b, g, h, e, f, K[22], W1(22), W2(26)); R2(b, c, d, a, f, g, h, e, K[23], W1(23), W2(27)); R2(a, b, c, d, e, f, g, h, K[24], W1(24), W2(28)); R2(d, a, b, c, h, e, f, g, K[25], W1(25), W2(29)); R2(c, d, a, b, g, h, e, f, K[26], W1(26), W2(30)); R2(b, c, d, a, f, g, h, e, K[27], W1(27), W2(31)); R2(a, b, c, d, e, f, g, h, K[28], W1(28), W2(32)); R2(d, a, b, c, h, e, f, g, K[29], W1(29), W2(33)); R2(c, d, a, b, g, h, e, f, K[30], W1(30), W2(34)); R2(b, c, d, a, f, g, h, e, K[31], W1(31), W2(35)); R2(a, b, c, d, e, f, g, h, K[32], W1(32), W2(36)); R2(d, a, b, c, h, e, f, g, K[33], W1(33), W2(37)); R2(c, d, a, b, g, h, e, f, K[34], W1(34), W2(38)); R2(b, c, d, a, f, g, h, e, K[35], W1(35), W2(39)); R2(a, b, c, d, e, f, g, h, K[36], W1(36), W2(40)); R2(d, a, b, c, h, e, f, g, K[37], W1(37), W2(41)); R2(c, d, a, b, g, h, e, f, K[38], W1(38), W2(42)); R2(b, c, d, a, f, g, h, e, K[39], W1(39), W2(43)); R2(a, b, c, d, e, f, g, h, K[40], W1(40), W2(44)); R2(d, a, b, c, h, e, f, g, K[41], W1(41), W2(45)); R2(c, d, a, b, g, h, e, f, K[42], W1(42), W2(46)); R2(b, c, d, a, f, g, h, e, K[43], W1(43), W2(47)); R2(a, b, c, d, e, f, g, h, K[44], W1(44), W2(48)); R2(d, a, b, c, h, e, f, g, 
K[45], W1(45), W2(49)); R2(c, d, a, b, g, h, e, f, K[46], W1(46), W2(50)); R2(b, c, d, a, f, g, h, e, K[47], W1(47), W2(51)); R2(a, b, c, d, e, f, g, h, K[48], W1(48), W2(52)); R2(d, a, b, c, h, e, f, g, K[49], W1(49), W2(53)); R2(c, d, a, b, g, h, e, f, K[50], W1(50), W2(54)); R2(b, c, d, a, f, g, h, e, K[51], W1(51), W2(55)); R2(a, b, c, d, e, f, g, h, K[52], W1(52), W2(56)); R2(d, a, b, c, h, e, f, g, K[53], W1(53), W2(57)); R2(c, d, a, b, g, h, e, f, K[54], W1(54), W2(58)); R2(b, c, d, a, f, g, h, e, K[55], W1(55), W2(59)); R2(a, b, c, d, e, f, g, h, K[56], W1(56), W2(60)); R2(d, a, b, c, h, e, f, g, K[57], W1(57), W2(61)); R2(c, d, a, b, g, h, e, f, K[58], W1(58), W2(62)); R2(b, c, d, a, f, g, h, e, K[59], W1(59), W2(63)); R2(a, b, c, d, e, f, g, h, K[60], W1(60), W2(64)); R2(d, a, b, c, h, e, f, g, K[61], W1(61), W2(65)); R2(c, d, a, b, g, h, e, f, K[62], W1(62), W2(66)); R2(b, c, d, a, f, g, h, e, K[63], W1(63), W2(67)); sctx->state[0] ^= a; sctx->state[1] ^= b; sctx->state[2] ^= c; sctx->state[3] ^= d; sctx->state[4] ^= e; sctx->state[5] ^= f; sctx->state[6] ^= g; sctx->state[7] ^= h; } #undef R #undef R1 #undef R2 #undef I #undef W1 #undef W2 static inline void sm3_block(struct sm3_state *sctx, u8 const *data, int blocks, u32 W[16]) { while (blocks--) { sm3_transform(sctx, data, W); data += SM3_BLOCK_SIZE; } } void sm3_update(struct sm3_state *sctx, const u8 *data, unsigned int len) { unsigned int partial = sctx->count % SM3_BLOCK_SIZE; u32 W[16]; sctx->count += len; if ((partial + len) >= SM3_BLOCK_SIZE) { int blocks; if (partial) { int p = SM3_BLOCK_SIZE - partial; memcpy(sctx->buffer + partial, data, p); data += p; len -= p; sm3_block(sctx, sctx->buffer, 1, W); } blocks = len / SM3_BLOCK_SIZE; len %= SM3_BLOCK_SIZE; if (blocks) { sm3_block(sctx, data, blocks, W); data += blocks * SM3_BLOCK_SIZE; } memzero_explicit(W, sizeof(W)); partial = 0; } if (len) memcpy(sctx->buffer + partial, data, len); } EXPORT_SYMBOL_GPL(sm3_update); void sm3_final(struct sm3_state *sctx, u8 *out) { const int bit_offset = SM3_BLOCK_SIZE - sizeof(u64); __be64 *bits = (__be64 *)(sctx->buffer + bit_offset); __be32 *digest = (__be32 *)out; unsigned int partial = sctx->count % SM3_BLOCK_SIZE; u32 W[16]; int i; sctx->buffer[partial++] = 0x80; if (partial > bit_offset) { memset(sctx->buffer + partial, 0, SM3_BLOCK_SIZE - partial); partial = 0; sm3_block(sctx, sctx->buffer, 1, W); } memset(sctx->buffer + partial, 0, bit_offset - partial); *bits = cpu_to_be64(sctx->count << 3); sm3_block(sctx, sctx->buffer, 1, W); for (i = 0; i < 8; i++) put_unaligned_be32(sctx->state[i], digest++); /* Zeroize sensitive information. */ memzero_explicit(W, sizeof(W)); memzero_explicit(sctx, sizeof(*sctx)); } EXPORT_SYMBOL_GPL(sm3_final); MODULE_DESCRIPTION("Generic SM3 library"); MODULE_LICENSE("GPL v2"); |
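For callers, the exported interface reduces to init/update/final. A hedged usage sketch, assuming the sm3_init() helper and the SM3_DIGEST_SIZE constant provided by <crypto/sm3.h> (neither is shown in this file), with an in-kernel caller:

#include <crypto/sm3.h>

/* One-shot digest over a buffer using the incremental interface above.
 * sm3_init() seeds sctx->state with the SM3 IV and zeroes the count. */
static void sm3_digest_buffer(const u8 *data, unsigned int len,
			      u8 out[SM3_DIGEST_SIZE])
{
	struct sm3_state sctx;

	sm3_init(&sctx);
	sm3_update(&sctx, data, len);	/* may be called repeatedly */
	sm3_final(&sctx, out);		/* pads, finalizes, wipes sctx */
}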
// SPDX-License-Identifier: GPL-2.0-only /* The industrial I/O core * * Copyright (c) 2008 Jonathan Cameron * * Handling of buffer allocation / resizing. * * Things to look at here. * - Better memory allocation techniques? * - Alternative access techniques? */ #include <linux/atomic.h> #include <linux/anon_inodes.h> #include <linux/cleanup.h> #include <linux/kernel.h> #include <linux/export.h> #include <linux/device.h> #include <linux/dma-buf.h> #include <linux/dma-fence.h> #include <linux/dma-resv.h> #include <linux/file.h> #include <linux/fs.h> #include <linux/cdev.h> #include <linux/slab.h> #include <linux/mm.h> #include <linux/poll.h> #include <linux/sched/signal.h> #include <linux/iio/iio.h> #include <linux/iio/iio-opaque.h> #include "iio_core.h" #include "iio_core_trigger.h" #include <linux/iio/sysfs.h> #include <linux/iio/buffer.h> #include <linux/iio/buffer_impl.h> #define DMABUF_ENQUEUE_TIMEOUT_MS 5000 MODULE_IMPORT_NS("DMA_BUF"); struct iio_dmabuf_priv { struct list_head entry; struct kref ref; struct iio_buffer *buffer; struct iio_dma_buffer_block *block; u64 context; /* Spinlock used for locking the dma_fence */ spinlock_t lock; struct dma_buf_attachment *attach; struct sg_table *sgt; enum dma_data_direction dir; atomic_t seqno; }; struct iio_dma_fence { struct dma_fence base; struct iio_dmabuf_priv *priv; struct work_struct work; }; static const char * const iio_endian_prefix[] = { [IIO_BE] = "be", [IIO_LE] = "le", }; static bool iio_buffer_is_active(struct iio_buffer *buf) { return !list_empty(&buf->buffer_list); } static size_t iio_buffer_data_available(struct iio_buffer *buf) { return buf->access->data_available(buf); } static int iio_buffer_flush_hwfifo(struct iio_dev *indio_dev, struct iio_buffer *buf, size_t required) { if (!indio_dev->info->hwfifo_flush_to_buffer) return -ENODEV; return indio_dev->info->hwfifo_flush_to_buffer(indio_dev, required); } static bool iio_buffer_ready(struct iio_dev *indio_dev, struct iio_buffer *buf, size_t to_wait, int to_flush) { size_t avail; int flushed = 0; /* wakeup if the device was unregistered */ if (!indio_dev->info) return true; /* drain the buffer if it was disabled */ if (!iio_buffer_is_active(buf)) { to_wait = min_t(size_t, to_wait, 1); to_flush = 0; } avail = iio_buffer_data_available(buf); if (avail >= to_wait) { /* force a flush for non-blocking reads */ if (!to_wait && avail < to_flush) iio_buffer_flush_hwfifo(indio_dev, buf, to_flush - avail); return true; } if (to_flush) flushed = iio_buffer_flush_hwfifo(indio_dev, buf, to_wait - avail); if (flushed <= 0) return false; if (avail + flushed >= to_wait) return true; return false; } /** * iio_buffer_read() - chrdev read for buffer
access * @filp: File structure pointer for the char device * @buf: Destination buffer for iio buffer read * @n: Maximum number of bytes to read * @f_ps: Long offset provided by the user as a seek position * * This function relies on all buffer implementations having an * iio_buffer as their first element. * * Return: negative values corresponding to error codes, or a non-zero * byte count that ends the reading activity **/ static ssize_t iio_buffer_read(struct file *filp, char __user *buf, size_t n, loff_t *f_ps) { struct iio_dev_buffer_pair *ib = filp->private_data; struct iio_buffer *rb = ib->buffer; struct iio_dev *indio_dev = ib->indio_dev; DEFINE_WAIT_FUNC(wait, woken_wake_function); size_t datum_size; size_t to_wait; int ret = 0; if (!indio_dev->info) return -ENODEV; if (!rb || !rb->access->read) return -EINVAL; if (rb->direction != IIO_BUFFER_DIRECTION_IN) return -EPERM; datum_size = rb->bytes_per_datum; /* * If datum_size is 0 there will never be anything to read from the * buffer, so signal end of file now. */ if (!datum_size) return 0; if (filp->f_flags & O_NONBLOCK) to_wait = 0; else to_wait = min_t(size_t, n / datum_size, rb->watermark); add_wait_queue(&rb->pollq, &wait); do { if (!indio_dev->info) { ret = -ENODEV; break; } if (!iio_buffer_ready(indio_dev, rb, to_wait, n / datum_size)) { if (signal_pending(current)) { ret = -ERESTARTSYS; break; } wait_woken(&wait, TASK_INTERRUPTIBLE, MAX_SCHEDULE_TIMEOUT); continue; } ret = rb->access->read(rb, n, buf); if (ret == 0 && (filp->f_flags & O_NONBLOCK)) ret = -EAGAIN; } while (ret == 0); remove_wait_queue(&rb->pollq, &wait); return ret; } static size_t iio_buffer_space_available(struct iio_buffer *buf) { if (buf->access->space_available) return buf->access->space_available(buf); return SIZE_MAX; } static ssize_t iio_buffer_write(struct file *filp, const char __user *buf, size_t n, loff_t *f_ps) { struct iio_dev_buffer_pair *ib = filp->private_data; struct iio_buffer *rb = ib->buffer; struct iio_dev *indio_dev = ib->indio_dev; DEFINE_WAIT_FUNC(wait, woken_wake_function); int ret = 0; size_t written; if (!indio_dev->info) return -ENODEV; if (!rb || !rb->access->write) return -EINVAL; if (rb->direction != IIO_BUFFER_DIRECTION_OUT) return -EPERM; written = 0; add_wait_queue(&rb->pollq, &wait); do { if (!indio_dev->info) { /* exit via break so the wait queue entry is removed */ ret = -ENODEV; break; } if (!iio_buffer_space_available(rb)) { if (signal_pending(current)) { ret = -ERESTARTSYS; break; } if (filp->f_flags & O_NONBLOCK) { if (!written) ret = -EAGAIN; break; } wait_woken(&wait, TASK_INTERRUPTIBLE, MAX_SCHEDULE_TIMEOUT); continue; } ret = rb->access->write(rb, n - written, buf + written); if (ret < 0) break; written += ret; } while (written != n); remove_wait_queue(&rb->pollq, &wait); return ret < 0 ?
ret : written; } /** * iio_buffer_poll() - poll the buffer to find out if it has data * @filp: File structure pointer for device access * @wait: Poll table structure pointer for which the driver adds * a wait queue * * Return: (EPOLLIN | EPOLLRDNORM) if data is available for reading * or 0 for other cases */ static __poll_t iio_buffer_poll(struct file *filp, struct poll_table_struct *wait) { struct iio_dev_buffer_pair *ib = filp->private_data; struct iio_buffer *rb = ib->buffer; struct iio_dev *indio_dev = ib->indio_dev; if (!indio_dev->info || !rb) return 0; poll_wait(filp, &rb->pollq, wait); switch (rb->direction) { case IIO_BUFFER_DIRECTION_IN: if (iio_buffer_ready(indio_dev, rb, rb->watermark, 0)) return EPOLLIN | EPOLLRDNORM; break; case IIO_BUFFER_DIRECTION_OUT: if (iio_buffer_space_available(rb)) return EPOLLOUT | EPOLLWRNORM; break; } return 0; } ssize_t iio_buffer_read_wrapper(struct file *filp, char __user *buf, size_t n, loff_t *f_ps) { struct iio_dev_buffer_pair *ib = filp->private_data; struct iio_buffer *rb = ib->buffer; /* check if buffer was opened through new API */ if (test_bit(IIO_BUSY_BIT_POS, &rb->flags)) return -EBUSY; return iio_buffer_read(filp, buf, n, f_ps); } ssize_t iio_buffer_write_wrapper(struct file *filp, const char __user *buf, size_t n, loff_t *f_ps) { struct iio_dev_buffer_pair *ib = filp->private_data; struct iio_buffer *rb = ib->buffer; /* check if buffer was opened through new API */ if (test_bit(IIO_BUSY_BIT_POS, &rb->flags)) return -EBUSY; return iio_buffer_write(filp, buf, n, f_ps); } __poll_t iio_buffer_poll_wrapper(struct file *filp, struct poll_table_struct *wait) { struct iio_dev_buffer_pair *ib = filp->private_data; struct iio_buffer *rb = ib->buffer; /* check if buffer was opened through new API */ if (test_bit(IIO_BUSY_BIT_POS, &rb->flags)) return 0; return iio_buffer_poll(filp, wait); } /** * iio_buffer_wakeup_poll - Wakes up the buffer waitqueue * @indio_dev: The IIO device * * Wakes up the event waitqueue used for poll(). Should usually * be called when the device is unregistered. 
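 *
 * For reference, a userspace consumer of the poll()/read() pair
 * implemented above looks roughly like the sketch below. This is
 * illustrative only: the device path is hypothetical and consume() is a
 * stand-in for real processing. poll() wakes once 'watermark' scans are
 * queued; read() then returns up to the requested number of bytes.
 *
 *	struct pollfd pfd = {
 *		.fd = open("/dev/iio:device0", O_RDONLY),
 *		.events = POLLIN,
 *	};
 *	char buf[4096];
 *	ssize_t n;
 *
 *	while (poll(&pfd, 1, -1) > 0) {
 *		n = read(pfd.fd, buf, sizeof(buf));
 *		if (n <= 0)
 *			break;
 *		consume(buf, n);
 *	}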
*/ void iio_buffer_wakeup_poll(struct iio_dev *indio_dev) { struct iio_dev_opaque *iio_dev_opaque = to_iio_dev_opaque(indio_dev); struct iio_buffer *buffer; unsigned int i; for (i = 0; i < iio_dev_opaque->attached_buffers_cnt; i++) { buffer = iio_dev_opaque->attached_buffers[i]; wake_up(&buffer->pollq); } } int iio_pop_from_buffer(struct iio_buffer *buffer, void *data) { if (!buffer || !buffer->access || !buffer->access->remove_from) return -EINVAL; return buffer->access->remove_from(buffer, data); } EXPORT_SYMBOL_GPL(iio_pop_from_buffer); void iio_buffer_init(struct iio_buffer *buffer) { INIT_LIST_HEAD(&buffer->demux_list); INIT_LIST_HEAD(&buffer->buffer_list); INIT_LIST_HEAD(&buffer->dmabufs); mutex_init(&buffer->dmabufs_mutex); init_waitqueue_head(&buffer->pollq); kref_init(&buffer->ref); if (!buffer->watermark) buffer->watermark = 1; } EXPORT_SYMBOL(iio_buffer_init); void iio_device_detach_buffers(struct iio_dev *indio_dev) { struct iio_dev_opaque *iio_dev_opaque = to_iio_dev_opaque(indio_dev); struct iio_buffer *buffer; unsigned int i; for (i = 0; i < iio_dev_opaque->attached_buffers_cnt; i++) { buffer = iio_dev_opaque->attached_buffers[i]; iio_buffer_put(buffer); } kfree(iio_dev_opaque->attached_buffers); } static ssize_t iio_show_scan_index(struct device *dev, struct device_attribute *attr, char *buf) { return sysfs_emit(buf, "%u\n", to_iio_dev_attr(attr)->c->scan_index); } static ssize_t iio_show_fixed_type(struct device *dev, struct device_attribute *attr, char *buf) { struct iio_dev *indio_dev = dev_to_iio_dev(dev); struct iio_dev_attr *this_attr = to_iio_dev_attr(attr); const struct iio_scan_type *scan_type; u8 type; scan_type = iio_get_current_scan_type(indio_dev, this_attr->c); if (IS_ERR(scan_type)) return PTR_ERR(scan_type); type = scan_type->endianness; if (type == IIO_CPU) { #ifdef __LITTLE_ENDIAN type = IIO_LE; #else type = IIO_BE; #endif } if (scan_type->repeat > 1) return sysfs_emit(buf, "%s:%c%d/%dX%d>>%u\n", iio_endian_prefix[type], scan_type->sign, scan_type->realbits, scan_type->storagebits, scan_type->repeat, scan_type->shift); else return sysfs_emit(buf, "%s:%c%d/%d>>%u\n", iio_endian_prefix[type], scan_type->sign, scan_type->realbits, scan_type->storagebits, scan_type->shift); } static ssize_t iio_scan_el_show(struct device *dev, struct device_attribute *attr, char *buf) { int ret; struct iio_buffer *buffer = to_iio_dev_attr(attr)->buffer; /* Ensure ret is 0 or 1. */ ret = !!test_bit(to_iio_dev_attr(attr)->address, buffer->scan_mask); return sysfs_emit(buf, "%d\n", ret); } /* Note NULL is used as an error indicator, as an empty mask doesn't make sense. */ static const unsigned long *iio_scan_mask_match(const unsigned long *av_masks, unsigned int masklength, const unsigned long *mask, bool strict) { if (bitmap_empty(mask, masklength)) return NULL; /* * The condition here does not handle multi-long masks correctly. * It only checks the first long to be zero, and will use such a mask * as a terminator even if there were bits set after the first long. * * Correct check would require using: * while (!bitmap_empty(av_masks, masklength)) * instead. This is potentially hazardous because the * available_scan_masks is a zero terminated array of longs - and * using the proper bitmap_empty() check for multi-long wide masks * would require the array to be terminated with multiple zero longs - * which is not such a usual pattern. * * As of writing, no multi-long wide masks were found in-tree, so * the simple while (*av_masks) check is working.
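 *
 * For illustration, the single-long layout that this walker does handle
 * is the usual driver pattern below (hypothetical masks):
 *
 *	static const unsigned long my_scan_masks[] = {
 *		BIT(0) | BIT(1),
 *		BIT(0) | BIT(1) | BIT(2),
 *		0,
 *	};
 *
 *	indio_dev->available_scan_masks = my_scan_masks;
 *
 * where the trailing 0 is the terminator that the loop below relies on.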
*/ while (*av_masks) { if (strict) { if (bitmap_equal(mask, av_masks, masklength)) return av_masks; } else { if (bitmap_subset(mask, av_masks, masklength)) return av_masks; } av_masks += BITS_TO_LONGS(masklength); } return NULL; } static bool iio_validate_scan_mask(struct iio_dev *indio_dev, const unsigned long *mask) { if (!indio_dev->setup_ops->validate_scan_mask) return true; return indio_dev->setup_ops->validate_scan_mask(indio_dev, mask); } /** * iio_scan_mask_set() - set particular bit in the scan mask * @indio_dev: the iio device * @buffer: the buffer whose scan mask we are interested in * @bit: the bit to be set. * * Note that at this point we have no way of knowing what other * buffers might request, hence this code only verifies that the * individual buffers request is plausible. */ static int iio_scan_mask_set(struct iio_dev *indio_dev, struct iio_buffer *buffer, int bit) { unsigned int masklength = iio_get_masklength(indio_dev); const unsigned long *mask; unsigned long *trialmask; if (!masklength) { WARN(1, "Trying to set scanmask prior to registering buffer\n"); return -EINVAL; } trialmask = bitmap_alloc(masklength, GFP_KERNEL); if (!trialmask) return -ENOMEM; bitmap_copy(trialmask, buffer->scan_mask, masklength); set_bit(bit, trialmask); if (!iio_validate_scan_mask(indio_dev, trialmask)) goto err_invalid_mask; if (indio_dev->available_scan_masks) { mask = iio_scan_mask_match(indio_dev->available_scan_masks, masklength, trialmask, false); if (!mask) goto err_invalid_mask; } bitmap_copy(buffer->scan_mask, trialmask, masklength); bitmap_free(trialmask); return 0; err_invalid_mask: bitmap_free(trialmask); return -EINVAL; } static int iio_scan_mask_clear(struct iio_buffer *buffer, int bit) { clear_bit(bit, buffer->scan_mask); return 0; } static int iio_scan_mask_query(struct iio_dev *indio_dev, struct iio_buffer *buffer, int bit) { if (bit > iio_get_masklength(indio_dev)) return -EINVAL; if (!buffer->scan_mask) return 0; /* Ensure return value is 0 or 1. 
*/ return !!test_bit(bit, buffer->scan_mask); }; static ssize_t iio_scan_el_store(struct device *dev, struct device_attribute *attr, const char *buf, size_t len) { int ret; bool state; struct iio_dev *indio_dev = dev_to_iio_dev(dev); struct iio_dev_opaque *iio_dev_opaque = to_iio_dev_opaque(indio_dev); struct iio_dev_attr *this_attr = to_iio_dev_attr(attr); struct iio_buffer *buffer = this_attr->buffer; ret = kstrtobool(buf, &state); if (ret < 0) return ret; guard(mutex)(&iio_dev_opaque->mlock); if (iio_buffer_is_active(buffer)) return -EBUSY; ret = iio_scan_mask_query(indio_dev, buffer, this_attr->address); if (ret < 0) return ret; if (state && ret) return len; if (state) ret = iio_scan_mask_set(indio_dev, buffer, this_attr->address); else ret = iio_scan_mask_clear(buffer, this_attr->address); if (ret) return ret; return len; } static ssize_t iio_scan_el_ts_show(struct device *dev, struct device_attribute *attr, char *buf) { struct iio_buffer *buffer = to_iio_dev_attr(attr)->buffer; return sysfs_emit(buf, "%d\n", buffer->scan_timestamp); } static ssize_t iio_scan_el_ts_store(struct device *dev, struct device_attribute *attr, const char *buf, size_t len) { int ret; struct iio_dev *indio_dev = dev_to_iio_dev(dev); struct iio_dev_opaque *iio_dev_opaque = to_iio_dev_opaque(indio_dev); struct iio_buffer *buffer = to_iio_dev_attr(attr)->buffer; bool state; ret = kstrtobool(buf, &state); if (ret < 0) return ret; guard(mutex)(&iio_dev_opaque->mlock); if (iio_buffer_is_active(buffer)) return -EBUSY; buffer->scan_timestamp = state; return len; } static int iio_buffer_add_channel_sysfs(struct iio_dev *indio_dev, struct iio_buffer *buffer, const struct iio_chan_spec *chan) { int ret, attrcount = 0; ret = __iio_add_chan_devattr("index", chan, &iio_show_scan_index, NULL, 0, IIO_SEPARATE, &indio_dev->dev, buffer, &buffer->buffer_attr_list); if (ret) return ret; attrcount++; ret = __iio_add_chan_devattr("type", chan, &iio_show_fixed_type, NULL, 0, IIO_SEPARATE, &indio_dev->dev, buffer, &buffer->buffer_attr_list); if (ret) return ret; attrcount++; if (chan->type != IIO_TIMESTAMP) ret = __iio_add_chan_devattr("en", chan, &iio_scan_el_show, &iio_scan_el_store, chan->scan_index, IIO_SEPARATE, &indio_dev->dev, buffer, &buffer->buffer_attr_list); else ret = __iio_add_chan_devattr("en", chan, &iio_scan_el_ts_show, &iio_scan_el_ts_store, chan->scan_index, IIO_SEPARATE, &indio_dev->dev, buffer, &buffer->buffer_attr_list); if (ret) return ret; attrcount++; ret = attrcount; return ret; } static ssize_t length_show(struct device *dev, struct device_attribute *attr, char *buf) { struct iio_buffer *buffer = to_iio_dev_attr(attr)->buffer; return sysfs_emit(buf, "%d\n", buffer->length); } static ssize_t length_store(struct device *dev, struct device_attribute *attr, const char *buf, size_t len) { struct iio_dev *indio_dev = dev_to_iio_dev(dev); struct iio_dev_opaque *iio_dev_opaque = to_iio_dev_opaque(indio_dev); struct iio_buffer *buffer = to_iio_dev_attr(attr)->buffer; unsigned int val; int ret; ret = kstrtouint(buf, 10, &val); if (ret) return ret; if (val == buffer->length) return len; guard(mutex)(&iio_dev_opaque->mlock); if (iio_buffer_is_active(buffer)) return -EBUSY; buffer->access->set_length(buffer, val); if (buffer->length && buffer->length < buffer->watermark) buffer->watermark = buffer->length; return len; } static ssize_t enable_show(struct device *dev, struct device_attribute *attr, char *buf) { struct iio_buffer *buffer = to_iio_dev_attr(attr)->buffer; return sysfs_emit(buf, "%d\n", 
iio_buffer_is_active(buffer)); } static int iio_storage_bytes_for_si(struct iio_dev *indio_dev, unsigned int scan_index) { const struct iio_chan_spec *ch; const struct iio_scan_type *scan_type; unsigned int bytes; ch = iio_find_channel_from_si(indio_dev, scan_index); scan_type = iio_get_current_scan_type(indio_dev, ch); if (IS_ERR(scan_type)) return PTR_ERR(scan_type); bytes = scan_type->storagebits / 8; if (scan_type->repeat > 1) bytes *= scan_type->repeat; return bytes; } static int iio_storage_bytes_for_timestamp(struct iio_dev *indio_dev) { struct iio_dev_opaque *iio_dev_opaque = to_iio_dev_opaque(indio_dev); return iio_storage_bytes_for_si(indio_dev, iio_dev_opaque->scan_index_timestamp); } static int iio_compute_scan_bytes(struct iio_dev *indio_dev, const unsigned long *mask, bool timestamp) { unsigned int bytes = 0; int length, i, largest = 0; /* How much space will the demuxed element take? */ for_each_set_bit(i, mask, iio_get_masklength(indio_dev)) { length = iio_storage_bytes_for_si(indio_dev, i); if (length < 0) return length; bytes = ALIGN(bytes, length); bytes += length; largest = max(largest, length); } if (timestamp) { length = iio_storage_bytes_for_timestamp(indio_dev); if (length < 0) return length; bytes = ALIGN(bytes, length); bytes += length; largest = max(largest, length); } bytes = ALIGN(bytes, largest); return bytes; } static void iio_buffer_activate(struct iio_dev *indio_dev, struct iio_buffer *buffer) { struct iio_dev_opaque *iio_dev_opaque = to_iio_dev_opaque(indio_dev); iio_buffer_get(buffer); list_add(&buffer->buffer_list, &iio_dev_opaque->buffer_list); } static void iio_buffer_deactivate(struct iio_buffer *buffer) { list_del_init(&buffer->buffer_list); wake_up_interruptible(&buffer->pollq); iio_buffer_put(buffer); } static void iio_buffer_deactivate_all(struct iio_dev *indio_dev) { struct iio_dev_opaque *iio_dev_opaque = to_iio_dev_opaque(indio_dev); struct iio_buffer *buffer, *_buffer; list_for_each_entry_safe(buffer, _buffer, &iio_dev_opaque->buffer_list, buffer_list) iio_buffer_deactivate(buffer); } static int iio_buffer_enable(struct iio_buffer *buffer, struct iio_dev *indio_dev) { if (!buffer->access->enable) return 0; return buffer->access->enable(buffer, indio_dev); } static int iio_buffer_disable(struct iio_buffer *buffer, struct iio_dev *indio_dev) { if (!buffer->access->disable) return 0; return buffer->access->disable(buffer, indio_dev); } static void iio_buffer_update_bytes_per_datum(struct iio_dev *indio_dev, struct iio_buffer *buffer) { unsigned int bytes; if (!buffer->access->set_bytes_per_datum) return; bytes = iio_compute_scan_bytes(indio_dev, buffer->scan_mask, buffer->scan_timestamp); buffer->access->set_bytes_per_datum(buffer, bytes); } static int iio_buffer_request_update(struct iio_dev *indio_dev, struct iio_buffer *buffer) { int ret; iio_buffer_update_bytes_per_datum(indio_dev, buffer); if (buffer->access->request_update) { ret = buffer->access->request_update(buffer); if (ret) { dev_dbg(&indio_dev->dev, "Buffer not started: buffer parameter update failed (%d)\n", ret); return ret; } } return 0; } static void iio_free_scan_mask(struct iio_dev *indio_dev, const unsigned long *mask) { /* If the mask is dynamically allocated free it, otherwise do nothing */ if (!indio_dev->available_scan_masks) bitmap_free(mask); } struct iio_device_config { unsigned int mode; unsigned int watermark; const unsigned long *scan_mask; unsigned int scan_bytes; bool scan_timestamp; }; static int iio_verify_update(struct iio_dev *indio_dev, struct iio_buffer 
*insert_buffer, struct iio_buffer *remove_buffer, struct iio_device_config *config) { struct iio_dev_opaque *iio_dev_opaque = to_iio_dev_opaque(indio_dev); unsigned int masklength = iio_get_masklength(indio_dev); unsigned long *compound_mask; const unsigned long *scan_mask; bool strict_scanmask = false; struct iio_buffer *buffer; bool scan_timestamp; unsigned int modes; if (insert_buffer && bitmap_empty(insert_buffer->scan_mask, masklength)) { dev_dbg(&indio_dev->dev, "At least one scan element must be enabled first\n"); return -EINVAL; } memset(config, 0, sizeof(*config)); config->watermark = ~0; /* * If there is just one buffer and we are removing it there is nothing * to verify. */ if (remove_buffer && !insert_buffer && list_is_singular(&iio_dev_opaque->buffer_list)) return 0; modes = indio_dev->modes; list_for_each_entry(buffer, &iio_dev_opaque->buffer_list, buffer_list) { if (buffer == remove_buffer) continue; modes &= buffer->access->modes; config->watermark = min(config->watermark, buffer->watermark); } if (insert_buffer) { modes &= insert_buffer->access->modes; config->watermark = min(config->watermark, insert_buffer->watermark); } /* Definitely possible for devices to support both of these. */ if ((modes & INDIO_BUFFER_TRIGGERED) && indio_dev->trig) { config->mode = INDIO_BUFFER_TRIGGERED; } else if (modes & INDIO_BUFFER_HARDWARE) { /* * Keep things simple for now and only allow a single buffer to * be connected in hardware mode. */ if (insert_buffer && !list_empty(&iio_dev_opaque->buffer_list)) return -EINVAL; config->mode = INDIO_BUFFER_HARDWARE; strict_scanmask = true; } else if (modes & INDIO_BUFFER_SOFTWARE) { config->mode = INDIO_BUFFER_SOFTWARE; } else { /* Can only occur on first buffer */ if (indio_dev->modes & INDIO_BUFFER_TRIGGERED) dev_dbg(&indio_dev->dev, "Buffer not started: no trigger\n"); return -EINVAL; } /* What scan mask do we actually have? 
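 *
 * (Worked example with hypothetical values: if two attached buffers have
 * scan_masks of BIT(0) | BIT(1) and BIT(2), the compound mask built below
 * is BIT(0) | BIT(1) | BIT(2). When available_scan_masks is provided, the
 * first entry covering that compound mask replaces it, with an exact
 * match required in the strict, hardware-mode case.)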
*/ compound_mask = bitmap_zalloc(masklength, GFP_KERNEL); if (!compound_mask) return -ENOMEM; scan_timestamp = false; list_for_each_entry(buffer, &iio_dev_opaque->buffer_list, buffer_list) { if (buffer == remove_buffer) continue; bitmap_or(compound_mask, compound_mask, buffer->scan_mask, masklength); scan_timestamp |= buffer->scan_timestamp; } if (insert_buffer) { bitmap_or(compound_mask, compound_mask, insert_buffer->scan_mask, masklength); scan_timestamp |= insert_buffer->scan_timestamp; } if (indio_dev->available_scan_masks) { scan_mask = iio_scan_mask_match(indio_dev->available_scan_masks, masklength, compound_mask, strict_scanmask); bitmap_free(compound_mask); if (!scan_mask) return -EINVAL; } else { scan_mask = compound_mask; } config->scan_bytes = iio_compute_scan_bytes(indio_dev, scan_mask, scan_timestamp); config->scan_mask = scan_mask; config->scan_timestamp = scan_timestamp; return 0; } /** * struct iio_demux_table - table describing demux memcpy ops * @from: index to copy from * @to: index to copy to * @length: how many bytes to copy * @l: list head used for management */ struct iio_demux_table { unsigned int from; unsigned int to; unsigned int length; struct list_head l; }; static void iio_buffer_demux_free(struct iio_buffer *buffer) { struct iio_demux_table *p, *q; list_for_each_entry_safe(p, q, &buffer->demux_list, l) { list_del(&p->l); kfree(p); } } static int iio_buffer_add_demux(struct iio_buffer *buffer, struct iio_demux_table **p, unsigned int in_loc, unsigned int out_loc, unsigned int length) { if (*p && (*p)->from + (*p)->length == in_loc && (*p)->to + (*p)->length == out_loc) { (*p)->length += length; } else { *p = kmalloc(sizeof(**p), GFP_KERNEL); if (!(*p)) return -ENOMEM; (*p)->from = in_loc; (*p)->to = out_loc; (*p)->length = length; list_add_tail(&(*p)->l, &buffer->demux_list); } return 0; } static int iio_buffer_update_demux(struct iio_dev *indio_dev, struct iio_buffer *buffer) { unsigned int masklength = iio_get_masklength(indio_dev); int ret, in_ind = -1, out_ind, length; unsigned int in_loc = 0, out_loc = 0; struct iio_demux_table *p = NULL; /* Clear out any old demux */ iio_buffer_demux_free(buffer); kfree(buffer->demux_bounce); buffer->demux_bounce = NULL; /* First work out which scan mode we will actually have */ if (bitmap_equal(indio_dev->active_scan_mask, buffer->scan_mask, masklength)) return 0; /* Now we have the two masks, work from least sig and build up sizes */ for_each_set_bit(out_ind, buffer->scan_mask, masklength) { in_ind = find_next_bit(indio_dev->active_scan_mask, masklength, in_ind + 1); while (in_ind != out_ind) { ret = iio_storage_bytes_for_si(indio_dev, in_ind); if (ret < 0) goto error_clear_mux_table; length = ret; /* Make sure we are aligned */ in_loc = roundup(in_loc, length) + length; in_ind = find_next_bit(indio_dev->active_scan_mask, masklength, in_ind + 1); } ret = iio_storage_bytes_for_si(indio_dev, in_ind); if (ret < 0) goto error_clear_mux_table; length = ret; out_loc = roundup(out_loc, length); in_loc = roundup(in_loc, length); ret = iio_buffer_add_demux(buffer, &p, in_loc, out_loc, length); if (ret) goto error_clear_mux_table; out_loc += length; in_loc += length; } /* Relies on scan_timestamp being last */ if (buffer->scan_timestamp) { ret = iio_storage_bytes_for_timestamp(indio_dev); if (ret < 0) goto error_clear_mux_table; length = ret; out_loc = roundup(out_loc, length); in_loc = roundup(in_loc, length); ret = iio_buffer_add_demux(buffer, &p, in_loc, out_loc, length); if (ret) goto error_clear_mux_table; out_loc += 
length; } buffer->demux_bounce = kzalloc(out_loc, GFP_KERNEL); if (!buffer->demux_bounce) { ret = -ENOMEM; goto error_clear_mux_table; } return 0; error_clear_mux_table: iio_buffer_demux_free(buffer); return ret; } static int iio_update_demux(struct iio_dev *indio_dev) { struct iio_dev_opaque *iio_dev_opaque = to_iio_dev_opaque(indio_dev); struct iio_buffer *buffer; int ret; list_for_each_entry(buffer, &iio_dev_opaque->buffer_list, buffer_list) { ret = iio_buffer_update_demux(indio_dev, buffer); if (ret < 0) goto error_clear_mux_table; } return 0; error_clear_mux_table: list_for_each_entry(buffer, &iio_dev_opaque->buffer_list, buffer_list) iio_buffer_demux_free(buffer); return ret; } static int iio_enable_buffers(struct iio_dev *indio_dev, struct iio_device_config *config) { struct iio_dev_opaque *iio_dev_opaque = to_iio_dev_opaque(indio_dev); struct iio_buffer *buffer, *tmp = NULL; int ret; indio_dev->active_scan_mask = config->scan_mask; ACCESS_PRIVATE(indio_dev, scan_timestamp) = config->scan_timestamp; indio_dev->scan_bytes = config->scan_bytes; iio_dev_opaque->currentmode = config->mode; iio_update_demux(indio_dev); /* Wind up again */ if (indio_dev->setup_ops->preenable) { ret = indio_dev->setup_ops->preenable(indio_dev); if (ret) { dev_dbg(&indio_dev->dev, "Buffer not started: buffer preenable failed (%d)\n", ret); goto err_undo_config; } } if (indio_dev->info->update_scan_mode) { ret = indio_dev->info ->update_scan_mode(indio_dev, indio_dev->active_scan_mask); if (ret < 0) { dev_dbg(&indio_dev->dev, "Buffer not started: update scan mode failed (%d)\n", ret); goto err_run_postdisable; } } if (indio_dev->info->hwfifo_set_watermark) indio_dev->info->hwfifo_set_watermark(indio_dev, config->watermark); list_for_each_entry(buffer, &iio_dev_opaque->buffer_list, buffer_list) { ret = iio_buffer_enable(buffer, indio_dev); if (ret) { tmp = buffer; goto err_disable_buffers; } } if (iio_dev_opaque->currentmode == INDIO_BUFFER_TRIGGERED) { ret = iio_trigger_attach_poll_func(indio_dev->trig, indio_dev->pollfunc); if (ret) goto err_disable_buffers; } if (indio_dev->setup_ops->postenable) { ret = indio_dev->setup_ops->postenable(indio_dev); if (ret) { dev_dbg(&indio_dev->dev, "Buffer not started: postenable failed (%d)\n", ret); goto err_detach_pollfunc; } } return 0; err_detach_pollfunc: if (iio_dev_opaque->currentmode == INDIO_BUFFER_TRIGGERED) { iio_trigger_detach_poll_func(indio_dev->trig, indio_dev->pollfunc); } err_disable_buffers: buffer = list_prepare_entry(tmp, &iio_dev_opaque->buffer_list, buffer_list); list_for_each_entry_continue_reverse(buffer, &iio_dev_opaque->buffer_list, buffer_list) iio_buffer_disable(buffer, indio_dev); err_run_postdisable: if (indio_dev->setup_ops->postdisable) indio_dev->setup_ops->postdisable(indio_dev); err_undo_config: iio_dev_opaque->currentmode = INDIO_DIRECT_MODE; indio_dev->active_scan_mask = NULL; return ret; } static int iio_disable_buffers(struct iio_dev *indio_dev) { struct iio_dev_opaque *iio_dev_opaque = to_iio_dev_opaque(indio_dev); struct iio_buffer *buffer; int ret = 0; int ret2; /* Wind down existing buffers - iff there are any */ if (list_empty(&iio_dev_opaque->buffer_list)) return 0; /* * If things go wrong at some step in disable we still need to continue * to perform the other steps, otherwise we leave the device in an * inconsistent state. We return the error code for the first error we * encountered.
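 *
 * For context, these hooks come from the driver's iio_buffer_setup_ops,
 * e.g. (hypothetical driver instance):
 *
 *	static const struct iio_buffer_setup_ops my_setup_ops = {
 *		.preenable = my_preenable,
 *		.postenable = my_postenable,
 *		.predisable = my_predisable,
 *		.postdisable = my_postdisable,
 *	};
 *
 * The disable path below intentionally runs every remaining step even
 * after a failure, so the ordering is always predisable, trigger detach,
 * per-buffer disable, then postdisable.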
*/ if (indio_dev->setup_ops->predisable) { ret2 = indio_dev->setup_ops->predisable(indio_dev); if (ret2 && !ret) ret = ret2; } if (iio_dev_opaque->currentmode == INDIO_BUFFER_TRIGGERED) { iio_trigger_detach_poll_func(indio_dev->trig, indio_dev->pollfunc); } list_for_each_entry(buffer, &iio_dev_opaque->buffer_list, buffer_list) { ret2 = iio_buffer_disable(buffer, indio_dev); if (ret2 && !ret) ret = ret2; } if (indio_dev->setup_ops->postdisable) { ret2 = indio_dev->setup_ops->postdisable(indio_dev); if (ret2 && !ret) ret = ret2; } iio_free_scan_mask(indio_dev, indio_dev->active_scan_mask); indio_dev->active_scan_mask = NULL; iio_dev_opaque->currentmode = INDIO_DIRECT_MODE; return ret; } static int __iio_update_buffers(struct iio_dev *indio_dev, struct iio_buffer *insert_buffer, struct iio_buffer *remove_buffer) { struct iio_dev_opaque *iio_dev_opaque = to_iio_dev_opaque(indio_dev); struct iio_device_config new_config; int ret; ret = iio_verify_update(indio_dev, insert_buffer, remove_buffer, &new_config); if (ret) return ret; if (insert_buffer) { ret = iio_buffer_request_update(indio_dev, insert_buffer); if (ret) goto err_free_config; } ret = iio_disable_buffers(indio_dev); if (ret) goto err_deactivate_all; if (remove_buffer) iio_buffer_deactivate(remove_buffer); if (insert_buffer) iio_buffer_activate(indio_dev, insert_buffer); /* If no buffers in list, we are done */ if (list_empty(&iio_dev_opaque->buffer_list)) return 0; ret = iio_enable_buffers(indio_dev, &new_config); if (ret) goto err_deactivate_all; return 0; err_deactivate_all: /* * We've already verified that the config is valid earlier. If things go * wrong in either enable or disable the most likely reason is an IO * error from the device. In this case there is no good recovery * strategy. Just make sure to disable everything and leave the device * in a sane state. With a bit of luck the device might come back to * life again later and userspace can try again. 
*/ iio_buffer_deactivate_all(indio_dev); err_free_config: iio_free_scan_mask(indio_dev, new_config.scan_mask); return ret; } int iio_update_buffers(struct iio_dev *indio_dev, struct iio_buffer *insert_buffer, struct iio_buffer *remove_buffer) { struct iio_dev_opaque *iio_dev_opaque = to_iio_dev_opaque(indio_dev); if (insert_buffer == remove_buffer) return 0; if (insert_buffer && insert_buffer->direction == IIO_BUFFER_DIRECTION_OUT) return -EINVAL; guard(mutex)(&iio_dev_opaque->info_exist_lock); guard(mutex)(&iio_dev_opaque->mlock); if (insert_buffer && iio_buffer_is_active(insert_buffer)) insert_buffer = NULL; if (remove_buffer && !iio_buffer_is_active(remove_buffer)) remove_buffer = NULL; if (!insert_buffer && !remove_buffer) return 0; if (!indio_dev->info) return -ENODEV; return __iio_update_buffers(indio_dev, insert_buffer, remove_buffer); } EXPORT_SYMBOL_GPL(iio_update_buffers); void iio_disable_all_buffers(struct iio_dev *indio_dev) { iio_disable_buffers(indio_dev); iio_buffer_deactivate_all(indio_dev); } static ssize_t enable_store(struct device *dev, struct device_attribute *attr, const char *buf, size_t len) { int ret; bool requested_state; struct iio_dev *indio_dev = dev_to_iio_dev(dev); struct iio_dev_opaque *iio_dev_opaque = to_iio_dev_opaque(indio_dev); struct iio_buffer *buffer = to_iio_dev_attr(attr)->buffer; bool inlist; ret = kstrtobool(buf, &requested_state); if (ret < 0) return ret; guard(mutex)(&iio_dev_opaque->mlock); /* Find out if it is in the list */ inlist = iio_buffer_is_active(buffer); /* Already in desired state */ if (inlist == requested_state) return len; if (requested_state) ret = __iio_update_buffers(indio_dev, buffer, NULL); else ret = __iio_update_buffers(indio_dev, NULL, buffer); if (ret) return ret; return len; } static ssize_t watermark_show(struct device *dev, struct device_attribute *attr, char *buf) { struct iio_buffer *buffer = to_iio_dev_attr(attr)->buffer; return sysfs_emit(buf, "%u\n", buffer->watermark); } static ssize_t watermark_store(struct device *dev, struct device_attribute *attr, const char *buf, size_t len) { struct iio_dev *indio_dev = dev_to_iio_dev(dev); struct iio_dev_opaque *iio_dev_opaque = to_iio_dev_opaque(indio_dev); struct iio_buffer *buffer = to_iio_dev_attr(attr)->buffer; unsigned int val; int ret; ret = kstrtouint(buf, 10, &val); if (ret) return ret; if (!val) return -EINVAL; guard(mutex)(&iio_dev_opaque->mlock); if (val > buffer->length) return -EINVAL; if (iio_buffer_is_active(buffer)) return -EBUSY; buffer->watermark = val; return len; } static ssize_t data_available_show(struct device *dev, struct device_attribute *attr, char *buf) { struct iio_buffer *buffer = to_iio_dev_attr(attr)->buffer; return sysfs_emit(buf, "%zu\n", iio_buffer_data_available(buffer)); } static ssize_t direction_show(struct device *dev, struct device_attribute *attr, char *buf) { struct iio_buffer *buffer = to_iio_dev_attr(attr)->buffer; switch (buffer->direction) { case IIO_BUFFER_DIRECTION_IN: return sysfs_emit(buf, "in\n"); case IIO_BUFFER_DIRECTION_OUT: return sysfs_emit(buf, "out\n"); default: return -EINVAL; } } static DEVICE_ATTR_RW(length); static struct device_attribute dev_attr_length_ro = __ATTR_RO(length); static DEVICE_ATTR_RW(enable); static DEVICE_ATTR_RW(watermark); static struct device_attribute dev_attr_watermark_ro = __ATTR_RO(watermark); static DEVICE_ATTR_RO(data_available); static DEVICE_ATTR_RO(direction); /* * When adding new attributes here, put them at the end, at least until * the code that handles the length/length_ro &
watermark/watermark_ro * assignments gets cleaned up. Otherwise these can create some weird * duplicate attributes errors under some setups. */ static struct attribute *iio_buffer_attrs[] = { &dev_attr_length.attr, &dev_attr_enable.attr, &dev_attr_watermark.attr, &dev_attr_data_available.attr, &dev_attr_direction.attr, }; #define to_dev_attr(_attr) container_of(_attr, struct device_attribute, attr) static struct attribute *iio_buffer_wrap_attr(struct iio_buffer *buffer, struct attribute *attr) { struct device_attribute *dattr = to_dev_attr(attr); struct iio_dev_attr *iio_attr; iio_attr = kzalloc(sizeof(*iio_attr), GFP_KERNEL); if (!iio_attr) return NULL; iio_attr->buffer = buffer; memcpy(&iio_attr->dev_attr, dattr, sizeof(iio_attr->dev_attr)); iio_attr->dev_attr.attr.name = kstrdup_const(attr->name, GFP_KERNEL); if (!iio_attr->dev_attr.attr.name) { kfree(iio_attr); return NULL; } sysfs_attr_init(&iio_attr->dev_attr.attr); list_add(&iio_attr->l, &buffer->buffer_attr_list); return &iio_attr->dev_attr.attr; } static int iio_buffer_register_legacy_sysfs_groups(struct iio_dev *indio_dev, struct attribute **buffer_attrs, int buffer_attrcount, int scan_el_attrcount) { struct iio_dev_opaque *iio_dev_opaque = to_iio_dev_opaque(indio_dev); struct attribute_group *group; struct attribute **attrs; int ret; attrs = kcalloc(buffer_attrcount + 1, sizeof(*attrs), GFP_KERNEL); if (!attrs) return -ENOMEM; memcpy(attrs, buffer_attrs, buffer_attrcount * sizeof(*attrs)); group = &iio_dev_opaque->legacy_buffer_group; group->attrs = attrs; group->name = "buffer"; ret = iio_device_register_sysfs_group(indio_dev, group); if (ret) goto error_free_buffer_attrs; attrs = kcalloc(scan_el_attrcount + 1, sizeof(*attrs), GFP_KERNEL); if (!attrs) { ret = -ENOMEM; goto error_free_buffer_attrs; } memcpy(attrs, &buffer_attrs[buffer_attrcount], scan_el_attrcount * sizeof(*attrs)); group = &iio_dev_opaque->legacy_scan_el_group; group->attrs = attrs; group->name = "scan_elements"; ret = iio_device_register_sysfs_group(indio_dev, group); if (ret) goto error_free_scan_el_attrs; return 0; error_free_scan_el_attrs: kfree(iio_dev_opaque->legacy_scan_el_group.attrs); error_free_buffer_attrs: kfree(iio_dev_opaque->legacy_buffer_group.attrs); return ret; } static void iio_buffer_unregister_legacy_sysfs_groups(struct iio_dev *indio_dev) { struct iio_dev_opaque *iio_dev_opaque = to_iio_dev_opaque(indio_dev); kfree(iio_dev_opaque->legacy_buffer_group.attrs); kfree(iio_dev_opaque->legacy_scan_el_group.attrs); } static void iio_buffer_dmabuf_release(struct kref *ref) { struct iio_dmabuf_priv *priv = container_of(ref, struct iio_dmabuf_priv, ref); struct dma_buf_attachment *attach = priv->attach; struct iio_buffer *buffer = priv->buffer; struct dma_buf *dmabuf = attach->dmabuf; dma_resv_lock(dmabuf->resv, NULL); dma_buf_unmap_attachment(attach, priv->sgt, priv->dir); dma_resv_unlock(dmabuf->resv); buffer->access->detach_dmabuf(buffer, priv->block); dma_buf_detach(attach->dmabuf, attach); dma_buf_put(dmabuf); kfree(priv); } static void iio_buffer_dmabuf_get(struct dma_buf_attachment *attach) { struct iio_dmabuf_priv *priv = attach->importer_priv; kref_get(&priv->ref); } static void iio_buffer_dmabuf_put(struct dma_buf_attachment *attach) { struct iio_dmabuf_priv *priv = attach->importer_priv; kref_put(&priv->ref, iio_buffer_dmabuf_release); } static int iio_buffer_chrdev_release(struct inode *inode, struct file *filep) { struct iio_dev_buffer_pair *ib = filep->private_data; struct iio_dev *indio_dev = ib->indio_dev; struct iio_buffer *buffer = 
ib->buffer; struct iio_dmabuf_priv *priv, *tmp; wake_up(&buffer->pollq); guard(mutex)(&buffer->dmabufs_mutex); /* Close all attached DMABUFs */ list_for_each_entry_safe(priv, tmp, &buffer->dmabufs, entry) { list_del_init(&priv->entry); iio_buffer_dmabuf_put(priv->attach); } kfree(ib); clear_bit(IIO_BUSY_BIT_POS, &buffer->flags); iio_device_put(indio_dev); return 0; } static int iio_dma_resv_lock(struct dma_buf *dmabuf, bool nonblock) { if (!nonblock) return dma_resv_lock_interruptible(dmabuf->resv, NULL); if (!dma_resv_trylock(dmabuf->resv)) return -EBUSY; return 0; } static struct dma_buf_attachment * iio_buffer_find_attachment(struct iio_dev_buffer_pair *ib, struct dma_buf *dmabuf, bool nonblock) { struct device *dev = ib->indio_dev->dev.parent; struct iio_buffer *buffer = ib->buffer; struct dma_buf_attachment *attach = NULL; struct iio_dmabuf_priv *priv; guard(mutex)(&buffer->dmabufs_mutex); list_for_each_entry(priv, &buffer->dmabufs, entry) { if (priv->attach->dev == dev && priv->attach->dmabuf == dmabuf) { attach = priv->attach; break; } } if (attach) iio_buffer_dmabuf_get(attach); return attach ?: ERR_PTR(-EPERM); } static int iio_buffer_attach_dmabuf(struct iio_dev_buffer_pair *ib, int __user *user_fd, bool nonblock) { struct iio_dev *indio_dev = ib->indio_dev; struct iio_buffer *buffer = ib->buffer; struct dma_buf_attachment *attach; struct iio_dmabuf_priv *priv, *each; struct dma_buf *dmabuf; int err, fd; if (!buffer->access->attach_dmabuf || !buffer->access->detach_dmabuf || !buffer->access->enqueue_dmabuf) return -EPERM; if (copy_from_user(&fd, user_fd, sizeof(fd))) return -EFAULT; priv = kzalloc(sizeof(*priv), GFP_KERNEL); if (!priv) return -ENOMEM; spin_lock_init(&priv->lock); priv->context = dma_fence_context_alloc(1); dmabuf = dma_buf_get(fd); if (IS_ERR(dmabuf)) { err = PTR_ERR(dmabuf); goto err_free_priv; } attach = dma_buf_attach(dmabuf, indio_dev->dev.parent); if (IS_ERR(attach)) { err = PTR_ERR(attach); goto err_dmabuf_put; } err = iio_dma_resv_lock(dmabuf, nonblock); if (err) goto err_dmabuf_detach; priv->dir = buffer->direction == IIO_BUFFER_DIRECTION_IN ? DMA_FROM_DEVICE : DMA_TO_DEVICE; priv->sgt = dma_buf_map_attachment(attach, priv->dir); if (IS_ERR(priv->sgt)) { err = PTR_ERR(priv->sgt); dev_err(&indio_dev->dev, "Unable to map attachment: %d\n", err); goto err_resv_unlock; } kref_init(&priv->ref); priv->buffer = buffer; priv->attach = attach; attach->importer_priv = priv; priv->block = buffer->access->attach_dmabuf(buffer, attach); if (IS_ERR(priv->block)) { err = PTR_ERR(priv->block); goto err_dmabuf_unmap_attachment; } dma_resv_unlock(dmabuf->resv); mutex_lock(&buffer->dmabufs_mutex); /* * Check whether we already have an attachment for this driver/DMABUF * combo. If we do, refuse to attach. */ list_for_each_entry(each, &buffer->dmabufs, entry) { if (each->attach->dev == indio_dev->dev.parent && each->attach->dmabuf == dmabuf) { /* * We unlocked the reservation object, so going through * the cleanup code would mean re-locking it first. * At this stage it is simpler to free the attachment * using iio_buffer_dmabuf_put(). */ mutex_unlock(&buffer->dmabufs_mutex); iio_buffer_dmabuf_put(attach); return -EBUSY; } } /* Otherwise, add the new attachment to our dmabufs list.
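 *
 * For reference, userspace reaches this attach path with the ioctl
 * handled below (illustrative sketch; where the DMABUF fd comes from,
 * e.g. a DMA heap, is outside this driver):
 *
 *	int dmabuf_fd = get_dmabuf_fd_somehow();
 *	ioctl(buffer_fd, IIO_BUFFER_DMABUF_ATTACH_IOCTL, &dmabuf_fd);
 *
 * with get_dmabuf_fd_somehow() being a hypothetical stand-in.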
*/ list_add(&priv->entry, &buffer->dmabufs); mutex_unlock(&buffer->dmabufs_mutex); return 0; err_dmabuf_unmap_attachment: dma_buf_unmap_attachment(attach, priv->sgt, priv->dir); err_resv_unlock: dma_resv_unlock(dmabuf->resv); err_dmabuf_detach: dma_buf_detach(dmabuf, attach); err_dmabuf_put: dma_buf_put(dmabuf); err_free_priv: kfree(priv); return err; } static int iio_buffer_detach_dmabuf(struct iio_dev_buffer_pair *ib, int __user *user_req, bool nonblock) { struct iio_buffer *buffer = ib->buffer; struct iio_dev *indio_dev = ib->indio_dev; struct iio_dmabuf_priv *priv; struct dma_buf *dmabuf; int dmabuf_fd, ret = -EPERM; if (copy_from_user(&dmabuf_fd, user_req, sizeof(dmabuf_fd))) return -EFAULT; dmabuf = dma_buf_get(dmabuf_fd); if (IS_ERR(dmabuf)) return PTR_ERR(dmabuf); guard(mutex)(&buffer->dmabufs_mutex); list_for_each_entry(priv, &buffer->dmabufs, entry) { if (priv->attach->dev == indio_dev->dev.parent && priv->attach->dmabuf == dmabuf) { list_del(&priv->entry); /* Unref the reference from iio_buffer_attach_dmabuf() */ iio_buffer_dmabuf_put(priv->attach); ret = 0; break; } } dma_buf_put(dmabuf); return ret; } static const char * iio_buffer_dma_fence_get_driver_name(struct dma_fence *fence) { return "iio"; } static void iio_buffer_dma_fence_release(struct dma_fence *fence) { struct iio_dma_fence *iio_fence = container_of(fence, struct iio_dma_fence, base); kfree(iio_fence); } static const struct dma_fence_ops iio_buffer_dma_fence_ops = { .get_driver_name = iio_buffer_dma_fence_get_driver_name, .get_timeline_name = iio_buffer_dma_fence_get_driver_name, .release = iio_buffer_dma_fence_release, }; static int iio_buffer_enqueue_dmabuf(struct iio_dev_buffer_pair *ib, struct iio_dmabuf __user *iio_dmabuf_req, bool nonblock) { struct iio_buffer *buffer = ib->buffer; struct iio_dmabuf iio_dmabuf; struct dma_buf_attachment *attach; struct iio_dmabuf_priv *priv; struct iio_dma_fence *fence; struct dma_buf *dmabuf; unsigned long timeout; bool cookie, cyclic, dma_to_ram; long retl; u32 seqno; int ret; if (copy_from_user(&iio_dmabuf, iio_dmabuf_req, sizeof(iio_dmabuf))) return -EFAULT; if (iio_dmabuf.flags & ~IIO_BUFFER_DMABUF_SUPPORTED_FLAGS) return -EINVAL; cyclic = iio_dmabuf.flags & IIO_BUFFER_DMABUF_CYCLIC; /* Cyclic flag is only supported on output buffers */ if (cyclic && buffer->direction != IIO_BUFFER_DIRECTION_OUT) return -EINVAL; dmabuf = dma_buf_get(iio_dmabuf.fd); if (IS_ERR(dmabuf)) return PTR_ERR(dmabuf); if (!iio_dmabuf.bytes_used || iio_dmabuf.bytes_used > dmabuf->size) { ret = -EINVAL; goto err_dmabuf_put; } attach = iio_buffer_find_attachment(ib, dmabuf, nonblock); if (IS_ERR(attach)) { ret = PTR_ERR(attach); goto err_dmabuf_put; } priv = attach->importer_priv; fence = kmalloc(sizeof(*fence), GFP_KERNEL); if (!fence) { ret = -ENOMEM; goto err_attachment_put; } fence->priv = priv; seqno = atomic_add_return(1, &priv->seqno); /* * The transfers are guaranteed to be processed in the order they are * enqueued, so we can use a simple incrementing sequence number for * the dma_fence. */ dma_fence_init(&fence->base, &iio_buffer_dma_fence_ops, &priv->lock, priv->context, seqno); ret = iio_dma_resv_lock(dmabuf, nonblock); if (ret) goto err_fence_put; timeout = nonblock ? 
0 : msecs_to_jiffies(DMABUF_ENQUEUE_TIMEOUT_MS); dma_to_ram = buffer->direction == IIO_BUFFER_DIRECTION_IN; /* Make sure we don't have writers */ retl = dma_resv_wait_timeout(dmabuf->resv, dma_resv_usage_rw(dma_to_ram), true, timeout); if (retl == 0) retl = -EBUSY; if (retl < 0) { ret = (int)retl; goto err_resv_unlock; } if (buffer->access->lock_queue) buffer->access->lock_queue(buffer); ret = dma_resv_reserve_fences(dmabuf->resv, 1); if (ret) goto err_queue_unlock; dma_resv_add_fence(dmabuf->resv, &fence->base, dma_to_ram ? DMA_RESV_USAGE_WRITE : DMA_RESV_USAGE_READ); dma_resv_unlock(dmabuf->resv); cookie = dma_fence_begin_signalling(); ret = buffer->access->enqueue_dmabuf(buffer, priv->block, &fence->base, priv->sgt, iio_dmabuf.bytes_used, cyclic); if (ret) { /* * DMABUF enqueue failed, but we already added the fence. * Signal the error through the fence completion mechanism. */ iio_buffer_signal_dmabuf_done(&fence->base, ret); } if (buffer->access->unlock_queue) buffer->access->unlock_queue(buffer); dma_fence_end_signalling(cookie); dma_buf_put(dmabuf); return ret; err_queue_unlock: if (buffer->access->unlock_queue) buffer->access->unlock_queue(buffer); err_resv_unlock: dma_resv_unlock(dmabuf->resv); err_fence_put: dma_fence_put(&fence->base); err_attachment_put: iio_buffer_dmabuf_put(attach); err_dmabuf_put: dma_buf_put(dmabuf); return ret; } static void iio_buffer_cleanup(struct work_struct *work) { struct iio_dma_fence *fence = container_of(work, struct iio_dma_fence, work); struct iio_dmabuf_priv *priv = fence->priv; struct dma_buf_attachment *attach = priv->attach; dma_fence_put(&fence->base); iio_buffer_dmabuf_put(attach); } void iio_buffer_signal_dmabuf_done(struct dma_fence *fence, int ret) { struct iio_dma_fence *iio_fence = container_of(fence, struct iio_dma_fence, base); bool cookie = dma_fence_begin_signalling(); /* * Get a reference to the fence, so that it's not freed as soon as * it's signaled. */ dma_fence_get(fence); fence->error = ret; dma_fence_signal(fence); dma_fence_end_signalling(cookie); /* * The fence will be unref'd in iio_buffer_cleanup. * It can't be done here, as the unref functions might try to lock the * resv object, which can deadlock. 
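 *
 * (For context, userspace queues a transfer on an attached DMABUF with
 * the enqueue ioctl handled above; illustrative sketch, where struct
 * iio_dmabuf is the UAPI type and dmabuf_fd/nbytes are hypothetical:
 *
 *	struct iio_dmabuf req = {
 *		.fd = dmabuf_fd,
 *		.bytes_used = nbytes,
 *	};
 *	ioctl(buffer_fd, IIO_BUFFER_DMABUF_ENQUEUE_IOCTL, &req);
 *
 * The fence signalled here is what marks that queued transfer complete.)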
*/ INIT_WORK(&iio_fence->work, iio_buffer_cleanup); schedule_work(&iio_fence->work); } EXPORT_SYMBOL_GPL(iio_buffer_signal_dmabuf_done); static long iio_buffer_chrdev_ioctl(struct file *filp, unsigned int cmd, unsigned long arg) { struct iio_dev_buffer_pair *ib = filp->private_data; void __user *_arg = (void __user *)arg; bool nonblock = filp->f_flags & O_NONBLOCK; switch (cmd) { case IIO_BUFFER_DMABUF_ATTACH_IOCTL: return iio_buffer_attach_dmabuf(ib, _arg, nonblock); case IIO_BUFFER_DMABUF_DETACH_IOCTL: return iio_buffer_detach_dmabuf(ib, _arg, nonblock); case IIO_BUFFER_DMABUF_ENQUEUE_IOCTL: return iio_buffer_enqueue_dmabuf(ib, _arg, nonblock); default: return -EINVAL; } } static const struct file_operations iio_buffer_chrdev_fileops = { .owner = THIS_MODULE, .llseek = noop_llseek, .read = iio_buffer_read, .write = iio_buffer_write, .unlocked_ioctl = iio_buffer_chrdev_ioctl, .compat_ioctl = compat_ptr_ioctl, .poll = iio_buffer_poll, .release = iio_buffer_chrdev_release, }; static long iio_device_buffer_getfd(struct iio_dev *indio_dev, unsigned long arg) { struct iio_dev_opaque *iio_dev_opaque = to_iio_dev_opaque(indio_dev); int __user *ival = (int __user *)arg; struct iio_dev_buffer_pair *ib; struct iio_buffer *buffer; int fd, idx, ret; if (copy_from_user(&idx, ival, sizeof(idx))) return -EFAULT; if (idx >= iio_dev_opaque->attached_buffers_cnt) return -ENODEV; iio_device_get(indio_dev); buffer = iio_dev_opaque->attached_buffers[idx]; if (test_and_set_bit(IIO_BUSY_BIT_POS, &buffer->flags)) { ret = -EBUSY; goto error_iio_dev_put; } ib = kzalloc(sizeof(*ib), GFP_KERNEL); if (!ib) { ret = -ENOMEM; goto error_clear_busy_bit; } ib->indio_dev = indio_dev; ib->buffer = buffer; fd = anon_inode_getfd("iio:buffer", &iio_buffer_chrdev_fileops, ib, O_RDWR | O_CLOEXEC); if (fd < 0) { ret = fd; goto error_free_ib; } if (copy_to_user(ival, &fd, sizeof(fd))) { /* * "Leak" the fd, as there's not much we can do about this * anyway. 'fd' might have been closed already, as * anon_inode_getfd() called fd_install() on it, which made * it reachable by userland. * * Instead of allowing a malicious user to play tricks with * us, rely on the process exit path to do any necessary * cleanup, as in releasing the file, if still needed. 
*/ return -EFAULT; } return 0; error_free_ib: kfree(ib); error_clear_busy_bit: clear_bit(IIO_BUSY_BIT_POS, &buffer->flags); error_iio_dev_put: iio_device_put(indio_dev); return ret; } static long iio_device_buffer_ioctl(struct iio_dev *indio_dev, struct file *filp, unsigned int cmd, unsigned long arg) { switch (cmd) { case IIO_BUFFER_GET_FD_IOCTL: return iio_device_buffer_getfd(indio_dev, arg); default: return IIO_IOCTL_UNHANDLED; } } static int iio_channel_validate_scan_type(struct device *dev, int ch, const struct iio_scan_type *scan_type) { /* Verify that sample bits fit into storage */ if (scan_type->storagebits < scan_type->realbits + scan_type->shift) { dev_err(dev, "Channel %d storagebits (%d) < shifted realbits (%d + %d)\n", ch, scan_type->storagebits, scan_type->realbits, scan_type->shift); return -EINVAL; } return 0; } static int __iio_buffer_alloc_sysfs_and_mask(struct iio_buffer *buffer, struct iio_dev *indio_dev, int index) { struct iio_dev_opaque *iio_dev_opaque = to_iio_dev_opaque(indio_dev); unsigned int masklength = iio_get_masklength(indio_dev); struct iio_dev_attr *p; const struct iio_dev_attr *id_attr; struct attribute **attr; int ret, i, attrn, scan_el_attrcount, buffer_attrcount; const struct iio_chan_spec *channels; buffer_attrcount = 0; if (buffer->attrs) { while (buffer->attrs[buffer_attrcount]) buffer_attrcount++; } buffer_attrcount += ARRAY_SIZE(iio_buffer_attrs); scan_el_attrcount = 0; INIT_LIST_HEAD(&buffer->buffer_attr_list); channels = indio_dev->channels; if (channels) { /* new magic */ for (i = 0; i < indio_dev->num_channels; i++) { const struct iio_scan_type *scan_type; if (channels[i].scan_index < 0) continue; if (channels[i].has_ext_scan_type) { int j; /* * get_current_scan_type is required when using * extended scan types. 
*/ if (!indio_dev->info->get_current_scan_type) { ret = -EINVAL; goto error_cleanup_dynamic; } for (j = 0; j < channels[i].num_ext_scan_type; j++) { scan_type = &channels[i].ext_scan_type[j]; ret = iio_channel_validate_scan_type( &indio_dev->dev, i, scan_type); if (ret) goto error_cleanup_dynamic; } } else { scan_type = &channels[i].scan_type; ret = iio_channel_validate_scan_type( &indio_dev->dev, i, scan_type); if (ret) goto error_cleanup_dynamic; } ret = iio_buffer_add_channel_sysfs(indio_dev, buffer, &channels[i]); if (ret < 0) goto error_cleanup_dynamic; scan_el_attrcount += ret; if (channels[i].type == IIO_TIMESTAMP) iio_dev_opaque->scan_index_timestamp = channels[i].scan_index; } if (masklength && !buffer->scan_mask) { buffer->scan_mask = bitmap_zalloc(masklength, GFP_KERNEL); if (!buffer->scan_mask) { ret = -ENOMEM; goto error_cleanup_dynamic; } } } attrn = buffer_attrcount + scan_el_attrcount; attr = kcalloc(attrn + 1, sizeof(*attr), GFP_KERNEL); if (!attr) { ret = -ENOMEM; goto error_free_scan_mask; } memcpy(attr, iio_buffer_attrs, sizeof(iio_buffer_attrs)); if (!buffer->access->set_length) attr[0] = &dev_attr_length_ro.attr; if (buffer->access->flags & INDIO_BUFFER_FLAG_FIXED_WATERMARK) attr[2] = &dev_attr_watermark_ro.attr; if (buffer->attrs) for (i = 0, id_attr = buffer->attrs[i]; (id_attr = buffer->attrs[i]); i++) attr[ARRAY_SIZE(iio_buffer_attrs) + i] = (struct attribute *)&id_attr->dev_attr.attr; buffer->buffer_group.attrs = attr; for (i = 0; i < buffer_attrcount; i++) { struct attribute *wrapped; wrapped = iio_buffer_wrap_attr(buffer, attr[i]); if (!wrapped) { ret = -ENOMEM; goto error_free_buffer_attrs; } attr[i] = wrapped; } attrn = 0; list_for_each_entry(p, &buffer->buffer_attr_list, l) attr[attrn++] = &p->dev_attr.attr; buffer->buffer_group.name = kasprintf(GFP_KERNEL, "buffer%d", index); if (!buffer->buffer_group.name) { ret = -ENOMEM; goto error_free_buffer_attrs; } ret = iio_device_register_sysfs_group(indio_dev, &buffer->buffer_group); if (ret) goto error_free_buffer_attr_group_name; /* we only need to register the legacy groups for the first buffer */ if (index > 0) return 0; ret = iio_buffer_register_legacy_sysfs_groups(indio_dev, attr, buffer_attrcount, scan_el_attrcount); if (ret) goto error_free_buffer_attr_group_name; return 0; error_free_buffer_attr_group_name: kfree(buffer->buffer_group.name); error_free_buffer_attrs: kfree(buffer->buffer_group.attrs); error_free_scan_mask: bitmap_free(buffer->scan_mask); error_cleanup_dynamic: iio_free_chan_devattr_list(&buffer->buffer_attr_list); return ret; } static void __iio_buffer_free_sysfs_and_mask(struct iio_buffer *buffer, struct iio_dev *indio_dev, int index) { if (index == 0) iio_buffer_unregister_legacy_sysfs_groups(indio_dev); bitmap_free(buffer->scan_mask); kfree(buffer->buffer_group.name); kfree(buffer->buffer_group.attrs); iio_free_chan_devattr_list(&buffer->buffer_attr_list); } int iio_buffers_alloc_sysfs_and_mask(struct iio_dev *indio_dev) { struct iio_dev_opaque *iio_dev_opaque = to_iio_dev_opaque(indio_dev); const struct iio_chan_spec *channels; struct iio_buffer *buffer; int ret, i, idx; size_t sz; channels = indio_dev->channels; if (channels) { int ml = 0; for (i = 0; i < indio_dev->num_channels; i++) ml = max(ml, channels[i].scan_index + 1); ACCESS_PRIVATE(indio_dev, masklength) = ml; } if (!iio_dev_opaque->attached_buffers_cnt) return 0; for (idx = 0; idx < iio_dev_opaque->attached_buffers_cnt; idx++) { buffer = iio_dev_opaque->attached_buffers[idx]; ret = __iio_buffer_alloc_sysfs_and_mask(buffer, 
indio_dev, idx); if (ret) goto error_unwind_sysfs_and_mask; } sz = sizeof(*iio_dev_opaque->buffer_ioctl_handler); iio_dev_opaque->buffer_ioctl_handler = kzalloc(sz, GFP_KERNEL); if (!iio_dev_opaque->buffer_ioctl_handler) { ret = -ENOMEM; goto error_unwind_sysfs_and_mask; } iio_dev_opaque->buffer_ioctl_handler->ioctl = iio_device_buffer_ioctl; iio_device_ioctl_handler_register(indio_dev, iio_dev_opaque->buffer_ioctl_handler); return 0; error_unwind_sysfs_and_mask: while (idx--) { buffer = iio_dev_opaque->attached_buffers[idx]; __iio_buffer_free_sysfs_and_mask(buffer, indio_dev, idx); } return ret; } void iio_buffers_free_sysfs_and_mask(struct iio_dev *indio_dev) { struct iio_dev_opaque *iio_dev_opaque = to_iio_dev_opaque(indio_dev); struct iio_buffer *buffer; int i; if (!iio_dev_opaque->attached_buffers_cnt) return; iio_device_ioctl_handler_unregister(iio_dev_opaque->buffer_ioctl_handler); kfree(iio_dev_opaque->buffer_ioctl_handler); for (i = iio_dev_opaque->attached_buffers_cnt - 1; i >= 0; i--) { buffer = iio_dev_opaque->attached_buffers[i]; __iio_buffer_free_sysfs_and_mask(buffer, indio_dev, i); } } /** * iio_validate_scan_mask_onehot() - Validates that exactly one channel is selected * @indio_dev: the iio device * @mask: scan mask to be checked * * Return true if exactly one bit is set in the scan mask, false otherwise. It * can be used for devices where only one channel can be active for sampling at * a time. */ bool iio_validate_scan_mask_onehot(struct iio_dev *indio_dev, const unsigned long *mask) { return bitmap_weight(mask, iio_get_masklength(indio_dev)) == 1; } EXPORT_SYMBOL_GPL(iio_validate_scan_mask_onehot); static const void *iio_demux(struct iio_buffer *buffer, const void *datain) { struct iio_demux_table *t; if (list_empty(&buffer->demux_list)) return datain; list_for_each_entry(t, &buffer->demux_list, l) memcpy(buffer->demux_bounce + t->to, datain + t->from, t->length); return buffer->demux_bounce; } static int iio_push_to_buffer(struct iio_buffer *buffer, const void *data) { const void *dataout = iio_demux(buffer, data); int ret; ret = buffer->access->store_to(buffer, dataout); if (ret) return ret; /* * We can't just test for watermark to decide if we wake the poll queue * because read may request less samples than the watermark. */ wake_up_interruptible_poll(&buffer->pollq, EPOLLIN | EPOLLRDNORM); return 0; } /** * iio_push_to_buffers() - push to a registered buffer. * @indio_dev: iio_dev structure for device. * @data: Full scan. */ int iio_push_to_buffers(struct iio_dev *indio_dev, const void *data) { struct iio_dev_opaque *iio_dev_opaque = to_iio_dev_opaque(indio_dev); int ret; struct iio_buffer *buf; list_for_each_entry(buf, &iio_dev_opaque->buffer_list, buffer_list) { ret = iio_push_to_buffer(buf, data); if (ret < 0) return ret; } return 0; } EXPORT_SYMBOL_GPL(iio_push_to_buffers); /** * iio_push_to_buffers_with_ts_unaligned() - push to registered buffer, * no alignment or space requirements. * @indio_dev: iio_dev structure for device. * @data: channel data excluding the timestamp. * @data_sz: size of data. * @timestamp: timestamp for the sample data. * * This special variant of iio_push_to_buffers_with_timestamp() does * not require space for the timestamp, or 8 byte alignment of data. * It does however require an allocation on first call and additional * copies on all calls, so should be avoided if possible. 
*/ int iio_push_to_buffers_with_ts_unaligned(struct iio_dev *indio_dev, const void *data, size_t data_sz, int64_t timestamp) { struct iio_dev_opaque *iio_dev_opaque = to_iio_dev_opaque(indio_dev); /* * Conservative estimate - we can always safely copy the minimum * of either the data provided or the length of the destination buffer. * This relaxed limit allows the calling drivers to be lax about * tracking the size of the data they are pushing, at the cost of * unnecessary copying of padding. */ data_sz = min_t(size_t, indio_dev->scan_bytes, data_sz); if (iio_dev_opaque->bounce_buffer_size != indio_dev->scan_bytes) { void *bb; bb = devm_krealloc(&indio_dev->dev, iio_dev_opaque->bounce_buffer, indio_dev->scan_bytes, GFP_KERNEL); if (!bb) return -ENOMEM; iio_dev_opaque->bounce_buffer = bb; iio_dev_opaque->bounce_buffer_size = indio_dev->scan_bytes; } memcpy(iio_dev_opaque->bounce_buffer, data, data_sz); return iio_push_to_buffers_with_timestamp(indio_dev, iio_dev_opaque->bounce_buffer, timestamp); } EXPORT_SYMBOL_GPL(iio_push_to_buffers_with_ts_unaligned); /** * iio_buffer_release() - Free a buffer's resources * @ref: Pointer to the kref embedded in the iio_buffer struct * * This function is called when the last reference to the buffer has been * dropped. It will typically free all resources allocated by the buffer. Do not * call this function manually, always use iio_buffer_put() when done using a * buffer. */ static void iio_buffer_release(struct kref *ref) { struct iio_buffer *buffer = container_of(ref, struct iio_buffer, ref); mutex_destroy(&buffer->dmabufs_mutex); buffer->access->release(buffer); } /** * iio_buffer_get() - Grab a reference to the buffer * @buffer: The buffer to grab a reference for, may be NULL * * Returns the pointer to the buffer that was passed into the function. */ struct iio_buffer *iio_buffer_get(struct iio_buffer *buffer) { if (buffer) kref_get(&buffer->ref); return buffer; } EXPORT_SYMBOL_GPL(iio_buffer_get); /** * iio_buffer_put() - Release the reference to the buffer * @buffer: The buffer to release the reference for, may be NULL */ void iio_buffer_put(struct iio_buffer *buffer) { if (buffer) kref_put(&buffer->ref, iio_buffer_release); } EXPORT_SYMBOL_GPL(iio_buffer_put); /** * iio_device_attach_buffer - Attach a buffer to an IIO device * @indio_dev: The device the buffer should be attached to * @buffer: The buffer to attach to the device * * Return 0 if successful, negative if error. * * This function attaches a buffer to an IIO device. The buffer stays attached to * the device until the device is freed. For legacy reasons, the first attached * buffer will also be assigned to 'indio_dev->buffer'. * The array allocated here will be freed via the iio_device_detach_buffers() * call which is handled by the iio_device_free(). */ int iio_device_attach_buffer(struct iio_dev *indio_dev, struct iio_buffer *buffer) { struct iio_dev_opaque *iio_dev_opaque = to_iio_dev_opaque(indio_dev); struct iio_buffer **new, **old = iio_dev_opaque->attached_buffers; unsigned int cnt = iio_dev_opaque->attached_buffers_cnt; cnt++; new = krealloc(old, sizeof(*new) * cnt, GFP_KERNEL); if (!new) return -ENOMEM; iio_dev_opaque->attached_buffers = new; buffer = iio_buffer_get(buffer); /* first buffer is legacy; attach it to the IIO device directly */ if (!indio_dev->buffer) indio_dev->buffer = buffer; iio_dev_opaque->attached_buffers[cnt - 1] = buffer; iio_dev_opaque->attached_buffers_cnt = cnt; return 0; } EXPORT_SYMBOL_GPL(iio_device_attach_buffer);
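/*
 * Illustrative userspace sketch (not part of the driver above): one plausible
 * way to exercise the ioctls implemented in this file, assuming the DMABUF
 * UAPI from include/uapi/linux/iio/buffer.h (IIO_BUFFER_GET_FD_IOCTL, the
 * IIO_BUFFER_DMABUF_* ioctls and struct iio_dmabuf) and a dma-buf fd already
 * exported by a heap or another driver. dev_fd is an open /dev/iio:deviceX.
 * Error handling is deliberately minimal.
 */
#include <string.h>
#include <sys/ioctl.h>
#include <unistd.h>
#include <linux/iio/buffer.h>

static int iio_dmabuf_enqueue_once(int dev_fd, int dmabuf_fd, __u64 nbytes)
{
	struct iio_dmabuf req;
	int buf_fd = 0;	/* in: index of attached buffer 0; out: buffer chardev fd */

	if (ioctl(dev_fd, IIO_BUFFER_GET_FD_IOCTL, &buf_fd) < 0)
		return -1;

	/* Map the dma-buf for DMA by this device (refcounted attachment). */
	if (ioctl(buf_fd, IIO_BUFFER_DMABUF_ATTACH_IOCTL, &dmabuf_fd) < 0)
		goto err_close;

	/* Queue one transfer covering the first nbytes of the dma-buf. */
	memset(&req, 0, sizeof(req));
	req.fd = dmabuf_fd;
	req.bytes_used = nbytes;
	if (ioctl(buf_fd, IIO_BUFFER_DMABUF_ENQUEUE_IOCTL, &req) < 0)
		goto err_detach;

	return buf_fd;	/* keep open while I/O is in flight; closing detaches */

err_detach:
	ioctl(buf_fd, IIO_BUFFER_DMABUF_DETACH_IOCTL, &dmabuf_fd);
err_close:
	close(buf_fd);
	return -1;
}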
// SPDX-License-Identifier: GPL-2.0-or-later /* SCTP kernel implementation * (C) Copyright Red Hat Inc. 2017 * * This file is part of the SCTP kernel implementation * * These functions manipulate sctp stream queue/scheduling. * * Please send any bug reports or fixes you make to the * email address(es): * lksctp developers <linux-sctp@vger.kernel.org> * * Written or modified by: * Marcelo Ricardo Leitner <marcelo.leitner@gmail.com> */ #include <linux/list.h> #include <net/sctp/sctp.h> #include <net/sctp/sm.h> #include <net/sctp/stream_sched.h> /* Priority handling * RFC DRAFT ndata section 3.4 */ static void sctp_sched_prio_unsched_all(struct sctp_stream *stream); static struct sctp_stream_priorities *sctp_sched_prio_head_get(struct sctp_stream_priorities *p) { p->users++; return p; } static void sctp_sched_prio_head_put(struct sctp_stream_priorities *p) { if (p && --p->users == 0) kfree(p); } static struct sctp_stream_priorities *sctp_sched_prio_new_head( struct sctp_stream *stream, int prio, gfp_t gfp) { struct sctp_stream_priorities *p; p = kmalloc(sizeof(*p), gfp); if (!p) return NULL; INIT_LIST_HEAD(&p->prio_sched); INIT_LIST_HEAD(&p->active); p->next = NULL; p->prio = prio; p->users = 1; return p; } static struct sctp_stream_priorities *sctp_sched_prio_get_head( struct sctp_stream *stream, int prio, gfp_t gfp) { struct sctp_stream_priorities *p; int i; /* Look into scheduled priorities first, as they are sorted and * we can find it fast IF it's scheduled. */ list_for_each_entry(p, &stream->prio_list, prio_sched) { if (p->prio == prio) return sctp_sched_prio_head_get(p); if (p->prio > prio) break; } /* No luck. So we search on all streams now. */ for (i = 0; i < stream->outcnt; i++) { if (!SCTP_SO(stream, i)->ext) continue; p = SCTP_SO(stream, i)->ext->prio_head; if (!p) /* Means all other streams won't be initialized * as well. */ break; if (p->prio == prio) return sctp_sched_prio_head_get(p); } /* If not even there, allocate a new one.
*/ return sctp_sched_prio_new_head(stream, prio, gfp); } static void sctp_sched_prio_next_stream(struct sctp_stream_priorities *p) { struct list_head *pos; pos = p->next->prio_list.next; if (pos == &p->active) pos = pos->next; p->next = list_entry(pos, struct sctp_stream_out_ext, prio_list); } static bool sctp_sched_prio_unsched(struct sctp_stream_out_ext *soute) { bool scheduled = false; if (!list_empty(&soute->prio_list)) { struct sctp_stream_priorities *prio_head = soute->prio_head; /* Scheduled */ scheduled = true; if (prio_head->next == soute) /* Try to move to the next stream */ sctp_sched_prio_next_stream(prio_head); list_del_init(&soute->prio_list); /* Also unsched the priority if this was the last stream */ if (list_empty(&prio_head->active)) { list_del_init(&prio_head->prio_sched); /* If there is no stream left, clear next */ prio_head->next = NULL; } } return scheduled; } static void sctp_sched_prio_sched(struct sctp_stream *stream, struct sctp_stream_out_ext *soute) { struct sctp_stream_priorities *prio, *prio_head; prio_head = soute->prio_head; /* Nothing to do if already scheduled */ if (!list_empty(&soute->prio_list)) return; /* Schedule the stream. If there is a next, we schedule the new * one before it, so it's the last in round robin order. * If there isn't, we also have to schedule the priority. */ if (prio_head->next) { list_add(&soute->prio_list, prio_head->next->prio_list.prev); return; } list_add(&soute->prio_list, &prio_head->active); prio_head->next = soute; list_for_each_entry(prio, &stream->prio_list, prio_sched) { if (prio->prio > prio_head->prio) { list_add(&prio_head->prio_sched, prio->prio_sched.prev); return; } } list_add_tail(&prio_head->prio_sched, &stream->prio_list); } static int sctp_sched_prio_set(struct sctp_stream *stream, __u16 sid, __u16 prio, gfp_t gfp) { struct sctp_stream_out *sout = SCTP_SO(stream, sid); struct sctp_stream_out_ext *soute = sout->ext; struct sctp_stream_priorities *prio_head, *old; bool reschedule = false; old = soute->prio_head; if (old && old->prio == prio) return 0; prio_head = sctp_sched_prio_get_head(stream, prio, gfp); if (!prio_head) return -ENOMEM; reschedule = sctp_sched_prio_unsched(soute); soute->prio_head = prio_head; if (reschedule) sctp_sched_prio_sched(stream, soute); sctp_sched_prio_head_put(old); return 0; } static int sctp_sched_prio_get(struct sctp_stream *stream, __u16 sid, __u16 *value) { *value = SCTP_SO(stream, sid)->ext->prio_head->prio; return 0; } static int sctp_sched_prio_init(struct sctp_stream *stream) { INIT_LIST_HEAD(&stream->prio_list); return 0; } static int sctp_sched_prio_init_sid(struct sctp_stream *stream, __u16 sid, gfp_t gfp) { INIT_LIST_HEAD(&SCTP_SO(stream, sid)->ext->prio_list); return sctp_sched_prio_set(stream, sid, 0, gfp); } static void sctp_sched_prio_free_sid(struct sctp_stream *stream, __u16 sid) { sctp_sched_prio_head_put(SCTP_SO(stream, sid)->ext->prio_head); SCTP_SO(stream, sid)->ext->prio_head = NULL; } static void sctp_sched_prio_enqueue(struct sctp_outq *q, struct sctp_datamsg *msg) { struct sctp_stream *stream; struct sctp_chunk *ch; __u16 sid; ch = list_first_entry(&msg->chunks, struct sctp_chunk, frag_list); sid = sctp_chunk_stream_no(ch); stream = &q->asoc->stream; sctp_sched_prio_sched(stream, SCTP_SO(stream, sid)->ext); } static struct sctp_chunk *sctp_sched_prio_dequeue(struct sctp_outq *q) { struct sctp_stream *stream = &q->asoc->stream; struct sctp_stream_priorities *prio; struct sctp_stream_out_ext *soute; struct sctp_chunk *ch = NULL; /* Bail out quickly if queue 
is empty */ if (list_empty(&q->out_chunk_list)) goto out; /* Find which chunk is next. It's easy, it's either the current * one or the first chunk on the next active stream. */ if (stream->out_curr) { soute = stream->out_curr->ext; } else { prio = list_entry(stream->prio_list.next, struct sctp_stream_priorities, prio_sched); soute = prio->next; } ch = list_entry(soute->outq.next, struct sctp_chunk, stream_list); sctp_sched_dequeue_common(q, ch); out: return ch; } static void sctp_sched_prio_dequeue_done(struct sctp_outq *q, struct sctp_chunk *ch) { struct sctp_stream_priorities *prio; struct sctp_stream_out_ext *soute; __u16 sid; /* Last chunk on that msg, move to the next stream on * this priority. */ sid = sctp_chunk_stream_no(ch); soute = SCTP_SO(&q->asoc->stream, sid)->ext; prio = soute->prio_head; sctp_sched_prio_next_stream(prio); if (list_empty(&soute->outq)) sctp_sched_prio_unsched(soute); } static void sctp_sched_prio_sched_all(struct sctp_stream *stream) { struct sctp_association *asoc; struct sctp_stream_out *sout; struct sctp_chunk *ch; asoc = container_of(stream, struct sctp_association, stream); list_for_each_entry(ch, &asoc->outqueue.out_chunk_list, list) { __u16 sid; sid = sctp_chunk_stream_no(ch); sout = SCTP_SO(stream, sid); if (sout->ext) sctp_sched_prio_sched(stream, sout->ext); } } static void sctp_sched_prio_unsched_all(struct sctp_stream *stream) { struct sctp_stream_priorities *p, *tmp; struct sctp_stream_out_ext *soute, *souttmp; list_for_each_entry_safe(p, tmp, &stream->prio_list, prio_sched) list_for_each_entry_safe(soute, souttmp, &p->active, prio_list) sctp_sched_prio_unsched(soute); } static struct sctp_sched_ops sctp_sched_prio = { .set = sctp_sched_prio_set, .get = sctp_sched_prio_get, .init = sctp_sched_prio_init, .init_sid = sctp_sched_prio_init_sid, .free_sid = sctp_sched_prio_free_sid, .enqueue = sctp_sched_prio_enqueue, .dequeue = sctp_sched_prio_dequeue, .dequeue_done = sctp_sched_prio_dequeue_done, .sched_all = sctp_sched_prio_sched_all, .unsched_all = sctp_sched_prio_unsched_all, }; void sctp_sched_ops_prio_init(void) { sctp_sched_ops_register(SCTP_SS_PRIO, &sctp_sched_prio); }
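/*
 * Illustrative userspace sketch (not part of the kernel code above): how an
 * application would select this scheduler and demote one stream, assuming
 * the SCTP_STREAM_SCHEDULER / SCTP_STREAM_SCHEDULER_VALUE socket options and
 * the sctp_assoc_value / sctp_stream_value layouts from the uapi
 * <linux/sctp.h>. Note that a lower numeric priority is served first:
 * prio_list above is kept sorted ascending and dequeue starts from its head,
 * so streams left at the default priority 0 take precedence here.
 */
#include <sys/socket.h>
#include <netinet/in.h>
#include <linux/sctp.h>

static int demote_stream_two(int sd)
{
	struct sctp_assoc_value av = {
		.assoc_id = 0,		/* whole socket on one-to-one sockets */
		.assoc_value = SCTP_SS_PRIO,
	};
	struct sctp_stream_value sv = {
		.assoc_id = 0,
		.stream_id = 2,
		.stream_value = 10,	/* larger value = scheduled later */
	};

	if (setsockopt(sd, IPPROTO_SCTP, SCTP_STREAM_SCHEDULER,
		       &av, sizeof(av)) < 0)
		return -1;
	return setsockopt(sd, IPPROTO_SCTP, SCTP_STREAM_SCHEDULER_VALUE,
			  &sv, sizeof(sv));
}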
// SPDX-License-Identifier: GPL-2.0 #include <linux/kernel.h> #include <linux/errno.h> #include <linux/fs.h> #include <linux/file.h> #include <linux/mm.h> #include <linux/slab.h> #include <linux/syscalls.h> #include <linux/io_uring.h> #include <uapi/linux/io_uring.h> #include "../fs/internal.h" #include "io_uring.h" #include "truncate.h" struct io_ftrunc { struct file *file; loff_t len; }; int io_ftruncate_prep(struct io_kiocb *req, const struct io_uring_sqe *sqe) { struct io_ftrunc *ft = io_kiocb_to_cmd(req, struct io_ftrunc); if (sqe->rw_flags || sqe->addr || sqe->len || sqe->buf_index || sqe->splice_fd_in || sqe->addr3) return -EINVAL; ft->len = READ_ONCE(sqe->off); req->flags |= REQ_F_FORCE_ASYNC; return 0; } int io_ftruncate(struct io_kiocb *req, unsigned int issue_flags) { struct io_ftrunc *ft = io_kiocb_to_cmd(req, struct io_ftrunc); int ret; WARN_ON_ONCE(issue_flags & IO_URING_F_NONBLOCK); ret = do_ftruncate(req->file, ft->len, 1); io_req_set_res(req, ret, 0); return IOU_OK; }
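/*
 * Illustrative userspace sketch (not part of the kernel code above): issuing
 * the truncate through io_uring, assuming a liburing recent enough to provide
 * io_uring_prep_ftruncate() for IORING_OP_FTRUNCATE. The length travels in
 * sqe->off, matching io_ftruncate_prep() above, and the request is always
 * punted to async context, matching REQ_F_FORCE_ASYNC.
 */
#include <errno.h>
#include <liburing.h>

static int ring_ftruncate(struct io_uring *ring, int fd, loff_t len)
{
	struct io_uring_sqe *sqe = io_uring_get_sqe(ring);
	struct io_uring_cqe *cqe;
	int ret;

	if (!sqe)
		return -EBUSY;	/* submission queue full; caller may retry */

	io_uring_prep_ftruncate(sqe, fd, len);
	io_uring_submit(ring);

	ret = io_uring_wait_cqe(ring, &cqe);
	if (ret < 0)
		return ret;
	ret = cqe->res;		/* 0 on success, -errno on failure */
	io_uring_cqe_seen(ring, cqe);
	return ret;
}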
/* SPDX-License-Identifier: GPL-2.0 */ #ifndef _LINUX_SEQ_BUF_H #define _LINUX_SEQ_BUF_H #include <linux/bug.h> #include <linux/minmax.h> #include <linux/seq_file.h> #include <linux/types.h> /* * Trace sequences are used to allow a function to call several other functions * to create a string of data to use. */ /** * struct seq_buf - seq buffer structure * @buffer: pointer to the buffer * @size: size of the buffer * @len: the amount of data inside the buffer */ struct seq_buf { char *buffer; size_t size; size_t len; }; #define DECLARE_SEQ_BUF(NAME, SIZE) \ struct seq_buf NAME = { \ .buffer = (char[SIZE]) { 0 }, \ .size = SIZE, \ } static inline void seq_buf_clear(struct seq_buf *s) { s->len = 0; if (s->size) s->buffer[0] = '\0'; } static inline void seq_buf_init(struct seq_buf *s, char *buf, unsigned int size) { s->buffer = buf; s->size = size; seq_buf_clear(s); } /* * A seq_buf has a buffer that might overflow. When this happens * len is set to be greater than size. */ static inline bool seq_buf_has_overflowed(struct seq_buf *s) { return s->len > s->size; } static inline void seq_buf_set_overflow(struct seq_buf *s) { s->len = s->size + 1; } /* * How much buffer is left on the seq_buf? */ static inline unsigned int seq_buf_buffer_left(struct seq_buf *s) { if (seq_buf_has_overflowed(s)) return 0; return s->size - s->len; } /* How much buffer was written? */ static inline unsigned int seq_buf_used(struct seq_buf *s) { return min(s->len, s->size); } /** * seq_buf_str - get NUL-terminated C string from seq_buf * @s: the seq_buf handle * * This makes sure that the buffer in @s is NUL-terminated and * safe to read as a string. * * Note, if this is called when the buffer has overflowed, then * the last byte of the buffer is zeroed, and the len will still * point past it. * * After this function is called, s->buffer is safe to use * in string operations. * * Returns: @s->buffer after making sure it is terminated. */ static inline const char *seq_buf_str(struct seq_buf *s) { if (WARN_ON(s->size == 0)) return ""; if (seq_buf_buffer_left(s)) s->buffer[s->len] = 0; else s->buffer[s->size - 1] = 0; return s->buffer; } /** * seq_buf_get_buf - get buffer to write arbitrary data to * @s: the seq_buf handle * @bufp: the beginning of the buffer is stored here * * Returns: the number of bytes available in the buffer, or zero if * there's no space. */ static inline size_t seq_buf_get_buf(struct seq_buf *s, char **bufp) { WARN_ON(s->len > s->size + 1); if (s->len < s->size) { *bufp = s->buffer + s->len; return s->size - s->len; } *bufp = NULL; return 0; } /** * seq_buf_commit - commit data to the buffer * @s: the seq_buf handle * @num: the number of bytes to commit * * Commit @num bytes of data written to a buffer previously acquired * by seq_buf_get_buf(). To signal an error condition, or that the data * didn't fit in the available space, pass a negative @num value.
*/ static inline void seq_buf_commit(struct seq_buf *s, int num) { if (num < 0) { seq_buf_set_overflow(s); } else { /* num must be negative on overflow */ BUG_ON(s->len + num > s->size); s->len += num; } } extern __printf(2, 3) int seq_buf_printf(struct seq_buf *s, const char *fmt, ...); extern __printf(2, 0) int seq_buf_vprintf(struct seq_buf *s, const char *fmt, va_list args); extern int seq_buf_print_seq(struct seq_file *m, struct seq_buf *s); extern int seq_buf_to_user(struct seq_buf *s, char __user *ubuf, size_t start, int cnt); extern int seq_buf_puts(struct seq_buf *s, const char *str); extern int seq_buf_putc(struct seq_buf *s, unsigned char c); extern int seq_buf_putmem(struct seq_buf *s, const void *mem, unsigned int len); extern int seq_buf_putmem_hex(struct seq_buf *s, const void *mem, unsigned int len); extern int seq_buf_path(struct seq_buf *s, const struct path *path, const char *esc); extern int seq_buf_hex_dump(struct seq_buf *s, const char *prefix_str, int prefix_type, int rowsize, int groupsize, const void *buf, size_t len, bool ascii); #ifdef CONFIG_BINARY_PRINTF __printf(2, 0) int seq_buf_bprintf(struct seq_buf *s, const char *fmt, const u32 *binary); #endif void seq_buf_do_printk(struct seq_buf *s, const char *lvl); #endif /* _LINUX_SEQ_BUF_H */
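/*
 * Minimal usage sketch (illustrative, not part of the header above): a stack
 * seq_buf built with DECLARE_SEQ_BUF() is filled incrementally and read back
 * as a guaranteed NUL-terminated string, with overflow checked once at the
 * end instead of after every write.
 */
#include <linux/printk.h>
#include <linux/seq_buf.h>

static void seq_buf_example(void)
{
	DECLARE_SEQ_BUF(sb, 64);	/* 64-byte stack-backed buffer */
	int i;

	for (i = 0; i < 4; i++)
		seq_buf_printf(&sb, "%d ", i);
	seq_buf_puts(&sb, "done");

	if (seq_buf_has_overflowed(&sb))
		pr_warn("seq_buf example: 64 bytes were not enough\n");
	else
		pr_info("seq_buf example: \"%s\" (%u bytes used)\n",
			seq_buf_str(&sb), seq_buf_used(&sb));
}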
// SPDX-License-Identifier: GPL-2.0 /* * of.c The helpers for hcd device tree support * * Copyright (C) 2016 Freescale Semiconductor, Inc. * Author: Peter Chen <peter.chen@freescale.com> * Copyright (C) 2017 Johan Hovold <johan@kernel.org> */ #include <linux/of.h> #include <linux/of_graph.h> #include <linux/usb/of.h> /** * usb_of_get_device_node() - get a USB device node * @hub: hub to which device is connected * @port1: one-based index of port * * Look up the node of a USB device given its parent hub device and one-based * port number. * * Return: A pointer to the node with incremented refcount if found, or * %NULL otherwise. */ struct device_node *usb_of_get_device_node(struct usb_device *hub, int port1) { struct device_node *node; u32 reg; for_each_child_of_node(hub->dev.of_node, node) { if (of_property_read_u32(node, "reg", &reg)) continue; if (reg == port1) return node; } return NULL; } EXPORT_SYMBOL_GPL(usb_of_get_device_node); /** * usb_of_has_combined_node() - determine whether a device has a combined node * @udev: USB device * * Determine whether a USB device has a so called combined node which is * shared with its sole interface. This is the case if and only if the device * has a node and its descriptors report the following: * * 1) bDeviceClass is 0 or 9, and * 2) bNumConfigurations is 1, and * 3) bNumInterfaces is 1. * * Return: True iff the device has a device node and its descriptors match the * criteria for a combined node. */ bool usb_of_has_combined_node(struct usb_device *udev) { struct usb_device_descriptor *ddesc = &udev->descriptor; struct usb_config_descriptor *cdesc; if (!udev->dev.of_node) return false; switch (ddesc->bDeviceClass) { case USB_CLASS_PER_INTERFACE: case USB_CLASS_HUB: if (ddesc->bNumConfigurations == 1) { cdesc = &udev->config->desc; if (cdesc->bNumInterfaces == 1) return true; } } return false; } EXPORT_SYMBOL_GPL(usb_of_has_combined_node); static bool usb_of_has_devices_or_graph(const struct usb_device *hub) { const struct device_node *np = hub->dev.of_node; struct device_node *child; if (of_graph_is_present(np)) return true; for_each_child_of_node(np, child) { if (of_property_present(child, "reg")) { of_node_put(child); return true; } } return false; } /** * usb_of_get_connect_type() - get a USB hub's port connect_type * @hub: hub to which the port belongs * @port1: one-based index of port * * Get the connect_type of @port1 based on the device node for @hub. If the * port is described in the OF graph, the connect_type is "hotplug". If the * @hub has a child device with a 'reg' property equal to @port1 the * connect_type is "hard-wired". If there isn't an OF graph or child node at * all then the connect_type is "unknown". Otherwise, the port is considered * "unused" because it isn't described at all. * * Return: A connect_type for @port1 based on the device node for @hub.
*/ enum usb_port_connect_type usb_of_get_connect_type(struct usb_device *hub, int port1) { struct device_node *np, *child, *ep, *remote_np; enum usb_port_connect_type connect_type; /* Only set connect_type if binding has ports/hardwired devices. */ if (!usb_of_has_devices_or_graph(hub)) return USB_PORT_CONNECT_TYPE_UNKNOWN; /* Assume port is unused if there's a graph or a child node. */ connect_type = USB_PORT_NOT_USED; np = hub->dev.of_node; /* * Hotplug ports are connected to an available remote node, e.g. * usb-a-connector compatible node, in the OF graph. */ if (of_graph_is_present(np)) { ep = of_graph_get_endpoint_by_regs(np, port1, -1); if (ep) { remote_np = of_graph_get_remote_port_parent(ep); of_node_put(ep); if (of_device_is_available(remote_np)) connect_type = USB_PORT_CONNECT_TYPE_HOT_PLUG; of_node_put(remote_np); } } /* * Hard-wired ports are child nodes with a reg property corresponding * to the port number, i.e. a usb device. */ child = usb_of_get_device_node(hub, port1); if (of_device_is_available(child)) connect_type = USB_PORT_CONNECT_TYPE_HARD_WIRED; of_node_put(child); return connect_type; } EXPORT_SYMBOL_GPL(usb_of_get_connect_type); /** * usb_of_get_interface_node() - get a USB interface node * @udev: USB device of interface * @config: configuration value * @ifnum: interface number * * Look up the node of a USB interface given its USB device, configuration * value and interface number. * * Return: A pointer to the node with incremented refcount if found, or * %NULL otherwise. */ struct device_node * usb_of_get_interface_node(struct usb_device *udev, u8 config, u8 ifnum) { struct device_node *node; u32 reg[2]; for_each_child_of_node(udev->dev.of_node, node) { if (of_property_read_u32_array(node, "reg", reg, 2)) continue; if (reg[0] == ifnum && reg[1] == config) return node; } return NULL; } EXPORT_SYMBOL_GPL(usb_of_get_interface_node);
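/*
 * Illustrative device tree shape (not part of this file) for the lookups
 * above, loosely following the USB device binding: a hub with one hard-wired
 * device on port 1, which usb_of_get_device_node() finds via the child's
 * "reg" property. Compatible strings and addresses here are made up for the
 * example.
 *
 *	&usb1 {
 *		hub@1 {
 *			compatible = "usb451,8142";
 *			reg = <1>;
 *
 *			ethernet@1 {		// hard-wired on hub port 1
 *				compatible = "usbb95,1720";
 *				reg = <1>;
 *			};
 *		};
 *	};
 *
 * For a device without a combined node, an interface node would carry a
 * two-cell reg, e.g. "interface@0,1" with reg = <0 1> for interface 0 in
 * configuration 1 -- the <interface configuration> order that the reg[]
 * check in usb_of_get_interface_node() above expects.
 */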
/* SPDX-License-Identifier: GPL-2.0 */ /* thread_info.h: common low-level thread information accessors * * Copyright (C) 2002 David Howells (dhowells@redhat.com) * - Incorporating suggestions made by Linus Torvalds */ #ifndef _LINUX_THREAD_INFO_H #define _LINUX_THREAD_INFO_H #include <linux/types.h> #include <linux/limits.h> #include <linux/bug.h> #include <linux/restart_block.h> #include <linux/errno.h> #ifdef CONFIG_THREAD_INFO_IN_TASK /* * For CONFIG_THREAD_INFO_IN_TASK kernels we need <asm/current.h> for the * definition of current, but for !CONFIG_THREAD_INFO_IN_TASK kernels, * including <asm/current.h> can cause a circular dependency on some platforms. */ #include <asm/current.h> #define current_thread_info() ((struct thread_info *)current) #endif #include <linux/bitops.h> /* * For per-arch arch_within_stack_frames() implementations, defined in * asm/thread_info.h. */ enum { BAD_STACK = -1, NOT_STACK = 0, GOOD_FRAME, GOOD_STACK, }; #ifdef CONFIG_GENERIC_ENTRY enum syscall_work_bit { SYSCALL_WORK_BIT_SECCOMP, SYSCALL_WORK_BIT_SYSCALL_TRACEPOINT, SYSCALL_WORK_BIT_SYSCALL_TRACE, SYSCALL_WORK_BIT_SYSCALL_EMU, SYSCALL_WORK_BIT_SYSCALL_AUDIT, SYSCALL_WORK_BIT_SYSCALL_USER_DISPATCH, SYSCALL_WORK_BIT_SYSCALL_EXIT_TRAP, }; #define SYSCALL_WORK_SECCOMP BIT(SYSCALL_WORK_BIT_SECCOMP) #define SYSCALL_WORK_SYSCALL_TRACEPOINT BIT(SYSCALL_WORK_BIT_SYSCALL_TRACEPOINT) #define SYSCALL_WORK_SYSCALL_TRACE BIT(SYSCALL_WORK_BIT_SYSCALL_TRACE) #define SYSCALL_WORK_SYSCALL_EMU BIT(SYSCALL_WORK_BIT_SYSCALL_EMU) #define SYSCALL_WORK_SYSCALL_AUDIT BIT(SYSCALL_WORK_BIT_SYSCALL_AUDIT) #define SYSCALL_WORK_SYSCALL_USER_DISPATCH BIT(SYSCALL_WORK_BIT_SYSCALL_USER_DISPATCH) #define SYSCALL_WORK_SYSCALL_EXIT_TRAP BIT(SYSCALL_WORK_BIT_SYSCALL_EXIT_TRAP) #endif #include <asm/thread_info.h> #ifndef TIF_NEED_RESCHED_LAZY #ifdef CONFIG_ARCH_HAS_PREEMPT_LAZY #error Inconsistent PREEMPT_LAZY #endif #define TIF_NEED_RESCHED_LAZY TIF_NEED_RESCHED #define _TIF_NEED_RESCHED_LAZY _TIF_NEED_RESCHED #endif #ifdef __KERNEL__ #ifndef arch_set_restart_data #define arch_set_restart_data(restart) do { } while (0) #endif static inline long set_restart_fn(struct restart_block *restart, long (*fn)(struct restart_block *)) { restart->fn = fn; arch_set_restart_data(restart); return -ERESTART_RESTARTBLOCK; } #ifndef THREAD_ALIGN #define THREAD_ALIGN THREAD_SIZE #endif #define THREADINFO_GFP (GFP_KERNEL_ACCOUNT | __GFP_ZERO) /* * flag set/clear/test wrappers * - pass TIF_xxxx constants to these functions */ static inline void set_ti_thread_flag(struct thread_info *ti, int flag) { set_bit(flag, (unsigned long *)&ti->flags); } static inline void clear_ti_thread_flag(struct thread_info *ti, int flag)
{ clear_bit(flag, (unsigned long *)&ti->flags); } static inline void update_ti_thread_flag(struct thread_info *ti, int flag, bool value) { if (value) set_ti_thread_flag(ti, flag); else clear_ti_thread_flag(ti, flag); } static inline int test_and_set_ti_thread_flag(struct thread_info *ti, int flag) { return test_and_set_bit(flag, (unsigned long *)&ti->flags); } static inline int test_and_clear_ti_thread_flag(struct thread_info *ti, int flag) { return test_and_clear_bit(flag, (unsigned long *)&ti->flags); } static inline int test_ti_thread_flag(struct thread_info *ti, int flag) { return test_bit(flag, (unsigned long *)&ti->flags); } /* * This may be used in noinstr code, and needs to be __always_inline to prevent * inadvertent instrumentation. */ static __always_inline unsigned long read_ti_thread_flags(struct thread_info *ti) { return READ_ONCE(ti->flags); } #define set_thread_flag(flag) \ set_ti_thread_flag(current_thread_info(), flag) #define clear_thread_flag(flag) \ clear_ti_thread_flag(current_thread_info(), flag) #define update_thread_flag(flag, value) \ update_ti_thread_flag(current_thread_info(), flag, value) #define test_and_set_thread_flag(flag) \ test_and_set_ti_thread_flag(current_thread_info(), flag) #define test_and_clear_thread_flag(flag) \ test_and_clear_ti_thread_flag(current_thread_info(), flag) #define test_thread_flag(flag) \ test_ti_thread_flag(current_thread_info(), flag) #define read_thread_flags() \ read_ti_thread_flags(current_thread_info()) #define read_task_thread_flags(t) \ read_ti_thread_flags(task_thread_info(t)) #ifdef CONFIG_GENERIC_ENTRY #define set_syscall_work(fl) \ set_bit(SYSCALL_WORK_BIT_##fl, &current_thread_info()->syscall_work) #define test_syscall_work(fl) \ test_bit(SYSCALL_WORK_BIT_##fl, &current_thread_info()->syscall_work) #define clear_syscall_work(fl) \ clear_bit(SYSCALL_WORK_BIT_##fl, &current_thread_info()->syscall_work) #define set_task_syscall_work(t, fl) \ set_bit(SYSCALL_WORK_BIT_##fl, &task_thread_info(t)->syscall_work) #define test_task_syscall_work(t, fl) \ test_bit(SYSCALL_WORK_BIT_##fl, &task_thread_info(t)->syscall_work) #define clear_task_syscall_work(t, fl) \ clear_bit(SYSCALL_WORK_BIT_##fl, &task_thread_info(t)->syscall_work) #else /* CONFIG_GENERIC_ENTRY */ #define set_syscall_work(fl) \ set_ti_thread_flag(current_thread_info(), TIF_##fl) #define test_syscall_work(fl) \ test_ti_thread_flag(current_thread_info(), TIF_##fl) #define clear_syscall_work(fl) \ clear_ti_thread_flag(current_thread_info(), TIF_##fl) #define set_task_syscall_work(t, fl) \ set_ti_thread_flag(task_thread_info(t), TIF_##fl) #define test_task_syscall_work(t, fl) \ test_ti_thread_flag(task_thread_info(t), TIF_##fl) #define clear_task_syscall_work(t, fl) \ clear_ti_thread_flag(task_thread_info(t), TIF_##fl) #endif /* !CONFIG_GENERIC_ENTRY */ #ifdef _ASM_GENERIC_BITOPS_INSTRUMENTED_NON_ATOMIC_H static __always_inline bool tif_test_bit(int bit) { return arch_test_bit(bit, (unsigned long *)(&current_thread_info()->flags)); } #else static __always_inline bool tif_test_bit(int bit) { return test_bit(bit, (unsigned long *)(&current_thread_info()->flags)); } #endif /* _ASM_GENERIC_BITOPS_INSTRUMENTED_NON_ATOMIC_H */ static __always_inline bool tif_need_resched(void) { return tif_test_bit(TIF_NEED_RESCHED); } #ifndef CONFIG_HAVE_ARCH_WITHIN_STACK_FRAMES static inline int arch_within_stack_frames(const void * const stack, const void * const stackend, const void *obj, unsigned long len) { return 0; } #endif #ifndef arch_setup_new_exec static inline void arch_setup_new_exec(void) { } #endif void
arch_task_cache_init(void); /* for CONFIG_SH */ void arch_release_task_struct(struct task_struct *tsk); int arch_dup_task_struct(struct task_struct *dst, struct task_struct *src); #endif /* __KERNEL__ */ #endif /* _LINUX_THREAD_INFO_H */
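/*
 * Illustrative sketch (not part of the header above): how a restartable
 * syscall typically uses set_restart_fn(). The my_-prefixed names are
 * hypothetical. Stashing the continuation in current->restart_block and
 * returning -ERESTART_RESTARTBLOCK makes the signal code re-enter
 * my_restart_fn() via sys_restart_syscall() instead of restarting the
 * original syscall from scratch.
 */
#include <linux/sched.h>
#include <linux/thread_info.h>

static long my_restart_fn(struct restart_block *restart)
{
	/* resume using whatever state the first pass stashed in *restart */
	return 0;
}

static long my_interruptible_syscall(void)
{
	struct restart_block *restart = &current->restart_block;

	/* ... wait for something; if interrupted by a signal: ... */
	return set_restart_fn(restart, my_restart_fn);
}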
/* SPDX-License-Identifier: GPL-2.0 */ #ifndef _LINUX_MEMREMAP_H_ #define _LINUX_MEMREMAP_H_ #include <linux/mmzone.h> #include <linux/range.h> #include <linux/ioport.h> #include <linux/percpu-refcount.h> struct resource; struct device; /** * struct vmem_altmap - pre-allocated storage for vmemmap_populate * @base_pfn: base of the entire dev_pagemap mapping * @reserve: pages mapped, but reserved for driver use (relative to @base) * @free: free pages set aside in the mapping for memmap storage * @align: pages reserved to meet allocation alignments * @alloc: track pages consumed, private to vmemmap_populate() */ struct vmem_altmap { unsigned long base_pfn; const unsigned long end_pfn; const unsigned long reserve; unsigned long free; unsigned long align; unsigned long alloc; bool inaccessible; }; /* * Specialize ZONE_DEVICE memory into multiple types, each of which has a * different usage. * * MEMORY_DEVICE_PRIVATE: * Device memory that is not directly addressable by the CPU: CPU can neither * read nor write private memory. In this case, we do still have struct pages * backing the device memory. Doing so simplifies the implementation, but it is * important to remember that there are certain points at which the struct page * must be treated as an opaque object, rather than a "normal" struct page. * * A more complete discussion of unaddressable memory may be found in * include/linux/hmm.h and Documentation/mm/hmm.rst. * * MEMORY_DEVICE_COHERENT: * Device memory that is cache coherent from device and CPU point of view. This * is used on platforms that have an advanced system bus (like CAPI or CXL). A * driver can hotplug the device memory using ZONE_DEVICE and with that memory * type. Any page of a process can be migrated to such memory. However no one * should be allowed to pin such memory so that it can always be evicted. * * MEMORY_DEVICE_FS_DAX: * Host memory that has similar access semantics as System RAM i.e. DMA * coherent and supports page pinning. In support of coordinating page * pinning vs other operations MEMORY_DEVICE_FS_DAX arranges for a * wakeup event whenever a page is unpinned and becomes idle. This * wakeup is used to coordinate physical address space management (ex: * fs truncate/hole punch) vs pinned pages (ex: device dma). * * MEMORY_DEVICE_GENERIC: * Host memory that has similar access semantics as System RAM i.e. DMA * coherent and supports page pinning. This is for example used by DAX devices * that expose memory using a character device. * * MEMORY_DEVICE_PCI_P2PDMA: * Device memory residing in a PCI BAR intended for use with Peer-to-Peer * transactions.
*/ enum memory_type { /* 0 is reserved to catch uninitialized type fields */ MEMORY_DEVICE_PRIVATE = 1, MEMORY_DEVICE_COHERENT, MEMORY_DEVICE_FS_DAX, MEMORY_DEVICE_GENERIC, MEMORY_DEVICE_PCI_P2PDMA, }; struct dev_pagemap_ops { /* * Called once the page refcount reaches 0. The reference count will be * reset to one by the core code after the method is called to prepare * for handing out the page again. */ void (*page_free)(struct page *page); /* * Used for private (un-addressable) device memory only. Must migrate * the page back to a CPU accessible page. */ vm_fault_t (*migrate_to_ram)(struct vm_fault *vmf); /* * Handle a memory failure that happens on a range of pfns. Notify the * processes who are using these pfns, and try to recover the data on * them if necessary. The mf_flags is finally passed to the recover * function through the whole notify routine. * * When this is not implemented, or it returns -EOPNOTSUPP, the caller * will fall back to a common handler called mf_generic_kill_procs(). */ int (*memory_failure)(struct dev_pagemap *pgmap, unsigned long pfn, unsigned long nr_pages, int mf_flags); }; #define PGMAP_ALTMAP_VALID (1 << 0) /** * struct dev_pagemap - metadata for ZONE_DEVICE mappings * @altmap: pre-allocated/reserved memory for vmemmap allocations * @ref: reference count that pins the devm_memremap_pages() mapping * @done: completion for @ref * @type: memory type: see MEMORY_* above in memremap.h * @flags: PGMAP_* flags to specify detailed behavior * @vmemmap_shift: structural definition of how the vmemmap page metadata * is populated, specifically the metadata page order. * A zero value (default) uses base pages as the vmemmap metadata * representation. A bigger value will set up compound struct pages * of the requested order value. * @ops: method table * @owner: an opaque pointer identifying the entity that manages this * instance. Used by various helpers to make sure that no * foreign ZONE_DEVICE memory is accessed.
* @nr_range: number of ranges to be mapped * @range: range to be mapped when nr_range == 1 * @ranges: array of ranges to be mapped when nr_range > 1 */ struct dev_pagemap { struct vmem_altmap altmap; struct percpu_ref ref; struct completion done; enum memory_type type; unsigned int flags; unsigned long vmemmap_shift; const struct dev_pagemap_ops *ops; void *owner; int nr_range; union { struct range range; DECLARE_FLEX_ARRAY(struct range, ranges); }; }; static inline bool pgmap_has_memory_failure(struct dev_pagemap *pgmap) { return pgmap->ops && pgmap->ops->memory_failure; } static inline struct vmem_altmap *pgmap_altmap(struct dev_pagemap *pgmap) { if (pgmap->flags & PGMAP_ALTMAP_VALID) return &pgmap->altmap; return NULL; } static inline unsigned long pgmap_vmemmap_nr(struct dev_pagemap *pgmap) { return 1 << pgmap->vmemmap_shift; } static inline bool is_device_private_page(const struct page *page) { return IS_ENABLED(CONFIG_DEVICE_PRIVATE) && is_zone_device_page(page) && page_pgmap(page)->type == MEMORY_DEVICE_PRIVATE; } static inline bool folio_is_device_private(const struct folio *folio) { return is_device_private_page(&folio->page); } static inline bool is_pci_p2pdma_page(const struct page *page) { return IS_ENABLED(CONFIG_PCI_P2PDMA) && is_zone_device_page(page) && page_pgmap(page)->type == MEMORY_DEVICE_PCI_P2PDMA; } static inline bool is_device_coherent_page(const struct page *page) { return is_zone_device_page(page) && page_pgmap(page)->type == MEMORY_DEVICE_COHERENT; } static inline bool folio_is_device_coherent(const struct folio *folio) { return is_device_coherent_page(&folio->page); } static inline bool is_fsdax_page(const struct page *page) { return is_zone_device_page(page) && page_pgmap(page)->type == MEMORY_DEVICE_FS_DAX; } static inline bool folio_is_fsdax(const struct folio *folio) { return is_fsdax_page(&folio->page); } #ifdef CONFIG_ZONE_DEVICE void zone_device_page_init(struct page *page); void *memremap_pages(struct dev_pagemap *pgmap, int nid); void memunmap_pages(struct dev_pagemap *pgmap); void *devm_memremap_pages(struct device *dev, struct dev_pagemap *pgmap); void devm_memunmap_pages(struct device *dev, struct dev_pagemap *pgmap); struct dev_pagemap *get_dev_pagemap(unsigned long pfn, struct dev_pagemap *pgmap); bool pgmap_pfn_valid(struct dev_pagemap *pgmap, unsigned long pfn); unsigned long memremap_compat_align(void); #else static inline void *devm_memremap_pages(struct device *dev, struct dev_pagemap *pgmap) { /* * Fail attempts to call devm_memremap_pages() without * ZONE_DEVICE support enabled, this requires callers to fall * back to plain devm_memremap() based on config */ WARN_ON_ONCE(1); return ERR_PTR(-ENXIO); } static inline void devm_memunmap_pages(struct device *dev, struct dev_pagemap *pgmap) { } static inline struct dev_pagemap *get_dev_pagemap(unsigned long pfn, struct dev_pagemap *pgmap) { return NULL; } static inline bool pgmap_pfn_valid(struct dev_pagemap *pgmap, unsigned long pfn) { return false; } /* when memremap_pages() is disabled all archs can remap a single page */ static inline unsigned long memremap_compat_align(void) { return PAGE_SIZE; } #endif /* CONFIG_ZONE_DEVICE */ static inline void put_dev_pagemap(struct dev_pagemap *pgmap) { if (pgmap) percpu_ref_put(&pgmap->ref); } #endif /* _LINUX_MEMREMAP_H_ */
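/*
 * Illustrative sketch (not part of the header above): minimal driver-side
 * use of devm_memremap_pages() for a single-range MEMORY_DEVICE_GENERIC
 * mapping. The function name and the res parameter are hypothetical; the
 * pgmap must outlive the mapping, so a real driver keeps it in its device
 * state rather than on the stack.
 */
#include <linux/device.h>
#include <linux/memremap.h>

static void *my_map_device_memory(struct device *dev, struct resource *res,
				  struct dev_pagemap *pgmap)
{
	pgmap->type = MEMORY_DEVICE_GENERIC;
	pgmap->range = (struct range) {
		.start = res->start,
		.end = res->end,
	};
	pgmap->nr_range = 1;

	/* Returns the kernel mapping of the new pages, or an ERR_PTR(). */
	return devm_memremap_pages(dev, pgmap);
}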
// SPDX-License-Identifier: GPL-2.0-or-later /* * * Copyright (C) Alan Cox GW4PTS (alan@lxorguk.ukuu.org.uk) * Copyright (C) Jonathan Naylor G4KLX (g4klx@g4klx.demon.co.uk) * Copyright (C) Tomi Manninen OH2BNS (oh2bns@sral.fi) * Copyright (C) Darryl Miles G7LED (dlm@g7led.demon.co.uk) * Copyright (C) Joerg Reuter DL1BKE (jreuter@yaina.de) * Copyright (C) Frederic Rible F1OAT (frible@teaser.fr) * Copyright (C) 2002 Ralf Baechle DO1GRB (ralf@gnu.org) */ #include <linux/errno.h> #include <linux/types.h> #include <linux/socket.h> #include <linux/in.h> #include <linux/kernel.h> #include <linux/module.h> #include <linux/jiffies.h> #include <linux/timer.h> #include <linux/string.h> #include <linux/sockios.h> #include <linux/net.h> #include <net/ax25.h> #include <linux/inet.h> #include <linux/netdevice.h> #include <linux/skbuff.h> #include <net/sock.h> #include <linux/uaccess.h> #include <linux/fcntl.h> #include <linux/mm.h> #include <linux/interrupt.h> static void ax25_heartbeat_expiry(struct timer_list *); static void ax25_t1timer_expiry(struct timer_list *); static void ax25_t2timer_expiry(struct timer_list *); static void ax25_t3timer_expiry(struct timer_list *); static void ax25_idletimer_expiry(struct timer_list *); void ax25_setup_timers(ax25_cb *ax25) { timer_setup(&ax25->timer, ax25_heartbeat_expiry, 0); timer_setup(&ax25->t1timer, ax25_t1timer_expiry, 0); timer_setup(&ax25->t2timer, ax25_t2timer_expiry, 0); timer_setup(&ax25->t3timer, ax25_t3timer_expiry, 0); timer_setup(&ax25->idletimer, ax25_idletimer_expiry, 0); } void ax25_start_heartbeat(ax25_cb *ax25) { mod_timer(&ax25->timer, jiffies + 5 * HZ); } void ax25_start_t1timer(ax25_cb *ax25) { mod_timer(&ax25->t1timer, jiffies + ax25->t1); } void ax25_start_t2timer(ax25_cb *ax25) { mod_timer(&ax25->t2timer, jiffies + ax25->t2); } void ax25_start_t3timer(ax25_cb *ax25) { if (ax25->t3 > 0) mod_timer(&ax25->t3timer, jiffies + ax25->t3); else timer_delete(&ax25->t3timer); } void ax25_start_idletimer(ax25_cb *ax25) { if (ax25->idle > 0) mod_timer(&ax25->idletimer, jiffies + ax25->idle); else timer_delete(&ax25->idletimer); } void ax25_stop_heartbeat(ax25_cb *ax25) { timer_delete(&ax25->timer); } void ax25_stop_t1timer(ax25_cb *ax25) { timer_delete(&ax25->t1timer); } void ax25_stop_t2timer(ax25_cb *ax25) { timer_delete(&ax25->t2timer); } void ax25_stop_t3timer(ax25_cb *ax25) { timer_delete(&ax25->t3timer); } void ax25_stop_idletimer(ax25_cb *ax25) { timer_delete(&ax25->idletimer); } int ax25_t1timer_running(ax25_cb *ax25) { return timer_pending(&ax25->t1timer); } unsigned long ax25_display_timer(struct timer_list *timer) { long delta = timer->expires - jiffies; if (!timer_pending(timer)) return 0; return max(0L, delta); }
EXPORT_SYMBOL(ax25_display_timer); static void ax25_heartbeat_expiry(struct timer_list *t) { int proto = AX25_PROTO_STD_SIMPLEX; ax25_cb *ax25 = from_timer(ax25, t, timer); if (ax25->ax25_dev) proto = ax25->ax25_dev->values[AX25_VALUES_PROTOCOL]; switch (proto) { case AX25_PROTO_STD_SIMPLEX: case AX25_PROTO_STD_DUPLEX: ax25_std_heartbeat_expiry(ax25); break; #ifdef CONFIG_AX25_DAMA_SLAVE case AX25_PROTO_DAMA_SLAVE: if (ax25->ax25_dev->dama.slave) ax25_ds_heartbeat_expiry(ax25); else ax25_std_heartbeat_expiry(ax25); break; #endif } } static void ax25_t1timer_expiry(struct timer_list *t) { ax25_cb *ax25 = from_timer(ax25, t, t1timer); switch (ax25->ax25_dev->values[AX25_VALUES_PROTOCOL]) { case AX25_PROTO_STD_SIMPLEX: case AX25_PROTO_STD_DUPLEX: ax25_std_t1timer_expiry(ax25); break; #ifdef CONFIG_AX25_DAMA_SLAVE case AX25_PROTO_DAMA_SLAVE: if (!ax25->ax25_dev->dama.slave) ax25_std_t1timer_expiry(ax25); break; #endif } } static void ax25_t2timer_expiry(struct timer_list *t) { ax25_cb *ax25 = from_timer(ax25, t, t2timer); switch (ax25->ax25_dev->values[AX25_VALUES_PROTOCOL]) { case AX25_PROTO_STD_SIMPLEX: case AX25_PROTO_STD_DUPLEX: ax25_std_t2timer_expiry(ax25); break; #ifdef CONFIG_AX25_DAMA_SLAVE case AX25_PROTO_DAMA_SLAVE: if (!ax25->ax25_dev->dama.slave) ax25_std_t2timer_expiry(ax25); break; #endif } } static void ax25_t3timer_expiry(struct timer_list *t) { ax25_cb *ax25 = from_timer(ax25, t, t3timer); switch (ax25->ax25_dev->values[AX25_VALUES_PROTOCOL]) { case AX25_PROTO_STD_SIMPLEX: case AX25_PROTO_STD_DUPLEX: ax25_std_t3timer_expiry(ax25); break; #ifdef CONFIG_AX25_DAMA_SLAVE case AX25_PROTO_DAMA_SLAVE: if (ax25->ax25_dev->dama.slave) ax25_ds_t3timer_expiry(ax25); else ax25_std_t3timer_expiry(ax25); break; #endif } } static void ax25_idletimer_expiry(struct timer_list *t) { ax25_cb *ax25 = from_timer(ax25, t, idletimer); switch (ax25->ax25_dev->values[AX25_VALUES_PROTOCOL]) { case AX25_PROTO_STD_SIMPLEX: case AX25_PROTO_STD_DUPLEX: ax25_std_idletimer_expiry(ax25); break; #ifdef CONFIG_AX25_DAMA_SLAVE case AX25_PROTO_DAMA_SLAVE: if (ax25->ax25_dev->dama.slave) ax25_ds_idletimer_expiry(ax25); else ax25_std_idletimer_expiry(ax25); break; #endif } }
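/*
 * Illustrative sketch (not part of the file above): ax25_display_timer()
 * returns the remaining time in jiffies (0 when the timer is not pending),
 * so status/procfs-style consumers scale it by HZ. Hypothetical helper:
 */
static unsigned long my_t1_remaining_secs(ax25_cb *ax25)
{
	/* seconds until T1 fires, or 0 if it is stopped */
	return ax25_display_timer(&ax25->t1timer) / HZ;
}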
// SPDX-License-Identifier: GPL-2.0-or-later /* SCTP kernel implementation * (C) Copyright IBM Corp. 2001, 2004 * Copyright (c) 1999-2000 Cisco, Inc. * Copyright (c) 1999-2001 Motorola, Inc. * Copyright (c) 2001 Intel Corp. * * This file is part of the SCTP kernel implementation * * These functions manipulate the sctp tsn mapping array. * * Please send any bug reports or fixes you make to the * email address(es): * lksctp developers <linux-sctp@vger.kernel.org> * * Written or modified by: * La Monte H.P. Yarroll <piggy@acm.org> * Jon Grimm <jgrimm@us.ibm.com> * Karl Knutson <karl@athena.chicago.il.us> * Sridhar Samudrala <sri@us.ibm.com> */ #include <linux/slab.h> #include <linux/types.h> #include <linux/bitmap.h> #include <net/sctp/sctp.h> #include <net/sctp/sm.h> static void sctp_tsnmap_update(struct sctp_tsnmap *map); static void sctp_tsnmap_find_gap_ack(unsigned long *map, __u16 off, __u16 len, __u16 *start, __u16 *end); static int sctp_tsnmap_grow(struct sctp_tsnmap *map, u16 size); /* Initialize a block of memory as a tsnmap. */ struct sctp_tsnmap *sctp_tsnmap_init(struct sctp_tsnmap *map, __u16 len, __u32 initial_tsn, gfp_t gfp) { if (!map->tsn_map) { map->tsn_map = kzalloc(len>>3, gfp); if (map->tsn_map == NULL) return NULL; map->len = len; } else { bitmap_zero(map->tsn_map, map->len); } /* Keep track of TSNs represented by tsn_map. */ map->base_tsn = initial_tsn; map->cumulative_tsn_ack_point = initial_tsn - 1; map->max_tsn_seen = map->cumulative_tsn_ack_point; map->num_dup_tsns = 0; return map; } void sctp_tsnmap_free(struct sctp_tsnmap *map) { map->len = 0; kfree(map->tsn_map); } /* Test the tracking state of this TSN.
* Returns: * 0 if the TSN has not yet been seen * >0 if the TSN has been seen (duplicate) * <0 if the TSN is invalid (too large to track) */ int sctp_tsnmap_check(const struct sctp_tsnmap *map, __u32 tsn) { u32 gap; /* Check to see if this is an old TSN */ if (TSN_lte(tsn, map->cumulative_tsn_ack_point)) return 1; /* Verify that we can hold this TSN and that it will not * overflow our map */ if (!TSN_lt(tsn, map->base_tsn + SCTP_TSN_MAP_SIZE)) return -1; /* Calculate the index into the mapping arrays. */ gap = tsn - map->base_tsn; /* Check to see if TSN has already been recorded. */ if (gap < map->len && test_bit(gap, map->tsn_map)) return 1; else return 0; } /* Mark this TSN as seen. */ int sctp_tsnmap_mark(struct sctp_tsnmap *map, __u32 tsn, struct sctp_transport *trans) { u16 gap; if (TSN_lt(tsn, map->base_tsn)) return 0; gap = tsn - map->base_tsn; if (gap >= map->len && !sctp_tsnmap_grow(map, gap + 1)) return -ENOMEM; if (!sctp_tsnmap_has_gap(map) && gap == 0) { /* In this case the map has no gaps and the tsn we are * recording is the next expected tsn. We don't touch * the map but simply bump the values. */ map->max_tsn_seen++; map->cumulative_tsn_ack_point++; if (trans) trans->sack_generation = trans->asoc->peer.sack_generation; map->base_tsn++; } else { /* Either we already have a gap, or we are about to record one, * so we have work to do. * * Bump the max. */ if (TSN_lt(map->max_tsn_seen, tsn)) map->max_tsn_seen = tsn; /* Mark the TSN as received. */ set_bit(gap, map->tsn_map); /* Go fixup any internal TSN mapping variables including * cumulative_tsn_ack_point. */ sctp_tsnmap_update(map); } return 0; } /* Initialize a Gap Ack Block iterator from memory being provided. */ static void sctp_tsnmap_iter_init(const struct sctp_tsnmap *map, struct sctp_tsnmap_iter *iter) { /* Only start looking one past the Cumulative TSN Ack Point. */ iter->start = map->cumulative_tsn_ack_point + 1; } /* Get the next Gap Ack Block. Returns 0 if there is no further block * to get. */ static int sctp_tsnmap_next_gap_ack(const struct sctp_tsnmap *map, struct sctp_tsnmap_iter *iter, __u16 *start, __u16 *end) { int ended = 0; __u16 start_ = 0, end_ = 0, offset; /* If there are no more gap acks possible, get out fast. */ if (TSN_lte(map->max_tsn_seen, iter->start)) return 0; offset = iter->start - map->base_tsn; sctp_tsnmap_find_gap_ack(map->tsn_map, offset, map->len, &start_, &end_); /* The Gap Ack Block happens to end at the end of the map. */ if (start_ && !end_) end_ = map->len - 1; /* If we found a Gap Ack Block, return the start and end and * bump the iterator forward. */ if (end_) { /* Fix up the start and end based on the * Cumulative TSN Ack which is always 1 behind base. */ *start = start_ + 1; *end = end_ + 1; /* Move the iterator forward. */ iter->start = map->cumulative_tsn_ack_point + *end + 1; ended = 1; } return ended; } /* Mark this and any lower TSN as seen. */ void sctp_tsnmap_skip(struct sctp_tsnmap *map, __u32 tsn) { u32 gap; if (TSN_lt(tsn, map->base_tsn)) return; if (!TSN_lt(tsn, map->base_tsn + SCTP_TSN_MAP_SIZE)) return; /* Bump the max. */ if (TSN_lt(map->max_tsn_seen, tsn)) map->max_tsn_seen = tsn; gap = tsn - map->base_tsn + 1; map->base_tsn += gap; map->cumulative_tsn_ack_point += gap; if (gap >= map->len) { /* If our gap is larger than the map size, just * zero out the map. */ bitmap_zero(map->tsn_map, map->len); } else { /* If the gap is smaller than the map size, * shift the map by 'gap' bits and update further.
*/ bitmap_shift_right(map->tsn_map, map->tsn_map, gap, map->len); sctp_tsnmap_update(map); } } /******************************************************************** * 2nd Level Abstractions ********************************************************************/ /* This private helper function updates the tsnmap buffers and * the Cumulative TSN Ack Point. */ static void sctp_tsnmap_update(struct sctp_tsnmap *map) { u16 len; unsigned long zero_bit; len = map->max_tsn_seen - map->cumulative_tsn_ack_point; zero_bit = find_first_zero_bit(map->tsn_map, len); if (!zero_bit) return; /* The first 0-bit is bit 0; nothing to do. */ map->base_tsn += zero_bit; map->cumulative_tsn_ack_point += zero_bit; bitmap_shift_right(map->tsn_map, map->tsn_map, zero_bit, map->len); } /* How many data chunks are we missing from our peer? */ __u16 sctp_tsnmap_pending(struct sctp_tsnmap *map) { __u32 cum_tsn = map->cumulative_tsn_ack_point; __u32 max_tsn = map->max_tsn_seen; __u32 base_tsn = map->base_tsn; __u16 pending_data; u32 gap; pending_data = max_tsn - cum_tsn; gap = max_tsn - base_tsn; if (gap == 0 || gap >= map->len) goto out; pending_data -= bitmap_weight(map->tsn_map, gap + 1); out: return pending_data; } /* This is a private helper for finding Gap Ack Blocks. It searches a * single array for the start and end of a Gap Ack Block. * * The output parameters "start" and "end" tell us if we found the beginning * or (respectively) the end of a Gap Ack Block. */ static void sctp_tsnmap_find_gap_ack(unsigned long *map, __u16 off, __u16 len, __u16 *start, __u16 *end) { int i = off; /* Look through the entire array, but break out * early if we have found the end of the Gap Ack Block. */ /* Also, stop looking past the maximum TSN seen. */ /* Look for the start. */ i = find_next_bit(map, len, off); if (i < len) *start = i; /* Look for the end. */ if (*start) { /* We have found the start, let's find the * end. If we find the end, break out. */ i = find_next_zero_bit(map, len, i); if (i < len) *end = i - 1; } } /* Renege that we have seen a TSN. */ void sctp_tsnmap_renege(struct sctp_tsnmap *map, __u32 tsn) { u32 gap; if (TSN_lt(tsn, map->base_tsn)) return; /* Assert: TSN is in range. */ if (!TSN_lt(tsn, map->base_tsn + map->len)) return; gap = tsn - map->base_tsn; /* Pretend we never saw the TSN. */ clear_bit(gap, map->tsn_map); } /* How many gap ack blocks do we have recorded? */ __u16 sctp_tsnmap_num_gabs(struct sctp_tsnmap *map, struct sctp_gap_ack_block *gabs) { struct sctp_tsnmap_iter iter; int ngaps = 0; /* Refresh the gap ack information. */ if (sctp_tsnmap_has_gap(map)) { __u16 start = 0, end = 0; sctp_tsnmap_iter_init(map, &iter); while (sctp_tsnmap_next_gap_ack(map, &iter, &start, &end)) { gabs[ngaps].start = htons(start); gabs[ngaps].end = htons(end); ngaps++; if (ngaps >= SCTP_MAX_GABS) break; } } return ngaps; } static int sctp_tsnmap_grow(struct sctp_tsnmap *map, u16 size) { unsigned long *new; unsigned long inc; u16 len; if (size > SCTP_TSN_MAP_SIZE) return 0; inc = ALIGN((size - map->len), BITS_PER_LONG) + SCTP_TSN_MAP_INCREMENT; len = min_t(u16, map->len + inc, SCTP_TSN_MAP_SIZE); new = kzalloc(len>>3, GFP_ATOMIC); if (!new) return 0; bitmap_copy(new, map->tsn_map, map->max_tsn_seen - map->cumulative_tsn_ack_point); kfree(map->tsn_map); map->tsn_map = new; map->len = len; return 1; }
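/*
 * A hedged usage sketch, not part of net/sctp/tsnmap.c: how a receive
 * path might drive the tsnmap API above. The caller name is hypothetical;
 * only sctp_tsnmap_check() and sctp_tsnmap_mark() come from this file,
 * whose includes are assumed to be in scope.
 */
static int example_record_tsn(struct sctp_tsnmap *map, __u32 tsn,
			      struct sctp_transport *trans)
{
	int rc = sctp_tsnmap_check(map, tsn);

	if (rc < 0)
		return -EINVAL;	/* too far past base_tsn to track */
	if (rc > 0)
		return 0;	/* duplicate: already seen or below the CTSN */

	/* New TSN: record it; the map grows on demand and
	 * sctp_tsnmap_mark() returns -ENOMEM on allocation failure. */
	return sctp_tsnmap_mark(map, tsn, trans);
}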
/* SPDX-License-Identifier: GPL-2.0-or-later */ /* * pm_wakeup.h - Power management wakeup interface * * Copyright (C) 2008 Alan Stern * Copyright (C) 2010 Rafael J. Wysocki, Novell Inc. */ #ifndef _LINUX_PM_WAKEUP_H #define _LINUX_PM_WAKEUP_H #ifndef _DEVICE_H_ # error "Please do not include this file directly." #endif #include <linux/types.h> struct wake_irq; /** * struct wakeup_source - Representation of wakeup sources * * @name: Name of the wakeup source * @id: Wakeup source id * @entry: Wakeup source list entry * @lock: Wakeup source lock * @wakeirq: Optional device specific wakeirq * @timer: Wakeup timer list * @timer_expires: Wakeup timer expiration * @total_time: Total time this wakeup source has been active. * @max_time: Maximum time this wakeup source has been continuously active. * @last_time: Monotonic clock when the wakeup source was last touched. * @prevent_sleep_time: Total time this source has been preventing autosleep. * @event_count: Number of signaled wakeup events. * @active_count: Number of times the wakeup source was activated. * @relax_count: Number of times the wakeup source was deactivated. * @expire_count: Number of times the wakeup source's timeout has expired. * @wakeup_count: Number of times the wakeup source might abort suspend. * @dev: Struct device for sysfs statistics about the wakeup source. * @active: Status of the wakeup source. * @autosleep_enabled: Autosleep is active, so update @prevent_sleep_time. */ struct wakeup_source { const char *name; int id; struct list_head entry; spinlock_t lock; struct wake_irq *wakeirq; struct timer_list timer; unsigned long timer_expires; ktime_t total_time; ktime_t max_time; ktime_t last_time; ktime_t start_prevent_time; ktime_t prevent_sleep_time; unsigned long event_count; unsigned long active_count; unsigned long relax_count; unsigned long expire_count; unsigned long wakeup_count; struct device *dev; bool active:1; bool autosleep_enabled:1; }; #define for_each_wakeup_source(ws) \ for ((ws) = wakeup_sources_walk_start(); \ (ws); \ (ws) = wakeup_sources_walk_next((ws))) #ifdef CONFIG_PM_SLEEP /* * Changes to device_may_wakeup take effect on the next pm state change.
*/ static inline bool device_can_wakeup(struct device *dev) { return dev->power.can_wakeup; } static inline bool device_may_wakeup(struct device *dev) { return dev->power.can_wakeup && !!dev->power.wakeup; } static inline bool device_wakeup_path(struct device *dev) { return dev->power.wakeup_path; } static inline void device_set_wakeup_path(struct device *dev) { dev->power.wakeup_path = true; } /* drivers/base/power/wakeup.c */ extern struct wakeup_source *wakeup_source_create(const char *name); extern void wakeup_source_destroy(struct wakeup_source *ws); extern void wakeup_source_add(struct wakeup_source *ws); extern void wakeup_source_remove(struct wakeup_source *ws); extern struct wakeup_source *wakeup_source_register(struct device *dev, const char *name); extern void wakeup_source_unregister(struct wakeup_source *ws); extern int wakeup_sources_read_lock(void); extern void wakeup_sources_read_unlock(int idx); extern struct wakeup_source *wakeup_sources_walk_start(void); extern struct wakeup_source *wakeup_sources_walk_next(struct wakeup_source *ws); extern int device_wakeup_enable(struct device *dev); extern void device_wakeup_disable(struct device *dev); extern void device_set_wakeup_capable(struct device *dev, bool capable); extern int device_set_wakeup_enable(struct device *dev, bool enable); extern void __pm_stay_awake(struct wakeup_source *ws); extern void pm_stay_awake(struct device *dev); extern void __pm_relax(struct wakeup_source *ws); extern void pm_relax(struct device *dev); extern void pm_wakeup_ws_event(struct wakeup_source *ws, unsigned int msec, bool hard); extern void pm_wakeup_dev_event(struct device *dev, unsigned int msec, bool hard); #else /* !CONFIG_PM_SLEEP */ static inline void device_set_wakeup_capable(struct device *dev, bool capable) { dev->power.can_wakeup = capable; } static inline bool device_can_wakeup(struct device *dev) { return dev->power.can_wakeup; } static inline struct wakeup_source *wakeup_source_create(const char *name) { return NULL; } static inline void wakeup_source_destroy(struct wakeup_source *ws) {} static inline void wakeup_source_add(struct wakeup_source *ws) {} static inline void wakeup_source_remove(struct wakeup_source *ws) {} static inline struct wakeup_source *wakeup_source_register(struct device *dev, const char *name) { return NULL; } static inline void wakeup_source_unregister(struct wakeup_source *ws) {} static inline int device_wakeup_enable(struct device *dev) { dev->power.should_wakeup = true; return 0; } static inline void device_wakeup_disable(struct device *dev) { dev->power.should_wakeup = false; } static inline int device_set_wakeup_enable(struct device *dev, bool enable) { dev->power.should_wakeup = enable; return 0; } static inline bool device_may_wakeup(struct device *dev) { return dev->power.can_wakeup && dev->power.should_wakeup; } static inline bool device_wakeup_path(struct device *dev) { return false; } static inline void device_set_wakeup_path(struct device *dev) {} static inline void __pm_stay_awake(struct wakeup_source *ws) {} static inline void pm_stay_awake(struct device *dev) {} static inline void __pm_relax(struct wakeup_source *ws) {} static inline void pm_relax(struct device *dev) {} static inline void pm_wakeup_ws_event(struct wakeup_source *ws, unsigned int msec, bool hard) {} static inline void pm_wakeup_dev_event(struct device *dev, unsigned int msec, bool hard) {} #endif /* !CONFIG_PM_SLEEP */ static inline bool device_awake_path(struct device *dev) { return device_wakeup_path(dev); } static inline 
void device_set_awake_path(struct device *dev) { device_set_wakeup_path(dev); } static inline void __pm_wakeup_event(struct wakeup_source *ws, unsigned int msec) { pm_wakeup_ws_event(ws, msec, false); } static inline void pm_wakeup_event(struct device *dev, unsigned int msec) { pm_wakeup_dev_event(dev, msec, false); } static inline void pm_wakeup_hard_event(struct device *dev) { pm_wakeup_dev_event(dev, 0, true); } /** * device_init_wakeup - Device wakeup initialization. * @dev: Device to handle. * @enable: Whether or not to enable @dev as a wakeup device. * * By default, most devices should leave wakeup disabled. The exceptions are * devices that everyone expects to be wakeup sources: keyboards, power buttons, * possibly network interfaces, etc. Also, devices that don't generate their * own wakeup requests but merely forward requests from one bus to another * (like PCI bridges) should have wakeup enabled by default. */ static inline int device_init_wakeup(struct device *dev, bool enable) { if (enable) { device_set_wakeup_capable(dev, true); return device_wakeup_enable(dev); } device_wakeup_disable(dev); device_set_wakeup_capable(dev, false); return 0; } static void device_disable_wakeup(void *dev) { device_init_wakeup(dev, false); } /** * devm_device_init_wakeup - Resource managed device wakeup initialization. * @dev: Device to handle. * * This function is the devm managed version of device_init_wakeup(dev, true). */ static inline int devm_device_init_wakeup(struct device *dev) { device_init_wakeup(dev, true); return devm_add_action_or_reset(dev, device_disable_wakeup, dev); } #endif /* _LINUX_PM_WAKEUP_H */
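/*
 * A hedged usage sketch, not part of this header: a driver wiring up the
 * wakeup API above. The "foo" names are hypothetical; pm_wakeup_event()
 * and devm_device_init_wakeup() are the interfaces defined in this file.
 */
#include <linux/device.h>
#include <linux/interrupt.h>

static irqreturn_t foo_wake_irq(int irq, void *data)
{
	struct device *dev = data;

	/* Report a wakeup event and hold off suspend for ~100 ms so the
	 * event can be consumed before the system goes back to sleep. */
	pm_wakeup_event(dev, 100);
	return IRQ_HANDLED;
}

static int foo_probe(struct device *dev)
{
	/* Mark the device wakeup-capable and enable wakeup; the devm
	 * action disables it again automatically on driver detach. */
	return devm_device_init_wakeup(dev);
}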
// SPDX-License-Identifier: GPL-2.0 /* * Kernel internal timers * * Copyright (C) 1991, 1992 Linus Torvalds * * 1997-01-28 Modified by Finn Arne Gangstad to make timers scale better. * * 1997-09-10 Updated NTP code according to technical memorandum Jan '96 * "A Kernel Model for Precision Timekeeping" by Dave Mills * 1998-12-24 Fixed a xtime SMP race (we need the xtime_lock rw spinlock to * serialize accesses to xtime/lost_ticks). * Copyright (C) 1998 Andrea Arcangeli * 1999-03-10 Improved NTP compatibility by Ulrich Windl * 2002-05-31 Move sys_sysinfo here and make its locking sane, Robert Love * 2000-10-05 Implemented scalable SMP per-CPU timer handling. * Copyright (C) 2000, 2001, 2002 Ingo Molnar * Designed by David S.
Miller, Alexey Kuznetsov and Ingo Molnar */ #include <linux/kernel_stat.h> #include <linux/export.h> #include <linux/interrupt.h> #include <linux/percpu.h> #include <linux/init.h> #include <linux/mm.h> #include <linux/swap.h> #include <linux/pid_namespace.h> #include <linux/notifier.h> #include <linux/thread_info.h> #include <linux/time.h> #include <linux/jiffies.h> #include <linux/posix-timers.h> #include <linux/cpu.h> #include <linux/syscalls.h> #include <linux/delay.h> #include <linux/tick.h> #include <linux/kallsyms.h> #include <linux/irq_work.h> #include <linux/sched/sysctl.h> #include <linux/sched/nohz.h> #include <linux/sched/debug.h> #include <linux/slab.h> #include <linux/compat.h> #include <linux/random.h> #include <linux/sysctl.h> #include <linux/uaccess.h> #include <asm/unistd.h> #include <asm/div64.h> #include <asm/timex.h> #include <asm/io.h> #include "tick-internal.h" #include "timer_migration.h" #define CREATE_TRACE_POINTS #include <trace/events/timer.h> __visible u64 jiffies_64 __cacheline_aligned_in_smp = INITIAL_JIFFIES; EXPORT_SYMBOL(jiffies_64); /* * The timer wheel has LVL_DEPTH array levels. Each level provides an array of * LVL_SIZE buckets. Each level is driven by its own clock and therefore each * level has a different granularity. * * The level granularity is: LVL_CLK_DIV ^ level * The level clock frequency is: HZ / (LVL_CLK_DIV ^ level) * * The array level of a newly armed timer depends on the relative expiry * time. The farther away the expiry time is, the higher the array level and * therefore the coarser the granularity becomes. * * Contrary to the original timer wheel implementation, which aims for 'exact' * expiry of the timers, this implementation removes the need for recascading * the timers into the lower array levels. The previous 'classic' timer wheel * implementation of the kernel already violated the 'exact' expiry by adding * slack to the expiry time to provide batched expiration. The granularity * levels provide implicit batching. * * This is an optimization of the original timer wheel implementation for the * majority of the timer wheel use cases: timeouts. The vast majority of * timeout timers (networking, disk I/O ...) are canceled before expiry. If * the timeout expires it indicates that normal operation is disturbed, so it * does not matter much whether the timeout comes with a slight delay. * * The only exceptions to this are networking timers with a small expiry * time. They rely on the granularity. Those fit into the first wheel level, * which has HZ granularity. * * We don't have cascading anymore. Timers with an expiry time above the * capacity of the last wheel level are force expired at the maximum timeout * value of the last wheel level. From data sampling we know that the maximum * value observed is 5 days (network connection tracking), so this should not * be an issue. * * The currently chosen array constants are a good compromise between * array size and granularity.
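* * As a concrete reading of the tables below: at HZ 1000 a timer armed ~5000 ms ahead falls into level 3 and is therefore expired with 512 ms granularity, while a timer only a few jiffies out stays in level 0 with 1 ms granularity.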
* * This results in the following granularity and range levels: * * HZ 1000 steps * Level Offset Granularity Range * 0 0 1 ms 0 ms - 63 ms * 1 64 8 ms 64 ms - 511 ms * 2 128 64 ms 512 ms - 4095 ms (512ms - ~4s) * 3 192 512 ms 4096 ms - 32767 ms (~4s - ~32s) * 4 256 4096 ms (~4s) 32768 ms - 262143 ms (~32s - ~4m) * 5 320 32768 ms (~32s) 262144 ms - 2097151 ms (~4m - ~34m) * 6 384 262144 ms (~4m) 2097152 ms - 16777215 ms (~34m - ~4h) * 7 448 2097152 ms (~34m) 16777216 ms - 134217727 ms (~4h - ~1d) * 8 512 16777216 ms (~4h) 134217728 ms - 1073741822 ms (~1d - ~12d) * * HZ 300 * Level Offset Granularity Range * 0 0 3 ms 0 ms - 210 ms * 1 64 26 ms 213 ms - 1703 ms (213ms - ~1s) * 2 128 213 ms 1706 ms - 13650 ms (~1s - ~13s) * 3 192 1706 ms (~1s) 13653 ms - 109223 ms (~13s - ~1m) * 4 256 13653 ms (~13s) 109226 ms - 873810 ms (~1m - ~14m) * 5 320 109226 ms (~1m) 873813 ms - 6990503 ms (~14m - ~1h) * 6 384 873813 ms (~14m) 6990506 ms - 55924050 ms (~1h - ~15h) * 7 448 6990506 ms (~1h) 55924053 ms - 447392423 ms (~15h - ~5d) * 8 512 55924053 ms (~15h) 447392426 ms - 3579139406 ms (~5d - ~41d) * * HZ 250 * Level Offset Granularity Range * 0 0 4 ms 0 ms - 255 ms * 1 64 32 ms 256 ms - 2047 ms (256ms - ~2s) * 2 128 256 ms 2048 ms - 16383 ms (~2s - ~16s) * 3 192 2048 ms (~2s) 16384 ms - 131071 ms (~16s - ~2m) * 4 256 16384 ms (~16s) 131072 ms - 1048575 ms (~2m - ~17m) * 5 320 131072 ms (~2m) 1048576 ms - 8388607 ms (~17m - ~2h) * 6 384 1048576 ms (~17m) 8388608 ms - 67108863 ms (~2h - ~18h) * 7 448 8388608 ms (~2h) 67108864 ms - 536870911 ms (~18h - ~6d) * 8 512 67108864 ms (~18h) 536870912 ms - 4294967288 ms (~6d - ~49d) * * HZ 100 * Level Offset Granularity Range * 0 0 10 ms 0 ms - 630 ms * 1 64 80 ms 640 ms - 5110 ms (640ms - ~5s) * 2 128 640 ms 5120 ms - 40950 ms (~5s - ~40s) * 3 192 5120 ms (~5s) 40960 ms - 327670 ms (~40s - ~5m) * 4 256 40960 ms (~40s) 327680 ms - 2621430 ms (~5m - ~43m) * 5 320 327680 ms (~5m) 2621440 ms - 20971510 ms (~43m - ~5h) * 6 384 2621440 ms (~43m) 20971520 ms - 167772150 ms (~5h - ~1d) * 7 448 20971520 ms (~5h) 167772160 ms - 1342177270 ms (~1d - ~15d) */ /* Clock divisor for the next level */ #define LVL_CLK_SHIFT 3 #define LVL_CLK_DIV (1UL << LVL_CLK_SHIFT) #define LVL_CLK_MASK (LVL_CLK_DIV - 1) #define LVL_SHIFT(n) ((n) * LVL_CLK_SHIFT) #define LVL_GRAN(n) (1UL << LVL_SHIFT(n)) /* * The time start value for each level to select the bucket at enqueue * time. We start from the last possible delta of the previous level * so that we can later add an extra LVL_GRAN(n) to n (see calc_index()). */ #define LVL_START(n) ((LVL_SIZE - 1) << (((n) - 1) * LVL_CLK_SHIFT)) /* Size of each clock level */ #define LVL_BITS 6 #define LVL_SIZE (1UL << LVL_BITS) #define LVL_MASK (LVL_SIZE - 1) #define LVL_OFFS(n) ((n) * LVL_SIZE) /* Level depth */ #if HZ > 100 # define LVL_DEPTH 9 # else # define LVL_DEPTH 8 #endif /* The cutoff (max. capacity of the wheel) */ #define WHEEL_TIMEOUT_CUTOFF (LVL_START(LVL_DEPTH)) #define WHEEL_TIMEOUT_MAX (WHEEL_TIMEOUT_CUTOFF - LVL_GRAN(LVL_DEPTH - 1)) /* * The resulting wheel size. If NOHZ is configured we allocate two * wheels so we have a separate storage for the deferrable timers. */ #define WHEEL_SIZE (LVL_SIZE * LVL_DEPTH) #ifdef CONFIG_NO_HZ_COMMON /* * If multiple bases need to be locked, use the base ordering for lock * nesting, i.e. lowest number first. 
*/ # define NR_BASES 3 # define BASE_LOCAL 0 # define BASE_GLOBAL 1 # define BASE_DEF 2 #else # define NR_BASES 1 # define BASE_LOCAL 0 # define BASE_GLOBAL 0 # define BASE_DEF 0 #endif /** * struct timer_base - Per CPU timer base (number of bases depends on config) * @lock: Lock protecting the timer_base * @running_timer: When expiring timers, the lock is dropped. To make * sure not to race against deleting/modifying a * currently running timer, the pointer is set to the * timer, which expires at the moment. If no timer is * running, the pointer is NULL. * @expiry_lock: PREEMPT_RT only: Lock is taken in softirq around * timer expiry callback execution and when trying to * delete a running timer and it wasn't successful at * the first attempt. It prevents priority inversion * when the callback was preempted on a remote CPU and a * caller tries to delete the running timer. It also * prevents a livelock, when the task which tries to * delete a timer preempted the softirq thread which * is running the timer callback function. * @timer_waiters: PREEMPT_RT only: Tells whether there is a waiter * waiting for the end of the timer callback function * execution. * @clk: clock of the timer base; is updated before enqueue * of a timer; during expiry, it is 1 offset ahead of * jiffies to avoid endless requeuing to current * jiffies * @next_expiry: expiry value of the first timer; it is updated when * finding the next timer and during enqueue; the * value is not valid, when next_expiry_recalc is set * @cpu: Number of the CPU the timer base belongs to * @next_expiry_recalc: Indicates whether a recalculation of next_expiry is * required. Set to true when a timer is * deleted. * @is_idle: Is set, when timer_base is idle. It is triggered by NOHZ * code. This state is only used in the standard * base. Deferrable timers, which are enqueued remotely, * never wake up an idle CPU, so there is no need to * support it for this base. * @timers_pending: Is set, when a timer is pending in the base. It is only * reliable when next_expiry_recalc is not set. * @pending_map: bitmap of the timer wheel; each bit reflects a * bucket of the wheel. When a bit is set, at least a * single timer is enqueued in the related bucket. * @vectors: Array of lists; Each array member reflects a bucket * of the timer wheel. The list contains all timers * which are enqueued into a specific bucket.
*/ struct timer_base { raw_spinlock_t lock; struct timer_list *running_timer; #ifdef CONFIG_PREEMPT_RT spinlock_t expiry_lock; atomic_t timer_waiters; #endif unsigned long clk; unsigned long next_expiry; unsigned int cpu; bool next_expiry_recalc; bool is_idle; bool timers_pending; DECLARE_BITMAP(pending_map, WHEEL_SIZE); struct hlist_head vectors[WHEEL_SIZE]; } ____cacheline_aligned; static DEFINE_PER_CPU(struct timer_base, timer_bases[NR_BASES]); #ifdef CONFIG_NO_HZ_COMMON static DEFINE_STATIC_KEY_FALSE(timers_nohz_active); static DEFINE_MUTEX(timer_keys_mutex); static void timer_update_keys(struct work_struct *work); static DECLARE_WORK(timer_update_work, timer_update_keys); #ifdef CONFIG_SMP static unsigned int sysctl_timer_migration = 1; DEFINE_STATIC_KEY_FALSE(timers_migration_enabled); static void timers_update_migration(void) { if (sysctl_timer_migration && tick_nohz_active) static_branch_enable(&timers_migration_enabled); else static_branch_disable(&timers_migration_enabled); } #ifdef CONFIG_SYSCTL static int timer_migration_handler(const struct ctl_table *table, int write, void *buffer, size_t *lenp, loff_t *ppos) { int ret; mutex_lock(&timer_keys_mutex); ret = proc_dointvec_minmax(table, write, buffer, lenp, ppos); if (!ret && write) timers_update_migration(); mutex_unlock(&timer_keys_mutex); return ret; } static const struct ctl_table timer_sysctl[] = { { .procname = "timer_migration", .data = &sysctl_timer_migration, .maxlen = sizeof(unsigned int), .mode = 0644, .proc_handler = timer_migration_handler, .extra1 = SYSCTL_ZERO, .extra2 = SYSCTL_ONE, }, }; static int __init timer_sysctl_init(void) { register_sysctl("kernel", timer_sysctl); return 0; } device_initcall(timer_sysctl_init); #endif /* CONFIG_SYSCTL */ #else /* CONFIG_SMP */ static inline void timers_update_migration(void) { } #endif /* !CONFIG_SMP */ static void timer_update_keys(struct work_struct *work) { mutex_lock(&timer_keys_mutex); timers_update_migration(); static_branch_enable(&timers_nohz_active); mutex_unlock(&timer_keys_mutex); } void timers_update_nohz(void) { schedule_work(&timer_update_work); } static inline bool is_timers_nohz_active(void) { return static_branch_unlikely(&timers_nohz_active); } #else static inline bool is_timers_nohz_active(void) { return false; } #endif /* NO_HZ_COMMON */ static unsigned long round_jiffies_common(unsigned long j, int cpu, bool force_up) { int rem; unsigned long original = j; /* * We don't want all cpus firing their timers at once hitting the * same lock or cachelines, so we skew each extra cpu with an extra * 3 jiffies. This 3 jiffies came originally from the mm/ code which * already did this. * The skew is done by adding 3*cpunr, then round, then subtract this * extra offset again. */ j += cpu * 3; rem = j % HZ; /* * If the target jiffy is just after a whole second (which can happen * due to delays of the timer irq, long irq off times etc etc) then * we should round down to the whole second, not up. Use 1/4th second * as cutoff for this rounding as an extreme upper bound for this. * But never round down if @force_up is set. */ if (rem < HZ/4 && !force_up) /* round down */ j = j - rem; else /* round up */ j = j - rem + HZ; /* now that we have rounded, subtract the extra skew again */ j -= cpu * 3; /* * Make sure j is still in the future. Otherwise return the * unmodified value. */ return time_is_after_jiffies(j) ? 
j : original; } /** * __round_jiffies - function to round jiffies to a full second * @j: the time in (absolute) jiffies that should be rounded * @cpu: the processor number on which the timeout will happen * * __round_jiffies() rounds an absolute time in the future (in jiffies) * up or down to (approximately) full seconds. This is useful for timers * for which the exact time they fire does not matter too much, as long as * they fire approximately every X seconds. * * By rounding these timers to whole seconds, all such timers will fire * at the same time, rather than at various times spread out. The goal * of this is to have the CPU wake up less, which saves power. * * The exact rounding is skewed for each processor to avoid all * processors firing at the exact same time, which could lead * to lock contention or spurious cache line bouncing. * * The return value is the rounded version of the @j parameter. */ unsigned long __round_jiffies(unsigned long j, int cpu) { return round_jiffies_common(j, cpu, false); } EXPORT_SYMBOL_GPL(__round_jiffies); /** * __round_jiffies_relative - function to round jiffies to a full second * @j: the time in (relative) jiffies that should be rounded * @cpu: the processor number on which the timeout will happen * * __round_jiffies_relative() rounds a time delta in the future (in jiffies) * up or down to (approximately) full seconds. This is useful for timers * for which the exact time they fire does not matter too much, as long as * they fire approximately every X seconds. * * By rounding these timers to whole seconds, all such timers will fire * at the same time, rather than at various times spread out. The goal * of this is to have the CPU wake up less, which saves power. * * The exact rounding is skewed for each processor to avoid all * processors firing at the exact same time, which could lead * to lock contention or spurious cache line bouncing. * * The return value is the rounded version of the @j parameter. */ unsigned long __round_jiffies_relative(unsigned long j, int cpu) { unsigned long j0 = jiffies; /* Use j0 because jiffies might change while we run */ return round_jiffies_common(j + j0, cpu, false) - j0; } EXPORT_SYMBOL_GPL(__round_jiffies_relative); /** * round_jiffies - function to round jiffies to a full second * @j: the time in (absolute) jiffies that should be rounded * * round_jiffies() rounds an absolute time in the future (in jiffies) * up or down to (approximately) full seconds. This is useful for timers * for which the exact time they fire does not matter too much, as long as * they fire approximately every X seconds. * * By rounding these timers to whole seconds, all such timers will fire * at the same time, rather than at various times spread out. The goal * of this is to have the CPU wake up less, which saves power. * * The return value is the rounded version of the @j parameter. */ unsigned long round_jiffies(unsigned long j) { return round_jiffies_common(j, raw_smp_processor_id(), false); } EXPORT_SYMBOL_GPL(round_jiffies); /** * round_jiffies_relative - function to round jiffies to a full second * @j: the time in (relative) jiffies that should be rounded * * round_jiffies_relative() rounds a time delta in the future (in jiffies) * up or down to (approximately) full seconds. This is useful for timers * for which the exact time they fire does not matter too much, as long as * they fire approximately every X seconds. 
* * By rounding these timers to whole seconds, all such timers will fire * at the same time, rather than at various times spread out. The goal * of this is to have the CPU wake up less, which saves power. * * The return value is the rounded version of the @j parameter. */ unsigned long round_jiffies_relative(unsigned long j) { return __round_jiffies_relative(j, raw_smp_processor_id()); } EXPORT_SYMBOL_GPL(round_jiffies_relative); /** * __round_jiffies_up - function to round jiffies up to a full second * @j: the time in (absolute) jiffies that should be rounded * @cpu: the processor number on which the timeout will happen * * This is the same as __round_jiffies() except that it will never * round down. This is useful for timeouts for which the exact time * of firing does not matter too much, as long as they don't fire too * early. */ unsigned long __round_jiffies_up(unsigned long j, int cpu) { return round_jiffies_common(j, cpu, true); } EXPORT_SYMBOL_GPL(__round_jiffies_up); /** * __round_jiffies_up_relative - function to round jiffies up to a full second * @j: the time in (relative) jiffies that should be rounded * @cpu: the processor number on which the timeout will happen * * This is the same as __round_jiffies_relative() except that it will never * round down. This is useful for timeouts for which the exact time * of firing does not matter too much, as long as they don't fire too * early. */ unsigned long __round_jiffies_up_relative(unsigned long j, int cpu) { unsigned long j0 = jiffies; /* Use j0 because jiffies might change while we run */ return round_jiffies_common(j + j0, cpu, true) - j0; } EXPORT_SYMBOL_GPL(__round_jiffies_up_relative); /** * round_jiffies_up - function to round jiffies up to a full second * @j: the time in (absolute) jiffies that should be rounded * * This is the same as round_jiffies() except that it will never * round down. This is useful for timeouts for which the exact time * of firing does not matter too much, as long as they don't fire too * early. */ unsigned long round_jiffies_up(unsigned long j) { return round_jiffies_common(j, raw_smp_processor_id(), true); } EXPORT_SYMBOL_GPL(round_jiffies_up); /** * round_jiffies_up_relative - function to round jiffies up to a full second * @j: the time in (relative) jiffies that should be rounded * * This is the same as round_jiffies_relative() except that it will never * round down. This is useful for timeouts for which the exact time * of firing does not matter too much, as long as they don't fire too * early. */ unsigned long round_jiffies_up_relative(unsigned long j) { return __round_jiffies_up_relative(j, raw_smp_processor_id()); } EXPORT_SYMBOL_GPL(round_jiffies_up_relative); static inline unsigned int timer_get_idx(struct timer_list *timer) { return (timer->flags & TIMER_ARRAYMASK) >> TIMER_ARRAYSHIFT; } static inline void timer_set_idx(struct timer_list *timer, unsigned int idx) { timer->flags = (timer->flags & ~TIMER_ARRAYMASK) | idx << TIMER_ARRAYSHIFT; } /* * Helper function to calculate the array index for a given expiry * time. */ static inline unsigned calc_index(unsigned long expires, unsigned lvl, unsigned long *bucket_expiry) { /* * The timer wheel has to guarantee that a timer does not fire * early. Early expiry can happen due to: * - Timer is armed at the edge of a tick * - Truncation of the expiry time in the outer wheel levels * * Round up with level granularity to prevent this. 
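* * As a worked example: with lvl == 2 (LVL_SHIFT(2) == 6) and expires == 1000, the rounding below yields (1000 >> 6) + 1 == 16, so bucket_expiry == 16 << 6 == 1024; the timer may fire up to 24 jiffies late, but never early.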
*/ expires = (expires >> LVL_SHIFT(lvl)) + 1; *bucket_expiry = expires << LVL_SHIFT(lvl); return LVL_OFFS(lvl) + (expires & LVL_MASK); } static int calc_wheel_index(unsigned long expires, unsigned long clk, unsigned long *bucket_expiry) { unsigned long delta = expires - clk; unsigned int idx; if (delta < LVL_START(1)) { idx = calc_index(expires, 0, bucket_expiry); } else if (delta < LVL_START(2)) { idx = calc_index(expires, 1, bucket_expiry); } else if (delta < LVL_START(3)) { idx = calc_index(expires, 2, bucket_expiry); } else if (delta < LVL_START(4)) { idx = calc_index(expires, 3, bucket_expiry); } else if (delta < LVL_START(5)) { idx = calc_index(expires, 4, bucket_expiry); } else if (delta < LVL_START(6)) { idx = calc_index(expires, 5, bucket_expiry); } else if (delta < LVL_START(7)) { idx = calc_index(expires, 6, bucket_expiry); } else if (LVL_DEPTH > 8 && delta < LVL_START(8)) { idx = calc_index(expires, 7, bucket_expiry); } else if ((long) delta < 0) { idx = clk & LVL_MASK; *bucket_expiry = clk; } else { /* * Force expire obscene large timeouts to expire at the * capacity limit of the wheel. */ if (delta >= WHEEL_TIMEOUT_CUTOFF) expires = clk + WHEEL_TIMEOUT_MAX; idx = calc_index(expires, LVL_DEPTH - 1, bucket_expiry); } return idx; } static void trigger_dyntick_cpu(struct timer_base *base, struct timer_list *timer) { /* * Deferrable timers do not prevent the CPU from entering dynticks and * are not taken into account on the idle/nohz_full path. An IPI when a * new deferrable timer is enqueued will wake up the remote CPU but * nothing will be done with the deferrable timer base. Therefore skip * the remote IPI for deferrable timers completely. */ if (!is_timers_nohz_active() || timer->flags & TIMER_DEFERRABLE) return; /* * We might have to IPI the remote CPU if the base is idle and the * timer is pinned. If it is a non pinned timer, it is only queued * on the remote CPU, when timer was running during queueing. Then * everything is handled by remote CPU anyway. If the other CPU is * on the way to idle then it can't set base->is_idle as we hold * the base lock: */ if (base->is_idle) { WARN_ON_ONCE(!(timer->flags & TIMER_PINNED || tick_nohz_full_cpu(base->cpu))); wake_up_nohz_cpu(base->cpu); } } /* * Enqueue the timer into the hash bucket, mark it pending in * the bitmap, store the index in the timer flags then wake up * the target CPU if needed. */ static void enqueue_timer(struct timer_base *base, struct timer_list *timer, unsigned int idx, unsigned long bucket_expiry) { hlist_add_head(&timer->entry, base->vectors + idx); __set_bit(idx, base->pending_map); timer_set_idx(timer, idx); trace_timer_start(timer, bucket_expiry); /* * Check whether this is the new first expiring timer. The * effective expiry time of the timer is required here * (bucket_expiry) instead of timer->expires. 
*/ if (time_before(bucket_expiry, base->next_expiry)) { /* * Set the next expiry time and kick the CPU so it * can reevaluate the wheel: */ WRITE_ONCE(base->next_expiry, bucket_expiry); base->timers_pending = true; base->next_expiry_recalc = false; trigger_dyntick_cpu(base, timer); } } static void internal_add_timer(struct timer_base *base, struct timer_list *timer) { unsigned long bucket_expiry; unsigned int idx; idx = calc_wheel_index(timer->expires, base->clk, &bucket_expiry); enqueue_timer(base, timer, idx, bucket_expiry); } #ifdef CONFIG_DEBUG_OBJECTS_TIMERS static const struct debug_obj_descr timer_debug_descr; struct timer_hint { void (*function)(struct timer_list *t); long offset; }; #define TIMER_HINT(fn, container, timr, hintfn) \ { \ .function = fn, \ .offset = offsetof(container, hintfn) - \ offsetof(container, timr) \ } static const struct timer_hint timer_hints[] = { TIMER_HINT(delayed_work_timer_fn, struct delayed_work, timer, work.func), TIMER_HINT(kthread_delayed_work_timer_fn, struct kthread_delayed_work, timer, work.func), }; static void *timer_debug_hint(void *addr) { struct timer_list *timer = addr; int i; for (i = 0; i < ARRAY_SIZE(timer_hints); i++) { if (timer_hints[i].function == timer->function) { void (**fn)(void) = addr + timer_hints[i].offset; return *fn; } } return timer->function; } static bool timer_is_static_object(void *addr) { struct timer_list *timer = addr; return (timer->entry.pprev == NULL && timer->entry.next == TIMER_ENTRY_STATIC); } /* * timer_fixup_init is called when: * - an active object is initialized */ static bool timer_fixup_init(void *addr, enum debug_obj_state state) { struct timer_list *timer = addr; switch (state) { case ODEBUG_STATE_ACTIVE: timer_delete_sync(timer); debug_object_init(timer, &timer_debug_descr); return true; default: return false; } } /* Stub timer callback for improperly used timers. 
*/ static void stub_timer(struct timer_list *unused) { WARN_ON(1); } /* * timer_fixup_activate is called when: * - an active object is activated * - an unknown non-static object is activated */ static bool timer_fixup_activate(void *addr, enum debug_obj_state state) { struct timer_list *timer = addr; switch (state) { case ODEBUG_STATE_NOTAVAILABLE: timer_setup(timer, stub_timer, 0); return true; case ODEBUG_STATE_ACTIVE: WARN_ON(1); fallthrough; default: return false; } } /* * timer_fixup_free is called when: * - an active object is freed */ static bool timer_fixup_free(void *addr, enum debug_obj_state state) { struct timer_list *timer = addr; switch (state) { case ODEBUG_STATE_ACTIVE: timer_delete_sync(timer); debug_object_free(timer, &timer_debug_descr); return true; default: return false; } } /* * timer_fixup_assert_init is called when: * - an untracked/uninit-ed object is found */ static bool timer_fixup_assert_init(void *addr, enum debug_obj_state state) { struct timer_list *timer = addr; switch (state) { case ODEBUG_STATE_NOTAVAILABLE: timer_setup(timer, stub_timer, 0); return true; default: return false; } } static const struct debug_obj_descr timer_debug_descr = { .name = "timer_list", .debug_hint = timer_debug_hint, .is_static_object = timer_is_static_object, .fixup_init = timer_fixup_init, .fixup_activate = timer_fixup_activate, .fixup_free = timer_fixup_free, .fixup_assert_init = timer_fixup_assert_init, }; static inline void debug_timer_init(struct timer_list *timer) { debug_object_init(timer, &timer_debug_descr); } static inline void debug_timer_activate(struct timer_list *timer) { debug_object_activate(timer, &timer_debug_descr); } static inline void debug_timer_deactivate(struct timer_list *timer) { debug_object_deactivate(timer, &timer_debug_descr); } static inline void debug_timer_assert_init(struct timer_list *timer) { debug_object_assert_init(timer, &timer_debug_descr); } static void do_init_timer(struct timer_list *timer, void (*func)(struct timer_list *), unsigned int flags, const char *name, struct lock_class_key *key); void init_timer_on_stack_key(struct timer_list *timer, void (*func)(struct timer_list *), unsigned int flags, const char *name, struct lock_class_key *key) { debug_object_init_on_stack(timer, &timer_debug_descr); do_init_timer(timer, func, flags, name, key); } EXPORT_SYMBOL_GPL(init_timer_on_stack_key); void destroy_timer_on_stack(struct timer_list *timer) { debug_object_free(timer, &timer_debug_descr); } EXPORT_SYMBOL_GPL(destroy_timer_on_stack); #else static inline void debug_timer_init(struct timer_list *timer) { } static inline void debug_timer_activate(struct timer_list *timer) { } static inline void debug_timer_deactivate(struct timer_list *timer) { } static inline void debug_timer_assert_init(struct timer_list *timer) { } #endif static inline void debug_init(struct timer_list *timer) { debug_timer_init(timer); trace_timer_init(timer); } static inline void debug_deactivate(struct timer_list *timer) { debug_timer_deactivate(timer); trace_timer_cancel(timer); } static inline void debug_assert_init(struct timer_list *timer) { debug_timer_assert_init(timer); } static void do_init_timer(struct timer_list *timer, void (*func)(struct timer_list *), unsigned int flags, const char *name, struct lock_class_key *key) { timer->entry.pprev = NULL; timer->function = func; if (WARN_ON_ONCE(flags & ~TIMER_INIT_FLAGS)) flags &= TIMER_INIT_FLAGS; timer->flags = flags | raw_smp_processor_id(); lockdep_init_map(&timer->lockdep_map, name, key, 0); } /** * 
init_timer_key - initialize a timer * @timer: the timer to be initialized * @func: timer callback function * @flags: timer flags * @name: name of the timer * @key: lockdep class key of the fake lock used for tracking timer * sync lock dependencies * * init_timer_key() must be done to a timer prior to calling *any* of the * other timer functions. */ void init_timer_key(struct timer_list *timer, void (*func)(struct timer_list *), unsigned int flags, const char *name, struct lock_class_key *key) { debug_init(timer); do_init_timer(timer, func, flags, name, key); } EXPORT_SYMBOL(init_timer_key); static inline void detach_timer(struct timer_list *timer, bool clear_pending) { struct hlist_node *entry = &timer->entry; debug_deactivate(timer); __hlist_del(entry); if (clear_pending) entry->pprev = NULL; entry->next = LIST_POISON2; } static int detach_if_pending(struct timer_list *timer, struct timer_base *base, bool clear_pending) { unsigned idx = timer_get_idx(timer); if (!timer_pending(timer)) return 0; if (hlist_is_singular_node(&timer->entry, base->vectors + idx)) { __clear_bit(idx, base->pending_map); base->next_expiry_recalc = true; } detach_timer(timer, clear_pending); return 1; } static inline struct timer_base *get_timer_cpu_base(u32 tflags, u32 cpu) { int index = tflags & TIMER_PINNED ? BASE_LOCAL : BASE_GLOBAL; /* * If the timer is deferrable and NO_HZ_COMMON is set then we need * to use the deferrable base. */ if (IS_ENABLED(CONFIG_NO_HZ_COMMON) && (tflags & TIMER_DEFERRABLE)) index = BASE_DEF; return per_cpu_ptr(&timer_bases[index], cpu); } static inline struct timer_base *get_timer_this_cpu_base(u32 tflags) { int index = tflags & TIMER_PINNED ? BASE_LOCAL : BASE_GLOBAL; /* * If the timer is deferrable and NO_HZ_COMMON is set then we need * to use the deferrable base. */ if (IS_ENABLED(CONFIG_NO_HZ_COMMON) && (tflags & TIMER_DEFERRABLE)) index = BASE_DEF; return this_cpu_ptr(&timer_bases[index]); } static inline struct timer_base *get_timer_base(u32 tflags) { return get_timer_cpu_base(tflags, tflags & TIMER_CPUMASK); } static inline void __forward_timer_base(struct timer_base *base, unsigned long basej) { /* * Check whether we can forward the base. We can only do that when * @basej is past base->clk otherwise we might rewind base->clk. */ if (time_before_eq(basej, base->clk)) return; /* * If the next expiry value is > jiffies, then we fast forward to * jiffies otherwise we forward to the next expiry value. */ if (time_after(base->next_expiry, basej)) { base->clk = basej; } else { if (WARN_ON_ONCE(time_before(base->next_expiry, base->clk))) return; base->clk = base->next_expiry; } } static inline void forward_timer_base(struct timer_base *base) { __forward_timer_base(base, READ_ONCE(jiffies)); } /* * We are using hashed locking: Holding per_cpu(timer_bases[x]).lock means * that all timers which are tied to this base are locked, and the base itself * is locked too. * * So __run_timers/migrate_timers can safely modify all timers which could * be found in the base->vectors array. * * When a timer is migrating then the TIMER_MIGRATING flag is set and we need * to wait until the migration is done. */ static struct timer_base *lock_timer_base(struct timer_list *timer, unsigned long *flags) __acquires(timer->base->lock) { for (;;) { struct timer_base *base; u32 tf; /* * We need to use READ_ONCE() here, otherwise the compiler * might re-read @tf between the check for TIMER_MIGRATING * and spin_lock(). 
*/ tf = READ_ONCE(timer->flags); if (!(tf & TIMER_MIGRATING)) { base = get_timer_base(tf); raw_spin_lock_irqsave(&base->lock, *flags); if (timer->flags == tf) return base; raw_spin_unlock_irqrestore(&base->lock, *flags); } cpu_relax(); } } #define MOD_TIMER_PENDING_ONLY 0x01 #define MOD_TIMER_REDUCE 0x02 #define MOD_TIMER_NOTPENDING 0x04 static inline int __mod_timer(struct timer_list *timer, unsigned long expires, unsigned int options) { unsigned long clk = 0, flags, bucket_expiry; struct timer_base *base, *new_base; unsigned int idx = UINT_MAX; int ret = 0; debug_assert_init(timer); /* * This is a common optimization triggered by the networking code - if * the timer is re-modified to have the same timeout or ends up in the * same array bucket then just return: */ if (!(options & MOD_TIMER_NOTPENDING) && timer_pending(timer)) { /* * The downside of this optimization is that it can result in * larger granularity than you would get from adding a new * timer with this expiry. */ long diff = timer->expires - expires; if (!diff) return 1; if (options & MOD_TIMER_REDUCE && diff <= 0) return 1; /* * We lock timer base and calculate the bucket index right * here. If the timer ends up in the same bucket, then we * just update the expiry time and avoid the whole * dequeue/enqueue dance. */ base = lock_timer_base(timer, &flags); /* * Has @timer been shutdown? This needs to be evaluated * while holding base lock to prevent a race against the * shutdown code. */ if (!timer->function) goto out_unlock; forward_timer_base(base); if (timer_pending(timer) && (options & MOD_TIMER_REDUCE) && time_before_eq(timer->expires, expires)) { ret = 1; goto out_unlock; } clk = base->clk; idx = calc_wheel_index(expires, clk, &bucket_expiry); /* * Retrieve and compare the array index of the pending * timer. If it matches set the expiry to the new value so a * subsequent call will exit in the expires check above. */ if (idx == timer_get_idx(timer)) { if (!(options & MOD_TIMER_REDUCE)) timer->expires = expires; else if (time_after(timer->expires, expires)) timer->expires = expires; ret = 1; goto out_unlock; } } else { base = lock_timer_base(timer, &flags); /* * Has @timer been shutdown? This needs to be evaluated * while holding base lock to prevent a race against the * shutdown code. */ if (!timer->function) goto out_unlock; forward_timer_base(base); } ret = detach_if_pending(timer, base, false); if (!ret && (options & MOD_TIMER_PENDING_ONLY)) goto out_unlock; new_base = get_timer_this_cpu_base(timer->flags); if (base != new_base) { /* * We are trying to schedule the timer on the new base. * However we can't change timer's base while it is running, * otherwise timer_delete_sync() can't detect that the timer's * handler yet has not finished. This also guarantees that the * timer is serialized wrt itself. */ if (likely(base->running_timer != timer)) { /* See the comment in lock_timer_base() */ timer->flags |= TIMER_MIGRATING; raw_spin_unlock(&base->lock); base = new_base; raw_spin_lock(&base->lock); WRITE_ONCE(timer->flags, (timer->flags & ~TIMER_BASEMASK) | base->cpu); forward_timer_base(base); } } debug_timer_activate(timer); timer->expires = expires; /* * If 'idx' was calculated above and the base time did not advance * between calculating 'idx' and possibly switching the base, only * enqueue_timer() is required. Otherwise we need to (re)calculate * the wheel index via internal_add_timer(). 
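 *
 * Put differently (a descriptive note, derived from the code above):
 * 'idx' was computed against the 'clk' snapshot taken under the old
 * base lock, so it is only reusable while 'clk == base->clk' still
 * holds; forwarding the base or migrating to another base invalidates
 * it.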
*/ if (idx != UINT_MAX && clk == base->clk) enqueue_timer(base, timer, idx, bucket_expiry); else internal_add_timer(base, timer); out_unlock: raw_spin_unlock_irqrestore(&base->lock, flags); return ret; } /** * mod_timer_pending - Modify a pending timer's timeout * @timer: The pending timer to be modified * @expires: New absolute timeout in jiffies * * mod_timer_pending() is the same for pending timers as mod_timer(), but * will not activate inactive timers. * * If @timer->function == NULL then the start operation is silently * discarded. * * Return: * * %0 - The timer was inactive and not modified or was in * shutdown state and the operation was discarded * * %1 - The timer was active and requeued to expire at @expires */ int mod_timer_pending(struct timer_list *timer, unsigned long expires) { return __mod_timer(timer, expires, MOD_TIMER_PENDING_ONLY); } EXPORT_SYMBOL(mod_timer_pending); /** * mod_timer - Modify a timer's timeout * @timer: The timer to be modified * @expires: New absolute timeout in jiffies * * mod_timer(timer, expires) is equivalent to: * * timer_delete(timer); timer->expires = expires; add_timer(timer); * * mod_timer() is more efficient than the above open coded sequence. In * case that the timer is inactive, the timer_delete() part is a NOP. The * timer is in any case activated with the new expiry time @expires. * * Note that if there are multiple unserialized concurrent users of the * same timer, then mod_timer() is the only safe way to modify the timeout, * since add_timer() cannot modify an already running timer. * * If @timer->function == NULL then the start operation is silently * discarded. In this case the return value is 0 and meaningless. * * Return: * * %0 - The timer was inactive and started or was in shutdown * state and the operation was discarded * * %1 - The timer was active and requeued to expire at @expires or * the timer was active and not modified because @expires did * not change the effective expiry time */ int mod_timer(struct timer_list *timer, unsigned long expires) { return __mod_timer(timer, expires, 0); } EXPORT_SYMBOL(mod_timer); /** * timer_reduce - Modify a timer's timeout if it would reduce the timeout * @timer: The timer to be modified * @expires: New absolute timeout in jiffies * * timer_reduce() is very similar to mod_timer(), except that it will only * modify an enqueued timer if that would reduce the expiration time. If * @timer is not enqueued it starts the timer. * * If @timer->function == NULL then the start operation is silently * discarded. * * Return: * * %0 - The timer was inactive and started or was in shutdown * state and the operation was discarded * * %1 - The timer was active and requeued to expire at @expires or * the timer was active and not modified because @expires * did not change the effective expiry time such that the * timer would expire earlier than already scheduled */ int timer_reduce(struct timer_list *timer, unsigned long expires) { return __mod_timer(timer, expires, MOD_TIMER_REDUCE); } EXPORT_SYMBOL(timer_reduce); /** * add_timer - Start a timer * @timer: The timer to be started * * Start @timer to expire at @timer->expires in the future. @timer->expires * is the absolute expiry time measured in 'jiffies'. When the timer expires * timer->function(timer) will be invoked from soft interrupt context. * * The @timer->expires and @timer->function fields must be set prior * to calling this function. * * If @timer->function == NULL then the start operation is silently * discarded. 
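 *
 * Illustrative example (not from the original source; 'mydev' and its
 * watchdog callback are hypothetical names):
 *
 *	timer_setup(&mydev->watchdog, mydev_watchdog_fn, 0);
 *	mydev->watchdog.expires = jiffies + msecs_to_jiffies(100);
 *	add_timer(&mydev->watchdog);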
* * If @timer->expires is already in the past @timer will be queued to * expire at the next timer tick. * * This can only operate on an inactive timer. Attempts to invoke this on * an active timer are rejected with a warning. */ void add_timer(struct timer_list *timer) { if (WARN_ON_ONCE(timer_pending(timer))) return; __mod_timer(timer, timer->expires, MOD_TIMER_NOTPENDING); } EXPORT_SYMBOL(add_timer); /** * add_timer_local() - Start a timer on the local CPU * @timer: The timer to be started * * Same as add_timer() except that the timer flag TIMER_PINNED is set. * * See add_timer() for further details. */ void add_timer_local(struct timer_list *timer) { if (WARN_ON_ONCE(timer_pending(timer))) return; timer->flags |= TIMER_PINNED; __mod_timer(timer, timer->expires, MOD_TIMER_NOTPENDING); } EXPORT_SYMBOL(add_timer_local); /** * add_timer_global() - Start a timer without TIMER_PINNED flag set * @timer: The timer to be started * * Same as add_timer() except that the timer flag TIMER_PINNED is unset. * * See add_timer() for further details. */ void add_timer_global(struct timer_list *timer) { if (WARN_ON_ONCE(timer_pending(timer))) return; timer->flags &= ~TIMER_PINNED; __mod_timer(timer, timer->expires, MOD_TIMER_NOTPENDING); } EXPORT_SYMBOL(add_timer_global); /** * add_timer_on - Start a timer on a particular CPU * @timer: The timer to be started * @cpu: The CPU to start it on * * Same as add_timer() except that it starts the timer on the given CPU and * the TIMER_PINNED flag is set. When the timer shouldn't be a pinned timer in * the next round, add_timer_global() should be used instead as it unsets * the TIMER_PINNED flag. * * See add_timer() for further details. */ void add_timer_on(struct timer_list *timer, int cpu) { struct timer_base *new_base, *base; unsigned long flags; debug_assert_init(timer); if (WARN_ON_ONCE(timer_pending(timer))) return; /* Make sure timer flags have TIMER_PINNED flag set */ timer->flags |= TIMER_PINNED; new_base = get_timer_cpu_base(timer->flags, cpu); /* * If @timer was on a different CPU, it should be migrated with the * old base locked to prevent other operations proceeding with the * wrong base locked. See lock_timer_base(). */ base = lock_timer_base(timer, &flags); /* * Has @timer been shutdown? This needs to be evaluated while * holding base lock to prevent a race against the shutdown code. */ if (!timer->function) goto out_unlock; if (base != new_base) { timer->flags |= TIMER_MIGRATING; raw_spin_unlock(&base->lock); base = new_base; raw_spin_lock(&base->lock); WRITE_ONCE(timer->flags, (timer->flags & ~TIMER_BASEMASK) | cpu); } forward_timer_base(base); debug_timer_activate(timer); internal_add_timer(base, timer); out_unlock: raw_spin_unlock_irqrestore(&base->lock, flags); } EXPORT_SYMBOL_GPL(add_timer_on); /** * __timer_delete - Internal function: Deactivate a timer * @timer: The timer to be deactivated * @shutdown: If true, this indicates that the timer is about to be * shutdown permanently. * * If @shutdown is true then @timer->function is set to NULL under the * timer base lock which prevents further rearming of the timer. In that * case any attempt to rearm @timer after this function returns will be * silently ignored. 
* * Return: * * %0 - The timer was not pending * * %1 - The timer was pending and deactivated */ static int __timer_delete(struct timer_list *timer, bool shutdown) { struct timer_base *base; unsigned long flags; int ret = 0; debug_assert_init(timer); /* * If @shutdown is set then the lock has to be taken whether the * timer is pending or not to protect against a concurrent rearm * which might hit between the lockless pending check and the lock * acquisition. By taking the lock it is ensured that such a newly * enqueued timer is dequeued and cannot end up with * timer->function == NULL in the expiry code. * * If timer->function is currently being executed, then this makes sure * that the callback cannot requeue the timer. */ if (timer_pending(timer) || shutdown) { base = lock_timer_base(timer, &flags); ret = detach_if_pending(timer, base, true); if (shutdown) timer->function = NULL; raw_spin_unlock_irqrestore(&base->lock, flags); } return ret; } /** * timer_delete - Deactivate a timer * @timer: The timer to be deactivated * * The function only deactivates a pending timer, but contrary to * timer_delete_sync() it does not take into account whether the timer's * callback function is concurrently executed on a different CPU or not. * Nor does it prevent rearming of the timer. If @timer can be rearmed * concurrently then the return value of this function is meaningless. * * Return: * * %0 - The timer was not pending * * %1 - The timer was pending and deactivated */ int timer_delete(struct timer_list *timer) { return __timer_delete(timer, false); } EXPORT_SYMBOL(timer_delete); /** * timer_shutdown - Deactivate a timer and prevent rearming * @timer: The timer to be deactivated * * The function does not wait for a possibly running timer callback on a * different CPU, but it prevents rearming of the timer. Any attempt to arm * @timer after this function returns will be silently ignored. * * This function is useful for teardown code and should only be used when * timer_shutdown_sync() cannot be invoked due to locking or context constraints. * * Return: * * %0 - The timer was not pending * * %1 - The timer was pending */ int timer_shutdown(struct timer_list *timer) { return __timer_delete(timer, true); } EXPORT_SYMBOL_GPL(timer_shutdown); /** * __try_to_del_timer_sync - Internal function: Try to deactivate a timer * @timer: Timer to deactivate * @shutdown: If true, this indicates that the timer is about to be * shutdown permanently. * * If @shutdown is true then @timer->function is set to NULL under the * timer base lock which prevents further rearming of the timer. Any * attempt to rearm @timer after this function returns will be silently * ignored. * * This function cannot guarantee that the timer cannot be rearmed * right after dropping the base lock if @shutdown is false. That * needs to be prevented by the calling code if necessary. * * Return: * * %0 - The timer was not pending * * %1 - The timer was pending and deactivated * * %-1 - The timer callback function is running on a different CPU */ static int __try_to_del_timer_sync(struct timer_list *timer, bool shutdown) { struct timer_base *base; unsigned long flags; int ret = -1; debug_assert_init(timer); base = lock_timer_base(timer, &flags); if (base->running_timer != timer) ret = detach_if_pending(timer, base, true); if (shutdown) timer->function = NULL; raw_spin_unlock_irqrestore(&base->lock, flags); return ret; } /** * try_to_del_timer_sync - Try to deactivate a timer * @timer: Timer to deactivate * * This function tries to deactivate a timer. 
On success the timer is not * queued and the timer callback function is not running on any CPU. * * This function does not guarantee that the timer cannot be rearmed right * after dropping the base lock. That needs to be prevented by the calling * code if necessary. * * Return: * * %0 - The timer was not pending * * %1 - The timer was pending and deactivated * * %-1 - The timer callback function is running on a different CPU */ int try_to_del_timer_sync(struct timer_list *timer) { return __try_to_del_timer_sync(timer, false); } EXPORT_SYMBOL(try_to_del_timer_sync); #ifdef CONFIG_PREEMPT_RT static void timer_base_init_expiry_lock(struct timer_base *base) { spin_lock_init(&base->expiry_lock); } static inline void timer_base_lock_expiry(struct timer_base *base) { spin_lock(&base->expiry_lock); } static inline void timer_base_unlock_expiry(struct timer_base *base) { spin_unlock(&base->expiry_lock); } /* * The counterpart to del_timer_wait_running(). * * If there is a waiter for base->expiry_lock, then it was waiting for the * timer callback to finish. Drop expiry_lock and reacquire it. That allows * the waiter to acquire the lock and make progress. */ static void timer_sync_wait_running(struct timer_base *base) __releases(&base->lock) __releases(&base->expiry_lock) __acquires(&base->expiry_lock) __acquires(&base->lock) { if (atomic_read(&base->timer_waiters)) { raw_spin_unlock_irq(&base->lock); spin_unlock(&base->expiry_lock); spin_lock(&base->expiry_lock); raw_spin_lock_irq(&base->lock); } } /* * This function is called on PREEMPT_RT kernels when the fast path * deletion of a timer failed because the timer callback function was * running. * * This prevents priority inversion if the softirq thread on a remote CPU * got preempted, and it prevents a live lock when the task which tries to * delete a timer has preempted the softirq thread running the timer callback * function. */ static void del_timer_wait_running(struct timer_list *timer) { u32 tf; tf = READ_ONCE(timer->flags); if (!(tf & (TIMER_MIGRATING | TIMER_IRQSAFE))) { struct timer_base *base = get_timer_base(tf); /* * Mark the base as contended and grab the expiry lock, * which is held by the softirq across the timer * callback. Drop the lock immediately so the softirq can * expire the next timer. In theory the timer could already * be running again, but that's more than unlikely and just * causes another wait loop. */ atomic_inc(&base->timer_waiters); spin_lock_bh(&base->expiry_lock); atomic_dec(&base->timer_waiters); spin_unlock_bh(&base->expiry_lock); } } #else static inline void timer_base_init_expiry_lock(struct timer_base *base) { } static inline void timer_base_lock_expiry(struct timer_base *base) { } static inline void timer_base_unlock_expiry(struct timer_base *base) { } static inline void timer_sync_wait_running(struct timer_base *base) { } static inline void del_timer_wait_running(struct timer_list *timer) { } #endif /** * __timer_delete_sync - Internal function: Deactivate a timer and wait * for the handler to finish. * @timer: The timer to be deactivated * @shutdown: If true, @timer->function will be set to NULL under the * timer base lock which prevents rearming of @timer * * If @shutdown is not set the timer can be rearmed later. If the timer can * be rearmed concurrently, i.e. after dropping the base lock, then the * return value is meaningless. * * If @shutdown is set then @timer->function is set to NULL under timer * base lock which prevents rearming of the timer. 
Any attempt to rearm * a shutdown timer is silently ignored. * * If the timer should be reused after shutdown it has to be initialized * again. * * Return: * * %0 - The timer was not pending * * %1 - The timer was pending and deactivated */ static int __timer_delete_sync(struct timer_list *timer, bool shutdown) { int ret; #ifdef CONFIG_LOCKDEP unsigned long flags; /* * If lockdep gives a backtrace here, please reference * the synchronization rules above. */ local_irq_save(flags); lock_map_acquire(&timer->lockdep_map); lock_map_release(&timer->lockdep_map); local_irq_restore(flags); #endif /* * don't use it in hardirq context, because it * could lead to deadlock. */ WARN_ON(in_hardirq() && !(timer->flags & TIMER_IRQSAFE)); /* * Must be able to sleep on PREEMPT_RT because of the slowpath in * del_timer_wait_running(). */ if (IS_ENABLED(CONFIG_PREEMPT_RT) && !(timer->flags & TIMER_IRQSAFE)) lockdep_assert_preemption_enabled(); do { ret = __try_to_del_timer_sync(timer, shutdown); if (unlikely(ret < 0)) { del_timer_wait_running(timer); cpu_relax(); } } while (ret < 0); return ret; } /** * timer_delete_sync - Deactivate a timer and wait for the handler to finish. * @timer: The timer to be deactivated * * Synchronization rules: Callers must prevent restarting of the timer, * otherwise this function is meaningless. It must not be called from * interrupt contexts unless the timer is an irqsafe one. The caller must * not hold locks which would prevent completion of the timer's callback * function. The timer's handler must not call add_timer_on(). Upon exit * the timer is not queued and the handler is not running on any CPU. * * For !irqsafe timers, the caller must not hold locks that are held in * interrupt context. Even if the lock has nothing to do with the timer in * question. Here's why:: * * CPU0 CPU1 * ---- ---- * <SOFTIRQ> * call_timer_fn(); * base->running_timer = mytimer; * spin_lock_irq(somelock); * <IRQ> * spin_lock(somelock); * timer_delete_sync(mytimer); * while (base->running_timer == mytimer); * * Now timer_delete_sync() will never return and never release somelock. * The interrupt on the other CPU is waiting to grab somelock but it has * interrupted the softirq that CPU0 is waiting to finish. * * This function cannot guarantee that the timer is not rearmed again by * some concurrent or preempting code, right after it dropped the base * lock. If there is the possibility of a concurrent rearm then the return * value of the function is meaningless. * * If such a guarantee is needed, e.g. for teardown situations then use * timer_shutdown_sync() instead. * * Return: * * %0 - The timer was not pending * * %1 - The timer was pending and deactivated */ int timer_delete_sync(struct timer_list *timer) { return __timer_delete_sync(timer, false); } EXPORT_SYMBOL(timer_delete_sync); /** * timer_shutdown_sync - Shutdown a timer and prevent rearming * @timer: The timer to be shutdown * * When the function returns it is guaranteed that: * - @timer is not queued * - The callback function of @timer is not running * - @timer cannot be enqueued again. Any attempt to rearm * @timer is silently ignored. * * See timer_delete_sync() for synchronization rules. * * This function is useful for final teardown of an infrastructure where * the timer is subject to a circular dependency problem. * * A common pattern for this is a timer and a workqueue where the timer can * schedule work and work can arm the timer. On shutdown the workqueue must * be destroyed and the timer must be prevented from rearming. 
Unless the * code has conditionals like 'if (mything->in_shutdown)' to prevent that * there is no way to get this correct with timer_delete_sync(). * * timer_shutdown_sync() is solving the problem. The correct ordering of * calls in this case is: * * timer_shutdown_sync(&mything->timer); * workqueue_destroy(&mything->workqueue); * * After this 'mything' can be safely freed. * * This obviously implies that the timer is not required to be functional * for the rest of the shutdown operation. * * Return: * * %0 - The timer was not pending * * %1 - The timer was pending */ int timer_shutdown_sync(struct timer_list *timer) { return __timer_delete_sync(timer, true); } EXPORT_SYMBOL_GPL(timer_shutdown_sync); static void call_timer_fn(struct timer_list *timer, void (*fn)(struct timer_list *), unsigned long baseclk) { int count = preempt_count(); #ifdef CONFIG_LOCKDEP /* * It is permissible to free the timer from inside the * function that is called from it, this we need to take into * account for lockdep too. To avoid bogus "held lock freed" * warnings as well as problems when looking into * timer->lockdep_map, make a copy and use that here. */ struct lockdep_map lockdep_map; lockdep_copy_map(&lockdep_map, &timer->lockdep_map); #endif /* * Couple the lock chain with the lock chain at * timer_delete_sync() by acquiring the lock_map around the fn() * call here and in timer_delete_sync(). */ lock_map_acquire(&lockdep_map); trace_timer_expire_entry(timer, baseclk); fn(timer); trace_timer_expire_exit(timer); lock_map_release(&lockdep_map); if (count != preempt_count()) { WARN_ONCE(1, "timer: %pS preempt leak: %08x -> %08x\n", fn, count, preempt_count()); /* * Restore the preempt count. That gives us a decent * chance to survive and extract information. If the * callback kept a lock held, bad luck, but not worse * than the BUG() we had. */ preempt_count_set(count); } } static void expire_timers(struct timer_base *base, struct hlist_head *head) { /* * This value is required only for tracing. base->clk was * incremented directly before expire_timers was called. But expiry * is related to the old base->clk value. */ unsigned long baseclk = base->clk - 1; while (!hlist_empty(head)) { struct timer_list *timer; void (*fn)(struct timer_list *); timer = hlist_entry(head->first, struct timer_list, entry); base->running_timer = timer; detach_timer(timer, true); fn = timer->function; if (WARN_ON_ONCE(!fn)) { /* Should never happen. Emphasis on should! */ base->running_timer = NULL; continue; } if (timer->flags & TIMER_IRQSAFE) { raw_spin_unlock(&base->lock); call_timer_fn(timer, fn, baseclk); raw_spin_lock(&base->lock); base->running_timer = NULL; } else { raw_spin_unlock_irq(&base->lock); call_timer_fn(timer, fn, baseclk); raw_spin_lock_irq(&base->lock); base->running_timer = NULL; timer_sync_wait_running(base); } } } static int collect_expired_timers(struct timer_base *base, struct hlist_head *heads) { unsigned long clk = base->clk = base->next_expiry; struct hlist_head *vec; int i, levels = 0; unsigned int idx; for (i = 0; i < LVL_DEPTH; i++) { idx = (clk & LVL_MASK) + i * LVL_SIZE; if (__test_and_clear_bit(idx, base->pending_map)) { vec = base->vectors + idx; hlist_move_list(vec, heads++); levels++; } /* Is it time to look at the next level? */ if (clk & LVL_CLK_MASK) break; /* Shift clock for the next level granularity */ clk >>= LVL_CLK_SHIFT; } return levels; } /* * Find the next pending bucket of a level. 
Search from level start (@offset) * + @clk upwards and if nothing there, search from start of the level * (@offset) up to @offset + clk. */ static int next_pending_bucket(struct timer_base *base, unsigned offset, unsigned clk) { unsigned pos, start = offset + clk; unsigned end = offset + LVL_SIZE; pos = find_next_bit(base->pending_map, end, start); if (pos < end) return pos - start; pos = find_next_bit(base->pending_map, start, offset); return pos < start ? pos + LVL_SIZE - start : -1; } /* * Search the first expiring timer in the various clock levels. Caller must * hold base->lock. * * Store next expiry time in base->next_expiry. */ static void timer_recalc_next_expiry(struct timer_base *base) { unsigned long clk, next, adj; unsigned lvl, offset = 0; next = base->clk + NEXT_TIMER_MAX_DELTA; clk = base->clk; for (lvl = 0; lvl < LVL_DEPTH; lvl++, offset += LVL_SIZE) { int pos = next_pending_bucket(base, offset, clk & LVL_MASK); unsigned long lvl_clk = clk & LVL_CLK_MASK; if (pos >= 0) { unsigned long tmp = clk + (unsigned long) pos; tmp <<= LVL_SHIFT(lvl); if (time_before(tmp, next)) next = tmp; /* * If the next expiration happens before we reach * the next level, no need to check further. */ if (pos <= ((LVL_CLK_DIV - lvl_clk) & LVL_CLK_MASK)) break; } /* * Clock for the next level. If the current level clock lower * bits are zero, we look at the next level as is. If not we * need to advance it by one because that's going to be the * next expiring bucket in that level. base->clk is the next * expiring jiffy. So in case of: * * LVL5 LVL4 LVL3 LVL2 LVL1 LVL0 * 0 0 0 0 0 0 * * we have to look at all levels @index 0. With * * LVL5 LVL4 LVL3 LVL2 LVL1 LVL0 * 0 0 0 0 0 2 * * LVL0 has the next expiring bucket @index 2. The upper * levels have the next expiring bucket @index 1. * * In case that the propagation wraps the next level the same * rules apply: * * LVL5 LVL4 LVL3 LVL2 LVL1 LVL0 * 0 0 0 0 F 2 * * So after looking at LVL0 we get: * * LVL5 LVL4 LVL3 LVL2 LVL1 * 0 0 0 1 0 * * So no propagation from LVL1 to LVL2 because that happened * with the add already, but then we need to propagate further * from LVL2 to LVL3. * * So the simple check whether the lower bits of the current * level are 0 or not is sufficient for all cases. */ adj = lvl_clk ? 1 : 0; clk >>= LVL_CLK_SHIFT; clk += adj; } WRITE_ONCE(base->next_expiry, next); base->next_expiry_recalc = false; base->timers_pending = !(next == base->clk + NEXT_TIMER_MAX_DELTA); } #ifdef CONFIG_NO_HZ_COMMON /* * Check, if the next hrtimer event is before the next timer wheel * event: */ static u64 cmp_next_hrtimer_event(u64 basem, u64 expires) { u64 nextevt = hrtimer_get_next_event(); /* * If high resolution timers are enabled * hrtimer_get_next_event() returns KTIME_MAX. */ if (expires <= nextevt) return expires; /* * If the next timer is already expired, return the tick base * time so the tick is fired immediately. */ if (nextevt <= basem) return basem; /* * Round up to the next jiffy. High resolution timers are * off, so the hrtimers are expired in the tick and we need to * make sure that this tick really expires the timer to avoid * a ping pong of the nohz stop code. 
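 *
 * Worked example (illustrative): with HZ=1000, TICK_NSEC is 1000000, so
 * a nextevt of 2500000 ns is rounded up to the 3000000 ns tick boundary.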
* * Use DIV_ROUND_UP_ULL to prevent gcc calling __divdi3 */ return DIV_ROUND_UP_ULL(nextevt, TICK_NSEC) * TICK_NSEC; } static unsigned long next_timer_interrupt(struct timer_base *base, unsigned long basej) { if (base->next_expiry_recalc) timer_recalc_next_expiry(base); /* * Move next_expiry for the empty base into the future to prevent an * unnecessary raise of the timer softirq when the next_expiry value * will be reached even if there is no timer pending. * * This update is also required to make timer_base::next_expiry values * easily comparable to find out which base holds the first pending timer. */ if (!base->timers_pending) WRITE_ONCE(base->next_expiry, basej + NEXT_TIMER_MAX_DELTA); return base->next_expiry; } static unsigned long fetch_next_timer_interrupt(unsigned long basej, u64 basem, struct timer_base *base_local, struct timer_base *base_global, struct timer_events *tevt) { unsigned long nextevt, nextevt_local, nextevt_global; bool local_first; nextevt_local = next_timer_interrupt(base_local, basej); nextevt_global = next_timer_interrupt(base_global, basej); local_first = time_before_eq(nextevt_local, nextevt_global); nextevt = local_first ? nextevt_local : nextevt_global; /* * If @nextevt is at most one tick away, use @nextevt and store * it in the local expiry value. The next global event is irrelevant in * this case and can be left as KTIME_MAX. */ if (time_before_eq(nextevt, basej + 1)) { /* If we missed a tick already, force 0 delta */ if (time_before(nextevt, basej)) nextevt = basej; tevt->local = basem + (u64)(nextevt - basej) * TICK_NSEC; /* * This is required for the remote check only but it doesn't * hurt when it is done for both call sites: * * * The remote callers will only take care of the global timers * as local timers will be handled by the CPU itself. If * tevt->global is not updated with the already missed first * global timer, it could be missed completely. * * * The local callers will ignore tevt->global anyway when * nextevt is at most one tick away. */ if (!local_first) tevt->global = tevt->local; return nextevt; } /* * Update tevt.* values: * * If the local queue expires first, then the global event can be * ignored. If the global queue is empty, nothing to do either. */ if (!local_first && base_global->timers_pending) tevt->global = basem + (u64)(nextevt_global - basej) * TICK_NSEC; if (base_local->timers_pending) tevt->local = basem + (u64)(nextevt_local - basej) * TICK_NSEC; return nextevt; } # ifdef CONFIG_SMP /** * fetch_next_timer_interrupt_remote() - Store next timers into @tevt * @basej: base time jiffies * @basem: base time clock monotonic * @tevt: Pointer to the storage for the expiry values * @cpu: Remote CPU * * Stores the next pending local and global timer expiry values in the * struct pointed to by @tevt. If a queue is empty the corresponding * field is set to KTIME_MAX. If the local event expires before the global * event, the global event is set to KTIME_MAX as well. * * Caller needs to make sure timer base locks are held (use * timer_lock_remote_bases() for this purpose). 
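 *
 * A minimal sketch of the expected calling sequence (illustrative only):
 *
 *	timer_lock_remote_bases(cpu);
 *	fetch_next_timer_interrupt_remote(basej, basem, &tevt, cpu);
 *	timer_unlock_remote_bases(cpu);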
*/ void fetch_next_timer_interrupt_remote(unsigned long basej, u64 basem, struct timer_events *tevt, unsigned int cpu) { struct timer_base *base_local, *base_global; /* Preset local / global events */ tevt->local = tevt->global = KTIME_MAX; base_local = per_cpu_ptr(&timer_bases[BASE_LOCAL], cpu); base_global = per_cpu_ptr(&timer_bases[BASE_GLOBAL], cpu); lockdep_assert_held(&base_local->lock); lockdep_assert_held(&base_global->lock); fetch_next_timer_interrupt(basej, basem, base_local, base_global, tevt); } /** * timer_unlock_remote_bases - unlock timer bases of cpu * @cpu: Remote CPU * * Unlocks the remote timer bases. */ void timer_unlock_remote_bases(unsigned int cpu) __releases(timer_bases[BASE_LOCAL]->lock) __releases(timer_bases[BASE_GLOBAL]->lock) { struct timer_base *base_local, *base_global; base_local = per_cpu_ptr(&timer_bases[BASE_LOCAL], cpu); base_global = per_cpu_ptr(&timer_bases[BASE_GLOBAL], cpu); raw_spin_unlock(&base_global->lock); raw_spin_unlock(&base_local->lock); } /** * timer_lock_remote_bases - lock timer bases of cpu * @cpu: Remote CPU * * Locks the remote timer bases. */ void timer_lock_remote_bases(unsigned int cpu) __acquires(timer_bases[BASE_LOCAL]->lock) __acquires(timer_bases[BASE_GLOBAL]->lock) { struct timer_base *base_local, *base_global; base_local = per_cpu_ptr(&timer_bases[BASE_LOCAL], cpu); base_global = per_cpu_ptr(&timer_bases[BASE_GLOBAL], cpu); lockdep_assert_irqs_disabled(); raw_spin_lock(&base_local->lock); raw_spin_lock_nested(&base_global->lock, SINGLE_DEPTH_NESTING); } /** * timer_base_is_idle() - Return whether timer base is set idle * * Returns the is_idle value of the local timer base. */ bool timer_base_is_idle(void) { return __this_cpu_read(timer_bases[BASE_LOCAL].is_idle); } static void __run_timer_base(struct timer_base *base); /** * timer_expire_remote() - expire global timers of cpu * @cpu: Remote CPU * * Expire timers of global base of remote CPU. */ void timer_expire_remote(unsigned int cpu) { struct timer_base *base = per_cpu_ptr(&timer_bases[BASE_GLOBAL], cpu); __run_timer_base(base); } static void timer_use_tmigr(unsigned long basej, u64 basem, unsigned long *nextevt, bool *tick_stop_path, bool timer_base_idle, struct timer_events *tevt) { u64 next_tmigr; if (timer_base_idle) next_tmigr = tmigr_cpu_new_timer(tevt->global); else if (tick_stop_path) next_tmigr = tmigr_cpu_deactivate(tevt->global); else next_tmigr = tmigr_quick_check(tevt->global); /* * If the CPU is the last one going idle in the timer migration * hierarchy, make sure the CPU will wake up in time to handle remote * timers. next_tmigr == KTIME_MAX if other CPUs are still active. */ if (next_tmigr < tevt->local) { u64 tmp; /* If we missed a tick already, force 0 delta */ if (next_tmigr < basem) next_tmigr = basem; tmp = div_u64(next_tmigr - basem, TICK_NSEC); *nextevt = basej + (unsigned long)tmp; tevt->local = next_tmigr; } } # else static void timer_use_tmigr(unsigned long basej, u64 basem, unsigned long *nextevt, bool *tick_stop_path, bool timer_base_idle, struct timer_events *tevt) { /* * Make sure the first event is written into tevt->local so that a * timer is not missed on !SMP systems. 
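 * Without a timer migration hierarchy there is no other CPU that could
 * handle the global timer, so it is simply folded into the local event
 * below.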
*/ tevt->local = min_t(u64, tevt->local, tevt->global); } # endif /* CONFIG_SMP */ static inline u64 __get_next_timer_interrupt(unsigned long basej, u64 basem, bool *idle) { struct timer_events tevt = { .local = KTIME_MAX, .global = KTIME_MAX }; struct timer_base *base_local, *base_global; unsigned long nextevt; bool idle_is_possible; /* * When the CPU is offline, the tick is cancelled and nothing is supposed * to try to stop it. */ if (WARN_ON_ONCE(cpu_is_offline(smp_processor_id()))) { if (idle) *idle = true; return tevt.local; } base_local = this_cpu_ptr(&timer_bases[BASE_LOCAL]); base_global = this_cpu_ptr(&timer_bases[BASE_GLOBAL]); raw_spin_lock(&base_local->lock); raw_spin_lock_nested(&base_global->lock, SINGLE_DEPTH_NESTING); nextevt = fetch_next_timer_interrupt(basej, basem, base_local, base_global, &tevt); /* * If the next event is only one jiffy ahead there is no need to call * timer migration hierarchy related functions. The value for the next * global timer in @tevt struct equals then KTIME_MAX. This is also * true, when the timer base is idle. * * The proper timer migration hierarchy function depends on the callsite * and whether timer base is idle or not. @nextevt will be updated when * this CPU needs to handle the first timer migration hierarchy * event. See timer_use_tmigr() for detailed information. */ idle_is_possible = time_after(nextevt, basej + 1); if (idle_is_possible) timer_use_tmigr(basej, basem, &nextevt, idle, base_local->is_idle, &tevt); /* * We have a fresh next event. Check whether we can forward the * base. */ __forward_timer_base(base_local, basej); __forward_timer_base(base_global, basej); /* * Set base->is_idle only when caller is timer_base_try_to_set_idle() */ if (idle) { /* * Bases are idle if the next event is more than a tick * away. Caution: @nextevt could have changed by enqueueing a * global timer into timer migration hierarchy. Therefore a new * check is required here. * * If the base is marked idle then any timer add operation must * forward the base clk itself to keep granularity small. This * idle logic is only maintained for the BASE_LOCAL and * BASE_GLOBAL base, deferrable timers may still see large * granularity skew (by design). */ if (!base_local->is_idle && time_after(nextevt, basej + 1)) { base_local->is_idle = true; /* * Global timers queued locally while running in a task * in nohz_full mode need a self-IPI to kick reprogramming * in IRQ tail. */ if (tick_nohz_full_cpu(base_local->cpu)) base_global->is_idle = true; trace_timer_base_idle(true, base_local->cpu); } *idle = base_local->is_idle; /* * When timer base is not set idle, undo the effect of * tmigr_cpu_deactivate() to prevent inconsistent states - active * timer base but inactive timer migration hierarchy. * * When timer base was already marked idle, nothing will be * changed here. */ if (!base_local->is_idle && idle_is_possible) tmigr_cpu_activate(); } raw_spin_unlock(&base_global->lock); raw_spin_unlock(&base_local->lock); return cmp_next_hrtimer_event(basem, tevt.local); } /** * get_next_timer_interrupt() - return the time (clock mono) of the next timer * @basej: base time jiffies * @basem: base time clock monotonic * * Returns the tick aligned clock monotonic time of the next pending timer or * KTIME_MAX if no timer is pending. If timer of global base was queued into * timer migration hierarchy, first global timer is not taken into account. If * it was the last CPU of timer migration hierarchy going idle, first global * event is taken into account. 
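 *
 * A minimal usage sketch (illustrative only; @basem must be the
 * CLOCK_MONOTONIC time in nanoseconds corresponding to @basej):
 *
 *	unsigned long basej = jiffies;
 *	u64 expires = get_next_timer_interrupt(basej, basem);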
*/ u64 get_next_timer_interrupt(unsigned long basej, u64 basem) { return __get_next_timer_interrupt(basej, basem, NULL); } /** * timer_base_try_to_set_idle() - Try to set the idle state of the timer bases * @basej: base time jiffies * @basem: base time clock monotonic * @idle: pointer to store the value of timer_base->is_idle on return; * *idle contains the information whether tick was already stopped * * Returns the tick aligned clock monotonic time of the next pending timer or * KTIME_MAX if no timer is pending. When tick was already stopped KTIME_MAX is * returned as well. */ u64 timer_base_try_to_set_idle(unsigned long basej, u64 basem, bool *idle) { if (*idle) return KTIME_MAX; return __get_next_timer_interrupt(basej, basem, idle); } /** * timer_clear_idle - Clear the idle state of the timer base * * Called with interrupts disabled */ void timer_clear_idle(void) { /* * We do this unlocked. The worst outcome is a remote pinned timer * enqueue sending a pointless IPI, but taking the lock would just * make the window for sending the IPI a few instructions smaller * for the cost of taking the lock in the exit from idle * path. Required for BASE_LOCAL only. */ __this_cpu_write(timer_bases[BASE_LOCAL].is_idle, false); if (tick_nohz_full_cpu(smp_processor_id())) __this_cpu_write(timer_bases[BASE_GLOBAL].is_idle, false); trace_timer_base_idle(false, smp_processor_id()); /* Activate without holding the timer_base->lock */ tmigr_cpu_activate(); } #endif /** * __run_timers - run all expired timers (if any) on this CPU. * @base: the timer vector to be processed. */ static inline void __run_timers(struct timer_base *base) { struct hlist_head heads[LVL_DEPTH]; int levels; lockdep_assert_held(&base->lock); if (base->running_timer) return; while (time_after_eq(jiffies, base->clk) && time_after_eq(jiffies, base->next_expiry)) { levels = collect_expired_timers(base, heads); /* * The two possible reasons for not finding any expired * timer at this clk are that all matching timers have been * dequeued or no timer has been queued since * base::next_expiry was set to base::clk + * NEXT_TIMER_MAX_DELTA. */ WARN_ON_ONCE(!levels && !base->next_expiry_recalc && base->timers_pending); /* * While executing timers, base->clk is set 1 offset ahead of * jiffies to avoid endless requeuing to current jiffies. */ base->clk++; timer_recalc_next_expiry(base); while (levels--) expire_timers(base, heads + levels); } } static void __run_timer_base(struct timer_base *base) { /* Can race against a remote CPU updating next_expiry under the lock */ if (time_before(jiffies, READ_ONCE(base->next_expiry))) return; timer_base_lock_expiry(base); raw_spin_lock_irq(&base->lock); __run_timers(base); raw_spin_unlock_irq(&base->lock); timer_base_unlock_expiry(base); } static void run_timer_base(int index) { struct timer_base *base = this_cpu_ptr(&timer_bases[index]); __run_timer_base(base); } /* * This function runs timers and the timer-tq in bottom half context. */ static __latent_entropy void run_timer_softirq(void) { run_timer_base(BASE_LOCAL); if (IS_ENABLED(CONFIG_NO_HZ_COMMON)) { run_timer_base(BASE_GLOBAL); run_timer_base(BASE_DEF); if (is_timers_nohz_active()) tmigr_handle_remote(); } } /* * Called by the local, per-CPU timer interrupt on SMP. */ static void run_local_timers(void) { struct timer_base *base = this_cpu_ptr(&timer_bases[BASE_LOCAL]); hrtimer_run_queues(); for (int i = 0; i < NR_BASES; i++, base++) { /* * Raise the softirq only if required. 
* * timer_base::next_expiry can be written by a remote CPU while * holding the lock. If this write happens at the same time as * the lockless local read, the sanity checker could complain about * data corruption. * * There are two possible situations where * timer_base::next_expiry is written by a remote CPU: * * 1. Remote CPU expires global timers of this CPU and updates * timer_base::next_expiry of BASE_GLOBAL afterwards in * next_timer_interrupt() or timer_recalc_next_expiry(). The * worst outcome is a superfluous raise of the timer softirq * when the not yet updated value is read. * * 2. A new first pinned timer is enqueued by a remote CPU * and therefore timer_base::next_expiry of BASE_LOCAL is * updated. When this update is missed, this isn't a * problem, as an IPI is executed nevertheless when the CPU * was idle before. When the CPU wasn't idle but the update * is missed, then the timer would expire one jiffy late - * bad luck. * * Those unlikely corner cases where the worst outcome is only a * one jiffy delay or a superfluous raise of the softirq are * cheaper than always doing the check while holding * the lock. * * Possible remote writers are using WRITE_ONCE(). The local reader * therefore uses READ_ONCE(). */ if (time_after_eq(jiffies, READ_ONCE(base->next_expiry)) || (i == BASE_DEF && tmigr_requires_handle_remote())) { raise_timer_softirq(TIMER_SOFTIRQ); return; } } } /* * Called from the timer interrupt handler to charge one tick to the current * process. user_tick is 1 if the tick is user time, 0 for system. */ void update_process_times(int user_tick) { struct task_struct *p = current; /* Note: this timer irq context must be accounted for as well. */ account_process_tick(p, user_tick); run_local_timers(); rcu_sched_clock_irq(user_tick); #ifdef CONFIG_IRQ_WORK if (in_irq()) irq_work_tick(); #endif sched_tick(); if (IS_ENABLED(CONFIG_POSIX_TIMERS)) run_posix_cpu_timers(); } #ifdef CONFIG_HOTPLUG_CPU static void migrate_timer_list(struct timer_base *new_base, struct hlist_head *head) { struct timer_list *timer; int cpu = new_base->cpu; while (!hlist_empty(head)) { timer = hlist_entry(head->first, struct timer_list, entry); detach_timer(timer, false); timer->flags = (timer->flags & ~TIMER_BASEMASK) | cpu; internal_add_timer(new_base, timer); } } int timers_prepare_cpu(unsigned int cpu) { struct timer_base *base; int b; for (b = 0; b < NR_BASES; b++) { base = per_cpu_ptr(&timer_bases[b], cpu); base->clk = jiffies; base->next_expiry = base->clk + NEXT_TIMER_MAX_DELTA; base->next_expiry_recalc = false; base->timers_pending = false; base->is_idle = false; } return 0; } int timers_dead_cpu(unsigned int cpu) { struct timer_base *old_base; struct timer_base *new_base; int b, i; for (b = 0; b < NR_BASES; b++) { old_base = per_cpu_ptr(&timer_bases[b], cpu); new_base = get_cpu_ptr(&timer_bases[b]); /* * The caller is globally serialized and nobody else * takes two locks at once, so deadlock is not possible. */ raw_spin_lock_irq(&new_base->lock); raw_spin_lock_nested(&old_base->lock, SINGLE_DEPTH_NESTING); /* * The current CPU's base clock might be stale. Update it * before moving the timers over. 
*/ forward_timer_base(new_base); WARN_ON_ONCE(old_base->running_timer); old_base->running_timer = NULL; for (i = 0; i < WHEEL_SIZE; i++) migrate_timer_list(new_base, old_base->vectors + i); raw_spin_unlock(&old_base->lock); raw_spin_unlock_irq(&new_base->lock); put_cpu_ptr(&timer_bases); } return 0; } #endif /* CONFIG_HOTPLUG_CPU */ static void __init init_timer_cpu(int cpu) { struct timer_base *base; int i; for (i = 0; i < NR_BASES; i++) { base = per_cpu_ptr(&timer_bases[i], cpu); base->cpu = cpu; raw_spin_lock_init(&base->lock); base->clk = jiffies; base->next_expiry = base->clk + NEXT_TIMER_MAX_DELTA; timer_base_init_expiry_lock(base); } } static void __init init_timer_cpus(void) { int cpu; for_each_possible_cpu(cpu) init_timer_cpu(cpu); } void __init init_timers(void) { init_timer_cpus(); posix_cputimers_init_work(); open_softirq(TIMER_SOFTIRQ, run_timer_softirq); }
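/*
 * Illustrative lifecycle sketch (not part of the original file; 'mydrv'
 * and its callback are hypothetical names): a typical user of the API
 * above arms, re-arms and finally shuts down a timer like this:
 *
 *	timer_setup(&mydrv->timer, mydrv_timeout, 0);
 *	mod_timer(&mydrv->timer, jiffies + HZ);
 *	...
 *	timer_shutdown_sync(&mydrv->timer);
 */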
// SPDX-License-Identifier: GPL-2.0 /* * Copyright (c) 2006-2007 Silicon Graphics, Inc. * All Rights Reserved. */ #include "xfs.h" #include "xfs_mru_cache.h" /* * The MRU Cache data structure consists of a data store, an array of lists and * a lock to protect its internal state. At initialisation time, the client * supplies an element lifetime in milliseconds and a group count, as well as a * function pointer to call when deleting elements. A data structure for * queueing up work in the form of timed callbacks is also included. * * The group count controls how many lists are created, and thereby how finely * the elements are grouped in time. When reaping occurs, all the elements in * all the lists whose time has expired are deleted. * * To give an example of how this works in practice, consider a client that * initialises an MRU Cache with a lifetime of ten seconds and a group count of * five. Five internal lists will be created, each representing a two second * period in time. When the first element is added, time zero for the data * structure is initialised to the current time. * * All the elements added in the first two seconds are appended to the first * list. Elements added in the third second go into the second list, and so on. 
* If an element is accessed at any point, it is removed from its list and * inserted at the head of the current most-recently-used list. * * The reaper function will have nothing to do until at least twelve seconds * have elapsed since the first element was added. The reason for this is that * if it were called at t=11s, there could be elements in the first list that * have only been inactive for nine seconds, so it still does nothing. If it is * called anywhere between t=12 and t=14 seconds, it will delete all the * elements that remain in the first list. It's therefore possible for elements * to remain in the data store even after they've been inactive for up to * (t + t/g) seconds, where t is the inactive element lifetime and g is the * number of groups. * * The above example assumes that the reaper function gets called at least once * every (t/g) seconds. If it is called less frequently, unused elements will * accumulate in the reap list until the reaper function is eventually called. * The current implementation uses work queue callbacks to carefully time the * reaper function calls, so this should happen rarely, if at all. * * From a design perspective, the primary reason for the choice of a list array * representing discrete time intervals is that it's only practical to reap * expired elements in groups of some appreciable size. This automatically * introduces a granularity to element lifetimes, so there's no point storing an * individual timeout with each element that specifies a more precise reap time. * The bonus is a saving of sizeof(long) bytes of memory per element stored. * * The elements could have been stored in just one list, but an array of * counters or pointers would need to be maintained to allow them to be divided * up into discrete time groups. More critically, the process of touching or * removing an element would involve walking large portions of the entire list, * which would have a detrimental effect on performance. The additional memory * requirement for the array of list heads is minimal. * * When an element is touched or deleted, it needs to be removed from its * current list. Doubly linked lists are used to make the list maintenance * portion of these operations O(1). Since reaper timing can be imprecise, * inserts and lookups can occur when there are no free lists available. When * this happens, all the elements on the LRU list need to be migrated to the end * of the reap list. To keep the list maintenance portion of these operations * O(1) also, list tails need to be accessible without walking the entire list. * This is the reason why doubly linked list heads are used. */ /* * An MRU Cache is a dynamic data structure that stores its elements in a way * that allows efficient lookups, but also groups them into discrete time * intervals based on insertion time. This allows elements to be efficiently * and automatically reaped after a fixed period of inactivity. * * When a client data pointer is stored in the MRU Cache it needs to be added to * both the data store and to one of the lists. It must also be possible to * access each of these entries via the other, i.e. to: * * a) Walk a list, removing the corresponding data store entry for each item. * b) Look up a data store entry, then access its list entry directly. * * To achieve both of these goals, each entry must contain both a list entry and * a key, in addition to the user's data pointer. 
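 *
 * Concretely, this pairing is what struct xfs_mru_cache_elem carries (a
 * sketch of the layout declared in xfs_mru_cache.h):
 *
 *	struct xfs_mru_cache_elem {
 *		struct list_head	list_node;
 *		unsigned long		key;
 *	};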
Note that it's not a good * idea to have the client embed one of these structures at the top of their own * data structure, because inserting the same item more than once would most * likely result in a loop in one of the lists. That's a sure-fire recipe for * an infinite loop in the code. */ struct xfs_mru_cache { struct radix_tree_root store; /* Core storage data structure. */ struct list_head *lists; /* Array of lists, one per grp. */ struct list_head reap_list; /* Elements overdue for reaping. */ spinlock_t lock; /* Lock to protect this struct. */ unsigned int grp_count; /* Number of discrete groups. */ unsigned int grp_time; /* Time period spanned by grps. */ unsigned int lru_grp; /* Group containing time zero. */ unsigned long time_zero; /* Time first element was added. */ xfs_mru_cache_free_func_t free_func; /* Function pointer for freeing. */ struct delayed_work work; /* Workqueue data for reaping. */ unsigned int queued; /* work has been queued */ void *data; }; static struct workqueue_struct *xfs_mru_reap_wq; /* * When inserting, destroying or reaping, it's first necessary to update the * lists relative to a particular time. In the case of destroying, that time * will be well in the future to ensure that all items are moved to the reap * list. In all other cases though, the time will be the current time. * * This function enters a loop, moving the contents of the LRU list to the reap * list again and again until either a) the lists are all empty, or b) time zero * has been advanced sufficiently to be within the immediate element lifetime. * * Case a) above is detected by counting how many groups are migrated and * stopping when they've all been moved. Case b) is detected by monitoring the * time_zero field, which is updated as each group is migrated. * * The return value is the earliest time that more migration could be needed, or * zero if there's no need to schedule more work because the lists are empty. */ STATIC unsigned long _xfs_mru_cache_migrate( struct xfs_mru_cache *mru, unsigned long now) { unsigned int grp; unsigned int migrated = 0; struct list_head *lru_list; /* Nothing to do if the data store is empty. */ if (!mru->time_zero) return 0; /* While time zero is older than the time spanned by all the lists. */ while (mru->time_zero <= now - mru->grp_count * mru->grp_time) { /* * If the LRU list isn't empty, migrate its elements to the tail * of the reap list. */ lru_list = mru->lists + mru->lru_grp; if (!list_empty(lru_list)) list_splice_init(lru_list, mru->reap_list.prev); /* * Advance the LRU group number, freeing the old LRU list to * become the new MRU list; advance time zero accordingly. */ mru->lru_grp = (mru->lru_grp + 1) % mru->grp_count; mru->time_zero += mru->grp_time; /* * If reaping is so far behind that all the elements on all the * lists have been migrated to the reap list, it's now empty. */ if (++migrated == mru->grp_count) { mru->lru_grp = 0; mru->time_zero = 0; return 0; } } /* Find the first non-empty list from the LRU end. */ for (grp = 0; grp < mru->grp_count; grp++) { /* Check the grp'th list from the LRU end. */ lru_list = mru->lists + ((mru->lru_grp + grp) % mru->grp_count); if (!list_empty(lru_list)) return mru->time_zero + (mru->grp_count + grp) * mru->grp_time; } /* All the lists must be empty. */ mru->lru_grp = 0; mru->time_zero = 0; return 0; } /* * When inserting or doing a lookup, an element needs to be inserted into the * MRU list. 
The lists must be migrated first to ensure that they're * up-to-date, otherwise the new element could be given a shorter lifetime in * the cache than it should have. */ STATIC void _xfs_mru_cache_list_insert( struct xfs_mru_cache *mru, struct xfs_mru_cache_elem *elem) { unsigned int grp = 0; unsigned long now = jiffies; /* * If the data store is empty, initialise time zero, leave grp set to * zero and start the work queue timer if necessary. Otherwise, set grp * to the number of group times that have elapsed since time zero. */ if (!_xfs_mru_cache_migrate(mru, now)) { mru->time_zero = now; if (!mru->queued) { mru->queued = 1; queue_delayed_work(xfs_mru_reap_wq, &mru->work, mru->grp_count * mru->grp_time); } } else { grp = (now - mru->time_zero) / mru->grp_time; grp = (mru->lru_grp + grp) % mru->grp_count; } /* Insert the element at the tail of the corresponding list. */ list_add_tail(&elem->list_node, mru->lists + grp); } /* * When destroying or reaping, all the elements that were migrated to the reap * list need to be deleted. For each element this involves removing it from the * data store, removing it from the reap list, calling the client's free * function and deleting the element from the element cache. * * We get called holding the mru->lock, which we drop and then reacquire. * Sparse needs special help with this to tell it we know what we are doing. */ STATIC void _xfs_mru_cache_clear_reap_list( struct xfs_mru_cache *mru) __releases(mru->lock) __acquires(mru->lock) { struct xfs_mru_cache_elem *elem, *next; LIST_HEAD(tmp); list_for_each_entry_safe(elem, next, &mru->reap_list, list_node) { /* Remove the element from the data store. */ radix_tree_delete(&mru->store, elem->key); /* * move to a temp list so it can be freed without * needing to hold the lock */ list_move(&elem->list_node, &tmp); } spin_unlock(&mru->lock); list_for_each_entry_safe(elem, next, &tmp, list_node) { list_del_init(&elem->list_node); mru->free_func(mru->data, elem); } spin_lock(&mru->lock); } /* * We fire the reap timer every group expiry interval so * we always have a reaper ready to run. This makes shutdown * and flushing of the reaper easy to do. Hence we need to * keep track of when the next reap must occur so we can determine * at each interval whether there is anything we need to do. */ STATIC void _xfs_mru_cache_reap( struct work_struct *work) { struct xfs_mru_cache *mru = container_of(work, struct xfs_mru_cache, work.work); unsigned long now, next; ASSERT(mru && mru->lists); if (!mru || !mru->lists) return; spin_lock(&mru->lock); next = _xfs_mru_cache_migrate(mru, jiffies); _xfs_mru_cache_clear_reap_list(mru); mru->queued = next; if (mru->queued > 0) { now = jiffies; if (next <= now) next = 0; else next -= now; queue_delayed_work(xfs_mru_reap_wq, &mru->work, next); } spin_unlock(&mru->lock); } int xfs_mru_cache_init(void) { xfs_mru_reap_wq = alloc_workqueue("xfs_mru_cache", XFS_WQFLAGS(WQ_MEM_RECLAIM | WQ_FREEZABLE), 1); if (!xfs_mru_reap_wq) return -ENOMEM; return 0; } void xfs_mru_cache_uninit(void) { destroy_workqueue(xfs_mru_reap_wq); } /* * To initialise a struct xfs_mru_cache pointer, call xfs_mru_cache_create() * with the address of the pointer, a lifetime value in milliseconds, a group * count and a free function to use when deleting elements. This function * returns 0 if the initialisation was successful. 
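 *
 * Illustrative call (values and the free callback name are examples
 * only): a ten-second lifetime split into five groups, matching the
 * example at the top of this file:
 *
 *	struct xfs_mru_cache *mru;
 *	error = xfs_mru_cache_create(&mru, NULL, 10000, 5, my_free_func);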
*/ int xfs_mru_cache_create( struct xfs_mru_cache **mrup, void *data, unsigned int lifetime_ms, unsigned int grp_count, xfs_mru_cache_free_func_t free_func) { struct xfs_mru_cache *mru = NULL; int err = 0, grp; unsigned int grp_time; if (mrup) *mrup = NULL; if (!mrup || !grp_count || !lifetime_ms || !free_func) return -EINVAL; if (!(grp_time = msecs_to_jiffies(lifetime_ms) / grp_count)) return -EINVAL; mru = kzalloc(sizeof(*mru), GFP_KERNEL | __GFP_NOFAIL); if (!mru) return -ENOMEM; /* An extra list is needed to avoid reaping up to a grp_time early. */ mru->grp_count = grp_count + 1; mru->lists = kzalloc(mru->grp_count * sizeof(*mru->lists), GFP_KERNEL | __GFP_NOFAIL); if (!mru->lists) { err = -ENOMEM; goto exit; } for (grp = 0; grp < mru->grp_count; grp++) INIT_LIST_HEAD(mru->lists + grp); /* * We use GFP_KERNEL radix tree preload and do inserts under a * spinlock so GFP_ATOMIC is appropriate for the radix tree itself. */ INIT_RADIX_TREE(&mru->store, GFP_ATOMIC); INIT_LIST_HEAD(&mru->reap_list); spin_lock_init(&mru->lock); INIT_DELAYED_WORK(&mru->work, _xfs_mru_cache_reap); mru->grp_time = grp_time; mru->free_func = free_func; mru->data = data; *mrup = mru; exit: if (err && mru && mru->lists) kfree(mru->lists); if (err && mru) kfree(mru); return err; } /* * Call xfs_mru_cache_flush() to flush out all cached entries, calling their * free functions as they're deleted. When this function returns, the caller is * guaranteed that all the free functions for all the elements have finished * executing and the reaper is not running. */ static void xfs_mru_cache_flush( struct xfs_mru_cache *mru) { if (!mru || !mru->lists) return; spin_lock(&mru->lock); if (mru->queued) { spin_unlock(&mru->lock); cancel_delayed_work_sync(&mru->work); spin_lock(&mru->lock); } _xfs_mru_cache_migrate(mru, jiffies + mru->grp_count * mru->grp_time); _xfs_mru_cache_clear_reap_list(mru); spin_unlock(&mru->lock); } void xfs_mru_cache_destroy( struct xfs_mru_cache *mru) { if (!mru || !mru->lists) return; xfs_mru_cache_flush(mru); kfree(mru->lists); kfree(mru); } /* * To insert an element, call xfs_mru_cache_insert() with the data store, the * element's key and the client data pointer. This function returns 0 on * success or -ENOMEM if memory for the data element couldn't be allocated. */ int xfs_mru_cache_insert( struct xfs_mru_cache *mru, unsigned long key, struct xfs_mru_cache_elem *elem) { int error; ASSERT(mru && mru->lists); if (!mru || !mru->lists) return -EINVAL; if (radix_tree_preload(GFP_KERNEL)) return -ENOMEM; INIT_LIST_HEAD(&elem->list_node); elem->key = key; spin_lock(&mru->lock); error = radix_tree_insert(&mru->store, key, elem); radix_tree_preload_end(); if (!error) _xfs_mru_cache_list_insert(mru, elem); spin_unlock(&mru->lock); return error; } /* * To remove an element without calling the free function, call * xfs_mru_cache_remove() with the data store and the element's key. On success * the client data pointer for the removed element is returned, otherwise this * function will return a NULL pointer. */ struct xfs_mru_cache_elem * xfs_mru_cache_remove( struct xfs_mru_cache *mru, unsigned long key) { struct xfs_mru_cache_elem *elem; ASSERT(mru && mru->lists); if (!mru || !mru->lists) return NULL; spin_lock(&mru->lock); elem = radix_tree_delete(&mru->store, key); if (elem) list_del(&elem->list_node); spin_unlock(&mru->lock); return elem; } /* * To remove an element and call the free function, call xfs_mru_cache_delete() * with the data store and the element's key. 
*/ void xfs_mru_cache_delete( struct xfs_mru_cache *mru, unsigned long key) { struct xfs_mru_cache_elem *elem; elem = xfs_mru_cache_remove(mru, key); if (elem) mru->free_func(mru->data, elem); } /* * To look up an element using its key, call xfs_mru_cache_lookup() with the * data store and the element's key. If found, the element will be moved to the * head of the MRU list to indicate that it's been touched. * * The internal data structures are protected by a spinlock that is STILL HELD * when this function returns. Call xfs_mru_cache_done() to release it. Note * that it is not safe to call any function that might sleep in the interim. * * The implementation could have used reference counting to avoid this * restriction, but since most clients simply want to get, set or test a member * of the returned data structure, the extra per-element memory isn't warranted. * * If the element isn't found, this function returns NULL and the spinlock is * released. xfs_mru_cache_done() should NOT be called when this occurs. * * Because sparse isn't smart enough to know about conditional lock return * status, we need to help it get it right by annotating the path that does * not release the lock. */ struct xfs_mru_cache_elem * xfs_mru_cache_lookup( struct xfs_mru_cache *mru, unsigned long key) { struct xfs_mru_cache_elem *elem; ASSERT(mru && mru->lists); if (!mru || !mru->lists) return NULL; spin_lock(&mru->lock); elem = radix_tree_lookup(&mru->store, key); if (elem) { list_del(&elem->list_node); _xfs_mru_cache_list_insert(mru, elem); __release(mru_lock); /* help sparse not be stupid */ } else spin_unlock(&mru->lock); return elem; } /* * To release the internal data structure spinlock after having performed an * xfs_mru_cache_lookup(), call xfs_mru_cache_done() with the data store * pointer. */ void xfs_mru_cache_done( struct xfs_mru_cache *mru) __releases(mru->lock) { spin_unlock(&mru->lock); }
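/*
 * Editor's note: the following usage sketch is NOT part of the original file.
 * It illustrates how a client of the MRU cache API above might tie the pieces
 * together: embed a struct xfs_mru_cache_elem in the client item, recover the
 * item with container_of() in the free function, and release the lookup lock
 * with xfs_mru_cache_done(). The item type, the key value and the
 * 500 ms / 10-group parameters are hypothetical; <linux/slab.h> is assumed
 * for kzalloc()/kfree().
 */
struct example_item {
	struct xfs_mru_cache_elem	mru_elem;	/* embedded element */
	int				payload;
};

static void example_free(void *data, struct xfs_mru_cache_elem *elem)
{
	/* Called for reaped, flushed and deleted elements. */
	kfree(container_of(elem, struct example_item, mru_elem));
}

static int example_use(void)
{
	struct xfs_mru_cache		*cache;
	struct example_item		*item;
	struct xfs_mru_cache_elem	*elem;
	int				error;

	/* A 500 ms element lifetime spread across 10 reap groups. */
	error = xfs_mru_cache_create(&cache, NULL, 500, 10, example_free);
	if (error)
		return error;

	item = kzalloc(sizeof(*item), GFP_KERNEL);
	if (!item) {
		error = -ENOMEM;
		goto out_destroy;
	}
	item->payload = 42;

	error = xfs_mru_cache_insert(cache, 1, &item->mru_elem);
	if (error) {
		kfree(item);
		goto out_destroy;
	}

	/* Lookup returns with mru->lock held; do not sleep before _done(). */
	elem = xfs_mru_cache_lookup(cache, 1);
	if (elem)
		xfs_mru_cache_done(cache);

out_destroy:
	/* Flushes remaining elements through example_free() and frees cache. */
	xfs_mru_cache_destroy(cache);
	return error;
}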
/* SPDX-License-Identifier: GPL-2.0-only */ /* * linux/include/linux/clk.h * * Copyright (C) 2004 ARM Limited. * Written by Deep Blue Solutions Limited. * Copyright (C) 2011-2012 Linaro Ltd <mturquette@linaro.org> */ #ifndef __LINUX_CLK_H #define __LINUX_CLK_H #include <linux/err.h> #include <linux/kernel.h> #include <linux/notifier.h> struct device; struct clk; struct device_node; struct of_phandle_args; /** * DOC: clk notifier callback types * * PRE_RATE_CHANGE - called immediately before the clk rate is changed, * to indicate that the rate change will proceed. Drivers must * immediately terminate any operations that will be affected by the * rate change. Callbacks may either return NOTIFY_DONE, NOTIFY_OK, * NOTIFY_STOP or NOTIFY_BAD. * * ABORT_RATE_CHANGE - called if the rate change failed for some reason * after PRE_RATE_CHANGE. In this case, all registered notifiers on * the clk will be called with ABORT_RATE_CHANGE. Callbacks must * always return NOTIFY_DONE or NOTIFY_OK. * * POST_RATE_CHANGE - called after the clk rate change has successfully * completed. Callbacks must always return NOTIFY_DONE or NOTIFY_OK. * */ #define PRE_RATE_CHANGE BIT(0) #define POST_RATE_CHANGE BIT(1) #define ABORT_RATE_CHANGE BIT(2) /** * struct clk_notifier - associate a clk with a notifier * @clk: struct clk * to associate the notifier with * @notifier_head: an srcu_notifier_head for this clk * @node: linked list pointers * * A list of struct clk_notifier is maintained by the notifier code. * An entry is created whenever code registers the first notifier on a * particular @clk. Future notifiers on that @clk are added to the * @notifier_head. 
*/ struct clk_notifier { struct clk *clk; struct srcu_notifier_head notifier_head; struct list_head node; }; /** * struct clk_notifier_data - rate data to pass to the notifier callback * @clk: struct clk * being changed * @old_rate: previous rate of this clk * @new_rate: new rate of this clk * * For a pre-notifier, old_rate is the clk's rate before this rate * change, and new_rate is what the rate will be in the future. For a * post-notifier, old_rate and new_rate are both set to the clk's * current rate (this was done to optimize the implementation). */ struct clk_notifier_data { struct clk *clk; unsigned long old_rate; unsigned long new_rate; }; /** * struct clk_bulk_data - Data used for bulk clk operations. * * @id: clock consumer ID * @clk: struct clk * to store the associated clock * * The CLK APIs provide a series of clk_bulk_() API calls as * a convenience to consumers which require multiple clks. This * structure is used to manage data for these calls. */ struct clk_bulk_data { const char *id; struct clk *clk; }; #ifdef CONFIG_COMMON_CLK /** * clk_notifier_register - register a clock rate-change notifier callback * @clk: clock whose rate we are interested in * @nb: notifier block with callback function pointer * * ProTip: debugging across notifier chains can be frustrating. Make sure that * your notifier callback function prints a nice big warning in case of * failure. */ int clk_notifier_register(struct clk *clk, struct notifier_block *nb); /** * clk_notifier_unregister - unregister a clock rate-change notifier callback * @clk: clock whose rate we are no longer interested in * @nb: notifier block which will be unregistered */ int clk_notifier_unregister(struct clk *clk, struct notifier_block *nb); /** * devm_clk_notifier_register - register a managed rate-change notifier callback * @dev: device for clock "consumer" * @clk: clock whose rate we are interested in * @nb: notifier block with callback function pointer * * Returns 0 on success, a negative errno otherwise */ int devm_clk_notifier_register(struct device *dev, struct clk *clk, struct notifier_block *nb); /** * clk_get_accuracy - obtain the clock accuracy in ppb (parts per billion) * for a clock source. * @clk: clock source * * This gets the clock source accuracy expressed in ppb. * A perfect clock returns 0. */ long clk_get_accuracy(struct clk *clk); /** * clk_set_phase - adjust the phase shift of a clock signal * @clk: clock signal source * @degrees: number of degrees the signal is shifted * * Shifts the phase of a clock signal by the specified degrees. Returns 0 on * success, a negative errno otherwise. */ int clk_set_phase(struct clk *clk, int degrees); /** * clk_get_phase - return the phase shift of a clock signal * @clk: clock signal source * * Returns the phase shift of a clock node in degrees, otherwise returns * a negative errno. */ int clk_get_phase(struct clk *clk); /** * clk_set_duty_cycle - adjust the duty cycle ratio of a clock signal * @clk: clock signal source * @num: numerator of the duty cycle ratio to be applied * @den: denominator of the duty cycle ratio to be applied * * Adjust the duty cycle of a clock signal by the specified ratio. Returns 0 on * success, a negative errno otherwise. */ int clk_set_duty_cycle(struct clk *clk, unsigned int num, unsigned int den); /** * clk_get_scaled_duty_cycle - return the duty cycle ratio of a clock signal * @clk: clock signal source * @scale: scaling factor to be applied to represent the ratio as an integer * * Returns the duty cycle ratio multiplied by the scale provided, otherwise * returns a negative errno. 
*/ int clk_get_scaled_duty_cycle(struct clk *clk, unsigned int scale); /** * clk_is_match - check if two clk's point to the same hardware clock * @p: clk compared against q * @q: clk compared against p * * Returns true if the two struct clk pointers both point to the same hardware * clock node. Put differently, returns true if @p and @q * share the same &struct clk_core object. * * Returns false otherwise. Note that two NULL clks are treated as matching. */ bool clk_is_match(const struct clk *p, const struct clk *q); /** * clk_rate_exclusive_get - get exclusivity over the rate control of a * producer * @clk: clock source * * This function allows drivers to get exclusive control over the rate of a * provider. It prevents any other consumer from executing, even indirectly, * operations that could alter the rate of the provider or cause glitches. * * If exclusivity is claimed more than once on a clock, even by the same * driver, the rate effectively gets locked as exclusivity can't be preempted. * * Must not be called from within atomic context. * * Returns success (0) or negative errno. */ int clk_rate_exclusive_get(struct clk *clk); /** * devm_clk_rate_exclusive_get - devm variant of clk_rate_exclusive_get * @dev: device the exclusivity is bound to * @clk: clock source * * Calls clk_rate_exclusive_get() on @clk and registers a devm cleanup handler * on @dev to call clk_rate_exclusive_put(). * * Must not be called from within atomic context. */ int devm_clk_rate_exclusive_get(struct device *dev, struct clk *clk); /** * clk_rate_exclusive_put - release exclusivity over the rate control of a * producer * @clk: clock source * * This function allows drivers to release the exclusivity they previously got * from clk_rate_exclusive_get(). * * The caller must balance the number of clk_rate_exclusive_get() and * clk_rate_exclusive_put() calls. * * Must not be called from within atomic context. */ void clk_rate_exclusive_put(struct clk *clk); #else static inline int clk_notifier_register(struct clk *clk, struct notifier_block *nb) { return -ENOTSUPP; } static inline int clk_notifier_unregister(struct clk *clk, struct notifier_block *nb) { return -ENOTSUPP; } static inline int devm_clk_notifier_register(struct device *dev, struct clk *clk, struct notifier_block *nb) { return -ENOTSUPP; } static inline long clk_get_accuracy(struct clk *clk) { return -ENOTSUPP; } static inline int clk_set_phase(struct clk *clk, int phase) { return -ENOTSUPP; } static inline int clk_get_phase(struct clk *clk) { return -ENOTSUPP; } static inline int clk_set_duty_cycle(struct clk *clk, unsigned int num, unsigned int den) { return -ENOTSUPP; } static inline int clk_get_scaled_duty_cycle(struct clk *clk, unsigned int scale) { return 0; } static inline bool clk_is_match(const struct clk *p, const struct clk *q) { return p == q; } static inline int clk_rate_exclusive_get(struct clk *clk) { return 0; } static inline int devm_clk_rate_exclusive_get(struct device *dev, struct clk *clk) { return 0; } static inline void clk_rate_exclusive_put(struct clk *clk) {} #endif #ifdef CONFIG_HAVE_CLK_PREPARE /** * clk_prepare - prepare a clock source * @clk: clock source * * This prepares the clock source for use. * * Must not be called from within atomic context. */ int clk_prepare(struct clk *clk); int __must_check clk_bulk_prepare(int num_clks, const struct clk_bulk_data *clks); /** * clk_is_enabled_when_prepared - indicate if preparing a clock also enables it. 
* @clk: clock source * * Returns true if clk_prepare() implicitly enables the clock, effectively * making clk_enable()/clk_disable() no-ops, false otherwise. * * This is of interest mainly to the power management code where actually * disabling the clock also requires unpreparing it to have any material * effect. * * Regardless of the value returned here, the caller must always invoke * clk_enable() or clk_prepare_enable() and counterparts for usage counts * to be right. */ bool clk_is_enabled_when_prepared(struct clk *clk); #else static inline int clk_prepare(struct clk *clk) { might_sleep(); return 0; } static inline int __must_check clk_bulk_prepare(int num_clks, const struct clk_bulk_data *clks) { might_sleep(); return 0; } static inline bool clk_is_enabled_when_prepared(struct clk *clk) { return false; } #endif /** * clk_unprepare - undo preparation of a clock source * @clk: clock source * * This undoes a previously prepared clock. The caller must balance * the number of prepare and unprepare calls. * * Must not be called from within atomic context. */ #ifdef CONFIG_HAVE_CLK_PREPARE void clk_unprepare(struct clk *clk); void clk_bulk_unprepare(int num_clks, const struct clk_bulk_data *clks); #else static inline void clk_unprepare(struct clk *clk) { might_sleep(); } static inline void clk_bulk_unprepare(int num_clks, const struct clk_bulk_data *clks) { might_sleep(); } #endif #ifdef CONFIG_HAVE_CLK /** * clk_get - lookup and obtain a reference to a clock producer. * @dev: device for clock "consumer" * @id: clock consumer ID * * Returns a struct clk corresponding to the clock producer, or * valid IS_ERR() condition containing errno. The implementation * uses @dev and @id to determine the clock consumer, and thereby * the clock producer. (IOW, @id may be identical strings, but * clk_get may return different clock producers depending on @dev.) * * Drivers must assume that the clock source is not enabled. * * clk_get should not be called from within interrupt context. */ struct clk *clk_get(struct device *dev, const char *id); /** * clk_bulk_get - lookup and obtain a number of references to clock producer. * @dev: device for clock "consumer" * @num_clks: the number of clk_bulk_data * @clks: the clk_bulk_data table of consumer * * This helper function allows drivers to get several clk consumers in one * operation. If any of the clk cannot be acquired then any clks * that were obtained will be freed before returning to the caller. * * Returns 0 if all clocks specified in clk_bulk_data table are obtained * successfully, or valid IS_ERR() condition containing errno. * The implementation uses @dev and @clk_bulk_data.id to determine the * clock consumer, and thereby the clock producer. * The clock returned is stored in each @clk_bulk_data.clk field. * * Drivers must assume that the clock source is not enabled. * * clk_bulk_get should not be called from within interrupt context. */ int __must_check clk_bulk_get(struct device *dev, int num_clks, struct clk_bulk_data *clks); /** * clk_bulk_get_all - lookup and obtain all available references to clock * producer. * @dev: device for clock "consumer" * @clks: pointer to the clk_bulk_data table of consumer * * This helper function allows drivers to get all clk consumers in one * operation. If any of the clk cannot be acquired then any clks * that were obtained will be freed before returning to the caller. * * Returns a positive value for the number of clocks obtained while the * clock references are stored in the clk_bulk_data table in @clks field. 
* Returns 0 if there're none and a negative value if something failed. * * Drivers must assume that the clock source is not enabled. * * clk_bulk_get should not be called from within interrupt context. */ int __must_check clk_bulk_get_all(struct device *dev, struct clk_bulk_data **clks); /** * clk_bulk_get_optional - lookup and obtain a number of references to clock producer * @dev: device for clock "consumer" * @num_clks: the number of clk_bulk_data * @clks: the clk_bulk_data table of consumer * * Behaves the same as clk_bulk_get() except where there is no clock producer. * In this case, instead of returning -ENOENT, the function returns 0 and * NULL for a clk for which a clock producer could not be determined. */ int __must_check clk_bulk_get_optional(struct device *dev, int num_clks, struct clk_bulk_data *clks); /** * devm_clk_bulk_get - managed get multiple clk consumers * @dev: device for clock "consumer" * @num_clks: the number of clk_bulk_data * @clks: the clk_bulk_data table of consumer * * Return 0 on success, an errno on failure. * * This helper function allows drivers to get several clk * consumers in one operation with management, the clks will * automatically be freed when the device is unbound. */ int __must_check devm_clk_bulk_get(struct device *dev, int num_clks, struct clk_bulk_data *clks); /** * devm_clk_bulk_get_optional - managed get multiple optional consumer clocks * @dev: device for clock "consumer" * @num_clks: the number of clk_bulk_data * @clks: pointer to the clk_bulk_data table of consumer * * Behaves the same as devm_clk_bulk_get() except where there is no clock * producer. In this case, instead of returning -ENOENT, the function returns * NULL for given clk. It is assumed all clocks in clk_bulk_data are optional. * * Returns 0 if all clocks specified in clk_bulk_data table are obtained * successfully or for any clk there was no clk provider available, otherwise * returns valid IS_ERR() condition containing errno. * The implementation uses @dev and @clk_bulk_data.id to determine the * clock consumer, and thereby the clock producer. * The clock returned is stored in each @clk_bulk_data.clk field. * * Drivers must assume that the clock source is not enabled. * * clk_bulk_get should not be called from within interrupt context. */ int __must_check devm_clk_bulk_get_optional(struct device *dev, int num_clks, struct clk_bulk_data *clks); /** * devm_clk_bulk_get_all - managed get multiple clk consumers * @dev: device for clock "consumer" * @clks: pointer to the clk_bulk_data table of consumer * * Returns a positive value for the number of clocks obtained while the * clock references are stored in the clk_bulk_data table in @clks field. * Returns 0 if there're none and a negative value if something failed. * * This helper function allows drivers to get several clk * consumers in one operation with management, the clks will * automatically be freed when the device is unbound. */ int __must_check devm_clk_bulk_get_all(struct device *dev, struct clk_bulk_data **clks); /** * devm_clk_bulk_get_all_enabled - Get and enable all clocks of the consumer (managed) * @dev: device for clock "consumer" * @clks: pointer to the clk_bulk_data table of consumer * * Returns a positive value for the number of clocks obtained while the * clock references are stored in the clk_bulk_data table in @clks field. * Returns 0 if there're none and a negative value if something failed. 
* * This helper function allows drivers to get all clocks of the * consumer and enables them in one operation with management. * The clks will automatically be disabled and freed when the device * is unbound. */ int __must_check devm_clk_bulk_get_all_enabled(struct device *dev, struct clk_bulk_data **clks); /** * devm_clk_get - lookup and obtain a managed reference to a clock producer. * @dev: device for clock "consumer" * @id: clock consumer ID * * Context: May sleep. * * Return: a struct clk corresponding to the clock producer, or * valid IS_ERR() condition containing errno. The implementation * uses @dev and @id to determine the clock consumer, and thereby * the clock producer. (IOW, @id may be identical strings, but * clk_get may return different clock producers depending on @dev.) * * Drivers must assume that the clock source is neither prepared nor * enabled. * * The clock will automatically be freed when the device is unbound * from the bus. */ struct clk *devm_clk_get(struct device *dev, const char *id); /** * devm_clk_get_prepared - devm_clk_get() + clk_prepare() * @dev: device for clock "consumer" * @id: clock consumer ID * * Context: May sleep. * * Return: a struct clk corresponding to the clock producer, or * valid IS_ERR() condition containing errno. The implementation * uses @dev and @id to determine the clock consumer, and thereby * the clock producer. (IOW, @id may be identical strings, but * clk_get may return different clock producers depending on @dev.) * * The returned clk (if valid) is prepared. Drivers must however assume * that the clock is not enabled. * * The clock will automatically be unprepared and freed when the device * is unbound from the bus. */ struct clk *devm_clk_get_prepared(struct device *dev, const char *id); /** * devm_clk_get_enabled - devm_clk_get() + clk_prepare_enable() * @dev: device for clock "consumer" * @id: clock consumer ID * * Context: May sleep. * * Return: a struct clk corresponding to the clock producer, or * valid IS_ERR() condition containing errno. The implementation * uses @dev and @id to determine the clock consumer, and thereby * the clock producer. (IOW, @id may be identical strings, but * clk_get may return different clock producers depending on @dev.) * * The returned clk (if valid) is prepared and enabled. * * The clock will automatically be disabled, unprepared and freed * when the device is unbound from the bus. */ struct clk *devm_clk_get_enabled(struct device *dev, const char *id); /** * devm_clk_get_optional - lookup and obtain a managed reference to an optional * clock producer. * @dev: device for clock "consumer" * @id: clock consumer ID * * Context: May sleep. * * Return: a struct clk corresponding to the clock producer, or * valid IS_ERR() condition containing errno. The implementation * uses @dev and @id to determine the clock consumer, and thereby * the clock producer. If no such clk is found, it returns NULL * which serves as a dummy clk. That's the only difference compared * to devm_clk_get(). * * Drivers must assume that the clock source is neither prepared nor * enabled. * * The clock will automatically be freed when the device is unbound * from the bus. */ struct clk *devm_clk_get_optional(struct device *dev, const char *id); /** * devm_clk_get_optional_prepared - devm_clk_get_optional() + clk_prepare() * @dev: device for clock "consumer" * @id: clock consumer ID * * Context: May sleep. * * Return: a struct clk corresponding to the clock producer, or * valid IS_ERR() condition containing errno. 
The implementation * uses @dev and @id to determine the clock consumer, and thereby * the clock producer. If no such clk is found, it returns NULL * which serves as a dummy clk. That's the only difference compared * to devm_clk_get_prepared(). * * The returned clk (if valid) is prepared. Drivers must however * assume that the clock is not enabled. * * The clock will automatically be unprepared and freed when the * device is unbound from the bus. */ struct clk *devm_clk_get_optional_prepared(struct device *dev, const char *id); /** * devm_clk_get_optional_enabled - devm_clk_get_optional() + * clk_prepare_enable() * @dev: device for clock "consumer" * @id: clock consumer ID * * Context: May sleep. * * Return: a struct clk corresponding to the clock producer, or * valid IS_ERR() condition containing errno. The implementation * uses @dev and @id to determine the clock consumer, and thereby * the clock producer. If no such clk is found, it returns NULL * which serves as a dummy clk. That's the only difference compared * to devm_clk_get_enabled(). * * The returned clk (if valid) is prepared and enabled. * * The clock will automatically be disabled, unprepared and freed * when the device is unbound from the bus. */ struct clk *devm_clk_get_optional_enabled(struct device *dev, const char *id); /** * devm_clk_get_optional_enabled_with_rate - devm_clk_get_optional() + * clk_set_rate() + * clk_prepare_enable() * @dev: device for clock "consumer" * @id: clock consumer ID * @rate: new clock rate * * Context: May sleep. * * Return: a struct clk corresponding to the clock producer, or * valid IS_ERR() condition containing errno. The implementation * uses @dev and @id to determine the clock consumer, and thereby * the clock producer. If no such clk is found, it returns NULL * which serves as a dummy clk. That's the only difference compared * to devm_clk_get_enabled(). * * The returned clk (if valid) is prepared and enabled and rate was set. * * The clock will automatically be disabled, unprepared and freed * when the device is unbound from the bus. */ struct clk *devm_clk_get_optional_enabled_with_rate(struct device *dev, const char *id, unsigned long rate); /** * devm_get_clk_from_child - lookup and obtain a managed reference to a * clock producer from child node. * @dev: device for clock "consumer" * @np: pointer to clock consumer node * @con_id: clock consumer ID * * This function parses the clocks, and uses them to look up the * struct clk from the registered list of clock providers by using * @np and @con_id * * The clock will automatically be freed when the device is unbound * from the bus. */ struct clk *devm_get_clk_from_child(struct device *dev, struct device_node *np, const char *con_id); /** * clk_enable - inform the system when the clock source should be running. * @clk: clock source * * If the clock can not be enabled/disabled, this should return success. * * May be called from atomic contexts. * * Returns success (0) or negative errno. */ int clk_enable(struct clk *clk); /** * clk_bulk_enable - inform the system when the set of clks should be running. * @num_clks: the number of clk_bulk_data * @clks: the clk_bulk_data table of consumer * * May be called from atomic contexts. * * Returns success (0) or negative errno. */ int __must_check clk_bulk_enable(int num_clks, const struct clk_bulk_data *clks); /** * clk_disable - inform the system when the clock source is no longer required. 
* @clk: clock source * * Inform the system that a clock source is no longer required by * a driver and may be shut down. * * May be called from atomic contexts. * * Implementation detail: if the clock source is shared between * multiple drivers, clk_enable() calls must be balanced by the * same number of clk_disable() calls for the clock source to be * disabled. */ void clk_disable(struct clk *clk); /** * clk_bulk_disable - inform the system when the set of clks is no * longer required. * @num_clks: the number of clk_bulk_data * @clks: the clk_bulk_data table of consumer * * Inform the system that a set of clks is no longer required by * a driver and may be shut down. * * May be called from atomic contexts. * * Implementation detail: if the set of clks is shared between * multiple drivers, clk_bulk_enable() calls must be balanced by the * same number of clk_bulk_disable() calls for the clock source to be * disabled. */ void clk_bulk_disable(int num_clks, const struct clk_bulk_data *clks); /** * clk_get_rate - obtain the current clock rate (in Hz) for a clock source. * This is only valid once the clock source has been enabled. * @clk: clock source */ unsigned long clk_get_rate(struct clk *clk); /** * clk_put - "free" the clock source * @clk: clock source * * Note: drivers must ensure that all clk_enable calls made on this * clock source are balanced by clk_disable calls prior to calling * this function. * * clk_put should not be called from within interrupt context. */ void clk_put(struct clk *clk); /** * clk_bulk_put - "free" the clock source * @num_clks: the number of clk_bulk_data * @clks: the clk_bulk_data table of consumer * * Note: drivers must ensure that all clk_bulk_enable calls made on this * clock source are balanced by clk_bulk_disable calls prior to calling * this function. * * clk_bulk_put should not be called from within interrupt context. */ void clk_bulk_put(int num_clks, struct clk_bulk_data *clks); /** * clk_bulk_put_all - "free" all the clock source * @num_clks: the number of clk_bulk_data * @clks: the clk_bulk_data table of consumer * * Note: drivers must ensure that all clk_bulk_enable calls made on this * clock source are balanced by clk_bulk_disable calls prior to calling * this function. * * clk_bulk_put_all should not be called from within interrupt context. */ void clk_bulk_put_all(int num_clks, struct clk_bulk_data *clks); /** * devm_clk_put - "free" a managed clock source * @dev: device used to acquire the clock * @clk: clock source acquired with devm_clk_get() * * Note: drivers must ensure that all clk_enable calls made on this * clock source are balanced by clk_disable calls prior to calling * this function. * * clk_put should not be called from within interrupt context. */ void devm_clk_put(struct device *dev, struct clk *clk); /* * The remaining APIs are optional for machine class support. */ /** * clk_round_rate - adjust a rate to the exact rate a clock can provide * @clk: clock source * @rate: desired clock rate in Hz * * This answers the question "if I were to pass @rate to clk_set_rate(), * what clock rate would I end up with?" without changing the hardware * in any way. In other words: * * rate = clk_round_rate(clk, r); * * and: * * clk_set_rate(clk, r); * rate = clk_get_rate(clk); * * are equivalent except the former does not modify the clock hardware * in any way. * * Returns rounded clock rate in Hz, or negative errno. 
*/ long clk_round_rate(struct clk *clk, unsigned long rate); /** * clk_set_rate - set the clock rate for a clock source * @clk: clock source * @rate: desired clock rate in Hz * * Updating the rate starts at the top-most affected clock and then * walks the tree down to the bottom-most clock that needs updating. * * Returns success (0) or negative errno. */ int clk_set_rate(struct clk *clk, unsigned long rate); /** * clk_set_rate_exclusive - set the clock rate and claim exclusivity over * clock source * @clk: clock source * @rate: desired clock rate in Hz * * This helper function allows drivers to atomically set the rate of a producer * and claim exclusivity over the rate control of the producer. * * It is essentially a combination of clk_set_rate() and * clk_rate_exclusive_get(). Caller must balance this call with a call to * clk_rate_exclusive_put(). * * Returns success (0) or negative errno. */ int clk_set_rate_exclusive(struct clk *clk, unsigned long rate); /** * clk_has_parent - check if a clock is a possible parent for another * @clk: clock source * @parent: parent clock source * * This function can be used in drivers that need to check that a clock can be * the parent of another without actually changing the parent. * * Returns true if @parent is a possible parent for @clk, false otherwise. */ bool clk_has_parent(const struct clk *clk, const struct clk *parent); /** * clk_set_rate_range - set a rate range for a clock source * @clk: clock source * @min: desired minimum clock rate in Hz, inclusive * @max: desired maximum clock rate in Hz, inclusive * * Returns success (0) or negative errno. */ int clk_set_rate_range(struct clk *clk, unsigned long min, unsigned long max); /** * clk_set_min_rate - set a minimum clock rate for a clock source * @clk: clock source * @rate: desired minimum clock rate in Hz, inclusive * * Returns success (0) or negative errno. */ int clk_set_min_rate(struct clk *clk, unsigned long rate); /** * clk_set_max_rate - set a maximum clock rate for a clock source * @clk: clock source * @rate: desired maximum clock rate in Hz, inclusive * * Returns success (0) or negative errno. */ int clk_set_max_rate(struct clk *clk, unsigned long rate); /** * clk_set_parent - set the parent clock source for this clock * @clk: clock source * @parent: parent clock source * * Returns success (0) or negative errno. */ int clk_set_parent(struct clk *clk, struct clk *parent); /** * clk_get_parent - get the parent clock source for this clock * @clk: clock source * * Returns struct clk corresponding to parent clock source, or * valid IS_ERR() condition containing errno. */ struct clk *clk_get_parent(struct clk *clk); /** * clk_get_sys - get a clock based upon the device name * @dev_id: device name * @con_id: connection ID * * Returns a struct clk corresponding to the clock producer, or * valid IS_ERR() condition containing errno. The implementation * uses @dev_id and @con_id to determine the clock consumer, and * thereby the clock producer. In contrast to clk_get() this function * takes the device name instead of the device itself for identification. * * Drivers must assume that the clock source is not enabled. * * clk_get_sys should not be called from within interrupt context. */ struct clk *clk_get_sys(const char *dev_id, const char *con_id); /** * clk_save_context - save clock context for poweroff * * Saves the context of the clock register for powerstates in which the * contents of the registers will be lost. Occurs deep within the suspend * code so locking is not necessary. 
*/ int clk_save_context(void); /** * clk_restore_context - restore clock context after poweroff * * This occurs with all clocks enabled. Occurs deep within the resume code * so locking is not necessary. */ void clk_restore_context(void); #else /* !CONFIG_HAVE_CLK */ static inline struct clk *clk_get(struct device *dev, const char *id) { return NULL; } static inline int __must_check clk_bulk_get(struct device *dev, int num_clks, struct clk_bulk_data *clks) { return 0; } static inline int __must_check clk_bulk_get_optional(struct device *dev, int num_clks, struct clk_bulk_data *clks) { return 0; } static inline int __must_check clk_bulk_get_all(struct device *dev, struct clk_bulk_data **clks) { return 0; } static inline struct clk *devm_clk_get(struct device *dev, const char *id) { return NULL; } static inline struct clk *devm_clk_get_prepared(struct device *dev, const char *id) { return NULL; } static inline struct clk *devm_clk_get_enabled(struct device *dev, const char *id) { return NULL; } static inline struct clk *devm_clk_get_optional(struct device *dev, const char *id) { return NULL; } static inline struct clk *devm_clk_get_optional_prepared(struct device *dev, const char *id) { return NULL; } static inline struct clk *devm_clk_get_optional_enabled(struct device *dev, const char *id) { return NULL; } static inline struct clk * devm_clk_get_optional_enabled_with_rate(struct device *dev, const char *id, unsigned long rate) { return NULL; } static inline int __must_check devm_clk_bulk_get(struct device *dev, int num_clks, struct clk_bulk_data *clks) { return 0; } static inline int __must_check devm_clk_bulk_get_optional(struct device *dev, int num_clks, struct clk_bulk_data *clks) { return 0; } static inline int __must_check devm_clk_bulk_get_all(struct device *dev, struct clk_bulk_data **clks) { return 0; } static inline int __must_check devm_clk_bulk_get_all_enabled(struct device *dev, struct clk_bulk_data **clks) { return 0; } static inline struct clk *devm_get_clk_from_child(struct device *dev, struct device_node *np, const char *con_id) { return NULL; } static inline void clk_put(struct clk *clk) {} static inline void clk_bulk_put(int num_clks, struct clk_bulk_data *clks) {} static inline void clk_bulk_put_all(int num_clks, struct clk_bulk_data *clks) {} static inline void devm_clk_put(struct device *dev, struct clk *clk) {} static inline int clk_enable(struct clk *clk) { return 0; } static inline int __must_check clk_bulk_enable(int num_clks, const struct clk_bulk_data *clks) { return 0; } static inline void clk_disable(struct clk *clk) {} static inline void clk_bulk_disable(int num_clks, const struct clk_bulk_data *clks) {} static inline unsigned long clk_get_rate(struct clk *clk) { return 0; } static inline int clk_set_rate(struct clk *clk, unsigned long rate) { return 0; } static inline int clk_set_rate_exclusive(struct clk *clk, unsigned long rate) { return 0; } static inline long clk_round_rate(struct clk *clk, unsigned long rate) { return 0; } static inline bool clk_has_parent(struct clk *clk, struct clk *parent) { return true; } static inline int clk_set_rate_range(struct clk *clk, unsigned long min, unsigned long max) { return 0; } static inline int clk_set_min_rate(struct clk *clk, unsigned long rate) { return 0; } static inline int clk_set_max_rate(struct clk *clk, unsigned long rate) { return 0; } static inline int clk_set_parent(struct clk *clk, struct clk *parent) { return 0; } static inline struct clk *clk_get_parent(struct clk *clk) { return NULL; } static inline 
struct clk *clk_get_sys(const char *dev_id, const char *con_id) { return NULL; } static inline int clk_save_context(void) { return 0; } static inline void clk_restore_context(void) {} #endif /* clk_prepare_enable helps cases using clk_enable in non-atomic context. */ static inline int clk_prepare_enable(struct clk *clk) { int ret; ret = clk_prepare(clk); if (ret) return ret; ret = clk_enable(clk); if (ret) clk_unprepare(clk); return ret; } /* clk_disable_unprepare helps cases using clk_disable in non-atomic context. */ static inline void clk_disable_unprepare(struct clk *clk) { clk_disable(clk); clk_unprepare(clk); } static inline int __must_check clk_bulk_prepare_enable(int num_clks, const struct clk_bulk_data *clks) { int ret; ret = clk_bulk_prepare(num_clks, clks); if (ret) return ret; ret = clk_bulk_enable(num_clks, clks); if (ret) clk_bulk_unprepare(num_clks, clks); return ret; } static inline void clk_bulk_disable_unprepare(int num_clks, const struct clk_bulk_data *clks) { clk_bulk_disable(num_clks, clks); clk_bulk_unprepare(num_clks, clks); } /** * clk_drop_range - Reset any range set on that clock * @clk: clock source * * Returns success (0) or negative errno. */ static inline int clk_drop_range(struct clk *clk) { return clk_set_rate_range(clk, 0, ULONG_MAX); } /** * clk_get_optional - lookup and obtain a reference to an optional clock * producer. * @dev: device for clock "consumer" * @id: clock consumer ID * * Behaves the same as clk_get() except where there is no clock producer. In * this case, instead of returning -ENOENT, the function returns NULL. */ static inline struct clk *clk_get_optional(struct device *dev, const char *id) { struct clk *clk = clk_get(dev, id); if (clk == ERR_PTR(-ENOENT)) return NULL; return clk; } #if defined(CONFIG_OF) && defined(CONFIG_COMMON_CLK) struct clk *of_clk_get(struct device_node *np, int index); struct clk *of_clk_get_by_name(struct device_node *np, const char *name); struct clk *of_clk_get_from_provider(struct of_phandle_args *clkspec); #else static inline struct clk *of_clk_get(struct device_node *np, int index) { return ERR_PTR(-ENOENT); } static inline struct clk *of_clk_get_by_name(struct device_node *np, const char *name) { return ERR_PTR(-ENOENT); } static inline struct clk *of_clk_get_from_provider(struct of_phandle_args *clkspec) { return ERR_PTR(-ENOENT); } #endif #endif
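/*
 * Editor's note: illustrative consumer sketch, NOT part of the original
 * header. A hypothetical platform driver gets a clock, negotiates a rate with
 * clk_round_rate()/clk_set_rate(), and uses the clk_prepare_enable() /
 * clk_disable_unprepare() helpers defined above. The "bus" clock name and the
 * 100 MHz target are made up.
 */
#include <linux/clk.h>
#include <linux/platform_device.h>

static int example_probe(struct platform_device *pdev)
{
	struct clk *clk;
	long rounded;
	int ret;

	/* Managed lookup: the reference is dropped automatically on unbind. */
	clk = devm_clk_get(&pdev->dev, "bus");
	if (IS_ERR(clk))
		return PTR_ERR(clk);

	/* Ask what rate we would actually get before committing to it. */
	rounded = clk_round_rate(clk, 100000000);
	if (rounded < 0)
		return rounded;

	ret = clk_set_rate(clk, rounded);
	if (ret)
		return ret;

	/* prepare (may sleep) and enable (atomic-safe) in one call. */
	ret = clk_prepare_enable(clk);
	if (ret)
		return ret;

	dev_info(&pdev->dev, "clock running at %lu Hz\n", clk_get_rate(clk));

	/* Balance the prepare and enable counts before returning here. */
	clk_disable_unprepare(clk);
	return 0;
}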
// SPDX-License-Identifier: GPL-2.0-only /* * Landlock - Ptrace and scope hooks * * Copyright © 2017-2020 Mickaël Salaün <mic@digikod.net> * Copyright © 2019-2020 ANSSI * Copyright © 2024-2025 Microsoft Corporation */ #include <asm/current.h> #include <linux/cleanup.h> #include <linux/cred.h> #include <linux/errno.h> #include <linux/kernel.h> #include <linux/lsm_audit.h> #include <linux/lsm_hooks.h> #include <linux/rcupdate.h> #include <linux/sched.h> #include <linux/sched/signal.h> #include <net/af_unix.h> #include <net/sock.h> #include "audit.h" #include "common.h" #include "cred.h" #include "domain.h" #include "fs.h" #include "ruleset.h" #include "setup.h" #include "task.h" /** * domain_scope_le - Checks domain ordering for scoped ptrace * * @parent: Parent domain. * @child: Potential child of @parent. * * Checks if the @parent domain is less or equal to (i.e. an ancestor, which * means a subset of) the @child domain. */ static bool domain_scope_le(const struct landlock_ruleset *const parent, const struct landlock_ruleset *const child) { const struct landlock_hierarchy *walker; /* Quick return for non-landlocked tasks. */ if (!parent) return true; if (!child) return false; for (walker = child->hierarchy; walker; walker = walker->parent) { if (walker == parent->hierarchy) /* @parent is in the scoped hierarchy of @child. */ return true; } /* There is no relationship between @parent and @child. 
*/ return false; } static int domain_ptrace(const struct landlock_ruleset *const parent, const struct landlock_ruleset *const child) { if (domain_scope_le(parent, child)) return 0; return -EPERM; } /** * hook_ptrace_access_check - Determines whether the current process may access * another * * @child: Process to be accessed. * @mode: Mode of attachment. * * If the current task has Landlock rules, then the child must have at least * the same rules. Else denied. * * Determines whether a process may access another, returning 0 if permission * granted, -errno if denied. */ static int hook_ptrace_access_check(struct task_struct *const child, const unsigned int mode) { const struct landlock_cred_security *parent_subject; const struct landlock_ruleset *child_dom; int err; /* Quick return for non-landlocked tasks. */ parent_subject = landlock_cred(current_cred()); if (!parent_subject) return 0; scoped_guard(rcu) { child_dom = landlock_get_task_domain(child); err = domain_ptrace(parent_subject->domain, child_dom); } if (!err) return 0; /* * For the ptrace_access_check case, we log the current/parent domain * and the child task. */ if (!(mode & PTRACE_MODE_NOAUDIT)) landlock_log_denial(parent_subject, &(struct landlock_request) { .type = LANDLOCK_REQUEST_PTRACE, .audit = { .type = LSM_AUDIT_DATA_TASK, .u.tsk = child, }, .layer_plus_one = parent_subject->domain->num_layers, }); return err; } /** * hook_ptrace_traceme - Determines whether another process may trace the * current one * * @parent: Task proposed to be the tracer. * * If the parent has Landlock rules, then the current task must have the same * or more rules. Else denied. * * Determines whether the nominated task is permitted to trace the current * process, returning 0 if permission is granted, -errno if denied. */ static int hook_ptrace_traceme(struct task_struct *const parent) { const struct landlock_cred_security *parent_subject; const struct landlock_ruleset *child_dom; int err; child_dom = landlock_get_current_domain(); guard(rcu)(); parent_subject = landlock_cred(__task_cred(parent)); err = domain_ptrace(parent_subject->domain, child_dom); if (!err) return 0; /* * For the ptrace_traceme case, we log the domain which is the cause of * the denial, which means the parent domain instead of the current * domain. This may look unusual because the ptrace_traceme action is a * request to be traced, but the semantic is consistent with * hook_ptrace_access_check(). */ landlock_log_denial(parent_subject, &(struct landlock_request) { .type = LANDLOCK_REQUEST_PTRACE, .audit = { .type = LSM_AUDIT_DATA_TASK, .u.tsk = current, }, .layer_plus_one = parent_subject->domain->num_layers, }); return err; } /** * domain_is_scoped - Checks if the client domain is scoped in the same * domain as the server. * * @client: IPC sender domain. * @server: IPC receiver domain. * @scope: The scope restriction criteria. * * Returns: True if the @client domain is scoped to access the @server, * unless the @server is also scoped in the same domain as @client. 
*/ static bool domain_is_scoped(const struct landlock_ruleset *const client, const struct landlock_ruleset *const server, access_mask_t scope) { int client_layer, server_layer; const struct landlock_hierarchy *client_walker, *server_walker; /* Quick return if client has no domain */ if (WARN_ON_ONCE(!client)) return false; client_layer = client->num_layers - 1; client_walker = client->hierarchy; /* * client_layer must be a signed integer with greater capacity * than client->num_layers to ensure the following loop stops. */ BUILD_BUG_ON(sizeof(client_layer) > sizeof(client->num_layers)); server_layer = server ? (server->num_layers - 1) : -1; server_walker = server ? server->hierarchy : NULL; /* * Walks client's parent domains down to the same hierarchy level * as the server's domain, and checks that none of these client's * parent domains are scoped. */ for (; client_layer > server_layer; client_layer--) { if (landlock_get_scope_mask(client, client_layer) & scope) return true; client_walker = client_walker->parent; } /* * Walks server's parent domains down to the same hierarchy level as * the client's domain. */ for (; server_layer > client_layer; server_layer--) server_walker = server_walker->parent; for (; client_layer >= 0; client_layer--) { if (landlock_get_scope_mask(client, client_layer) & scope) { /* * Client and server are at the same level in the * hierarchy. If the client is scoped, the request is * only allowed if this domain is also a server's * ancestor. */ return server_walker != client_walker; } client_walker = client_walker->parent; server_walker = server_walker->parent; } return false; } static bool sock_is_scoped(struct sock *const other, const struct landlock_ruleset *const domain) { const struct landlock_ruleset *dom_other; /* The credentials will not change. */ lockdep_assert_held(&unix_sk(other)->lock); dom_other = landlock_cred(other->sk_socket->file->f_cred)->domain; return domain_is_scoped(domain, dom_other, LANDLOCK_SCOPE_ABSTRACT_UNIX_SOCKET); } static bool is_abstract_socket(struct sock *const sock) { struct unix_address *addr = unix_sk(sock)->addr; if (!addr) return false; if (addr->len >= offsetof(struct sockaddr_un, sun_path) + 1 && addr->name->sun_path[0] == '\0') return true; return false; } static const struct access_masks unix_scope = { .scope = LANDLOCK_SCOPE_ABSTRACT_UNIX_SOCKET, }; static int hook_unix_stream_connect(struct sock *const sock, struct sock *const other, struct sock *const newsk) { size_t handle_layer; const struct landlock_cred_security *const subject = landlock_get_applicable_subject(current_cred(), unix_scope, &handle_layer); /* Quick return for non-landlocked tasks. */ if (!subject) return 0; if (!is_abstract_socket(other)) return 0; if (!sock_is_scoped(other, subject->domain)) return 0; landlock_log_denial(subject, &(struct landlock_request) { .type = LANDLOCK_REQUEST_SCOPE_ABSTRACT_UNIX_SOCKET, .audit = { .type = LSM_AUDIT_DATA_NET, .u.net = &(struct lsm_network_audit) { .sk = other, }, }, .layer_plus_one = handle_layer + 1, }); return -EPERM; } static int hook_unix_may_send(struct socket *const sock, struct socket *const other) { size_t handle_layer; const struct landlock_cred_security *const subject = landlock_get_applicable_subject(current_cred(), unix_scope, &handle_layer); if (!subject) return 0; /* * Checks if this datagram socket was already allowed to be connected * to other. 
*/ if (unix_peer(sock->sk) == other->sk) return 0; if (!is_abstract_socket(other->sk)) return 0; if (!sock_is_scoped(other->sk, subject->domain)) return 0; landlock_log_denial(subject, &(struct landlock_request) { .type = LANDLOCK_REQUEST_SCOPE_ABSTRACT_UNIX_SOCKET, .audit = { .type = LSM_AUDIT_DATA_NET, .u.net = &(struct lsm_network_audit) { .sk = other->sk, }, }, .layer_plus_one = handle_layer + 1, }); return -EPERM; } static const struct access_masks signal_scope = { .scope = LANDLOCK_SCOPE_SIGNAL, }; static int hook_task_kill(struct task_struct *const p, struct kernel_siginfo *const info, const int sig, const struct cred *cred) { bool is_scoped; size_t handle_layer; const struct landlock_cred_security *subject; if (!cred) { /* * Always allow sending signals between threads of the same process. * This is required for process credential changes by the Native POSIX * Threads Library and implemented by the set*id(2) wrappers and * libcap(3) with tgkill(2). See nptl(7) and libpsx(3). * * This exception is similar to the __ptrace_may_access() one. */ if (same_thread_group(p, current)) return 0; /* Not dealing with USB IO. */ cred = current_cred(); } subject = landlock_get_applicable_subject(cred, signal_scope, &handle_layer); /* Quick return for non-landlocked tasks. */ if (!subject) return 0; scoped_guard(rcu) { is_scoped = domain_is_scoped(subject->domain, landlock_get_task_domain(p), signal_scope.scope); } if (!is_scoped) return 0; landlock_log_denial(subject, &(struct landlock_request) { .type = LANDLOCK_REQUEST_SCOPE_SIGNAL, .audit = { .type = LSM_AUDIT_DATA_TASK, .u.tsk = p, }, .layer_plus_one = handle_layer + 1, }); return -EPERM; } static int hook_file_send_sigiotask(struct task_struct *tsk, struct fown_struct *fown, int signum) { const struct landlock_cred_security *subject; bool is_scoped = false; /* Lock already held by send_sigio() and send_sigurg(). */ lockdep_assert_held(&fown->lock); subject = &landlock_file(fown->file)->fown_subject; /* * Quick return for unowned socket. * * subject->domain has already been filtered when saved by * hook_file_set_fowner(), so there is no need to call * landlock_get_applicable_subject() here. */ if (!subject->domain) return 0; scoped_guard(rcu) { is_scoped = domain_is_scoped(subject->domain, landlock_get_task_domain(tsk), signal_scope.scope); } if (!is_scoped) return 0; landlock_log_denial(subject, &(struct landlock_request) { .type = LANDLOCK_REQUEST_SCOPE_SIGNAL, .audit = { .type = LSM_AUDIT_DATA_TASK, .u.tsk = tsk, }, #ifdef CONFIG_AUDIT .layer_plus_one = landlock_file(fown->file)->fown_layer + 1, #endif /* CONFIG_AUDIT */ }); return -EPERM; } static struct security_hook_list landlock_hooks[] __ro_after_init = { LSM_HOOK_INIT(ptrace_access_check, hook_ptrace_access_check), LSM_HOOK_INIT(ptrace_traceme, hook_ptrace_traceme), LSM_HOOK_INIT(unix_stream_connect, hook_unix_stream_connect), LSM_HOOK_INIT(unix_may_send, hook_unix_may_send), LSM_HOOK_INIT(task_kill, hook_task_kill), LSM_HOOK_INIT(file_send_sigiotask, hook_file_send_sigiotask), }; __init void landlock_add_task_hooks(void) { security_add_hooks(landlock_hooks, ARRAY_SIZE(landlock_hooks), &landlock_lsmid); }
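/*
 * Editor's note: the following userspace sketch is NOT part of the original
 * file. It shows the restriction the hooks above enforce from the consumer
 * side: once a process enables LANDLOCK_SCOPE_SIGNAL and
 * LANDLOCK_SCOPE_ABSTRACT_UNIX_SOCKET, signalling or connecting to targets
 * outside its domain is denied with -EPERM by hook_task_kill() and the unix
 * socket hooks. Assumes a UAPI with the "scoped" ruleset attribute
 * (Linux 6.12+); error handling is kept minimal.
 */
#include <linux/landlock.h>
#include <sys/prctl.h>
#include <sys/syscall.h>
#include <unistd.h>

static int restrict_scopes(void)
{
	const struct landlock_ruleset_attr attr = {
		.scoped = LANDLOCK_SCOPE_SIGNAL |
			  LANDLOCK_SCOPE_ABSTRACT_UNIX_SOCKET,
	};
	int ruleset_fd;

	ruleset_fd = syscall(SYS_landlock_create_ruleset, &attr,
			     sizeof(attr), 0);
	if (ruleset_fd < 0)
		return -1;

	/* Required before enforcing any Landlock ruleset. */
	if (prctl(PR_SET_NO_NEW_PRIVS, 1, 0, 0, 0) ||
	    syscall(SYS_landlock_restrict_self, ruleset_fd, 0)) {
		close(ruleset_fd);
		return -1;
	}
	close(ruleset_fd);
	return 0;
}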
// SPDX-License-Identifier: GPL-2.0-only
/*
 * Copyright (C) 2016 Facebook
 * Copyright (C) 2013-2014 Jens Axboe
 */

#include <linux/sched.h>
#include <linux/random.h>
#include <linux/sbitmap.h>
#include <linux/seq_file.h>

static int init_alloc_hint(struct sbitmap *sb, gfp_t flags)
{
	unsigned depth = sb->depth;

	sb->alloc_hint = alloc_percpu_gfp(unsigned int, flags);
	if (!sb->alloc_hint)
		return -ENOMEM;

	if (depth && !sb->round_robin) {
		int i;

		for_each_possible_cpu(i)
			*per_cpu_ptr(sb->alloc_hint, i) = get_random_u32_below(depth);
	}
	return 0;
}

static inline unsigned update_alloc_hint_before_get(struct sbitmap *sb,
						    unsigned int depth)
{
	unsigned hint;

	hint = this_cpu_read(*sb->alloc_hint);
	if (unlikely(hint >= depth)) {
		hint = depth ? get_random_u32_below(depth) : 0;
		this_cpu_write(*sb->alloc_hint, hint);
	}

	return hint;
}

static inline void update_alloc_hint_after_get(struct sbitmap *sb,
					       unsigned int depth,
					       unsigned int hint,
					       unsigned int nr)
{
	if (nr == -1) {
		/* If the map is full, a hint won't do us much good. */
		this_cpu_write(*sb->alloc_hint, 0);
	} else if (nr == hint || unlikely(sb->round_robin)) {
		/* Only update the hint if we used it. */
		hint = nr + 1;
		if (hint >= depth - 1)
			hint = 0;
		this_cpu_write(*sb->alloc_hint, hint);
	}
}
/*
 * See if we have deferred clears that we can batch move
 */
static inline bool sbitmap_deferred_clear(struct sbitmap_word *map,
		unsigned int depth, unsigned int alloc_hint, bool wrap)
{
	unsigned long mask, word_mask;

	guard(raw_spinlock_irqsave)(&map->swap_lock);

	if (!map->cleared) {
		if (depth == 0)
			return false;

		word_mask = (~0UL) >> (BITS_PER_LONG - depth);
		/*
		 * The current behavior is to always retry after moving
		 * ->cleared to word, and we change it to retry in case of
		 * any free bits. To avoid an infinite loop, we need to take
		 * wrap & alloc_hint into account, otherwise a soft lockup
		 * may occur.
		 */
		if (!wrap && alloc_hint)
			word_mask &= ~((1UL << alloc_hint) - 1);

		return (READ_ONCE(map->word) & word_mask) != word_mask;
	}

	/*
	 * First get a stable cleared mask, setting the old mask to 0.
	 */
	mask = xchg(&map->cleared, 0);

	/*
	 * Now clear the masked bits in our free word
	 */
	atomic_long_andnot(mask, (atomic_long_t *)&map->word);
	BUILD_BUG_ON(sizeof(atomic_long_t) != sizeof(map->word));
	return true;
}

int sbitmap_init_node(struct sbitmap *sb, unsigned int depth, int shift,
		      gfp_t flags, int node, bool round_robin,
		      bool alloc_hint)
{
	unsigned int bits_per_word;
	int i;

	if (shift < 0)
		shift = sbitmap_calculate_shift(depth);

	bits_per_word = 1U << shift;
	if (bits_per_word > BITS_PER_LONG)
		return -EINVAL;

	sb->shift = shift;
	sb->depth = depth;
	sb->map_nr = DIV_ROUND_UP(sb->depth, bits_per_word);
	sb->round_robin = round_robin;

	if (depth == 0) {
		sb->map = NULL;
		return 0;
	}

	if (alloc_hint) {
		if (init_alloc_hint(sb, flags))
			return -ENOMEM;
	} else {
		sb->alloc_hint = NULL;
	}

	sb->map = kvzalloc_node(sb->map_nr * sizeof(*sb->map), flags, node);
	if (!sb->map) {
		free_percpu(sb->alloc_hint);
		return -ENOMEM;
	}

	for (i = 0; i < sb->map_nr; i++)
		raw_spin_lock_init(&sb->map[i].swap_lock);

	return 0;
}
EXPORT_SYMBOL_GPL(sbitmap_init_node);

void sbitmap_resize(struct sbitmap *sb, unsigned int depth)
{
	unsigned int bits_per_word = 1U << sb->shift;
	unsigned int i;

	for (i = 0; i < sb->map_nr; i++)
		sbitmap_deferred_clear(&sb->map[i], 0, 0, 0);

	sb->depth = depth;
	sb->map_nr = DIV_ROUND_UP(sb->depth, bits_per_word);
}
EXPORT_SYMBOL_GPL(sbitmap_resize);

static int __sbitmap_get_word(unsigned long *word, unsigned long depth,
			      unsigned int hint, bool wrap)
{
	int nr;

	/* don't wrap if starting from 0 */
	wrap = wrap && hint;

	while (1) {
		nr = find_next_zero_bit(word, depth, hint);
		if (unlikely(nr >= depth)) {
			/*
			 * We started with an offset, and we didn't reset the
			 * offset to 0 in a failure case, so start from 0 to
			 * exhaust the map.
			 */
			if (hint && wrap) {
				hint = 0;
				continue;
			}
			return -1;
		}

		if (!test_and_set_bit_lock(nr, word))
			break;

		hint = nr + 1;
		if (hint >= depth - 1)
			hint = 0;
	}

	return nr;
}

static int sbitmap_find_bit_in_word(struct sbitmap_word *map,
				    unsigned int depth,
				    unsigned int alloc_hint,
				    bool wrap)
{
	int nr;

	do {
		nr = __sbitmap_get_word(&map->word, depth, alloc_hint, wrap);
		if (nr != -1)
			break;
		if (!sbitmap_deferred_clear(map, depth, alloc_hint, wrap))
			break;
	} while (1);

	return nr;
}

static int sbitmap_find_bit(struct sbitmap *sb,
			    unsigned int depth,
			    unsigned int index,
			    unsigned int alloc_hint,
			    bool wrap)
{
	unsigned int i;
	int nr = -1;

	for (i = 0; i < sb->map_nr; i++) {
		nr = sbitmap_find_bit_in_word(&sb->map[index],
					      min_t(unsigned int,
						    __map_depth(sb, index),
						    depth),
					      alloc_hint, wrap);

		if (nr != -1) {
			nr += index << sb->shift;
			break;
		}

		/* Jump to next index. */
		alloc_hint = 0;
		if (++index >= sb->map_nr)
			index = 0;
	}

	return nr;
}
static int __sbitmap_get(struct sbitmap *sb, unsigned int alloc_hint)
{
	unsigned int index;

	index = SB_NR_TO_INDEX(sb, alloc_hint);

	/*
	 * Unless we're doing round robin tag allocation, just use the
	 * alloc_hint to find the right word index. No point in looping
	 * twice in find_next_zero_bit() for that case.
	 */
	if (sb->round_robin)
		alloc_hint = SB_NR_TO_BIT(sb, alloc_hint);
	else
		alloc_hint = 0;

	return sbitmap_find_bit(sb, UINT_MAX, index, alloc_hint,
				!sb->round_robin);
}

int sbitmap_get(struct sbitmap *sb)
{
	int nr;
	unsigned int hint, depth;

	if (WARN_ON_ONCE(unlikely(!sb->alloc_hint)))
		return -1;

	depth = READ_ONCE(sb->depth);
	hint = update_alloc_hint_before_get(sb, depth);
	nr = __sbitmap_get(sb, hint);
	update_alloc_hint_after_get(sb, depth, hint, nr);

	return nr;
}
EXPORT_SYMBOL_GPL(sbitmap_get);

static int __sbitmap_get_shallow(struct sbitmap *sb,
				 unsigned int alloc_hint,
				 unsigned long shallow_depth)
{
	unsigned int index;

	index = SB_NR_TO_INDEX(sb, alloc_hint);
	alloc_hint = SB_NR_TO_BIT(sb, alloc_hint);

	return sbitmap_find_bit(sb, shallow_depth, index, alloc_hint, true);
}

int sbitmap_get_shallow(struct sbitmap *sb, unsigned long shallow_depth)
{
	int nr;
	unsigned int hint, depth;

	if (WARN_ON_ONCE(unlikely(!sb->alloc_hint)))
		return -1;

	depth = READ_ONCE(sb->depth);
	hint = update_alloc_hint_before_get(sb, depth);
	nr = __sbitmap_get_shallow(sb, hint, shallow_depth);
	update_alloc_hint_after_get(sb, depth, hint, nr);

	return nr;
}
EXPORT_SYMBOL_GPL(sbitmap_get_shallow);

bool sbitmap_any_bit_set(const struct sbitmap *sb)
{
	unsigned int i;

	for (i = 0; i < sb->map_nr; i++) {
		if (sb->map[i].word & ~sb->map[i].cleared)
			return true;
	}
	return false;
}
EXPORT_SYMBOL_GPL(sbitmap_any_bit_set);

static unsigned int __sbitmap_weight(const struct sbitmap *sb, bool set)
{
	unsigned int i, weight = 0;

	for (i = 0; i < sb->map_nr; i++) {
		const struct sbitmap_word *word = &sb->map[i];
		unsigned int word_depth = __map_depth(sb, i);

		if (set)
			weight += bitmap_weight(&word->word, word_depth);
		else
			weight += bitmap_weight(&word->cleared, word_depth);
	}
	return weight;
}

static unsigned int sbitmap_cleared(const struct sbitmap *sb)
{
	return __sbitmap_weight(sb, false);
}

unsigned int sbitmap_weight(const struct sbitmap *sb)
{
	return __sbitmap_weight(sb, true) - sbitmap_cleared(sb);
}
EXPORT_SYMBOL_GPL(sbitmap_weight);

void sbitmap_show(struct sbitmap *sb, struct seq_file *m)
{
	seq_printf(m, "depth=%u\n", sb->depth);
	seq_printf(m, "busy=%u\n", sbitmap_weight(sb));
	seq_printf(m, "cleared=%u\n", sbitmap_cleared(sb));
	seq_printf(m, "bits_per_word=%u\n", 1U << sb->shift);
	seq_printf(m, "map_nr=%u\n", sb->map_nr);
}
EXPORT_SYMBOL_GPL(sbitmap_show);

static inline void emit_byte(struct seq_file *m, unsigned int offset, u8 byte)
{
	if ((offset & 0xf) == 0) {
		if (offset != 0)
			seq_putc(m, '\n');
		seq_printf(m, "%08x:", offset);
	}
	if ((offset & 0x1) == 0)
		seq_putc(m, ' ');
	seq_printf(m, "%02x", byte);
}

void sbitmap_bitmap_show(struct sbitmap *sb, struct seq_file *m)
{
	u8 byte = 0;
	unsigned int byte_bits = 0;
	unsigned int offset = 0;
	int i;

	for (i = 0; i < sb->map_nr; i++) {
		unsigned long word = READ_ONCE(sb->map[i].word);
		unsigned long cleared = READ_ONCE(sb->map[i].cleared);
		unsigned int word_bits = __map_depth(sb, i);

		word &= ~cleared;

		while (word_bits > 0) {
			unsigned int bits = min(8 - byte_bits, word_bits);

			byte |= (word & (BIT(bits) - 1)) << byte_bits;
			byte_bits += bits;
			if (byte_bits == 8) {
				emit_byte(m, offset, byte);
				byte = 0;
				byte_bits = 0;
				offset++;
			}
			word >>= bits;
			word_bits -= bits;
		}
	}
	if (byte_bits) {
		emit_byte(m, offset, byte);
		offset++;
	}
	if (offset)
		seq_putc(m, '\n');
}
EXPORT_SYMBOL_GPL(sbitmap_bitmap_show);
static unsigned int sbq_calc_wake_batch(struct sbitmap_queue *sbq,
					unsigned int depth)
{
	unsigned int wake_batch;
	unsigned int shallow_depth;

	/*
	 * Each full word of the bitmap has bits_per_word bits, and there
	 * might be a partial word. There are depth / bits_per_word full
	 * words and depth % bits_per_word bits left over. In bitwise
	 * arithmetic:
	 *
	 *   bits_per_word = 1 << shift
	 *   depth / bits_per_word = depth >> shift
	 *   depth % bits_per_word = depth & ((1 << shift) - 1)
	 *
	 * Each word can be limited to sbq->min_shallow_depth bits.
	 */
	shallow_depth = min(1U << sbq->sb.shift, sbq->min_shallow_depth);
	depth = ((depth >> sbq->sb.shift) * shallow_depth +
		 min(depth & ((1U << sbq->sb.shift) - 1), shallow_depth));
	wake_batch = clamp_t(unsigned int, depth / SBQ_WAIT_QUEUES, 1,
			     SBQ_WAKE_BATCH);

	return wake_batch;
}
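/*
 * Worked example (editorial addition, not from the original source): with
 * depth = 128, shift = 6 (64 bits per word) and min_shallow_depth = 16:
 *
 *   shallow_depth = min(1 << 6, 16)                    = 16
 *   depth         = (128 >> 6) * 16 + min(128 & 63, 16)
 *                 = 2 * 16 + 0                         = 32
 *   wake_batch    = clamp(32 / SBQ_WAIT_QUEUES, 1, SBQ_WAKE_BATCH)
 *                 = clamp(32 / 8, 1, 8)                = 4
 *
 * Limiting each word to 16 usable bits shrinks the effective depth, which
 * in turn shrinks the wake batch so waiters are still woken often enough.
 */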
int sbitmap_queue_init_node(struct sbitmap_queue *sbq, unsigned int depth,
			    int shift, bool round_robin, gfp_t flags,
			    int node)
{
	int ret;
	int i;

	ret = sbitmap_init_node(&sbq->sb, depth, shift, flags, node,
				round_robin, true);
	if (ret)
		return ret;

	sbq->min_shallow_depth = UINT_MAX;
	sbq->wake_batch = sbq_calc_wake_batch(sbq, depth);
	atomic_set(&sbq->wake_index, 0);
	atomic_set(&sbq->ws_active, 0);
	atomic_set(&sbq->completion_cnt, 0);
	atomic_set(&sbq->wakeup_cnt, 0);

	sbq->ws = kzalloc_node(SBQ_WAIT_QUEUES * sizeof(*sbq->ws), flags,
			       node);
	if (!sbq->ws) {
		sbitmap_free(&sbq->sb);
		return -ENOMEM;
	}

	for (i = 0; i < SBQ_WAIT_QUEUES; i++)
		init_waitqueue_head(&sbq->ws[i].wait);

	return 0;
}
EXPORT_SYMBOL_GPL(sbitmap_queue_init_node);

static void sbitmap_queue_update_wake_batch(struct sbitmap_queue *sbq,
					    unsigned int depth)
{
	unsigned int wake_batch;

	wake_batch = sbq_calc_wake_batch(sbq, depth);
	if (sbq->wake_batch != wake_batch)
		WRITE_ONCE(sbq->wake_batch, wake_batch);
}

void sbitmap_queue_recalculate_wake_batch(struct sbitmap_queue *sbq,
					  unsigned int users)
{
	unsigned int wake_batch;
	unsigned int depth = (sbq->sb.depth + users - 1) / users;

	wake_batch = clamp_val(depth / SBQ_WAIT_QUEUES, 1, SBQ_WAKE_BATCH);

	WRITE_ONCE(sbq->wake_batch, wake_batch);
}
EXPORT_SYMBOL_GPL(sbitmap_queue_recalculate_wake_batch);

void sbitmap_queue_resize(struct sbitmap_queue *sbq, unsigned int depth)
{
	sbitmap_queue_update_wake_batch(sbq, depth);
	sbitmap_resize(&sbq->sb, depth);
}
EXPORT_SYMBOL_GPL(sbitmap_queue_resize);

int __sbitmap_queue_get(struct sbitmap_queue *sbq)
{
	return sbitmap_get(&sbq->sb);
}
EXPORT_SYMBOL_GPL(__sbitmap_queue_get);

unsigned long __sbitmap_queue_get_batch(struct sbitmap_queue *sbq,
					int nr_tags, unsigned int *offset)
{
	struct sbitmap *sb = &sbq->sb;
	unsigned int hint, depth;
	unsigned long index, nr;
	int i;

	if (unlikely(sb->round_robin))
		return 0;

	depth = READ_ONCE(sb->depth);
	hint = update_alloc_hint_before_get(sb, depth);

	index = SB_NR_TO_INDEX(sb, hint);

	for (i = 0; i < sb->map_nr; i++) {
		struct sbitmap_word *map = &sb->map[index];
		unsigned long get_mask;
		unsigned int map_depth = __map_depth(sb, index);
		unsigned long val;

		sbitmap_deferred_clear(map, 0, 0, 0);
		val = READ_ONCE(map->word);
		if (val == (1UL << (map_depth - 1)) - 1)
			goto next;

		nr = find_first_zero_bit(&val, map_depth);
		if (nr + nr_tags <= map_depth) {
			atomic_long_t *ptr = (atomic_long_t *)&map->word;

			get_mask = ((1UL << nr_tags) - 1) << nr;
			while (!atomic_long_try_cmpxchg(ptr, &val,
							get_mask | val))
				;
			get_mask = (get_mask & ~val) >> nr;
			if (get_mask) {
				*offset = nr + (index << sb->shift);
				update_alloc_hint_after_get(sb, depth, hint,
						*offset + nr_tags - 1);
				return get_mask;
			}
		}
next:
		/* Jump to next index. */
		if (++index >= sb->map_nr)
			index = 0;
	}

	return 0;
}

int sbitmap_queue_get_shallow(struct sbitmap_queue *sbq,
			      unsigned int shallow_depth)
{
	WARN_ON_ONCE(shallow_depth < sbq->min_shallow_depth);

	return sbitmap_get_shallow(&sbq->sb, shallow_depth);
}
EXPORT_SYMBOL_GPL(sbitmap_queue_get_shallow);

void sbitmap_queue_min_shallow_depth(struct sbitmap_queue *sbq,
				     unsigned int min_shallow_depth)
{
	sbq->min_shallow_depth = min_shallow_depth;
	sbitmap_queue_update_wake_batch(sbq, sbq->sb.depth);
}
EXPORT_SYMBOL_GPL(sbitmap_queue_min_shallow_depth);

static void __sbitmap_queue_wake_up(struct sbitmap_queue *sbq, int nr)
{
	int i, wake_index, woken;

	if (!atomic_read(&sbq->ws_active))
		return;

	wake_index = atomic_read(&sbq->wake_index);
	for (i = 0; i < SBQ_WAIT_QUEUES; i++) {
		struct sbq_wait_state *ws = &sbq->ws[wake_index];

		/*
		 * Advance the index before checking the current queue.
		 * It improves fairness, by ensuring the queue doesn't
		 * need to be fully emptied before trying to wake up
		 * from the next one.
		 */
		wake_index = sbq_index_inc(wake_index);

		if (waitqueue_active(&ws->wait)) {
			woken = wake_up_nr(&ws->wait, nr);
			if (woken == nr)
				break;
			nr -= woken;
		}
	}

	if (wake_index != atomic_read(&sbq->wake_index))
		atomic_set(&sbq->wake_index, wake_index);
}

void sbitmap_queue_wake_up(struct sbitmap_queue *sbq, int nr)
{
	unsigned int wake_batch = READ_ONCE(sbq->wake_batch);
	unsigned int wakeups;

	if (!atomic_read(&sbq->ws_active))
		return;

	atomic_add(nr, &sbq->completion_cnt);
	wakeups = atomic_read(&sbq->wakeup_cnt);

	do {
		if (atomic_read(&sbq->completion_cnt) - wakeups < wake_batch)
			return;
	} while (!atomic_try_cmpxchg(&sbq->wakeup_cnt,
				     &wakeups, wakeups + wake_batch));

	__sbitmap_queue_wake_up(sbq, wake_batch);
}
EXPORT_SYMBOL_GPL(sbitmap_queue_wake_up);
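/*
 * Illustrative trace (editorial addition, not from the original source):
 * with wake_batch = 4, three completions leave completion_cnt - wakeup_cnt
 * at 3 < 4, so nothing is woken.  A fourth completion makes the difference
 * reach the batch; the cmpxchg loop lets exactly one racing CPU advance
 * wakeup_cnt by wake_batch for that crossing, and that winner calls
 * __sbitmap_queue_wake_up(sbq, 4).
 */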
static inline void sbitmap_update_cpu_hint(struct sbitmap *sb, int cpu,
					   int tag)
{
	if (likely(!sb->round_robin && tag < sb->depth))
		data_race(*per_cpu_ptr(sb->alloc_hint, cpu) = tag);
}

void sbitmap_queue_clear_batch(struct sbitmap_queue *sbq, int offset,
			       int *tags, int nr_tags)
{
	struct sbitmap *sb = &sbq->sb;
	unsigned long *addr = NULL;
	unsigned long mask = 0;
	int i;

	smp_mb__before_atomic();
	for (i = 0; i < nr_tags; i++) {
		const int tag = tags[i] - offset;
		unsigned long *this_addr;

		/* since we're clearing a batch, skip the deferred map */
		this_addr = &sb->map[SB_NR_TO_INDEX(sb, tag)].word;
		if (!addr) {
			addr = this_addr;
		} else if (addr != this_addr) {
			atomic_long_andnot(mask, (atomic_long_t *)addr);
			mask = 0;
			addr = this_addr;
		}
		mask |= (1UL << SB_NR_TO_BIT(sb, tag));
	}

	if (mask)
		atomic_long_andnot(mask, (atomic_long_t *)addr);

	smp_mb__after_atomic();
	sbitmap_queue_wake_up(sbq, nr_tags);
	sbitmap_update_cpu_hint(&sbq->sb, raw_smp_processor_id(),
				tags[nr_tags - 1] - offset);
}

void sbitmap_queue_clear(struct sbitmap_queue *sbq, unsigned int nr,
			 unsigned int cpu)
{
	/*
	 * Once the cleared bit is set, the bit may be allocated again.
	 *
	 * Orders READ/WRITE on the associated instance (such as a blk_mq
	 * request) against this bit to avoid racing with re-allocation;
	 * its pair is the memory barrier implied in __sbitmap_get_word().
	 *
	 * One invariant is that the cleared bit must be zero while the bit
	 * is in use.
	 */
	smp_mb__before_atomic();
	sbitmap_deferred_clear_bit(&sbq->sb, nr);

	/*
	 * Pairs with the memory barrier in set_current_state() to ensure
	 * the proper ordering of clear_bit_unlock()/waitqueue_active() in
	 * the waker and test_and_set_bit_lock()/prepare_to_wait()/
	 * finish_wait() in the waiter. See the comment on
	 * waitqueue_active().
	 */
	smp_mb__after_atomic();
	sbitmap_queue_wake_up(sbq, 1);
	sbitmap_update_cpu_hint(&sbq->sb, cpu, nr);
}
EXPORT_SYMBOL_GPL(sbitmap_queue_clear);

void sbitmap_queue_wake_all(struct sbitmap_queue *sbq)
{
	int i, wake_index;

	/*
	 * Pairs with the memory barrier in set_current_state() like in
	 * sbitmap_queue_wake_up().
	 */
	smp_mb();
	wake_index = atomic_read(&sbq->wake_index);
	for (i = 0; i < SBQ_WAIT_QUEUES; i++) {
		struct sbq_wait_state *ws = &sbq->ws[wake_index];

		if (waitqueue_active(&ws->wait))
			wake_up(&ws->wait);

		wake_index = sbq_index_inc(wake_index);
	}
}
EXPORT_SYMBOL_GPL(sbitmap_queue_wake_all);

void sbitmap_queue_show(struct sbitmap_queue *sbq, struct seq_file *m)
{
	bool first;
	int i;

	sbitmap_show(&sbq->sb, m);

	seq_puts(m, "alloc_hint={");
	first = true;
	for_each_possible_cpu(i) {
		if (!first)
			seq_puts(m, ", ");
		first = false;
		seq_printf(m, "%u", *per_cpu_ptr(sbq->sb.alloc_hint, i));
	}
	seq_puts(m, "}\n");

	seq_printf(m, "wake_batch=%u\n", sbq->wake_batch);
	seq_printf(m, "wake_index=%d\n", atomic_read(&sbq->wake_index));
	seq_printf(m, "ws_active=%d\n", atomic_read(&sbq->ws_active));

	seq_puts(m, "ws={\n");
	for (i = 0; i < SBQ_WAIT_QUEUES; i++) {
		struct sbq_wait_state *ws = &sbq->ws[i];

		seq_printf(m, "\t{.wait=%s},\n",
			   waitqueue_active(&ws->wait) ?
			   "active" : "inactive");
	}
	seq_puts(m, "}\n");

	seq_printf(m, "round_robin=%d\n", sbq->sb.round_robin);
	seq_printf(m, "min_shallow_depth=%u\n", sbq->min_shallow_depth);
}
EXPORT_SYMBOL_GPL(sbitmap_queue_show);

void sbitmap_add_wait_queue(struct sbitmap_queue *sbq,
			    struct sbq_wait_state *ws,
			    struct sbq_wait *sbq_wait)
{
	if (!sbq_wait->sbq) {
		sbq_wait->sbq = sbq;
		atomic_inc(&sbq->ws_active);
		add_wait_queue(&ws->wait, &sbq_wait->wait);
	}
}
EXPORT_SYMBOL_GPL(sbitmap_add_wait_queue);

void sbitmap_del_wait_queue(struct sbq_wait *sbq_wait)
{
	list_del_init(&sbq_wait->wait.entry);
	if (sbq_wait->sbq) {
		atomic_dec(&sbq_wait->sbq->ws_active);
		sbq_wait->sbq = NULL;
	}
}
EXPORT_SYMBOL_GPL(sbitmap_del_wait_queue);

void sbitmap_prepare_to_wait(struct sbitmap_queue *sbq,
			     struct sbq_wait_state *ws,
			     struct sbq_wait *sbq_wait, int state)
{
	if (!sbq_wait->sbq) {
		atomic_inc(&sbq->ws_active);
		sbq_wait->sbq = sbq;
	}
	prepare_to_wait_exclusive(&ws->wait, &sbq_wait->wait, state);
}
EXPORT_SYMBOL_GPL(sbitmap_prepare_to_wait);

void sbitmap_finish_wait(struct sbitmap_queue *sbq, struct sbq_wait_state *ws,
			 struct sbq_wait *sbq_wait)
{
	finish_wait(&ws->wait, &sbq_wait->wait);
	if (sbq_wait->sbq) {
		atomic_dec(&sbq->ws_active);
		sbq_wait->sbq = NULL;
	}
}
EXPORT_SYMBOL_GPL(sbitmap_finish_wait);
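/*
 * Usage sketch (editorial addition, not part of lib/sbitmap.c): a minimal
 * tag allocator built on the API above, similar in spirit to how blk-mq
 * uses struct sbitmap_queue.  The names are placeholders and error
 * handling is abbreviated.
 */
#if 0 /* example only */
static struct sbitmap_queue example_tags;

static int example_tags_init(void)
{
	/* 128 tags, default shift, no round-robin, with alloc hints. */
	return sbitmap_queue_init_node(&example_tags, 128, -1, false,
				       GFP_KERNEL, NUMA_NO_NODE);
}

static int example_tag_alloc(void)
{
	/* Returns a free tag or -1 if the map is currently full. */
	return __sbitmap_queue_get(&example_tags);
}

static void example_tag_free(int tag)
{
	/* Defers the clear and wakes up to wake_batch waiters. */
	sbitmap_queue_clear(&example_tags, tag, raw_smp_processor_id());
}
#endif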
// SPDX-License-Identifier: GPL-2.0
#include <linux/types.h>
#include <linux/errno.h>
#include <linux/tty.h>
#include <linux/module.h>

/*
 * n_null.c - Null line discipline used in the failure path
 *
 * Copyright (C) Intel 2017
 */

static ssize_t n_null_read(struct tty_struct *tty, struct file *file,
			   u8 *buf, size_t nr, void **cookie,
			   unsigned long offset)
{
	return -EOPNOTSUPP;
}

static ssize_t n_null_write(struct tty_struct *tty, struct file *file,
			    const u8 *buf, size_t nr)
{
	return -EOPNOTSUPP;
}

static struct tty_ldisc_ops null_ldisc = {
	.owner = THIS_MODULE,
	.num = N_NULL,
	.name = "n_null",
	.read = n_null_read,
	.write = n_null_write,
};

static int __init n_null_init(void)
{
	BUG_ON(tty_register_ldisc(&null_ldisc));
	return 0;
}

static void __exit n_null_exit(void)
{
	tty_unregister_ldisc(&null_ldisc);
}

module_init(n_null_init);
module_exit(n_null_exit);

MODULE_LICENSE("GPL");
MODULE_AUTHOR("Alan Cox");
MODULE_ALIAS_LDISC(N_NULL);
MODULE_DESCRIPTION("Null ldisc driver");
/* SPDX-License-Identifier: GPL-2.0 */
#ifndef __NET_GENERIC_NETLINK_H
#define __NET_GENERIC_NETLINK_H

#include <linux/net.h>
#include <net/netlink.h>
#include <net/net_namespace.h>
#include <uapi/linux/genetlink.h>

#define GENLMSG_DEFAULT_SIZE (NLMSG_DEFAULT_SIZE - GENL_HDRLEN)

/* Non-parallel generic netlink requests are serialized by a global lock. */
void genl_lock(void);
void genl_unlock(void);

#define MODULE_ALIAS_GENL_FAMILY(family) \
	MODULE_ALIAS_NET_PF_PROTO_NAME(PF_NETLINK, NETLINK_GENERIC, "-family-" family)

/* Binding to multicast group requires %CAP_NET_ADMIN */
#define GENL_MCAST_CAP_NET_ADMIN	BIT(0)
/* Binding to multicast group requires %CAP_SYS_ADMIN */
#define GENL_MCAST_CAP_SYS_ADMIN	BIT(1)

/**
 * struct genl_multicast_group - generic netlink multicast group
 * @name: name of the multicast group, names are per-family
 * @flags: GENL_MCAST_* flags
 */
struct genl_multicast_group {
	char name[GENL_NAMSIZ];
	u8 flags;
};

struct genl_split_ops;
struct genl_info;

/**
 * struct genl_family - generic netlink family
 * @hdrsize: length of user specific header in bytes
 * @name: name of family
 * @version: protocol version
 * @maxattr: maximum number of attributes supported
 * @policy: netlink policy
 * @netnsok: set to true if the family can handle network
 *	namespaces and should be presented in all of them
 * @parallel_ops: operations can be called in parallel and aren't
 *	synchronized by the core genetlink code
 * @pre_doit: called before an operation's doit callback, it may
 *	do additional, common, filtering and return an error
 * @post_doit: called after an operation's doit callback, it may
 *	undo operations done by pre_doit, for example release locks
 * @bind: called when family multicast group is added to a netlink socket
 * @unbind: called when family multicast group is removed from a netlink
 *	socket
 * @module: pointer to the owning module (set to THIS_MODULE)
 * @mcgrps: multicast groups used by this family
 * @n_mcgrps: number of multicast groups
 * @resv_start_op: first operation for which reserved fields of the header
 *	can be validated and policies are required (see below);
 *	new families should leave this field at zero
 * @ops: the operations supported by this family
 * @n_ops: number of operations supported by this family
 * @small_ops: the small-struct operations supported by this family
 * @n_small_ops: number of small-struct operations supported by this family
 * @split_ops: the split do/dump form of operation definition
 * @n_split_ops: number of entries in @split_ops, note that with split
 *	do/dump ops the number of entries is not the same as number of
 *	commands
 * @sock_priv_size: the size of per-socket private memory
 * @sock_priv_init: the per-socket private memory initializer
 * @sock_priv_destroy: the per-socket private memory destructor
 *
 * Attribute policies (the combination of @policy and @maxattr fields)
 * can be attached at the family level or at the operation level.
 * If both are present the per-operation policy takes precedence.
 * For operations before @resv_start_op lack of policy means that the core
 * will perform no attribute parsing or validation. For newer operations
 * if policy is not provided core will reject all TLV attributes.
 */
struct genl_family {
	unsigned int hdrsize;
	char name[GENL_NAMSIZ];
	unsigned int version;
	unsigned int maxattr;
	u8 netnsok:1;
	u8 parallel_ops:1;
	u8 n_ops;
	u8 n_small_ops;
	u8 n_split_ops;
	u8 n_mcgrps;
	u8 resv_start_op;
	const struct nla_policy *policy;
	int (*pre_doit)(const struct genl_split_ops *ops,
			struct sk_buff *skb, struct genl_info *info);
	void (*post_doit)(const struct genl_split_ops *ops,
			  struct sk_buff *skb, struct genl_info *info);
	int (*bind)(int mcgrp);
	void (*unbind)(int mcgrp);
	const struct genl_ops *ops;
	const struct genl_small_ops *small_ops;
	const struct genl_split_ops *split_ops;
	const struct genl_multicast_group *mcgrps;
	struct module *module;

	size_t sock_priv_size;
	void (*sock_priv_init)(void *priv);
	void (*sock_priv_destroy)(void *priv);

/* private: internal use only */
	/* protocol family identifier */
	int id;
	/* starting number of multicast group IDs in this family */
	unsigned int mcgrp_offset;
	/* list of per-socket privs */
	struct xarray *sock_privs;
};

/**
 * struct genl_info - receiving information
 * @snd_seq: sending sequence number
 * @snd_portid: netlink portid of sender
 * @family: generic netlink family
 * @nlhdr: netlink message header
 * @genlhdr: generic netlink message header
 * @attrs: netlink attributes
 * @_net: network namespace
 * @ctx: storage space for the use by the family
 * @user_ptr: user pointers (deprecated, use ctx instead)
 * @extack: extended ACK report struct
 */
struct genl_info {
	u32 snd_seq;
	u32 snd_portid;
	const struct genl_family *family;
	const struct nlmsghdr *nlhdr;
	struct genlmsghdr *genlhdr;
	struct nlattr **attrs;
	possible_net_t _net;
	union {
		u8 ctx[NETLINK_CTX_SIZE];
		void *user_ptr[2];
	};
	struct netlink_ext_ack *extack;
};

static inline struct net *genl_info_net(const struct genl_info *info)
{
	return read_pnet(&info->_net);
}

static inline void genl_info_net_set(struct genl_info *info, struct net *net)
{
	write_pnet(&info->_net, net);
}

static inline void *genl_info_userhdr(const struct genl_info *info)
{
	return (u8 *)info->genlhdr + GENL_HDRLEN;
}

#define GENL_SET_ERR_MSG(info, msg) NL_SET_ERR_MSG((info)->extack, msg)

#define GENL_SET_ERR_MSG_FMT(info, msg, args...) \
	NL_SET_ERR_MSG_FMT((info)->extack, msg, ##args)

/* Report that a root attribute is missing */
#define GENL_REQ_ATTR_CHECK(info, attr) ({				\
	const struct genl_info *__info = (info);			\
									\
	NL_REQ_ATTR_CHECK(__info->extack, NULL, __info->attrs, (attr)); \
})

enum genl_validate_flags {
	GENL_DONT_VALIDATE_STRICT		= BIT(0),
	GENL_DONT_VALIDATE_DUMP			= BIT(1),
	GENL_DONT_VALIDATE_DUMP_STRICT		= BIT(2),
};

/**
 * struct genl_small_ops - generic netlink operations (small version)
 * @cmd: command identifier
 * @internal_flags: flags used by the family
 * @flags: GENL_* flags (%GENL_ADMIN_PERM or %GENL_UNS_ADMIN_PERM)
 * @validate: validation flags from enum genl_validate_flags
 * @doit: standard command callback
 * @dumpit: callback for dumpers
 *
 * This is a cut-down version of struct genl_ops for users who don't need
 * most of the ancillary infra and want to save space.
 */
struct genl_small_ops {
	int (*doit)(struct sk_buff *skb, struct genl_info *info);
	int (*dumpit)(struct sk_buff *skb, struct netlink_callback *cb);
	u8 cmd;
	u8 internal_flags;
	u8 flags;
	u8 validate;
};

/**
 * struct genl_ops - generic netlink operations
 * @cmd: command identifier
 * @internal_flags: flags used by the family
 * @flags: GENL_* flags (%GENL_ADMIN_PERM or %GENL_UNS_ADMIN_PERM)
 * @maxattr: maximum number of attributes supported
 * @policy: netlink policy (takes precedence over family policy)
 * @validate: validation flags from enum genl_validate_flags
 * @doit: standard command callback
 * @start: start callback for dumps
 * @dumpit: callback for dumpers
 * @done: completion callback for dumps
 */
struct genl_ops {
	int (*doit)(struct sk_buff *skb, struct genl_info *info);
	int (*start)(struct netlink_callback *cb);
	int (*dumpit)(struct sk_buff *skb, struct netlink_callback *cb);
	int (*done)(struct netlink_callback *cb);
	const struct nla_policy *policy;
	unsigned int maxattr;
	u8 cmd;
	u8 internal_flags;
	u8 flags;
	u8 validate;
};

/**
 * struct genl_split_ops - generic netlink operations (do/dump split version)
 * @cmd: command identifier
 * @internal_flags: flags used by the family
 * @flags: GENL_* flags (%GENL_ADMIN_PERM or %GENL_UNS_ADMIN_PERM)
 * @validate: validation flags from enum genl_validate_flags
 * @policy: netlink policy (takes precedence over family policy)
 * @maxattr: maximum number of attributes supported
 *
 * Do callbacks:
 * @pre_doit: called before an operation's @doit callback, it may
 *	do additional, common, filtering and return an error
 * @doit: standard command callback
 * @post_doit: called after an operation's @doit callback, it may
 *	undo operations done by pre_doit, for example release locks
 *
 * Dump callbacks:
 * @start: start callback for dumps
 * @dumpit: callback for dumpers
 * @done: completion callback for dumps
 *
 * Do callbacks can be used if %GENL_CMD_CAP_DO is set in @flags.
 * Dump callbacks can be used if %GENL_CMD_CAP_DUMP is set in @flags.
 * Exactly one of those flags must be set.
 */
struct genl_split_ops {
	union {
		struct {
			int (*pre_doit)(const struct genl_split_ops *ops,
					struct sk_buff *skb,
					struct genl_info *info);
			int (*doit)(struct sk_buff *skb,
				    struct genl_info *info);
			void (*post_doit)(const struct genl_split_ops *ops,
					  struct sk_buff *skb,
					  struct genl_info *info);
		};
		struct {
			int (*start)(struct netlink_callback *cb);
			int (*dumpit)(struct sk_buff *skb,
				      struct netlink_callback *cb);
			int (*done)(struct netlink_callback *cb);
		};
	};
	const struct nla_policy *policy;
	unsigned int maxattr;
	u8 cmd;
	u8 internal_flags;
	u8 flags;
	u8 validate;
};

/**
 * struct genl_dumpit_info - info that is available during dumpit op call
 * @op: generic netlink ops - for internal genl code usage
 * @attrs: netlink attributes
 * @info: struct genl_info describing the request
 */
struct genl_dumpit_info {
	struct genl_split_ops op;
	struct genl_info info;
};

static inline const struct genl_dumpit_info *
genl_dumpit_info(struct netlink_callback *cb)
{
	return cb->data;
}

static inline const struct genl_info *
genl_info_dump(struct netlink_callback *cb)
{
	return &genl_dumpit_info(cb)->info;
}

/**
 * genl_info_init_ntf() - initialize genl_info for notifications
 * @info:   genl_info struct to set up
 * @family: pointer to the genetlink family
 * @cmd:    command to be used in the notification
 *
 * Initialize a locally declared struct genl_info to pass to various APIs.
 * Intended to be used when creating notifications.
 */
static inline void genl_info_init_ntf(struct genl_info *info,
				      const struct genl_family *family,
				      u8 cmd)
{
	struct genlmsghdr *hdr = (void *)&info->user_ptr[0];

	memset(info, 0, sizeof(*info));
	info->family = family;
	info->genlhdr = hdr;
	hdr->cmd = cmd;
}

static inline bool genl_info_is_ntf(const struct genl_info *info)
{
	return !info->nlhdr;
}

void *__genl_sk_priv_get(struct genl_family *family, struct sock *sk);
void *genl_sk_priv_get(struct genl_family *family, struct sock *sk);
int genl_register_family(struct genl_family *family);
int genl_unregister_family(const struct genl_family *family);
void genl_notify(const struct genl_family *family, struct sk_buff *skb,
		 struct genl_info *info, u32 group, gfp_t flags);

void *genlmsg_put(struct sk_buff *skb, u32 portid, u32 seq,
		  const struct genl_family *family, int flags, u8 cmd);

static inline void *
__genlmsg_iput(struct sk_buff *skb, const struct genl_info *info, int flags)
{
	return genlmsg_put(skb, info->snd_portid, info->snd_seq,
			   info->family, flags, info->genlhdr->cmd);
}

/**
 * genlmsg_iput - start genetlink message based on genl_info
 * @skb: skb in which message header will be placed
 * @info: genl_info as provided to do/dump handlers
 *
 * Convenience wrapper which starts a genetlink message based on
 * information in user request. @info should be either the struct passed
 * by genetlink core to do/dump handlers (when constructing replies to
 * such requests) or a struct initialized by genl_info_init_ntf()
 * when constructing notifications.
 *
 * Returns: pointer to new genetlink header.
 */
static inline void *
genlmsg_iput(struct sk_buff *skb, const struct genl_info *info)
{
	return __genlmsg_iput(skb, info, 0);
}
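/*
 * Usage sketch (editorial addition, not part of this header): building a
 * notification with genl_info_init_ntf() and genlmsg_iput().  The command
 * number, attribute type and multicast group offset are placeholders.
 */
#if 0 /* example only */
static int example_notify(const struct genl_family *family, u32 val)
{
	struct genl_info info;
	struct sk_buff *skb;
	void *hdr;

	genl_info_init_ntf(&info, family, 1 /* example command */);
	skb = genlmsg_new(NLMSG_DEFAULT_SIZE, GFP_KERNEL);
	if (!skb)
		return -ENOMEM;
	hdr = genlmsg_iput(skb, &info);
	if (!hdr || nla_put_u32(skb, 1 /* example attr */, val)) {
		nlmsg_free(skb);
		return -EMSGSIZE;
	}
	genlmsg_end(skb, hdr);
	/* Send to the first multicast group of the family. */
	return genlmsg_multicast(family, skb, 0, 0, GFP_KERNEL);
}
#endif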
/**
 * genlmsg_nlhdr - Obtain netlink header from user specified header
 * @user_hdr: user header as returned from genlmsg_put()
 *
 * Returns: pointer to netlink header.
 */
static inline struct nlmsghdr *genlmsg_nlhdr(void *user_hdr)
{
	return (struct nlmsghdr *)((char *)user_hdr -
				   GENL_HDRLEN -
				   NLMSG_HDRLEN);
}

/**
 * genlmsg_parse_deprecated - parse attributes of a genetlink message
 * @nlh: netlink message header
 * @family: genetlink message family
 * @tb: destination array with maxtype+1 elements
 * @maxtype: maximum attribute type to be expected
 * @policy: validation policy
 * @extack: extended ACK report struct
 */
static inline int genlmsg_parse_deprecated(const struct nlmsghdr *nlh,
					   const struct genl_family *family,
					   struct nlattr *tb[], int maxtype,
					   const struct nla_policy *policy,
					   struct netlink_ext_ack *extack)
{
	return __nlmsg_parse(nlh, family->hdrsize + GENL_HDRLEN, tb, maxtype,
			     policy, NL_VALIDATE_LIBERAL, extack);
}

/**
 * genlmsg_parse - parse attributes of a genetlink message
 * @nlh: netlink message header
 * @family: genetlink message family
 * @tb: destination array with maxtype+1 elements
 * @maxtype: maximum attribute type to be expected
 * @policy: validation policy
 * @extack: extended ACK report struct
 */
static inline int genlmsg_parse(const struct nlmsghdr *nlh,
				const struct genl_family *family,
				struct nlattr *tb[], int maxtype,
				const struct nla_policy *policy,
				struct netlink_ext_ack *extack)
{
	return __nlmsg_parse(nlh, family->hdrsize + GENL_HDRLEN, tb, maxtype,
			     policy, NL_VALIDATE_STRICT, extack);
}

/**
 * genl_dump_check_consistent - check if sequence is consistent and advertise
 *	if not
 * @cb: netlink callback structure that stores the sequence number
 * @user_hdr: user header as returned from genlmsg_put()
 *
 * Cf. nl_dump_check_consistent(), this just provides a wrapper to make it
 * simpler to use with generic netlink.
 */
static inline void genl_dump_check_consistent(struct netlink_callback *cb,
					      void *user_hdr)
{
	nl_dump_check_consistent(cb, genlmsg_nlhdr(user_hdr));
}

/**
 * genlmsg_put_reply - Add generic netlink header to a reply message
 * @skb: socket buffer holding the message
 * @info: receiver info
 * @family: generic netlink family
 * @flags: netlink message flags
 * @cmd: generic netlink command
 *
 * Returns: pointer to user specific header
 */
static inline void *genlmsg_put_reply(struct sk_buff *skb,
				      struct genl_info *info,
				      const struct genl_family *family,
				      int flags, u8 cmd)
{
	return genlmsg_put(skb, info->snd_portid, info->snd_seq, family,
			   flags, cmd);
}

/**
 * genlmsg_end - Finalize a generic netlink message
 * @skb: socket buffer the message is stored in
 * @hdr: user specific header
 */
static inline void genlmsg_end(struct sk_buff *skb, void *hdr)
{
	nlmsg_end(skb, hdr - GENL_HDRLEN - NLMSG_HDRLEN);
}

/**
 * genlmsg_cancel - Cancel construction of a generic netlink message
 * @skb: socket buffer the message is stored in
 * @hdr: generic netlink message header
 */
static inline void genlmsg_cancel(struct sk_buff *skb, void *hdr)
{
	if (hdr)
		nlmsg_cancel(skb, hdr - GENL_HDRLEN - NLMSG_HDRLEN);
}

/**
 * genlmsg_multicast_netns_filtered - multicast a netlink message to a
 *	specific netns with filter function
 * @family: the generic netlink family
 * @net: the net namespace
 * @skb: netlink message as socket buffer
 * @portid: own netlink portid to avoid sending to yourself
 * @group: offset of multicast group in groups array
 * @flags: allocation flags
 * @filter: filter function
 * @filter_data: filter function private data
 *
 * Return: 0 on success, negative error code for failure.
 */
static inline int
genlmsg_multicast_netns_filtered(const struct genl_family *family,
				 struct net *net, struct sk_buff *skb,
				 u32 portid, unsigned int group,
				 gfp_t flags, netlink_filter_fn filter,
				 void *filter_data)
{
	if (WARN_ON_ONCE(group >= family->n_mcgrps))
		return -EINVAL;
	group = family->mcgrp_offset + group;
	return nlmsg_multicast_filtered(net->genl_sock, skb, portid, group,
					flags, filter, filter_data);
}

/**
 * genlmsg_multicast_netns - multicast a netlink message to a specific netns
 * @family: the generic netlink family
 * @net: the net namespace
 * @skb: netlink message as socket buffer
 * @portid: own netlink portid to avoid sending to yourself
 * @group: offset of multicast group in groups array
 * @flags: allocation flags
 */
static inline int genlmsg_multicast_netns(const struct genl_family *family,
					  struct net *net, struct sk_buff *skb,
					  u32 portid, unsigned int group,
					  gfp_t flags)
{
	return genlmsg_multicast_netns_filtered(family, net, skb, portid,
						group, flags, NULL, NULL);
}

/**
 * genlmsg_multicast - multicast a netlink message to the default netns
 * @family: the generic netlink family
 * @skb: netlink message as socket buffer
 * @portid: own netlink portid to avoid sending to yourself
 * @group: offset of multicast group in groups array
 * @flags: allocation flags
 */
static inline int genlmsg_multicast(const struct genl_family *family,
				    struct sk_buff *skb, u32 portid,
				    unsigned int group, gfp_t flags)
{
	return genlmsg_multicast_netns(family, &init_net, skb,
				       portid, group, flags);
}

/**
 * genlmsg_multicast_allns - multicast a netlink message to all net namespaces
 * @family: the generic netlink family
 * @skb: netlink message as socket buffer
 * @portid: own netlink portid to avoid sending to yourself
 * @group: offset of multicast group in groups array
 *
 * This function must hold the RTNL or rcu_read_lock().
 */
int genlmsg_multicast_allns(const struct genl_family *family,
			    struct sk_buff *skb, u32 portid,
			    unsigned int group);

/**
 * genlmsg_unicast - unicast a netlink message
 * @net: network namespace to look up @portid in
 * @skb: netlink message as socket buffer
 * @portid: netlink portid of the destination socket
 */
static inline int genlmsg_unicast(struct net *net, struct sk_buff *skb,
				  u32 portid)
{
	return nlmsg_unicast(net->genl_sock, skb, portid);
}

/**
 * genlmsg_reply - reply to a request
 * @skb: netlink message to be sent back
 * @info: receiver information
 */
static inline int genlmsg_reply(struct sk_buff *skb, struct genl_info *info)
{
	return genlmsg_unicast(genl_info_net(info), skb, info->snd_portid);
}

/**
 * genlmsg_data - head of message payload
 * @gnlh: genetlink message header
 */
static inline void *genlmsg_data(const struct genlmsghdr *gnlh)
{
	return ((unsigned char *)gnlh + GENL_HDRLEN);
}

/**
 * genlmsg_len - length of message payload
 * @gnlh: genetlink message header
 */
static inline int genlmsg_len(const struct genlmsghdr *gnlh)
{
	struct nlmsghdr *nlh = (struct nlmsghdr *)((unsigned char *)gnlh -
						   NLMSG_HDRLEN);
	return (nlh->nlmsg_len - GENL_HDRLEN - NLMSG_HDRLEN);
}

/**
 * genlmsg_msg_size - length of genetlink message not including padding
 * @payload: length of message payload
 */
static inline int genlmsg_msg_size(int payload)
{
	return GENL_HDRLEN + payload;
}

/**
 * genlmsg_total_size - length of genetlink message including padding
 * @payload: length of message payload
 */
static inline int genlmsg_total_size(int payload)
{
	return NLMSG_ALIGN(genlmsg_msg_size(payload));
}

/**
 * genlmsg_new - Allocate a new generic netlink message
 * @payload: size of the message payload
 * @flags: the type of memory to allocate.
 */
static inline struct sk_buff *genlmsg_new(size_t payload, gfp_t flags)
{
	return nlmsg_new(genlmsg_total_size(payload), flags);
}

/**
 * genl_set_err - report error to genetlink broadcast listeners
 * @family: the generic netlink family
 * @net: the network namespace to report the error to
 * @portid: the PORTID of a process that we want to skip (if any)
 * @group: the broadcast group that will notice the error
 *	(this is the offset of the multicast group in the groups array)
 * @code: error code, must be negative (as usual in kernelspace)
 *
 * This function returns the number of broadcast listeners that have set the
 * NETLINK_RECV_NO_ENOBUFS socket option.
 */
static inline int genl_set_err(const struct genl_family *family,
			       struct net *net, u32 portid,
			       u32 group, int code)
{
	if (WARN_ON_ONCE(group >= family->n_mcgrps))
		return -EINVAL;
	group = family->mcgrp_offset + group;
	return netlink_set_err(net->genl_sock, portid, group, code);
}

static inline int genl_has_listeners(const struct genl_family *family,
				     struct net *net, unsigned int group)
{
	if (WARN_ON_ONCE(group >= family->n_mcgrps))
		return -EINVAL;
	group = family->mcgrp_offset + group;
	return netlink_has_listeners(net->genl_sock, group);
}

#endif /* __NET_GENERIC_NETLINK_H */
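/*
 * Registration sketch (editorial addition, not part of this header): a
 * minimal family with one do-only command, wired up through the small-ops
 * form described above.  The names and command numbers are placeholders.
 */
#if 0 /* example only */
static int example_doit(struct sk_buff *skb, struct genl_info *info)
{
	return 0;
}

static const struct genl_small_ops example_small_ops[] = {
	{
		.cmd = 1,
		.validate = GENL_DONT_VALIDATE_STRICT,
		.doit = example_doit,
	},
};

static struct genl_family example_family __ro_after_init = {
	.name = "example",
	.version = 1,
	.maxattr = 1,
	.netnsok = true,
	.module = THIS_MODULE,
	.small_ops = example_small_ops,
	.n_small_ops = ARRAY_SIZE(example_small_ops),
	.resv_start_op = 2,
};

/* Call genl_register_family(&example_family) from the module init path. */
#endif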
1538 1539 1540 1541 1542 1543 1544 1545 1546 1547 1548 1549 1550 1551 1552 1553 1554 1555 1556 1557 1558 1559 1560 1561 1562 1563 1564 1565 1566 1567 1568 1569 1570 1571 1572 1573 1574 1575 1576 1577 1578 1579 1580 1581 1582 1583 1584 1585 1586 1587 1588 1589 1590 1591 1592 1593 1594 1595 1596 1597 1598 1599 1600 1601 1602 1603 1604 1605 1606 1607 1608 1609 1610 1611 1612 1613 1614 1615 1616 1617 1618 1619 1620 1621 1622 1623 1624 1625 1626 1627 1628 1629 1630 1631 1632 1633 1634 1635 1636 1637 1638 1639 1640 1641 1642 1643 1644 1645 1646 1647 1648 1649 1650 1651 1652 1653 1654 1655 1656 1657 1658 1659 1660 1661 1662 1663 1664 1665 1666 1667 1668 1669 1670 1671 1672 1673 1674 1675 1676 1677 1678 1679 1680 1681 1682 1683 1684 1685 1686 1687 1688 1689 1690 1691 1692 1693 1694 1695 1696 1697 1698 1699 1700 1701 1702 1703 1704 1705 1706 1707 1708 1709 1710 1711 1712 1713 1714 1715 1716 1717 1718 1719 1720 1721 1722 1723 1724 1725 1726 1727 1728 1729 1730 1731 1732 1733 1734 1735 1736 1737 1738 1739 1740 1741 1742 1743 1744 1745 1746 1747 1748 1749 1750 1751 1752 1753 1754 1755 1756 1757 1758 1759 1760 1761 1762 1763 1764 1765 1766 1767 1768 1769 1770 1771 1772 1773 1774 1775 1776 1777 1778 1779 1780 1781 1782 1783 1784 1785 1786 1787 1788 1789 1790 1791 1792 1793 1794 1795 1796 1797 1798 1799 1800 1801 1802 1803 1804 1805 1806 1807 1808 1809 1810 1811 1812 1813 1814 1815 1816 1817 1818 1819 1820 1821 1822 1823 1824 1825 1826 1827 1828 1829 1830 1831 1832 1833 1834 1835 1836 1837 1838 1839 1840 1841 1842 1843 1844 1845 1846 1847 1848 1849 1850 1851 1852 1853 1854 1855 1856 1857 1858 1859 1860 1861 1862 1863 1864 1865 1866 1867 1868 1869 1870 1871 1872 1873 1874 1875 1876 1877 1878 1879 1880 1881 1882 1883 1884 1885 1886 1887 1888 1889 1890 1891 1892 1893 1894 1895 1896 1897 1898 1899 1900 1901 1902 1903 1904 1905 1906 1907 1908 1909 1910 1911 1912 1913 1914 1915 1916 1917 1918 1919 1920 1921 1922 1923 1924 1925 1926 1927 1928 1929 1930 1931 1932 1933 1934 1935 1936 1937 1938 1939 1940 1941 1942 1943 1944 1945 1946 1947 1948 1949 1950 1951 1952 1953 1954 1955 1956 1957 1958 1959 1960 1961 1962 1963 1964 1965 1966 1967 1968 1969 1970 1971 1972 1973 1974 1975 1976 1977 1978 1979 1980 1981 1982 1983 1984 1985 1986 1987 1988 1989 1990 1991 1992 1993 1994 1995 1996 1997 1998 1999 2000 2001 2002 2003 2004 2005 2006 2007 2008 2009 2010 2011 2012 2013 2014 2015 2016 2017 2018 2019 2020 2021 2022 2023 2024 2025 2026 2027 2028 2029 2030 2031 2032 2033 2034 2035 2036 2037 2038 2039 2040 2041 2042 2043 2044 2045 2046 2047 2048 2049 2050 2051 2052 2053 2054 2055 2056 2057 2058 2059 2060 2061 2062 2063 2064 2065 2066 2067 2068 2069 2070 2071 2072 2073 2074 2075 2076 2077 2078 2079 2080 2081 2082 2083 2084 2085 2086 2087 2088 2089 2090 2091 2092 2093 2094 2095 2096 2097 2098 2099 2100 2101 2102 2103 2104 2105 2106 2107 2108 2109 2110 2111 2112 2113 2114 2115 2116 2117 2118 2119 2120 2121 2122 2123 2124 2125 2126 2127 2128 2129 2130 2131 2132 2133 2134 2135 2136 2137 2138 2139 2140 2141 2142 2143 2144 2145 2146 2147 2148 2149 2150 2151 2152 2153 2154 2155 2156 2157 2158 2159 2160 2161 2162 2163 2164 2165 2166 2167 2168 2169 2170 2171 2172 2173 2174 2175 2176 2177 2178 2179 2180 2181 2182 2183 2184 2185 2186 2187 2188 2189 2190 2191 2192 2193 2194 2195 2196 2197 2198 2199 2200 2201 2202 2203 2204 2205 2206 2207 2208 2209 2210 2211 2212 2213 2214 2215 2216 2217 2218 2219 2220 2221 2222 2223 2224 2225 2226 2227 2228 2229 2230 2231 2232 2233 2234 2235 2236 2237 2238 2239 2240 2241 2242 2243 2244 2245 2246 2247 2248 
2249 2250 2251 2252 2253 2254 2255 2256 2257 2258 2259 2260 2261 2262 2263 2264 2265 2266 2267 2268 2269 2270 2271 2272 2273 2274 2275 2276 2277 2278 2279 2280 2281 2282 2283 2284 2285 2286 2287 2288 2289 2290 2291 2292 2293 2294 2295 2296 2297 2298 2299 2300 2301 2302 2303 2304 2305 2306 2307 2308 2309 2310 2311 2312 2313 2314 2315 2316 2317 2318 2319 2320 2321 2322 2323 2324 2325 2326 2327 2328 2329 2330 2331 2332 2333 2334 2335 2336 2337 2338 2339 2340 2341 2342 2343 2344 2345 2346 2347 2348 2349 2350 2351 2352 2353 2354 2355 2356 2357 2358 2359 2360 2361 2362 2363 2364 2365 2366 2367 2368 2369 2370 2371 2372 2373 2374 2375 2376 2377 2378 2379 2380 2381 2382 2383 2384 2385 2386 2387 2388 2389 2390 2391 2392 2393 2394 2395 2396 2397 2398 2399 2400 2401 2402 2403 2404 2405 2406 2407 2408 2409 2410 2411 2412 2413 2414 2415 2416 2417 2418 2419 2420 2421 2422 2423 2424 2425 2426 2427 2428 2429 2430 2431 2432 2433 2434 2435 2436 2437 2438 2439 2440 2441 2442 2443 2444 2445 2446 2447 2448 2449 2450 2451 2452 2453 2454 2455 2456 2457 2458 2459 2460 2461 2462 2463 2464 2465 2466 2467 2468 2469 2470 2471 2472 2473 2474 2475 2476 2477 2478 2479 2480 2481 2482 2483 2484 2485 2486 2487 2488 2489 2490 2491 2492 2493 2494 2495 2496 2497 2498 2499 2500 2501 2502 2503 2504 2505 2506 2507 2508 2509 2510 2511 2512 2513 2514 2515 2516 2517 2518 2519 2520 2521 2522 2523 2524 2525 2526 2527 2528 2529 2530 2531 2532 2533 2534 2535 2536 2537 2538 2539 2540 2541 2542 2543 2544 2545 2546 2547 2548 2549 2550 2551 2552 2553 2554 2555 2556 2557 2558 2559 2560 2561 2562 2563 2564 2565 2566 2567 2568 2569 2570 2571 2572 2573 2574 2575 2576 2577 2578 2579 2580 2581 2582 2583 2584 2585 2586 2587 2588 2589 2590 2591 2592 2593 2594 2595 2596 2597 2598 2599 2600 2601 2602 2603 2604 2605 2606 2607 2608 2609 2610 2611 2612 2613 2614 2615 2616 2617 2618 2619 2620 2621 2622 2623 2624 2625 2626 2627 2628 2629 2630 2631 2632 2633 2634 2635 2636 2637 2638 2639 2640 2641 2642 2643 2644 2645 2646 2647 2648 2649 2650 2651 2652 2653 2654 2655 2656 2657 2658 2659 2660 2661 2662 2663 2664 2665 2666 2667 2668 2669 2670 2671 2672 2673 2674 2675 2676 2677 2678 2679 2680 2681 2682 2683 2684 2685 2686 2687 2688 2689 2690 2691 2692 2693 2694 2695 2696 2697 2698 2699 2700 2701 2702 2703 2704 2705 2706 2707 2708 2709 2710 2711 2712 2713 2714 2715 2716 2717 2718 2719 2720 2721 2722 2723 2724 2725 2726 2727 2728 2729 2730 2731 2732 2733 2734 2735 2736 2737 2738 2739 2740 2741 2742 2743 2744 2745 2746 2747 2748 2749 2750 2751 2752 2753 2754 2755 2756 2757 2758 2759 2760 2761 2762 2763 2764 2765 2766 2767 2768 2769 2770 2771 2772 2773 2774 2775 2776 2777 2778 2779 2780 2781 2782 2783 2784 2785 2786 2787 2788 2789 2790 2791 2792 2793 2794 2795 2796 2797 2798 2799 2800 2801 2802 2803 2804 2805 2806 2807 2808 2809 2810 2811 2812 2813 2814 2815 2816 2817 2818 2819 2820 2821 2822 2823 2824 2825 2826 2827 2828 2829 2830 2831 2832 2833 2834 2835 2836 2837 2838 2839 2840 2841 2842 2843 2844 2845 2846 2847 2848 2849 2850 2851 2852 2853 2854 2855 2856 2857 2858 2859 2860 2861 2862 2863 2864 2865 2866 2867 2868 2869 2870 2871 2872 2873 2874 2875 2876 2877 2878 2879 2880 2881 2882 2883 2884 2885 2886 2887 2888 2889 2890 2891 2892 2893 2894 2895 2896 2897 2898 2899 2900 2901 2902 2903 2904 2905 2906 2907 2908 2909 2910 2911 2912 2913 2914 2915 2916 2917 2918 2919 2920 2921 2922 2923 2924 2925 2926 2927 2928 2929 2930 2931 2932 2933 2934 2935 2936 2937 2938 2939 2940 2941 2942 2943 2944 2945 2946 2947 2948 2949 2950 2951 2952 2953 2954 2955 2956 2957 2958 2959 
2960 2961 2962 2963 2964 2965 2966 2967 2968 2969 2970 2971 2972 2973 2974 2975 2976 2977 2978 2979 2980 2981 2982 2983 2984 2985 2986 2987 2988 2989 2990 2991 2992 2993 2994 2995 2996 2997 2998 2999 3000 3001 3002 3003 3004 3005 3006 3007 3008 3009 3010 3011 3012 3013 3014 3015 3016 3017 3018 3019 3020 3021 3022 3023 3024 3025 3026 3027 3028 3029 3030 3031 3032 3033 3034 3035 3036 3037 3038 3039 3040 3041 3042 3043 3044 3045 3046 3047 3048 3049 3050 3051 3052 3053 3054 3055 3056 3057 3058 3059 3060 3061 3062 3063 3064 3065 3066 3067 3068 3069 3070 3071 3072 3073 3074 3075 3076 3077 3078 3079 3080 3081 3082 3083 3084 3085 3086 3087 3088 3089 3090 3091 3092 3093 3094 3095 3096 3097 3098 3099 3100 3101 3102 3103 3104 3105 3106 3107 3108 3109 3110 3111 3112 3113 3114 3115 3116 3117 3118 3119 3120 3121 3122 3123 3124 3125 3126 3127 3128 3129 3130 3131 3132 3133 3134 3135 3136 3137 3138 3139 3140 3141 3142 3143 3144 3145 3146 3147 3148 3149 3150 3151 3152 3153 3154 3155 3156 3157 3158 3159 3160 3161 3162 3163 3164 3165 3166 3167 3168 3169 3170 3171 3172 3173 3174 3175 3176 3177 3178 3179 3180 3181 3182 3183 3184 3185 3186 3187 3188 3189 3190 3191 3192 3193 3194 3195 3196 3197 3198 3199 3200 3201 3202 3203 3204 3205 3206 3207 3208 3209 3210 3211 3212 3213 3214 3215 3216 3217 3218 3219 3220 3221 3222 3223 3224 3225 3226 3227 3228 3229 3230 3231 3232 3233 3234 3235 3236 3237 3238 3239 3240 3241 3242 3243 3244 3245 3246 3247 3248 3249 3250 3251 3252 3253 3254 3255 3256 3257 3258 3259 3260 3261 3262 3263 3264 3265 3266 3267 3268 3269 3270 3271 3272 3273 3274 3275 3276 3277 3278 3279 3280 3281 3282 3283 3284 3285 3286 3287 3288 3289 3290 3291 3292 3293 3294 3295 3296 3297 3298 3299 3300 3301 3302 3303 3304 3305 3306 3307 3308 3309 3310 3311 3312 3313 3314 3315 3316 3317 3318 3319 3320 3321 3322 3323 3324 3325 3326 3327 3328 3329 3330 3331 3332 3333 3334 3335 3336 3337 3338 3339 3340 3341 3342 3343 3344 3345 3346 3347 3348 3349 3350 3351 3352 3353 3354 3355 3356 3357 3358 3359 3360 3361 3362 3363 3364 3365 3366 3367 3368 3369 3370 3371 3372 3373 3374 3375 3376 3377 3378 3379 3380 3381 3382 3383 3384 3385 3386 3387 3388 3389 3390 3391 3392 3393 3394 3395 3396 3397 3398 3399 3400 3401 3402 3403 3404 3405 3406 3407 3408 3409 3410 3411 3412 3413 3414 3415 3416 3417 3418 3419 3420 3421 3422 3423 3424 3425 3426 3427 3428 3429 3430 3431 3432 3433 3434 3435 3436 3437 3438 3439 3440 3441 3442 3443 3444 3445 3446 3447 3448 3449 3450 3451 3452 3453 3454 3455 3456 3457 3458 3459 3460 3461 3462 3463 3464 3465 3466 3467 3468 3469 3470 3471 3472 3473 3474 3475 3476 3477 3478 3479 3480 3481 3482 3483 3484 3485 3486 3487 3488 3489 3490 3491 3492 3493 3494 3495 3496 3497 3498 3499 3500 3501 3502 3503 3504 3505 3506 3507 3508 3509 3510 3511 3512 3513 3514 3515 3516 3517 3518 3519 3520 3521 3522 3523 3524 3525 3526 3527 3528 3529 3530 3531 3532 3533 3534 3535 3536 3537 3538 3539 3540 3541 3542 3543 3544 3545 3546 3547 3548 3549 3550 3551 3552 3553 3554 3555 3556 3557 3558 3559 3560 3561 3562 3563 3564 3565 3566 3567 3568 3569 3570 3571 3572 3573 3574 3575 3576 3577 3578 3579 3580 3581 3582 3583 3584 3585 3586 3587 3588 3589 3590 3591 3592 3593 3594 3595 3596 3597 3598 3599 3600 3601 3602 3603 3604 3605 3606 3607 3608 3609 3610 3611 3612 3613 3614 3615 3616 3617 3618 3619 3620 3621 3622 3623 3624 3625 3626 3627 3628 3629 3630 3631 3632 3633 3634 3635 3636 3637 3638 3639 3640 3641 3642 3643 3644 3645 3646 3647 3648 3649 3650 3651 3652 3653 3654 3655 3656 3657 3658 3659 3660 3661 3662 3663 3664 3665 3666 3667 3668 3669 3670 
3671 3672 3673 3674 3675 3676 3677 3678 3679 3680 3681 3682 3683 3684 3685 3686 3687 3688 3689 3690 3691 3692 3693 3694 3695 3696 3697 3698 3699 3700 3701 3702 3703 3704 3705 3706 3707 3708 3709 3710 3711 3712 3713 3714 3715 3716 3717 3718 3719 3720 3721 3722 3723 3724 3725 3726 3727 3728 3729 3730 3731 3732 3733 3734 3735 3736 3737 3738 3739 3740 3741 3742 3743 3744 3745 3746 3747 3748 3749 3750 3751 3752 3753 3754 3755 3756 3757 3758 3759 3760 3761 3762 3763 3764 3765 3766 3767 3768 3769 3770 3771 3772 3773 3774 3775 3776 3777 3778 3779 3780 3781 3782 3783 3784 3785 3786 3787 3788 3789 3790 3791 3792 3793 3794 3795 3796 3797 3798 3799 3800 3801 3802 3803 3804 3805 3806 3807 3808 3809 3810 3811 3812 3813 3814 3815 3816 3817 3818 3819 3820 3821 3822 3823 3824 3825 3826 3827 3828 3829 3830 3831 3832 3833 3834 3835 3836 3837 3838 3839 3840 3841 3842 3843 3844 3845 3846 3847 3848 3849 3850 3851 3852 3853 3854 3855 3856 3857 3858 3859 3860 3861 3862 3863 3864 3865 3866 3867 3868 3869 3870 3871 3872 3873 3874 3875 3876 3877 3878 3879 3880 3881 3882 3883 3884 3885 3886 3887 3888 3889 3890 3891 3892 3893 3894 3895 3896 3897 3898 3899 3900 3901 3902 3903 3904 3905 3906 3907 3908 3909 3910 3911 3912 3913 3914 3915 3916 3917 3918 3919 3920 3921 3922 3923 3924 3925 3926 3927 3928 3929 3930 3931 3932 3933 3934 3935 3936 3937 3938 3939 3940 3941 3942 3943 3944 3945 3946 3947 3948 3949 3950 3951 3952 3953 3954 3955 3956 3957 3958 3959 3960 3961 3962 3963 3964 3965 3966 3967 3968 3969 3970 3971 3972 3973 3974 3975 3976 3977 3978 3979 3980 3981 3982 3983 3984 3985 3986 3987 3988 3989 3990 3991 3992 3993 3994 3995 3996 3997 3998 3999 4000 4001 4002 4003 4004 4005 4006 4007 4008 4009 4010 4011 4012 4013 4014 4015 4016 4017 4018 4019 4020 4021 4022 4023 4024 4025 4026 4027 4028 4029 4030 4031 4032 4033 4034 4035 4036 4037 4038 4039 4040 4041 4042 4043 4044 4045 4046 4047 4048 4049 4050 4051 4052 4053 4054 4055 4056 4057 4058 4059 4060 4061 4062 4063 4064 4065 4066 4067 4068 4069 4070 4071 4072 4073 4074 4075 4076 4077 4078 4079 4080 4081 4082 4083 4084 4085 4086 4087 4088 4089 4090 4091 4092 4093 4094 4095 4096 4097 4098 4099 4100 4101 4102 4103 4104 4105 4106 4107 4108 4109 4110 4111 4112 4113 4114 4115 4116 4117 4118 4119 4120 4121 4122 4123 4124 4125 4126 4127 4128 4129 4130 4131 4132 4133 4134 4135 4136 4137 4138 4139 4140 4141 4142 4143 4144 4145 4146 4147 4148 4149 4150 4151 4152 4153 4154 4155 4156 4157 4158 4159 4160 4161 4162 4163 4164 4165 4166 4167 4168 4169 4170 4171 4172 4173 4174 4175 4176 4177 4178 4179 4180 4181 4182 4183 4 |