Coverage for /pythoncovmergedfiles/medio/medio/usr/local/lib/python3.11/site-packages/IPython/core/completer.py: 20%

Shortcuts on this page

r m x   toggle line displays

j k   next/prev highlighted chunk

0   (zero) top of page

1   (one) first highlighted chunk

1366 statements  

1"""Completion for IPython. 

2 

3This module started as fork of the rlcompleter module in the Python standard 

4library. The original enhancements made to rlcompleter have been sent 

5upstream and were accepted as of Python 2.3, 

6 

7This module now support a wide variety of completion mechanism both available 

8for normal classic Python code, as well as completer for IPython specific 

9Syntax like magics. 

10 

11Latex and Unicode completion 

12============================ 

13 

14IPython and compatible frontends not only can complete your code, but can help 

15you to input a wide range of characters. In particular we allow you to insert 

16a unicode character using the tab completion mechanism. 

17 

18Forward latex/unicode completion 

19-------------------------------- 

20 

21Forward completion allows you to easily type a unicode character using its latex 

22name, or unicode long description. To do so type a backslash follow by the 

23relevant name and press tab: 

24 

25 

26Using latex completion: 

27 

28.. code:: 

29 

30 \\alpha<tab> 

31 α 

32 

33or using unicode completion: 

34 

35 

36.. code:: 

37 

38 \\GREEK SMALL LETTER ALPHA<tab> 

39 α 

40 

41 

42Only valid Python identifiers will complete. Combining characters (like arrow or 

43dots) are also available, unlike latex they need to be put after the their 

44counterpart that is to say, ``F\\\\vec<tab>`` is correct, not ``\\\\vec<tab>F``. 

45 

46Some browsers are known to display combining characters incorrectly. 

47 

48Backward latex completion 

49------------------------- 

50 

51It is sometime challenging to know how to type a character, if you are using 

52IPython, or any compatible frontend you can prepend backslash to the character 

53and press :kbd:`Tab` to expand it to its latex form. 

54 

55.. code:: 

56 

57 \\α<tab> 

58 \\alpha 

59 

60 

61Both forward and backward completions can be deactivated by setting the 

62:std:configtrait:`Completer.backslash_combining_completions` option to 

63``False``. 

64 

65 

66Experimental 

67============ 

68 

69Starting with IPython 6.0, this module can make use of the Jedi library to 

70generate completions both using static analysis of the code, and dynamically 

71inspecting multiple namespaces. Jedi is an autocompletion and static analysis 

72for Python. The APIs attached to this new mechanism is unstable and will 

73raise unless use in an :any:`provisionalcompleter` context manager. 

74 

75You will find that the following are experimental: 

76 

77 - :any:`provisionalcompleter` 

78 - :any:`IPCompleter.completions` 

79 - :any:`Completion` 

80 - :any:`rectify_completions` 

81 

82.. note:: 

83 

84 better name for :any:`rectify_completions` ? 

85 

86We welcome any feedback on these new API, and we also encourage you to try this 

87module in debug mode (start IPython with ``--Completer.debug=True``) in order 

88to have extra logging information if :any:`jedi` is crashing, or if current 

89IPython completer pending deprecations are returning results not yet handled 

90by :any:`jedi` 

91 

92Using Jedi for tab completion allow snippets like the following to work without 

93having to execute any code: 

94 

95 >>> myvar = ['hello', 42] 

96 ... myvar[1].bi<tab> 

97 

98Tab completion will be able to infer that ``myvar[1]`` is a real number without 

99executing almost any code unlike the deprecated :any:`IPCompleter.greedy` 

100option. 

101 

102Be sure to update :any:`jedi` to the latest stable version or to try the 

103current development version to get better completions. 

104 

105Matchers 

106======== 

107 

108All completions routines are implemented using unified *Matchers* API. 

109The matchers API is provisional and subject to change without notice. 

110 

111The built-in matchers include: 

112 

113- :any:`IPCompleter.dict_key_matcher`: dictionary key completions, 

114- :any:`IPCompleter.magic_matcher`: completions for magics, 

115- :any:`IPCompleter.unicode_name_matcher`, 

116 :any:`IPCompleter.fwd_unicode_matcher` 

117 and :any:`IPCompleter.latex_name_matcher`: see `Forward latex/unicode completion`_, 

118- :any:`back_unicode_name_matcher` and :any:`back_latex_name_matcher`: see `Backward latex completion`_, 

119- :any:`IPCompleter.file_matcher`: paths to files and directories, 

120- :any:`IPCompleter.python_func_kw_matcher` - function keywords, 

121- :any:`IPCompleter.python_matches` - globals and attributes (v1 API), 

122- ``IPCompleter.jedi_matcher`` - static analysis with Jedi, 

123- :any:`IPCompleter.custom_completer_matcher` - pluggable completer with a default 

124 implementation in :any:`InteractiveShell` which uses IPython hooks system 

125 (`complete_command`) with string dispatch (including regular expressions). 

126 Differently to other matchers, ``custom_completer_matcher`` will not suppress 

127 Jedi results to match behaviour in earlier IPython versions. 

128 

129Custom matchers can be added by appending to ``IPCompleter.custom_matchers`` list. 

130 

131Matcher API 

132----------- 

133 

134Simplifying some details, the ``Matcher`` interface can described as 

135 

136.. code-block:: 

137 

138 MatcherAPIv1 = Callable[[str], list[str]] 

139 MatcherAPIv2 = Callable[[CompletionContext], SimpleMatcherResult] 

140 

141 Matcher = MatcherAPIv1 | MatcherAPIv2 

142 

143The ``MatcherAPIv1`` reflects the matcher API as available prior to IPython 8.6.0 

144and remains supported as a simplest way for generating completions. This is also 

145currently the only API supported by the IPython hooks system `complete_command`. 

146 

147To distinguish between matcher versions ``matcher_api_version`` attribute is used. 

148More precisely, the API allows to omit ``matcher_api_version`` for v1 Matchers, 

149and requires a literal ``2`` for v2 Matchers. 

150 

151Once the API stabilises future versions may relax the requirement for specifying 

152``matcher_api_version`` by switching to :any:`functools.singledispatch`, therefore 

153please do not rely on the presence of ``matcher_api_version`` for any purposes. 

154 

155Suppression of competing matchers 

156--------------------------------- 

157 

158By default results from all matchers are combined, in the order determined by 

159their priority. Matchers can request to suppress results from subsequent 

160matchers by setting ``suppress`` to ``True`` in the ``MatcherResult``. 

161 

162When multiple matchers simultaneously request suppression, the results from of 

163the matcher with higher priority will be returned. 

164 

165Sometimes it is desirable to suppress most but not all other matchers; 

166this can be achieved by adding a set of identifiers of matchers which 

167should not be suppressed to ``MatcherResult`` under ``do_not_suppress`` key. 

168 

169The suppression behaviour can is user-configurable via 

170:std:configtrait:`IPCompleter.suppress_competing_matchers`. 

171""" 

172 

173 

174# Copyright (c) IPython Development Team. 

175# Distributed under the terms of the Modified BSD License. 

176# 

177# Some of this code originated from rlcompleter in the Python standard library 

178# Copyright (C) 2001 Python Software Foundation, www.python.org 

179 

180from __future__ import annotations 

181import builtins as builtin_mod 

182import enum 

183import glob 

184import inspect 

185import itertools 

186import keyword 

187import ast 

188import os 

189import re 

190import string 

191import sys 

192import tokenize 

193import time 

194import unicodedata 

195import uuid 

196import warnings 

197from ast import literal_eval 

198from collections import defaultdict 

199from contextlib import contextmanager 

200from dataclasses import dataclass 

201from functools import cached_property, partial 

202from types import SimpleNamespace 

203from typing import ( 

204 Iterable, 

205 Iterator, 

206 Union, 

207 Any, 

208 Sequence, 

209 Optional, 

210 TYPE_CHECKING, 

211 Sized, 

212 TypeVar, 

213 Literal, 

214) 

215 

216from IPython.core.guarded_eval import guarded_eval, EvaluationContext 

217from IPython.core.error import TryNext 

218from IPython.core.inputtransformer2 import ESC_MAGIC 

219from IPython.core.latex_symbols import latex_symbols, reverse_latex_symbol 

220from IPython.testing.skipdoctest import skip_doctest 

221from IPython.utils import generics 

222from IPython.utils.PyColorize import theme_table 

223from IPython.utils.decorators import sphinx_options 

224from IPython.utils.dir2 import dir2, get_real_method 

225from IPython.utils.path import ensure_dir_exists 

226from IPython.utils.process import arg_split 

227from traitlets import ( 

228 Bool, 

229 Enum, 

230 Int, 

231 List as ListTrait, 

232 Unicode, 

233 Dict as DictTrait, 

234 DottedObjectName, 

235 Union as UnionTrait, 

236 observe, 

237) 

238from traitlets.config.configurable import Configurable 

239from traitlets.utils.importstring import import_item 

240 

241import __main__ 

242 

243from typing import cast 

244 

245if sys.version_info < (3, 12): 

246 from typing_extensions import TypedDict, NotRequired, Protocol, TypeAlias, TypeGuard 

247else: 

248 from typing import TypedDict, NotRequired, Protocol, TypeAlias, TypeGuard 

249 

250 

251# skip module docstests 

252__skip_doctest__ = True 

253 

254 

255try: 

256 import jedi 

257 jedi.settings.case_insensitive_completion = False 

258 import jedi.api.helpers 

259 import jedi.api.classes 

260 JEDI_INSTALLED = True 

261except ImportError: 

262 JEDI_INSTALLED = False 

263 

264 

265# ----------------------------------------------------------------------------- 

266# Globals 

267#----------------------------------------------------------------------------- 

268 

269# ranges where we have most of the valid unicode names. We could be more finer 

270# grained but is it worth it for performance While unicode have character in the 

271# range 0, 0x110000, we seem to have name for about 10% of those. (131808 as I 

272# write this). With below range we cover them all, with a density of ~67% 

273# biggest next gap we consider only adds up about 1% density and there are 600 

274# gaps that would need hard coding. 

275_UNICODE_RANGES = [(32, 0x323B0), (0xE0001, 0xE01F0)] 

276 

277# Public API 

278__all__ = ["Completer", "IPCompleter"] 

279 

280if sys.platform == 'win32': 

281 PROTECTABLES = ' ' 

282else: 

283 PROTECTABLES = ' ()[]{}?=\\|;:\'#*"^&' 

284 

285# Protect against returning an enormous number of completions which the frontend 

286# may have trouble processing. 

287MATCHES_LIMIT = 500 

288 

289# Completion type reported when no type can be inferred. 

290_UNKNOWN_TYPE = "<unknown>" 

291 

292# sentinel value to signal lack of a match 

293not_found = object() 

294 

295class ProvisionalCompleterWarning(FutureWarning): 

296 """ 

297 Exception raise by an experimental feature in this module. 

298 

299 Wrap code in :any:`provisionalcompleter` context manager if you 

300 are certain you want to use an unstable feature. 

301 """ 

302 pass 

303 

304warnings.filterwarnings('error', category=ProvisionalCompleterWarning) 

305 

306 

307@skip_doctest 

308@contextmanager 

309def provisionalcompleter(action='ignore'): 

310 """ 

311 This context manager has to be used in any place where unstable completer 

312 behavior and API may be called. 

313 

314 >>> with provisionalcompleter(): 

315 ... completer.do_experimental_things() # works 

316 

317 >>> completer.do_experimental_things() # raises. 

318 

319 .. note:: 

320 

321 Unstable 

322 

323 By using this context manager you agree that the API in use may change 

324 without warning, and that you won't complain if they do so. 

325 

326 You also understand that, if the API is not to your liking, you should report 

327 a bug to explain your use case upstream. 

328 

329 We'll be happy to get your feedback, feature requests, and improvements on 

330 any of the unstable APIs! 

331 """ 

332 with warnings.catch_warnings(): 

333 warnings.filterwarnings(action, category=ProvisionalCompleterWarning) 

334 yield 

335 

336 

337def has_open_quotes(s: str) -> Union[str, bool]: 

338 """Return whether a string has open quotes. 

339 

340 This simply counts whether the number of quote characters of either type in 

341 the string is odd. 

342 

343 Returns 

344 ------- 

345 If there is an open quote, the quote character is returned. Else, return 

346 False. 

347 """ 

348 # We check " first, then ', so complex cases with nested quotes will get 

349 # the " to take precedence. 

350 if s.count('"') % 2: 

351 return '"' 

352 elif s.count("'") % 2: 

353 return "'" 

354 else: 

355 return False 

356 

357 

358def protect_filename(s: str, protectables: str = PROTECTABLES) -> str: 

359 """Escape a string to protect certain characters.""" 

360 if set(s) & set(protectables): 

361 if sys.platform == "win32": 

362 return '"' + s + '"' 

363 else: 

364 return "".join(("\\" + c if c in protectables else c) for c in s) 

365 else: 

366 return s 

367 

368 

369def expand_user(path: str) -> tuple[str, bool, str]: 

370 """Expand ``~``-style usernames in strings. 

371 

372 This is similar to :func:`os.path.expanduser`, but it computes and returns 

373 extra information that will be useful if the input was being used in 

374 computing completions, and you wish to return the completions with the 

375 original '~' instead of its expanded value. 

376 

377 Parameters 

378 ---------- 

379 path : str 

380 String to be expanded. If no ~ is present, the output is the same as the 

381 input. 

382 

383 Returns 

384 ------- 

385 newpath : str 

386 Result of ~ expansion in the input path. 

387 tilde_expand : bool 

388 Whether any expansion was performed or not. 

389 tilde_val : str 

390 The value that ~ was replaced with. 

391 """ 

392 # Default values 

393 tilde_expand = False 

394 tilde_val = '' 

395 newpath = path 

396 

397 if path.startswith('~'): 

398 tilde_expand = True 

399 rest = len(path)-1 

400 newpath = os.path.expanduser(path) 

401 if rest: 

402 tilde_val = newpath[:-rest] 

403 else: 

404 tilde_val = newpath 

405 

406 return newpath, tilde_expand, tilde_val 

407 

408 

409def compress_user(path:str, tilde_expand:bool, tilde_val:str) -> str: 

410 """Does the opposite of expand_user, with its outputs. 

411 """ 

412 if tilde_expand: 

413 return path.replace(tilde_val, '~') 

414 else: 

415 return path 

416 

417 

418def completions_sorting_key(word): 

419 """key for sorting completions 

420 

421 This does several things: 

422 

423 - Demote any completions starting with underscores to the end 

424 - Insert any %magic and %%cellmagic completions in the alphabetical order 

425 by their name 

426 """ 

427 prio1, prio2 = 0, 0 

428 

429 if word.startswith('__'): 

430 prio1 = 2 

431 elif word.startswith('_'): 

432 prio1 = 1 

433 

434 if word.endswith('='): 

435 prio1 = -1 

436 

437 if word.startswith('%%'): 

438 # If there's another % in there, this is something else, so leave it alone 

439 if "%" not in word[2:]: 

440 word = word[2:] 

441 prio2 = 2 

442 elif word.startswith('%'): 

443 if "%" not in word[1:]: 

444 word = word[1:] 

445 prio2 = 1 

446 

447 return prio1, word, prio2 

448 

449 

450class _FakeJediCompletion: 

451 """ 

452 This is a workaround to communicate to the UI that Jedi has crashed and to 

453 report a bug. Will be used only id :any:`IPCompleter.debug` is set to true. 

454 

455 Added in IPython 6.0 so should likely be removed for 7.0 

456 

457 """ 

458 

459 def __init__(self, name): 

460 

461 self.name = name 

462 self.complete = name 

463 self.type = 'crashed' 

464 self.name_with_symbols = name 

465 self.signature = "" 

466 self._origin = "fake" 

467 self.text = "crashed" 

468 

469 def __repr__(self): 

470 return '<Fake completion object jedi has crashed>' 

471 

472 

473_JediCompletionLike = Union["jedi.api.Completion", _FakeJediCompletion] 

474 

475 

476class Completion: 

477 """ 

478 Completion object used and returned by IPython completers. 

479 

480 .. warning:: 

481 

482 Unstable 

483 

484 This function is unstable, API may change without warning. 

485 It will also raise unless use in proper context manager. 

486 

487 This act as a middle ground :any:`Completion` object between the 

488 :any:`jedi.api.classes.Completion` object and the Prompt Toolkit completion 

489 object. While Jedi need a lot of information about evaluator and how the 

490 code should be ran/inspected, PromptToolkit (and other frontend) mostly 

491 need user facing information. 

492 

493 - Which range should be replaced replaced by what. 

494 - Some metadata (like completion type), or meta information to displayed to 

495 the use user. 

496 

497 For debugging purpose we can also store the origin of the completion (``jedi``, 

498 ``IPython.python_matches``, ``IPython.magics_matches``...). 

499 """ 

500 

501 __slots__ = ['start', 'end', 'text', 'type', 'signature', '_origin'] 

502 

503 def __init__( 

504 self, 

505 start: int, 

506 end: int, 

507 text: str, 

508 *, 

509 type: Optional[str] = None, 

510 _origin="", 

511 signature="", 

512 ) -> None: 

513 warnings.warn( 

514 "``Completion`` is a provisional API (as of IPython 6.0). " 

515 "It may change without warnings. " 

516 "Use in corresponding context manager.", 

517 category=ProvisionalCompleterWarning, 

518 stacklevel=2, 

519 ) 

520 

521 self.start = start 

522 self.end = end 

523 self.text = text 

524 self.type = type 

525 self.signature = signature 

526 self._origin = _origin 

527 

528 def __repr__(self): 

529 return '<Completion start=%s end=%s text=%r type=%r, signature=%r,>' % \ 

530 (self.start, self.end, self.text, self.type or '?', self.signature or '?') 

531 

532 def __eq__(self, other) -> bool: 

533 """ 

534 Equality and hash do not hash the type (as some completer may not be 

535 able to infer the type), but are use to (partially) de-duplicate 

536 completion. 

537 

538 Completely de-duplicating completion is a bit tricker that just 

539 comparing as it depends on surrounding text, which Completions are not 

540 aware of. 

541 """ 

542 return self.start == other.start and \ 

543 self.end == other.end and \ 

544 self.text == other.text 

545 

546 def __hash__(self): 

547 return hash((self.start, self.end, self.text)) 

548 

549 

550class SimpleCompletion: 

551 """Completion item to be included in the dictionary returned by new-style Matcher (API v2). 

552 

553 .. warning:: 

554 

555 Provisional 

556 

557 This class is used to describe the currently supported attributes of 

558 simple completion items, and any additional implementation details 

559 should not be relied on. Additional attributes may be included in 

560 future versions, and meaning of text disambiguated from the current 

561 dual meaning of "text to insert" and "text to used as a label". 

562 """ 

563 

564 __slots__ = ["text", "type"] 

565 

566 def __init__(self, text: str, *, type: Optional[str] = None): 

567 self.text = text 

568 self.type = type 

569 

570 def __repr__(self): 

571 return f"<SimpleCompletion text={self.text!r} type={self.type!r}>" 

572 

573 

574class _MatcherResultBase(TypedDict): 

575 """Definition of dictionary to be returned by new-style Matcher (API v2).""" 

576 

577 #: Suffix of the provided ``CompletionContext.token``, if not given defaults to full token. 

578 matched_fragment: NotRequired[str] 

579 

580 #: Whether to suppress results from all other matchers (True), some 

581 #: matchers (set of identifiers) or none (False); default is False. 

582 suppress: NotRequired[Union[bool, set[str]]] 

583 

584 #: Identifiers of matchers which should NOT be suppressed when this matcher 

585 #: requests to suppress all other matchers; defaults to an empty set. 

586 do_not_suppress: NotRequired[set[str]] 

587 

588 #: Are completions already ordered and should be left as-is? default is False. 

589 ordered: NotRequired[bool] 

590 

591 

592@sphinx_options(show_inherited_members=True, exclude_inherited_from=["dict"]) 

593class SimpleMatcherResult(_MatcherResultBase, TypedDict): 

594 """Result of new-style completion matcher.""" 

595 

596 # note: TypedDict is added again to the inheritance chain 

597 # in order to get __orig_bases__ for documentation 

598 

599 #: List of candidate completions 

600 completions: Sequence[SimpleCompletion] | Iterator[SimpleCompletion] 

601 

602 

603class _JediMatcherResult(_MatcherResultBase): 

604 """Matching result returned by Jedi (will be processed differently)""" 

605 

606 #: list of candidate completions 

607 completions: Iterator[_JediCompletionLike] 

608 

609 

610AnyMatcherCompletion = Union[_JediCompletionLike, SimpleCompletion] 

611AnyCompletion = TypeVar("AnyCompletion", AnyMatcherCompletion, Completion) 

612 

613 

614@dataclass 

615class CompletionContext: 

616 """Completion context provided as an argument to matchers in the Matcher API v2.""" 

617 

618 # rationale: many legacy matchers relied on completer state (`self.text_until_cursor`) 

619 # which was not explicitly visible as an argument of the matcher, making any refactor 

620 # prone to errors; by explicitly passing `cursor_position` we can decouple the matchers 

621 # from the completer, and make substituting them in sub-classes easier. 

622 

623 #: Relevant fragment of code directly preceding the cursor. 

624 #: The extraction of token is implemented via splitter heuristic 

625 #: (following readline behaviour for legacy reasons), which is user configurable 

626 #: (by switching the greedy mode). 

627 token: str 

628 

629 #: The full available content of the editor or buffer 

630 full_text: str 

631 

632 #: Cursor position in the line (the same for ``full_text`` and ``text``). 

633 cursor_position: int 

634 

635 #: Cursor line in ``full_text``. 

636 cursor_line: int 

637 

638 #: The maximum number of completions that will be used downstream. 

639 #: Matchers can use this information to abort early. 

640 #: The built-in Jedi matcher is currently excepted from this limit. 

641 # If not given, return all possible completions. 

642 limit: Optional[int] 

643 

644 @cached_property 

645 def text_until_cursor(self) -> str: 

646 return self.line_with_cursor[: self.cursor_position] 

647 

648 @cached_property 

649 def line_with_cursor(self) -> str: 

650 return self.full_text.split("\n")[self.cursor_line] 

651 

652 

653#: Matcher results for API v2. 

654MatcherResult = Union[SimpleMatcherResult, _JediMatcherResult] 

655 

656 

657class _MatcherAPIv1Base(Protocol): 

658 def __call__(self, text: str) -> list[str]: 

659 """Call signature.""" 

660 ... 

661 

662 #: Used to construct the default matcher identifier 

663 __qualname__: str 

664 

665 

666class _MatcherAPIv1Total(_MatcherAPIv1Base, Protocol): 

667 #: API version 

668 matcher_api_version: Optional[Literal[1]] 

669 

670 def __call__(self, text: str) -> list[str]: 

671 """Call signature.""" 

672 ... 

673 

674 

675#: Protocol describing Matcher API v1. 

676MatcherAPIv1: TypeAlias = Union[_MatcherAPIv1Base, _MatcherAPIv1Total] 

677 

678 

679class MatcherAPIv2(Protocol): 

680 """Protocol describing Matcher API v2.""" 

681 

682 #: API version 

683 matcher_api_version: Literal[2] = 2 

684 

685 def __call__(self, context: CompletionContext) -> MatcherResult: 

686 """Call signature.""" 

687 ... 

688 

689 #: Used to construct the default matcher identifier 

690 __qualname__: str 

691 

692 

693Matcher: TypeAlias = Union[MatcherAPIv1, MatcherAPIv2] 

694 

695 

696def _is_matcher_v1(matcher: Matcher) -> TypeGuard[MatcherAPIv1]: 

697 api_version = _get_matcher_api_version(matcher) 

698 return api_version == 1 

699 

700 

701def _is_matcher_v2(matcher: Matcher) -> TypeGuard[MatcherAPIv2]: 

702 api_version = _get_matcher_api_version(matcher) 

703 return api_version == 2 

704 

705 

706def _is_sizable(value: Any) -> TypeGuard[Sized]: 

707 """Determines whether objects is sizable""" 

708 return hasattr(value, "__len__") 

709 

710 

711def _is_iterator(value: Any) -> TypeGuard[Iterator]: 

712 """Determines whether objects is sizable""" 

713 return hasattr(value, "__next__") 

714 

715 

716def has_any_completions(result: MatcherResult) -> bool: 

717 """Check if any result includes any completions.""" 

718 completions = result["completions"] 

719 if _is_sizable(completions): 

720 return len(completions) != 0 

721 if _is_iterator(completions): 

722 try: 

723 old_iterator = completions 

724 first = next(old_iterator) 

725 result["completions"] = cast( 

726 Iterator[SimpleCompletion], 

727 itertools.chain([first], old_iterator), 

728 ) 

729 return True 

730 except StopIteration: 

731 return False 

732 raise ValueError( 

733 "Completions returned by matcher need to be an Iterator or a Sizable" 

734 ) 

735 

736 

737def completion_matcher( 

738 *, 

739 priority: Optional[float] = None, 

740 identifier: Optional[str] = None, 

741 api_version: int = 1, 

742) -> Callable[[Matcher], Matcher]: 

743 """Adds attributes describing the matcher. 

744 

745 Parameters 

746 ---------- 

747 priority : Optional[float] 

748 The priority of the matcher, determines the order of execution of matchers. 

749 Higher priority means that the matcher will be executed first. Defaults to 0. 

750 identifier : Optional[str] 

751 identifier of the matcher allowing users to modify the behaviour via traitlets, 

752 and also used to for debugging (will be passed as ``origin`` with the completions). 

753 

754 Defaults to matcher function's ``__qualname__`` (for example, 

755 ``IPCompleter.file_matcher`` for the built-in matched defined 

756 as a ``file_matcher`` method of the ``IPCompleter`` class). 

757 api_version: Optional[int] 

758 version of the Matcher API used by this matcher. 

759 Currently supported values are 1 and 2. 

760 Defaults to 1. 

761 """ 

762 

763 def wrapper(func: Matcher): 

764 func.matcher_priority = priority or 0 # type: ignore 

765 func.matcher_identifier = identifier or func.__qualname__ # type: ignore 

766 func.matcher_api_version = api_version # type: ignore 

767 if TYPE_CHECKING: 

768 if api_version == 1: 

769 func = cast(MatcherAPIv1, func) 

770 elif api_version == 2: 

771 func = cast(MatcherAPIv2, func) 

772 return func 

773 

774 return wrapper 

775 

776 

777def _get_matcher_priority(matcher: Matcher): 

778 return getattr(matcher, "matcher_priority", 0) 

779 

780 

781def _get_matcher_id(matcher: Matcher): 

782 return getattr(matcher, "matcher_identifier", matcher.__qualname__) 

783 

784 

785def _get_matcher_api_version(matcher): 

786 return getattr(matcher, "matcher_api_version", 1) 

787 

788 

789context_matcher = partial(completion_matcher, api_version=2) 

790 

791 

792_IC = Iterable[Completion] 

793 

794 

795def _deduplicate_completions(text: str, completions: _IC)-> _IC: 

796 """ 

797 Deduplicate a set of completions. 

798 

799 .. warning:: 

800 

801 Unstable 

802 

803 This function is unstable, API may change without warning. 

804 

805 Parameters 

806 ---------- 

807 text : str 

808 text that should be completed. 

809 completions : Iterator[Completion] 

810 iterator over the completions to deduplicate 

811 

812 Yields 

813 ------ 

814 `Completions` objects 

815 Completions coming from multiple sources, may be different but end up having 

816 the same effect when applied to ``text``. If this is the case, this will 

817 consider completions as equal and only emit the first encountered. 

818 Not folded in `completions()` yet for debugging purpose, and to detect when 

819 the IPython completer does return things that Jedi does not, but should be 

820 at some point. 

821 """ 

822 completions = list(completions) 

823 if not completions: 

824 return 

825 

826 new_start = min(c.start for c in completions) 

827 new_end = max(c.end for c in completions) 

828 

829 seen = set() 

830 for c in completions: 

831 new_text = text[new_start:c.start] + c.text + text[c.end:new_end] 

832 if new_text not in seen: 

833 yield c 

834 seen.add(new_text) 

835 

836 

837def rectify_completions(text: str, completions: _IC, *, _debug: bool = False) -> _IC: 

838 """ 

839 Rectify a set of completions to all have the same ``start`` and ``end`` 

840 

841 .. warning:: 

842 

843 Unstable 

844 

845 This function is unstable, API may change without warning. 

846 It will also raise unless use in proper context manager. 

847 

848 Parameters 

849 ---------- 

850 text : str 

851 text that should be completed. 

852 completions : Iterator[Completion] 

853 iterator over the completions to rectify 

854 _debug : bool 

855 Log failed completion 

856 

857 Notes 

858 ----- 

859 :any:`jedi.api.classes.Completion` s returned by Jedi may not have the same start and end, though 

860 the Jupyter Protocol requires them to behave like so. This will readjust 

861 the completion to have the same ``start`` and ``end`` by padding both 

862 extremities with surrounding text. 

863 

864 During stabilisation should support a ``_debug`` option to log which 

865 completion are return by the IPython completer and not found in Jedi in 

866 order to make upstream bug report. 

867 """ 

868 warnings.warn("`rectify_completions` is a provisional API (as of IPython 6.0). " 

869 "It may change without warnings. " 

870 "Use in corresponding context manager.", 

871 category=ProvisionalCompleterWarning, stacklevel=2) 

872 

873 completions = list(completions) 

874 if not completions: 

875 return 

876 starts = (c.start for c in completions) 

877 ends = (c.end for c in completions) 

878 

879 new_start = min(starts) 

880 new_end = max(ends) 

881 

882 seen_jedi = set() 

883 seen_python_matches = set() 

884 for c in completions: 

885 new_text = text[new_start:c.start] + c.text + text[c.end:new_end] 

886 if c._origin == 'jedi': 

887 seen_jedi.add(new_text) 

888 elif c._origin == "IPCompleter.python_matcher": 

889 seen_python_matches.add(new_text) 

890 yield Completion(new_start, new_end, new_text, type=c.type, _origin=c._origin, signature=c.signature) 

891 diff = seen_python_matches.difference(seen_jedi) 

892 if diff and _debug: 

893 print('IPython.python matches have extras:', diff) 

894 

895 

896if sys.platform == 'win32': 

897 DELIMS = ' \t\n`!@#$^&*()=+[{]}|;\'",<>?' 

898else: 

899 DELIMS = ' \t\n`!@#$^&*()=+[{]}\\|;:\'",<>?' 

900 

901GREEDY_DELIMS = ' =\r\n' 

902 

903 

904class CompletionSplitter: 

905 """An object to split an input line in a manner similar to readline. 

906 

907 By having our own implementation, we can expose readline-like completion in 

908 a uniform manner to all frontends. This object only needs to be given the 

909 line of text to be split and the cursor position on said line, and it 

910 returns the 'word' to be completed on at the cursor after splitting the 

911 entire line. 

912 

913 What characters are used as splitting delimiters can be controlled by 

914 setting the ``delims`` attribute (this is a property that internally 

915 automatically builds the necessary regular expression)""" 

916 

917 # Private interface 

918 

919 # A string of delimiter characters. The default value makes sense for 

920 # IPython's most typical usage patterns. 

921 _delims = DELIMS 

922 

923 # The expression (a normal string) to be compiled into a regular expression 

924 # for actual splitting. We store it as an attribute mostly for ease of 

925 # debugging, since this type of code can be so tricky to debug. 

926 _delim_expr = None 

927 

928 # The regular expression that does the actual splitting 

929 _delim_re = None 

930 

931 def __init__(self, delims=None): 

932 delims = CompletionSplitter._delims if delims is None else delims 

933 self.delims = delims 

934 

935 @property 

936 def delims(self): 

937 """Return the string of delimiter characters.""" 

938 return self._delims 

939 

940 @delims.setter 

941 def delims(self, delims): 

942 """Set the delimiters for line splitting.""" 

943 expr = '[' + ''.join('\\'+ c for c in delims) + ']' 

944 self._delim_re = re.compile(expr) 

945 self._delims = delims 

946 self._delim_expr = expr 

947 

948 def split_line(self, line, cursor_pos=None): 

949 """Split a line of text with a cursor at the given position. 

950 """ 

951 cut_line = line if cursor_pos is None else line[:cursor_pos] 

952 return self._delim_re.split(cut_line)[-1] 

953 

954 

955 

956class Completer(Configurable): 

957 

958 greedy = Bool( 

959 False, 

960 help="""Activate greedy completion. 

961 

962 .. deprecated:: 8.8 

963 Use :std:configtrait:`Completer.evaluation` and :std:configtrait:`Completer.auto_close_dict_keys` instead. 

964 

965 When enabled in IPython 8.8 or newer, changes configuration as follows: 

966 

967 - ``Completer.evaluation = 'unsafe'`` 

968 - ``Completer.auto_close_dict_keys = True`` 

969 """, 

970 ).tag(config=True) 

971 

972 evaluation = Enum( 

973 ("forbidden", "minimal", "limited", "unsafe", "dangerous"), 

974 default_value="limited", 

975 help="""Policy for code evaluation under completion. 

976 

977 Successive options allow to enable more eager evaluation for better 

978 completion suggestions, including for nested dictionaries, nested lists, 

979 or even results of function calls. 

980 Setting ``unsafe`` or higher can lead to evaluation of arbitrary user 

981 code on :kbd:`Tab` with potentially unwanted or dangerous side effects. 

982 

983 Allowed values are: 

984 

985 - ``forbidden``: no evaluation of code is permitted, 

986 - ``minimal``: evaluation of literals and access to built-in namespace; 

987 no item/attribute evaluation, no access to locals/globals, 

988 no evaluation of any operations or comparisons. 

989 - ``limited``: access to all namespaces, evaluation of hard-coded methods 

990 (for example: :any:`dict.keys`, :any:`object.__getattr__`, 

991 :any:`object.__getitem__`) on allow-listed objects (for example: 

992 :any:`dict`, :any:`list`, :any:`tuple`, ``pandas.Series``), 

993 - ``unsafe``: evaluation of all methods and function calls but not of 

994 syntax with side-effects like `del x`, 

995 - ``dangerous``: completely arbitrary evaluation; does not support auto-import. 

996 

997 To override specific elements of the policy, you can use ``policy_overrides`` trait. 

998 """, 

999 ).tag(config=True) 

1000 

1001 use_jedi = Bool(default_value=JEDI_INSTALLED, 

1002 help="Experimental: Use Jedi to generate autocompletions. " 

1003 "Default to True if jedi is installed.").tag(config=True) 

1004 

1005 jedi_compute_type_timeout = Int(default_value=400, 

1006 help="""Experimental: restrict time (in milliseconds) during which Jedi can compute types. 

1007 Set to 0 to stop computing types. Non-zero value lower than 100ms may hurt 

1008 performance by preventing jedi to build its cache. 

1009 """).tag(config=True) 

1010 

1011 debug = Bool(default_value=False, 

1012 help='Enable debug for the Completer. Mostly print extra ' 

1013 'information for experimental jedi integration.')\ 

1014 .tag(config=True) 

1015 

1016 backslash_combining_completions = Bool(True, 

1017 help="Enable unicode completions, e.g. \\alpha<tab> . " 

1018 "Includes completion of latex commands, unicode names, and expanding " 

1019 "unicode characters back to latex commands.").tag(config=True) 

1020 

1021 auto_close_dict_keys = Bool( 

1022 False, 

1023 help=""" 

1024 Enable auto-closing dictionary keys. 

1025 

1026 When enabled string keys will be suffixed with a final quote 

1027 (matching the opening quote), tuple keys will also receive a 

1028 separating comma if needed, and keys which are final will 

1029 receive a closing bracket (``]``). 

1030 """, 

1031 ).tag(config=True) 

1032 

1033 policy_overrides = DictTrait( 

1034 default_value={}, 

1035 key_trait=Unicode(), 

1036 help="""Overrides for policy evaluation. 

1037 

1038 For example, to enable auto-import on completion specify: 

1039 

1040 .. code-block:: 

1041 

1042 ipython --Completer.policy_overrides='{"allow_auto_import": True}' --Completer.use_jedi=False 

1043 

1044 """, 

1045 ).tag(config=True) 

1046 

1047 auto_import_method = DottedObjectName( 

1048 default_value="importlib.import_module", 

1049 allow_none=True, 

1050 help="""\ 

1051 Provisional: 

1052 This is a provisional API in IPython 9.3, it may change without warnings. 

1053 

1054 A fully qualified path to an auto-import method for use by completer. 

1055 The function should take a single string and return `ModuleType` and 

1056 can raise `ImportError` exception if module is not found. 

1057 

1058 The default auto-import implementation does not populate the user namespace with the imported module. 

1059 """, 

1060 ).tag(config=True) 

1061 

1062 def __init__(self, namespace=None, global_namespace=None, **kwargs): 

1063 """Create a new completer for the command line. 

1064 

1065 Completer(namespace=ns, global_namespace=ns2) -> completer instance. 

1066 

1067 If unspecified, the default namespace where completions are performed 

1068 is __main__ (technically, __main__.__dict__). Namespaces should be 

1069 given as dictionaries. 

1070 

1071 An optional second namespace can be given. This allows the completer 

1072 to handle cases where both the local and global scopes need to be 

1073 distinguished. 

1074 """ 

1075 

1076 # Don't bind to namespace quite yet, but flag whether the user wants a 

1077 # specific namespace or to use __main__.__dict__. This will allow us 

1078 # to bind to __main__.__dict__ at completion time, not now. 

1079 if namespace is None: 

1080 self.use_main_ns = True 

1081 else: 

1082 self.use_main_ns = False 

1083 self.namespace = namespace 

1084 

1085 # The global namespace, if given, can be bound directly 

1086 if global_namespace is None: 

1087 self.global_namespace = {} 

1088 else: 

1089 self.global_namespace = global_namespace 

1090 

1091 self.custom_matchers = [] 

1092 

1093 super(Completer, self).__init__(**kwargs) 

1094 

1095 def complete(self, text, state): 

1096 """Return the next possible completion for 'text'. 

1097 

1098 This is called successively with state == 0, 1, 2, ... until it 

1099 returns None. The completion should begin with 'text'. 

1100 

1101 """ 

1102 if self.use_main_ns: 

1103 self.namespace = __main__.__dict__ 

1104 

1105 if state == 0: 

1106 if "." in text: 

1107 self.matches = self.attr_matches(text) 

1108 else: 

1109 self.matches = self.global_matches(text) 

1110 try: 

1111 return self.matches[state] 

1112 except IndexError: 

1113 return None 

1114 

1115 def global_matches(self, text): 

1116 """Compute matches when text is a simple name. 

1117 

1118 Return a list of all keywords, built-in functions and names currently 

1119 defined in self.namespace or self.global_namespace that match. 

1120 

1121 """ 

1122 matches = [] 

1123 match_append = matches.append 

1124 n = len(text) 

1125 for lst in [ 

1126 keyword.kwlist, 

1127 builtin_mod.__dict__.keys(), 

1128 list(self.namespace.keys()), 

1129 list(self.global_namespace.keys()), 

1130 ]: 

1131 for word in lst: 

1132 if word[:n] == text and word != "__builtins__": 

1133 match_append(word) 

1134 

1135 snake_case_re = re.compile(r"[^_]+(_[^_]+)+?\Z") 

1136 for lst in [list(self.namespace.keys()), list(self.global_namespace.keys())]: 

1137 shortened = { 

1138 "_".join([sub[0] for sub in word.split("_")]): word 

1139 for word in lst 

1140 if snake_case_re.match(word) 

1141 } 

1142 for word in shortened.keys(): 

1143 if word[:n] == text and word != "__builtins__": 

1144 match_append(shortened[word]) 

1145 return matches 

1146 

1147 def attr_matches(self, text): 

1148 """Compute matches when text contains a dot. 

1149 

1150 Assuming the text is of the form NAME.NAME....[NAME], and is 

1151 evaluatable in self.namespace or self.global_namespace, it will be 

1152 evaluated and its attributes (as revealed by dir()) are used as 

1153 possible completions. (For class instances, class members are 

1154 also considered.) 

1155 

1156 WARNING: this can still invoke arbitrary C code, if an object 

1157 with a __getattr__ hook is evaluated. 

1158 

1159 """ 

1160 return self._attr_matches(text)[0] 

1161 

1162 # we simple attribute matching with normal identifiers. 

1163 _ATTR_MATCH_RE = re.compile(r"(.+)\.(\w*)$") 

1164 

1165 def _strip_code_before_operator(self, code: str) -> str: 

1166 o_parens = {"(", "[", "{"} 

1167 c_parens = {")", "]", "}"} 

1168 

1169 # Dry-run tokenize to catch errors 

1170 try: 

1171 _ = list(tokenize.generate_tokens(iter(code.splitlines()).__next__)) 

1172 except tokenize.TokenError: 

1173 # Try trimming the expression and retrying 

1174 trimmed_code = self._trim_expr(code) 

1175 try: 

1176 _ = list( 

1177 tokenize.generate_tokens(iter(trimmed_code.splitlines()).__next__) 

1178 ) 

1179 code = trimmed_code 

1180 except tokenize.TokenError: 

1181 return code 

1182 

1183 tokens = _parse_tokens(code) 

1184 encountered_operator = False 

1185 after_operator = [] 

1186 nesting_level = 0 

1187 

1188 for t in tokens: 

1189 if t.type == tokenize.OP: 

1190 if t.string in o_parens: 

1191 nesting_level += 1 

1192 elif t.string in c_parens: 

1193 nesting_level -= 1 

1194 elif t.string != "." and nesting_level == 0: 

1195 encountered_operator = True 

1196 after_operator = [] 

1197 continue 

1198 

1199 if encountered_operator: 

1200 after_operator.append(t.string) 

1201 

1202 if encountered_operator: 

1203 return "".join(after_operator) 

1204 else: 

1205 return code 

1206 

1207 def _attr_matches( 

1208 self, text: str, include_prefix: bool = True 

1209 ) -> tuple[Sequence[str], str]: 

1210 m2 = self._ATTR_MATCH_RE.match(text) 

1211 if not m2: 

1212 return [], "" 

1213 expr, attr = m2.group(1, 2) 

1214 try: 

1215 expr = self._strip_code_before_operator(expr) 

1216 except tokenize.TokenError: 

1217 pass 

1218 

1219 obj = self._evaluate_expr(expr) 

1220 if obj is not_found: 

1221 return [], "" 

1222 

1223 if self.limit_to__all__ and hasattr(obj, '__all__'): 

1224 words = get__all__entries(obj) 

1225 else: 

1226 words = dir2(obj) 

1227 

1228 try: 

1229 words = generics.complete_object(obj, words) 

1230 except TryNext: 

1231 pass 

1232 except AssertionError: 

1233 raise 

1234 except Exception: 

1235 # Silence errors from completion function 

1236 pass 

1237 # Build match list to return 

1238 n = len(attr) 

1239 

1240 # Note: ideally we would just return words here and the prefix 

1241 # reconciliator would know that we intend to append to rather than 

1242 # replace the input text; this requires refactoring to return range 

1243 # which ought to be replaced (as does jedi). 

1244 if include_prefix: 

1245 tokens = _parse_tokens(expr) 

1246 rev_tokens = reversed(tokens) 

1247 skip_over = {tokenize.ENDMARKER, tokenize.NEWLINE} 

1248 name_turn = True 

1249 

1250 parts = [] 

1251 for token in rev_tokens: 

1252 if token.type in skip_over: 

1253 continue 

1254 if token.type == tokenize.NAME and name_turn: 

1255 parts.append(token.string) 

1256 name_turn = False 

1257 elif ( 

1258 token.type == tokenize.OP and token.string == "." and not name_turn 

1259 ): 

1260 parts.append(token.string) 

1261 name_turn = True 

1262 else: 

1263 # short-circuit if not empty nor name token 

1264 break 

1265 

1266 prefix_after_space = "".join(reversed(parts)) 

1267 else: 

1268 prefix_after_space = "" 

1269 

1270 return ( 

1271 ["%s.%s" % (prefix_after_space, w) for w in words if w[:n] == attr], 

1272 "." + attr, 

1273 ) 

1274 

1275 def _trim_expr(self, code: str) -> str: 

1276 """ 

1277 Trim the code until it is a valid expression and not a tuple; 

1278 

1279 return the trimmed expression for guarded_eval. 

1280 """ 

1281 while code: 

1282 code = code[1:] 

1283 try: 

1284 res = ast.parse(code) 

1285 except SyntaxError: 

1286 continue 

1287 

1288 assert res is not None 

1289 if len(res.body) != 1: 

1290 continue 

1291 expr = res.body[0].value 

1292 if isinstance(expr, ast.Tuple) and not code[-1] == ")": 

1293 # we skip implicit tuple, like when trimming `fun(a,b`<completion> 

1294 # as `a,b` would be a tuple, and we actually expect to get only `b` 

1295 continue 

1296 return code 

1297 return "" 

1298 

1299 def _evaluate_expr(self, expr): 

1300 obj = not_found 

1301 done = False 

1302 while not done and expr: 

1303 try: 

1304 obj = guarded_eval( 

1305 expr, 

1306 EvaluationContext( 

1307 globals=self.global_namespace, 

1308 locals=self.namespace, 

1309 evaluation=self.evaluation, 

1310 auto_import=self._auto_import, 

1311 policy_overrides=self.policy_overrides, 

1312 ), 

1313 ) 

1314 done = True 

1315 except (SyntaxError, TypeError): 

1316 # TypeError can show up with something like `+ d` 

1317 # where `d` is a dictionary. 

1318 

1319 # trim the expression to remove any invalid prefix 

1320 # e.g. user starts `(d[`, so we get `expr = '(d'`, 

1321 # where parenthesis is not closed. 

1322 # TODO: make this faster by reusing parts of the computation? 

1323 expr = self._trim_expr(expr) 

1324 except Exception as e: 

1325 if self.debug: 

1326 print("Evaluation exception", e) 

1327 done = True 

1328 return obj 

1329 

1330 @property 

1331 def _auto_import(self): 

1332 if self.auto_import_method is None: 

1333 return None 

1334 if not hasattr(self, "_auto_import_func"): 

1335 self._auto_import_func = import_item(self.auto_import_method) 

1336 return self._auto_import_func 

1337 

1338 

1339def get__all__entries(obj): 

1340 """returns the strings in the __all__ attribute""" 

1341 try: 

1342 words = getattr(obj, '__all__') 

1343 except Exception: 

1344 return [] 

1345 

1346 return [w for w in words if isinstance(w, str)] 

1347 

1348 

1349class _DictKeyState(enum.Flag): 

1350 """Represent state of the key match in context of other possible matches. 

1351 

1352 - given `d1 = {'a': 1}` completion on `d1['<tab>` will yield `{'a': END_OF_ITEM}` as there is no tuple. 

1353 - given `d2 = {('a', 'b'): 1}`: `d2['a', '<tab>` will yield `{'b': END_OF_TUPLE}` as there is no tuple members to add beyond `'b'`. 

1354 - given `d3 = {('a', 'b'): 1}`: `d3['<tab>` will yield `{'a': IN_TUPLE}` as `'a'` can be added. 

1355 - given `d4 = {'a': 1, ('a', 'b'): 2}`: `d4['<tab>` will yield `{'a': END_OF_ITEM & END_OF_TUPLE}` 

1356 """ 

1357 

1358 BASELINE = 0 

1359 END_OF_ITEM = enum.auto() 

1360 END_OF_TUPLE = enum.auto() 

1361 IN_TUPLE = enum.auto() 

1362 

1363 

1364def _parse_tokens(c): 

1365 """Parse tokens even if there is an error.""" 

1366 tokens = [] 

1367 token_generator = tokenize.generate_tokens(iter(c.splitlines()).__next__) 

1368 while True: 

1369 try: 

1370 tokens.append(next(token_generator)) 

1371 except tokenize.TokenError: 

1372 return tokens 

1373 except StopIteration: 

1374 return tokens 

1375 

1376 

1377def _match_number_in_dict_key_prefix(prefix: str) -> Union[str, None]: 

1378 """Match any valid Python numeric literal in a prefix of dictionary keys. 

1379 

1380 References: 

1381 - https://docs.python.org/3/reference/lexical_analysis.html#numeric-literals 

1382 - https://docs.python.org/3/library/tokenize.html 

1383 """ 

1384 if prefix[-1].isspace(): 

1385 # if user typed a space we do not have anything to complete 

1386 # even if there was a valid number token before 

1387 return None 

1388 tokens = _parse_tokens(prefix) 

1389 rev_tokens = reversed(tokens) 

1390 skip_over = {tokenize.ENDMARKER, tokenize.NEWLINE} 

1391 number = None 

1392 for token in rev_tokens: 

1393 if token.type in skip_over: 

1394 continue 

1395 if number is None: 

1396 if token.type == tokenize.NUMBER: 

1397 number = token.string 

1398 continue 

1399 else: 

1400 # we did not match a number 

1401 return None 

1402 if token.type == tokenize.OP: 

1403 if token.string == ",": 

1404 break 

1405 if token.string in {"+", "-"}: 

1406 number = token.string + number 

1407 else: 

1408 return None 

1409 return number 

1410 

1411 

1412_INT_FORMATS = { 

1413 "0b": bin, 

1414 "0o": oct, 

1415 "0x": hex, 

1416} 

1417 

1418 

1419def match_dict_keys( 

1420 keys: list[Union[str, bytes, tuple[Union[str, bytes], ...]]], 

1421 prefix: str, 

1422 delims: str, 

1423 extra_prefix: Optional[tuple[Union[str, bytes], ...]] = None, 

1424) -> tuple[str, int, dict[str, _DictKeyState]]: 

1425 """Used by dict_key_matches, matching the prefix to a list of keys 

1426 

1427 Parameters 

1428 ---------- 

1429 keys 

1430 list of keys in dictionary currently being completed. 

1431 prefix 

1432 Part of the text already typed by the user. E.g. `mydict[b'fo` 

1433 delims 

1434 String of delimiters to consider when finding the current key. 

1435 extra_prefix : optional 

1436 Part of the text already typed in multi-key index cases. E.g. for 

1437 `mydict['foo', "bar", 'b`, this would be `('foo', 'bar')`. 

1438 

1439 Returns 

1440 ------- 

1441 A tuple of three elements: ``quote``, ``token_start``, ``matched``, with 

1442 ``quote`` being the quote that need to be used to close current string. 

1443 ``token_start`` the position where the replacement should start occurring, 

1444 ``matches`` a dictionary of replacement/completion keys on keys and values 

1445 indicating whether the state. 

1446 """ 

1447 prefix_tuple = extra_prefix if extra_prefix else () 

1448 

1449 prefix_tuple_size = sum( 

1450 [ 

1451 # for pandas, do not count slices as taking space 

1452 not isinstance(k, slice) 

1453 for k in prefix_tuple 

1454 ] 

1455 ) 

1456 text_serializable_types = (str, bytes, int, float, slice) 

1457 

1458 def filter_prefix_tuple(key): 

1459 # Reject too short keys 

1460 if len(key) <= prefix_tuple_size: 

1461 return False 

1462 # Reject keys which cannot be serialised to text 

1463 for k in key: 

1464 if not isinstance(k, text_serializable_types): 

1465 return False 

1466 # Reject keys that do not match the prefix 

1467 for k, pt in zip(key, prefix_tuple): 

1468 if k != pt and not isinstance(pt, slice): 

1469 return False 

1470 # All checks passed! 

1471 return True 

1472 

1473 filtered_key_is_final: dict[Union[str, bytes, int, float], _DictKeyState] = ( 

1474 defaultdict(lambda: _DictKeyState.BASELINE) 

1475 ) 

1476 

1477 for k in keys: 

1478 # If at least one of the matches is not final, mark as undetermined. 

1479 # This can happen with `d = {111: 'b', (111, 222): 'a'}` where 

1480 # `111` appears final on first match but is not final on the second. 

1481 

1482 if isinstance(k, tuple): 

1483 if filter_prefix_tuple(k): 

1484 key_fragment = k[prefix_tuple_size] 

1485 filtered_key_is_final[key_fragment] |= ( 

1486 _DictKeyState.END_OF_TUPLE 

1487 if len(k) == prefix_tuple_size + 1 

1488 else _DictKeyState.IN_TUPLE 

1489 ) 

1490 elif prefix_tuple_size > 0: 

1491 # we are completing a tuple but this key is not a tuple, 

1492 # so we should ignore it 

1493 pass 

1494 else: 

1495 if isinstance(k, text_serializable_types): 

1496 filtered_key_is_final[k] |= _DictKeyState.END_OF_ITEM 

1497 

1498 filtered_keys = filtered_key_is_final.keys() 

1499 

1500 if not prefix: 

1501 return "", 0, {repr(k): v for k, v in filtered_key_is_final.items()} 

1502 

1503 quote_match = re.search("(?:\"|')", prefix) 

1504 is_user_prefix_numeric = False 

1505 

1506 if quote_match: 

1507 quote = quote_match.group() 

1508 valid_prefix = prefix + quote 

1509 try: 

1510 prefix_str = literal_eval(valid_prefix) 

1511 except Exception: 

1512 return "", 0, {} 

1513 else: 

1514 # If it does not look like a string, let's assume 

1515 # we are dealing with a number or variable. 

1516 number_match = _match_number_in_dict_key_prefix(prefix) 

1517 

1518 # We do not want the key matcher to suggest variable names so we yield: 

1519 if number_match is None: 

1520 # The alternative would be to assume that user forgort the quote 

1521 # and if the substring matches, suggest adding it at the start. 

1522 return "", 0, {} 

1523 

1524 prefix_str = number_match 

1525 is_user_prefix_numeric = True 

1526 quote = "" 

1527 

1528 pattern = '[^' + ''.join('\\' + c for c in delims) + ']*$' 

1529 token_match = re.search(pattern, prefix, re.UNICODE) 

1530 assert token_match is not None # silence mypy 

1531 token_start = token_match.start() 

1532 token_prefix = token_match.group() 

1533 

1534 matched: dict[str, _DictKeyState] = {} 

1535 

1536 str_key: Union[str, bytes] 

1537 

1538 for key in filtered_keys: 

1539 if isinstance(key, (int, float)): 

1540 # User typed a number but this key is not a number. 

1541 if not is_user_prefix_numeric: 

1542 continue 

1543 str_key = str(key) 

1544 if isinstance(key, int): 

1545 int_base = prefix_str[:2].lower() 

1546 # if user typed integer using binary/oct/hex notation: 

1547 if int_base in _INT_FORMATS: 

1548 int_format = _INT_FORMATS[int_base] 

1549 str_key = int_format(key) 

1550 else: 

1551 # User typed a string but this key is a number. 

1552 if is_user_prefix_numeric: 

1553 continue 

1554 str_key = key 

1555 try: 

1556 if not str_key.startswith(prefix_str): 

1557 continue 

1558 except (AttributeError, TypeError, UnicodeError): 

1559 # Python 3+ TypeError on b'a'.startswith('a') or vice-versa 

1560 continue 

1561 

1562 # reformat remainder of key to begin with prefix 

1563 rem = str_key[len(prefix_str) :] 

1564 # force repr wrapped in ' 

1565 rem_repr = repr(rem + '"') if isinstance(rem, str) else repr(rem + b'"') 

1566 rem_repr = rem_repr[1 + rem_repr.index("'"):-2] 

1567 if quote == '"': 

1568 # The entered prefix is quoted with ", 

1569 # but the match is quoted with '. 

1570 # A contained " hence needs escaping for comparison: 

1571 rem_repr = rem_repr.replace('"', '\\"') 

1572 

1573 # then reinsert prefix from start of token 

1574 match = "%s%s" % (token_prefix, rem_repr) 

1575 

1576 matched[match] = filtered_key_is_final[key] 

1577 return quote, token_start, matched 

1578 

1579 

1580def cursor_to_position(text:str, line:int, column:int)->int: 

1581 """ 

1582 Convert the (line,column) position of the cursor in text to an offset in a 

1583 string. 

1584 

1585 Parameters 

1586 ---------- 

1587 text : str 

1588 The text in which to calculate the cursor offset 

1589 line : int 

1590 Line of the cursor; 0-indexed 

1591 column : int 

1592 Column of the cursor 0-indexed 

1593 

1594 Returns 

1595 ------- 

1596 Position of the cursor in ``text``, 0-indexed. 

1597 

1598 See Also 

1599 -------- 

1600 position_to_cursor : reciprocal of this function 

1601 

1602 """ 

1603 lines = text.split('\n') 

1604 assert line <= len(lines), '{} <= {}'.format(str(line), str(len(lines))) 

1605 

1606 return sum(len(line) + 1 for line in lines[:line]) + column 

1607 

1608 

1609def position_to_cursor(text: str, offset: int) -> tuple[int, int]: 

1610 """ 

1611 Convert the position of the cursor in text (0 indexed) to a line 

1612 number(0-indexed) and a column number (0-indexed) pair 

1613 

1614 Position should be a valid position in ``text``. 

1615 

1616 Parameters 

1617 ---------- 

1618 text : str 

1619 The text in which to calculate the cursor offset 

1620 offset : int 

1621 Position of the cursor in ``text``, 0-indexed. 

1622 

1623 Returns 

1624 ------- 

1625 (line, column) : (int, int) 

1626 Line of the cursor; 0-indexed, column of the cursor 0-indexed 

1627 

1628 See Also 

1629 -------- 

1630 cursor_to_position : reciprocal of this function 

1631 

1632 """ 

1633 

1634 assert 0 <= offset <= len(text) , "0 <= %s <= %s" % (offset , len(text)) 

1635 

1636 before = text[:offset] 

1637 blines = before.split('\n') # ! splitnes trim trailing \n 

1638 line = before.count('\n') 

1639 col = len(blines[-1]) 

1640 return line, col 

1641 

1642 

1643def _safe_isinstance(obj, module, class_name, *attrs): 

1644 """Checks if obj is an instance of module.class_name if loaded 

1645 """ 

1646 if module in sys.modules: 

1647 m = sys.modules[module] 

1648 for attr in [class_name, *attrs]: 

1649 m = getattr(m, attr) 

1650 return isinstance(obj, m) 

1651 

1652 

1653@context_matcher() 

1654def back_unicode_name_matcher(context: CompletionContext): 

1655 """Match Unicode characters back to Unicode name 

1656 

1657 Same as :any:`back_unicode_name_matches`, but adopted to new Matcher API. 

1658 """ 

1659 fragment, matches = back_unicode_name_matches(context.text_until_cursor) 

1660 return _convert_matcher_v1_result_to_v2( 

1661 matches, type="unicode", fragment=fragment, suppress_if_matches=True 

1662 ) 

1663 

1664 

1665def back_unicode_name_matches(text: str) -> tuple[str, Sequence[str]]: 

1666 """Match Unicode characters back to Unicode name 

1667 

1668 This does ``☃`` -> ``\\snowman`` 

1669 

1670 Note that snowman is not a valid python3 combining character but will be expanded. 

1671 Though it will not recombine back to the snowman character by the completion machinery. 

1672 

1673 This will not either back-complete standard sequences like \\n, \\b ... 

1674 

1675 .. deprecated:: 8.6 

1676 You can use :meth:`back_unicode_name_matcher` instead. 

1677 

1678 Returns 

1679 ======= 

1680 

1681 Return a tuple with two elements: 

1682 

1683 - The Unicode character that was matched (preceded with a backslash), or 

1684 empty string, 

1685 - a sequence (of 1), name for the match Unicode character, preceded by 

1686 backslash, or empty if no match. 

1687 """ 

1688 if len(text)<2: 

1689 return '', () 

1690 maybe_slash = text[-2] 

1691 if maybe_slash != '\\': 

1692 return '', () 

1693 

1694 char = text[-1] 

1695 # no expand on quote for completion in strings. 

1696 # nor backcomplete standard ascii keys 

1697 if char in string.ascii_letters or char in ('"',"'"): 

1698 return '', () 

1699 try : 

1700 unic = unicodedata.name(char) 

1701 return '\\'+char,('\\'+unic,) 

1702 except KeyError: 

1703 pass 

1704 return '', () 

1705 

1706 

1707@context_matcher() 

1708def back_latex_name_matcher(context: CompletionContext) -> SimpleMatcherResult: 

1709 """Match latex characters back to unicode name 

1710 

1711 This does ``\\ℵ`` -> ``\\aleph`` 

1712 """ 

1713 

1714 text = context.text_until_cursor 

1715 no_match = { 

1716 "completions": [], 

1717 "suppress": False, 

1718 } 

1719 

1720 if len(text)<2: 

1721 return no_match 

1722 maybe_slash = text[-2] 

1723 if maybe_slash != '\\': 

1724 return no_match 

1725 

1726 char = text[-1] 

1727 # no expand on quote for completion in strings. 

1728 # nor backcomplete standard ascii keys 

1729 if char in string.ascii_letters or char in ('"',"'"): 

1730 return no_match 

1731 try : 

1732 latex = reverse_latex_symbol[char] 

1733 # '\\' replace the \ as well 

1734 return { 

1735 "completions": [SimpleCompletion(text=latex, type="latex")], 

1736 "suppress": True, 

1737 "matched_fragment": "\\" + char, 

1738 } 

1739 except KeyError: 

1740 pass 

1741 

1742 return no_match 

1743 

1744def _formatparamchildren(parameter) -> str: 

1745 """ 

1746 Get parameter name and value from Jedi Private API 

1747 

1748 Jedi does not expose a simple way to get `param=value` from its API. 

1749 

1750 Parameters 

1751 ---------- 

1752 parameter 

1753 Jedi's function `Param` 

1754 

1755 Returns 

1756 ------- 

1757 A string like 'a', 'b=1', '*args', '**kwargs' 

1758 

1759 """ 

1760 description = parameter.description 

1761 if not description.startswith('param '): 

1762 raise ValueError('Jedi function parameter description have change format.' 

1763 'Expected "param ...", found %r".' % description) 

1764 return description[6:] 

1765 

1766def _make_signature(completion)-> str: 

1767 """ 

1768 Make the signature from a jedi completion 

1769 

1770 Parameters 

1771 ---------- 

1772 completion : jedi.Completion 

1773 object does not complete a function type 

1774 

1775 Returns 

1776 ------- 

1777 a string consisting of the function signature, with the parenthesis but 

1778 without the function name. example: 

1779 `(a, *args, b=1, **kwargs)` 

1780 

1781 """ 

1782 

1783 # it looks like this might work on jedi 0.17 

1784 if hasattr(completion, 'get_signatures'): 

1785 signatures = completion.get_signatures() 

1786 if not signatures: 

1787 return '(?)' 

1788 

1789 c0 = completion.get_signatures()[0] 

1790 return '('+c0.to_string().split('(', maxsplit=1)[1] 

1791 

1792 return '(%s)'% ', '.join([f for f in (_formatparamchildren(p) for signature in completion.get_signatures() 

1793 for p in signature.defined_names()) if f]) 

1794 

1795 

1796_CompleteResult = dict[str, MatcherResult] 

1797 

1798 

1799DICT_MATCHER_REGEX = re.compile( 

1800 r"""(?x) 

1801( # match dict-referring - or any get item object - expression 

1802 .+ 

1803) 

1804\[ # open bracket 

1805\s* # and optional whitespace 

1806# Capture any number of serializable objects (e.g. "a", "b", 'c') 

1807# and slices 

1808((?:(?: 

1809 (?: # closed string 

1810 [uUbB]? # string prefix (r not handled) 

1811 (?: 

1812 '(?:[^']|(?<!\\)\\')*' 

1813 | 

1814 "(?:[^"]|(?<!\\)\\")*" 

1815 ) 

1816 ) 

1817 | 

1818 # capture integers and slices 

1819 (?:[-+]?\d+)?(?::(?:[-+]?\d+)?){0,2} 

1820 | 

1821 # integer in bin/hex/oct notation 

1822 0[bBxXoO]_?(?:\w|\d)+ 

1823 ) 

1824 \s*,\s* 

1825)*) 

1826((?: 

1827 (?: # unclosed string 

1828 [uUbB]? # string prefix (r not handled) 

1829 (?: 

1830 '(?:[^']|(?<!\\)\\')* 

1831 | 

1832 "(?:[^"]|(?<!\\)\\")* 

1833 ) 

1834 ) 

1835 | 

1836 # unfinished integer 

1837 (?:[-+]?\d+) 

1838 | 

1839 # integer in bin/hex/oct notation 

1840 0[bBxXoO]_?(?:\w|\d)+ 

1841 ) 

1842)? 

1843$ 

1844""" 

1845) 

1846 

1847 

1848def _convert_matcher_v1_result_to_v2_no_no( 

1849 matches: Sequence[str], 

1850 type: str, 

1851) -> SimpleMatcherResult: 

1852 """same as _convert_matcher_v1_result_to_v2 but fragment=None, and suppress_if_matches is False by construction""" 

1853 return SimpleMatcherResult( 

1854 completions=[SimpleCompletion(text=match, type=type) for match in matches], 

1855 suppress=False, 

1856 ) 

1857 

1858 

1859def _convert_matcher_v1_result_to_v2( 

1860 matches: Sequence[str], 

1861 type: str, 

1862 fragment: Optional[str] = None, 

1863 suppress_if_matches: bool = False, 

1864) -> SimpleMatcherResult: 

1865 """Utility to help with transition""" 

1866 result = { 

1867 "completions": [SimpleCompletion(text=match, type=type) for match in matches], 

1868 "suppress": (True if matches else False) if suppress_if_matches else False, 

1869 } 

1870 if fragment is not None: 

1871 result["matched_fragment"] = fragment 

1872 return cast(SimpleMatcherResult, result) 

1873 

1874 

1875class IPCompleter(Completer): 

1876 """Extension of the completer class with IPython-specific features""" 

1877 

1878 @observe('greedy') 

1879 def _greedy_changed(self, change): 

1880 """update the splitter and readline delims when greedy is changed""" 

1881 if change["new"]: 

1882 self.evaluation = "unsafe" 

1883 self.auto_close_dict_keys = True 

1884 self.splitter.delims = GREEDY_DELIMS 

1885 else: 

1886 self.evaluation = "limited" 

1887 self.auto_close_dict_keys = False 

1888 self.splitter.delims = DELIMS 

1889 

1890 dict_keys_only = Bool( 

1891 False, 

1892 help=""" 

1893 Whether to show dict key matches only. 

1894 

1895 (disables all matchers except for `IPCompleter.dict_key_matcher`). 

1896 """, 

1897 ) 

1898 

1899 suppress_competing_matchers = UnionTrait( 

1900 [Bool(allow_none=True), DictTrait(Bool(None, allow_none=True))], 

1901 default_value=None, 

1902 help=""" 

1903 Whether to suppress completions from other *Matchers*. 

1904 

1905 When set to ``None`` (default) the matchers will attempt to auto-detect 

1906 whether suppression of other matchers is desirable. For example, at 

1907 the beginning of a line followed by `%` we expect a magic completion 

1908 to be the only applicable option, and after ``my_dict['`` we usually 

1909 expect a completion with an existing dictionary key. 

1910 

1911 If you want to disable this heuristic and see completions from all matchers, 

1912 set ``IPCompleter.suppress_competing_matchers = False``. 

1913 To disable the heuristic for specific matchers provide a dictionary mapping: 

1914 ``IPCompleter.suppress_competing_matchers = {'IPCompleter.dict_key_matcher': False}``. 

1915 

1916 Set ``IPCompleter.suppress_competing_matchers = True`` to limit 

1917 completions to the set of matchers with the highest priority; 

1918 this is equivalent to ``IPCompleter.merge_completions`` and 

1919 can be beneficial for performance, but will sometimes omit relevant 

1920 candidates from matchers further down the priority list. 

1921 """, 

1922 ).tag(config=True) 

1923 

1924 merge_completions = Bool( 

1925 True, 

1926 help="""Whether to merge completion results into a single list 

1927 

1928 If False, only the completion results from the first non-empty 

1929 completer will be returned. 

1930 

1931 As of version 8.6.0, setting the value to ``False`` is an alias for: 

1932 ``IPCompleter.suppress_competing_matchers = True.``. 

1933 """, 

1934 ).tag(config=True) 

1935 

1936 disable_matchers = ListTrait( 

1937 Unicode(), 

1938 help="""List of matchers to disable. 

1939 

1940 The list should contain matcher identifiers (see :any:`completion_matcher`). 

1941 """, 

1942 ).tag(config=True) 

1943 

1944 omit__names = Enum( 

1945 (0, 1, 2), 

1946 default_value=2, 

1947 help="""Instruct the completer to omit private method names 

1948 

1949 Specifically, when completing on ``object.<tab>``. 

1950 

1951 When 2 [default]: all names that start with '_' will be excluded. 

1952 

1953 When 1: all 'magic' names (``__foo__``) will be excluded. 

1954 

1955 When 0: nothing will be excluded. 

1956 """ 

1957 ).tag(config=True) 

1958 limit_to__all__ = Bool(False, 

1959 help=""" 

1960 DEPRECATED as of version 5.0. 

1961 

1962 Instruct the completer to use __all__ for the completion 

1963 

1964 Specifically, when completing on ``object.<tab>``. 

1965 

1966 When True: only those names in obj.__all__ will be included. 

1967 

1968 When False [default]: the __all__ attribute is ignored 

1969 """, 

1970 ).tag(config=True) 

1971 

1972 profile_completions = Bool( 

1973 default_value=False, 

1974 help="If True, emit profiling data for completion subsystem using cProfile." 

1975 ).tag(config=True) 

1976 

1977 profiler_output_dir = Unicode( 

1978 default_value=".completion_profiles", 

1979 help="Template for path at which to output profile data for completions." 

1980 ).tag(config=True) 

1981 

1982 @observe('limit_to__all__') 

1983 def _limit_to_all_changed(self, change): 

1984 warnings.warn('`IPython.core.IPCompleter.limit_to__all__` configuration ' 

1985 'value has been deprecated since IPython 5.0, will be made to have ' 

1986 'no effects and then removed in future version of IPython.', 

1987 UserWarning) 

1988 

1989 def __init__( 

1990 self, shell=None, namespace=None, global_namespace=None, config=None, **kwargs 

1991 ): 

1992 """IPCompleter() -> completer 

1993 

1994 Return a completer object. 

1995 

1996 Parameters 

1997 ---------- 

1998 shell 

1999 a pointer to the ipython shell itself. This is needed 

2000 because this completer knows about magic functions, and those can 

2001 only be accessed via the ipython instance. 

2002 namespace : dict, optional 

2003 an optional dict where completions are performed. 

2004 global_namespace : dict, optional 

2005 secondary optional dict for completions, to 

2006 handle cases (such as IPython embedded inside functions) where 

2007 both Python scopes are visible. 

2008 config : Config 

2009 traitlet's config object 

2010 **kwargs 

2011 passed to super class unmodified. 

2012 """ 

2013 

2014 self.magic_escape = ESC_MAGIC 

2015 self.splitter = CompletionSplitter() 

2016 

2017 # _greedy_changed() depends on splitter and readline being defined: 

2018 super().__init__( 

2019 namespace=namespace, 

2020 global_namespace=global_namespace, 

2021 config=config, 

2022 **kwargs, 

2023 ) 

2024 

2025 # List where completion matches will be stored 

2026 self.matches = [] 

2027 self.shell = shell 

2028 # Regexp to split filenames with spaces in them 

2029 self.space_name_re = re.compile(r'([^\\] )') 

2030 # Hold a local ref. to glob.glob for speed 

2031 self.glob = glob.glob 

2032 

2033 # Determine if we are running on 'dumb' terminals, like (X)Emacs 

2034 # buffers, to avoid completion problems. 

2035 term = os.environ.get('TERM','xterm') 

2036 self.dumb_terminal = term in ['dumb','emacs'] 

2037 

2038 # Special handling of backslashes needed in win32 platforms 

2039 if sys.platform == "win32": 

2040 self.clean_glob = self._clean_glob_win32 

2041 else: 

2042 self.clean_glob = self._clean_glob 

2043 

2044 #regexp to parse docstring for function signature 

2045 self.docstring_sig_re = re.compile(r'^[\w|\s.]+\(([^)]*)\).*') 

2046 self.docstring_kwd_re = re.compile(r'[\s|\[]*(\w+)(?:\s*=\s*.*)') 

2047 #use this if positional argument name is also needed 

2048 #= re.compile(r'[\s|\[]*(\w+)(?:\s*=?\s*.*)') 

2049 

2050 self.magic_arg_matchers = [ 

2051 self.magic_config_matcher, 

2052 self.magic_color_matcher, 

2053 ] 

2054 

2055 # This is set externally by InteractiveShell 

2056 self.custom_completers = None 

2057 

2058 # This is a list of names of unicode characters that can be completed 

2059 # into their corresponding unicode value. The list is large, so we 

2060 # lazily initialize it on first use. Consuming code should access this 

2061 # attribute through the `@unicode_names` property. 

2062 self._unicode_names = None 

2063 

2064 self._backslash_combining_matchers = [ 

2065 self.latex_name_matcher, 

2066 self.unicode_name_matcher, 

2067 back_latex_name_matcher, 

2068 back_unicode_name_matcher, 

2069 self.fwd_unicode_matcher, 

2070 ] 

2071 

2072 if not self.backslash_combining_completions: 

2073 for matcher in self._backslash_combining_matchers: 

2074 self.disable_matchers.append(_get_matcher_id(matcher)) 

2075 

2076 if not self.merge_completions: 

2077 self.suppress_competing_matchers = True 

2078 

2079 @property 

2080 def matchers(self) -> list[Matcher]: 

2081 """All active matcher routines for completion""" 

2082 if self.dict_keys_only: 

2083 return [self.dict_key_matcher] 

2084 

2085 if self.use_jedi: 

2086 return [ 

2087 *self.custom_matchers, 

2088 *self._backslash_combining_matchers, 

2089 *self.magic_arg_matchers, 

2090 self.custom_completer_matcher, 

2091 self.magic_matcher, 

2092 self._jedi_matcher, 

2093 self.dict_key_matcher, 

2094 self.file_matcher, 

2095 ] 

2096 else: 

2097 return [ 

2098 *self.custom_matchers, 

2099 *self._backslash_combining_matchers, 

2100 *self.magic_arg_matchers, 

2101 self.custom_completer_matcher, 

2102 self.dict_key_matcher, 

2103 self.magic_matcher, 

2104 self.python_matcher, 

2105 self.file_matcher, 

2106 self.python_func_kw_matcher, 

2107 ] 

2108 

2109 def all_completions(self, text: str) -> list[str]: 

2110 """ 

2111 Wrapper around the completion methods for the benefit of emacs. 

2112 """ 

2113 prefix = text.rpartition('.')[0] 

2114 with provisionalcompleter(): 

2115 return ['.'.join([prefix, c.text]) if prefix and self.use_jedi else c.text 

2116 for c in self.completions(text, len(text))] 

2117 

2118 return self.complete(text)[1] 

2119 

2120 def _clean_glob(self, text:str): 

2121 return self.glob("%s*" % text) 

2122 

2123 def _clean_glob_win32(self, text:str): 

2124 return [f.replace("\\","/") 

2125 for f in self.glob("%s*" % text)] 

2126 

2127 @context_matcher() 

2128 def file_matcher(self, context: CompletionContext) -> SimpleMatcherResult: 

2129 """Match filenames, expanding ~USER type strings. 

2130 

2131 Most of the seemingly convoluted logic in this completer is an 

2132 attempt to handle filenames with spaces in them. And yet it's not 

2133 quite perfect, because Python's readline doesn't expose all of the 

2134 GNU readline details needed for this to be done correctly. 

2135 

2136 For a filename with a space in it, the printed completions will be 

2137 only the parts after what's already been typed (instead of the 

2138 full completions, as is normally done). I don't think with the 

2139 current (as of Python 2.3) Python readline it's possible to do 

2140 better. 

2141 """ 

2142 # TODO: add a heuristic for suppressing (e.g. if it has OS-specific delimiter, 

2143 # starts with `/home/`, `C:\`, etc) 

2144 

2145 text = context.token 

2146 

2147 # chars that require escaping with backslash - i.e. chars 

2148 # that readline treats incorrectly as delimiters, but we 

2149 # don't want to treat as delimiters in filename matching 

2150 # when escaped with backslash 

2151 if text.startswith('!'): 

2152 text = text[1:] 

2153 text_prefix = u'!' 

2154 else: 

2155 text_prefix = u'' 

2156 

2157 text_until_cursor = self.text_until_cursor 

2158 # track strings with open quotes 

2159 open_quotes = has_open_quotes(text_until_cursor) 

2160 

2161 if '(' in text_until_cursor or '[' in text_until_cursor: 

2162 lsplit = text 

2163 else: 

2164 try: 

2165 # arg_split ~ shlex.split, but with unicode bugs fixed by us 

2166 lsplit = arg_split(text_until_cursor)[-1] 

2167 except ValueError: 

2168 # typically an unmatched ", or backslash without escaped char. 

2169 if open_quotes: 

2170 lsplit = text_until_cursor.split(open_quotes)[-1] 

2171 else: 

2172 return { 

2173 "completions": [], 

2174 "suppress": False, 

2175 } 

2176 except IndexError: 

2177 # tab pressed on empty line 

2178 lsplit = "" 

2179 

2180 if not open_quotes and lsplit != protect_filename(lsplit): 

2181 # if protectables are found, do matching on the whole escaped name 

2182 has_protectables = True 

2183 text0,text = text,lsplit 

2184 else: 

2185 has_protectables = False 

2186 text = os.path.expanduser(text) 

2187 

2188 if text == "": 

2189 return { 

2190 "completions": [ 

2191 SimpleCompletion( 

2192 text=text_prefix + protect_filename(f), type="path" 

2193 ) 

2194 for f in self.glob("*") 

2195 ], 

2196 "suppress": False, 

2197 } 

2198 

2199 # Compute the matches from the filesystem 

2200 if sys.platform == 'win32': 

2201 m0 = self.clean_glob(text) 

2202 else: 

2203 m0 = self.clean_glob(text.replace('\\', '')) 

2204 

2205 if has_protectables: 

2206 # If we had protectables, we need to revert our changes to the 

2207 # beginning of filename so that we don't double-write the part 

2208 # of the filename we have so far 

2209 len_lsplit = len(lsplit) 

2210 matches = [text_prefix + text0 + 

2211 protect_filename(f[len_lsplit:]) for f in m0] 

2212 else: 

2213 if open_quotes: 

2214 # if we have a string with an open quote, we don't need to 

2215 # protect the names beyond the quote (and we _shouldn't_, as 

2216 # it would cause bugs when the filesystem call is made). 

2217 matches = m0 if sys.platform == "win32" else\ 

2218 [protect_filename(f, open_quotes) for f in m0] 

2219 else: 

2220 matches = [text_prefix + 

2221 protect_filename(f) for f in m0] 

2222 

2223 # Mark directories in input list by appending '/' to their names. 

2224 return { 

2225 "completions": [ 

2226 SimpleCompletion(text=x + "/" if os.path.isdir(x) else x, type="path") 

2227 for x in matches 

2228 ], 

2229 "suppress": False, 

2230 } 

2231 

2232 @context_matcher() 

2233 def magic_matcher(self, context: CompletionContext) -> SimpleMatcherResult: 

2234 """Match magics.""" 

2235 

2236 # Get all shell magics now rather than statically, so magics loaded at 

2237 # runtime show up too. 

2238 text = context.token 

2239 lsm = self.shell.magics_manager.lsmagic() 

2240 line_magics = lsm['line'] 

2241 cell_magics = lsm['cell'] 

2242 pre = self.magic_escape 

2243 pre2 = pre+pre 

2244 

2245 explicit_magic = text.startswith(pre) 

2246 

2247 # Completion logic: 

2248 # - user gives %%: only do cell magics 

2249 # - user gives %: do both line and cell magics 

2250 # - no prefix: do both 

2251 # In other words, line magics are skipped if the user gives %% explicitly 

2252 # 

2253 # We also exclude magics that match any currently visible names: 

2254 # https://github.com/ipython/ipython/issues/4877, unless the user has 

2255 # typed a %: 

2256 # https://github.com/ipython/ipython/issues/10754 

2257 bare_text = text.lstrip(pre) 

2258 global_matches = self.global_matches(bare_text) 

2259 if not explicit_magic: 

2260 def matches(magic): 

2261 """ 

2262 Filter magics, in particular remove magics that match 

2263 a name present in global namespace. 

2264 """ 

2265 return ( magic.startswith(bare_text) and 

2266 magic not in global_matches ) 

2267 else: 

2268 def matches(magic): 

2269 return magic.startswith(bare_text) 

2270 

2271 completions = [pre2 + m for m in cell_magics if matches(m)] 

2272 if not text.startswith(pre2): 

2273 completions += [pre + m for m in line_magics if matches(m)] 

2274 

2275 is_magic_prefix = len(text) > 0 and text[0] == "%" 

2276 

2277 return { 

2278 "completions": [ 

2279 SimpleCompletion(text=comp, type="magic") for comp in completions 

2280 ], 

2281 "suppress": is_magic_prefix and len(completions) > 0, 

2282 } 

2283 

2284 @context_matcher() 

2285 def magic_config_matcher(self, context: CompletionContext) -> SimpleMatcherResult: 

2286 """Match class names and attributes for %config magic.""" 

2287 # NOTE: uses `line_buffer` equivalent for compatibility 

2288 matches = self.magic_config_matches(context.line_with_cursor) 

2289 return _convert_matcher_v1_result_to_v2_no_no(matches, type="param") 

2290 

2291 def magic_config_matches(self, text: str) -> list[str]: 

2292 """Match class names and attributes for %config magic. 

2293 

2294 .. deprecated:: 8.6 

2295 You can use :meth:`magic_config_matcher` instead. 

2296 """ 

2297 texts = text.strip().split() 

2298 

2299 if len(texts) > 0 and (texts[0] == 'config' or texts[0] == '%config'): 

2300 # get all configuration classes 

2301 classes = sorted(set([ c for c in self.shell.configurables 

2302 if c.__class__.class_traits(config=True) 

2303 ]), key=lambda x: x.__class__.__name__) 

2304 classnames = [ c.__class__.__name__ for c in classes ] 

2305 

2306 # return all classnames if config or %config is given 

2307 if len(texts) == 1: 

2308 return classnames 

2309 

2310 # match classname 

2311 classname_texts = texts[1].split('.') 

2312 classname = classname_texts[0] 

2313 classname_matches = [ c for c in classnames 

2314 if c.startswith(classname) ] 

2315 

2316 # return matched classes or the matched class with attributes 

2317 if texts[1].find('.') < 0: 

2318 return classname_matches 

2319 elif len(classname_matches) == 1 and \ 

2320 classname_matches[0] == classname: 

2321 cls = classes[classnames.index(classname)].__class__ 

2322 help = cls.class_get_help() 

2323 # strip leading '--' from cl-args: 

2324 help = re.sub(re.compile(r'^--', re.MULTILINE), '', help) 

2325 return [ attr.split('=')[0] 

2326 for attr in help.strip().splitlines() 

2327 if attr.startswith(texts[1]) ] 

2328 return [] 

2329 

2330 @context_matcher() 

2331 def magic_color_matcher(self, context: CompletionContext) -> SimpleMatcherResult: 

2332 """Match color schemes for %colors magic.""" 

2333 text = context.line_with_cursor 

2334 texts = text.split() 

2335 if text.endswith(' '): 

2336 # .split() strips off the trailing whitespace. Add '' back 

2337 # so that: '%colors ' -> ['%colors', ''] 

2338 texts.append('') 

2339 

2340 if len(texts) == 2 and (texts[0] == 'colors' or texts[0] == '%colors'): 

2341 prefix = texts[1] 

2342 return SimpleMatcherResult( 

2343 completions=[ 

2344 SimpleCompletion(color, type="param") 

2345 for color in theme_table.keys() 

2346 if color.startswith(prefix) 

2347 ], 

2348 suppress=False, 

2349 ) 

2350 return SimpleMatcherResult( 

2351 completions=[], 

2352 suppress=False, 

2353 ) 

2354 

2355 @context_matcher(identifier="IPCompleter.jedi_matcher") 

2356 def _jedi_matcher(self, context: CompletionContext) -> _JediMatcherResult: 

2357 matches = self._jedi_matches( 

2358 cursor_column=context.cursor_position, 

2359 cursor_line=context.cursor_line, 

2360 text=context.full_text, 

2361 ) 

2362 return { 

2363 "completions": matches, 

2364 # static analysis should not suppress other matchers 

2365 "suppress": False, 

2366 } 

2367 

2368 def _jedi_matches( 

2369 self, cursor_column: int, cursor_line: int, text: str 

2370 ) -> Iterator[_JediCompletionLike]: 

2371 """ 

2372 Return a list of :any:`jedi.api.Completion`\\s object from a ``text`` and 

2373 cursor position. 

2374 

2375 Parameters 

2376 ---------- 

2377 cursor_column : int 

2378 column position of the cursor in ``text``, 0-indexed. 

2379 cursor_line : int 

2380 line position of the cursor in ``text``, 0-indexed 

2381 text : str 

2382 text to complete 

2383 

2384 Notes 

2385 ----- 

2386 If ``IPCompleter.debug`` is ``True`` may return a :any:`_FakeJediCompletion` 

2387 object containing a string with the Jedi debug information attached. 

2388 

2389 .. deprecated:: 8.6 

2390 You can use :meth:`_jedi_matcher` instead. 

2391 """ 

2392 namespaces = [self.namespace] 

2393 if self.global_namespace is not None: 

2394 namespaces.append(self.global_namespace) 

2395 

2396 completion_filter = lambda x:x 

2397 offset = cursor_to_position(text, cursor_line, cursor_column) 

2398 # filter output if we are completing for object members 

2399 if offset: 

2400 pre = text[offset-1] 

2401 if pre == '.': 

2402 if self.omit__names == 2: 

2403 completion_filter = lambda c:not c.name.startswith('_') 

2404 elif self.omit__names == 1: 

2405 completion_filter = lambda c:not (c.name.startswith('__') and c.name.endswith('__')) 

2406 elif self.omit__names == 0: 

2407 completion_filter = lambda x:x 

2408 else: 

2409 raise ValueError("Don't understand self.omit__names == {}".format(self.omit__names)) 

2410 

2411 interpreter = jedi.Interpreter(text[:offset], namespaces) 

2412 try_jedi = True 

2413 

2414 try: 

2415 # find the first token in the current tree -- if it is a ' or " then we are in a string 

2416 completing_string = False 

2417 try: 

2418 first_child = next(c for c in interpreter._get_module().tree_node.children if hasattr(c, 'value')) 

2419 except StopIteration: 

2420 pass 

2421 else: 

2422 # note the value may be ', ", or it may also be ''' or """, or 

2423 # in some cases, """what/you/typed..., but all of these are 

2424 # strings. 

2425 completing_string = len(first_child.value) > 0 and first_child.value[0] in {"'", '"'} 

2426 

2427 # if we are in a string jedi is likely not the right candidate for 

2428 # now. Skip it. 

2429 try_jedi = not completing_string 

2430 except Exception as e: 

2431 # many of things can go wrong, we are using private API just don't crash. 

2432 if self.debug: 

2433 print("Error detecting if completing a non-finished string :", e, '|') 

2434 

2435 if not try_jedi: 

2436 return iter([]) 

2437 try: 

2438 return filter(completion_filter, interpreter.complete(column=cursor_column, line=cursor_line + 1)) 

2439 except Exception as e: 

2440 if self.debug: 

2441 return iter( 

2442 [ 

2443 _FakeJediCompletion( 

2444 'Oops Jedi has crashed, please report a bug with the following:\n"""\n%s\ns"""' 

2445 % (e) 

2446 ) 

2447 ] 

2448 ) 

2449 else: 

2450 return iter([]) 

2451 

2452 class _CompletionContextType(enum.Enum): 

2453 ATTRIBUTE = "attribute" # For attribute completion 

2454 GLOBAL = "global" # For global completion 

2455 

2456 def _determine_completion_context(self, line): 

2457 """ 

2458 Determine whether the cursor is in an attribute or global completion context. 

2459 """ 

2460 # Cursor in string/comment → GLOBAL. 

2461 is_string, is_in_expression = self._is_in_string_or_comment(line) 

2462 if is_string and not is_in_expression: 

2463 return self._CompletionContextType.GLOBAL 

2464 

2465 # If we're in a template string expression, handle specially 

2466 if is_string and is_in_expression: 

2467 # Extract the expression part - look for the last { that isn't closed 

2468 expr_start = line.rfind("{") 

2469 if expr_start >= 0: 

2470 # We're looking at the expression inside a template string 

2471 expr = line[expr_start + 1 :] 

2472 # Recursively determine the context of the expression 

2473 return self._determine_completion_context(expr) 

2474 

2475 # Handle plain number literals - should be global context 

2476 # Ex: 3. -42.14 but not 3.1. 

2477 if re.search(r"(?<!\w)(?<!\d\.)([-+]?\d+\.(\d+)?)(?!\w)$", line): 

2478 return self._CompletionContextType.GLOBAL 

2479 

2480 # Handle all other attribute matches np.ran, d[0].k, (a,b).count 

2481 chain_match = re.search(r".*(.+\.(?:[a-zA-Z]\w*)?)$", line) 

2482 if chain_match: 

2483 return self._CompletionContextType.ATTRIBUTE 

2484 

2485 return self._CompletionContextType.GLOBAL 

2486 

2487 def _is_in_string_or_comment(self, text): 

2488 """ 

2489 Determine if the cursor is inside a string or comment. 

2490 Returns (is_string, is_in_expression) tuple: 

2491 - is_string: True if in any kind of string 

2492 - is_in_expression: True if inside an f-string/t-string expression 

2493 """ 

2494 in_single_quote = False 

2495 in_double_quote = False 

2496 in_triple_single = False 

2497 in_triple_double = False 

2498 in_template_string = False # Covers both f-strings and t-strings 

2499 in_expression = False # For expressions in f/t-strings 

2500 expression_depth = 0 # Track nested braces in expressions 

2501 i = 0 

2502 

2503 while i < len(text): 

2504 # Check for f-string or t-string start 

2505 if ( 

2506 i + 1 < len(text) 

2507 and text[i] in ("f", "t") 

2508 and (text[i + 1] == '"' or text[i + 1] == "'") 

2509 and not ( 

2510 in_single_quote 

2511 or in_double_quote 

2512 or in_triple_single 

2513 or in_triple_double 

2514 ) 

2515 ): 

2516 in_template_string = True 

2517 i += 1 # Skip the 'f' or 't' 

2518 

2519 # Handle triple quotes 

2520 if i + 2 < len(text): 

2521 if ( 

2522 text[i : i + 3] == '"""' 

2523 and not in_single_quote 

2524 and not in_triple_single 

2525 ): 

2526 in_triple_double = not in_triple_double 

2527 if not in_triple_double: 

2528 in_template_string = False 

2529 i += 3 

2530 continue 

2531 if ( 

2532 text[i : i + 3] == "'''" 

2533 and not in_double_quote 

2534 and not in_triple_double 

2535 ): 

2536 in_triple_single = not in_triple_single 

2537 if not in_triple_single: 

2538 in_template_string = False 

2539 i += 3 

2540 continue 

2541 

2542 # Handle escapes 

2543 if text[i] == "\\" and i + 1 < len(text): 

2544 i += 2 

2545 continue 

2546 

2547 # Handle nested braces within f-strings 

2548 if in_template_string: 

2549 # Special handling for consecutive opening braces 

2550 if i + 1 < len(text) and text[i : i + 2] == "{{": 

2551 i += 2 

2552 continue 

2553 

2554 # Detect start of an expression 

2555 if text[i] == "{": 

2556 # Only increment depth and mark as expression if not already in an expression 

2557 # or if we're at a top-level nested brace 

2558 if not in_expression or (in_expression and expression_depth == 0): 

2559 in_expression = True 

2560 expression_depth += 1 

2561 i += 1 

2562 continue 

2563 

2564 # Detect end of an expression 

2565 if text[i] == "}": 

2566 expression_depth -= 1 

2567 if expression_depth <= 0: 

2568 in_expression = False 

2569 expression_depth = 0 

2570 i += 1 

2571 continue 

2572 

2573 in_triple_quote = in_triple_single or in_triple_double 

2574 

2575 # Handle quotes - also reset template string when closing quotes are encountered 

2576 if text[i] == '"' and not in_single_quote and not in_triple_quote: 

2577 in_double_quote = not in_double_quote 

2578 if not in_double_quote and not in_triple_quote: 

2579 in_template_string = False 

2580 elif text[i] == "'" and not in_double_quote and not in_triple_quote: 

2581 in_single_quote = not in_single_quote 

2582 if not in_single_quote and not in_triple_quote: 

2583 in_template_string = False 

2584 

2585 # Check for comment 

2586 if text[i] == "#" and not ( 

2587 in_single_quote or in_double_quote or in_triple_quote 

2588 ): 

2589 return True, False 

2590 

2591 i += 1 

2592 

2593 is_string = ( 

2594 in_single_quote or in_double_quote or in_triple_single or in_triple_double 

2595 ) 

2596 

2597 # Return tuple (is_string, is_in_expression) 

2598 return ( 

2599 is_string or (in_template_string and not in_expression), 

2600 in_expression and expression_depth > 0, 

2601 ) 

2602 

2603 @context_matcher() 

2604 def python_matcher(self, context: CompletionContext) -> SimpleMatcherResult: 

2605 """Match attributes or global python names""" 

2606 text = context.text_until_cursor 

2607 completion_type = self._determine_completion_context(text) 

2608 if completion_type == self._CompletionContextType.ATTRIBUTE: 

2609 try: 

2610 matches, fragment = self._attr_matches(text, include_prefix=False) 

2611 if text.endswith(".") and self.omit__names: 

2612 if self.omit__names == 1: 

2613 # true if txt is _not_ a __ name, false otherwise: 

2614 no__name = lambda txt: re.match(r".*\.__.*?__", txt) is None 

2615 else: 

2616 # true if txt is _not_ a _ name, false otherwise: 

2617 no__name = ( 

2618 lambda txt: re.match(r"\._.*?", txt[txt.rindex(".") :]) 

2619 is None 

2620 ) 

2621 matches = filter(no__name, matches) 

2622 return _convert_matcher_v1_result_to_v2( 

2623 matches, type="attribute", fragment=fragment 

2624 ) 

2625 except NameError: 

2626 # catches <undefined attributes>.<tab> 

2627 return SimpleMatcherResult(completions=[], suppress=False) 

2628 else: 

2629 matches = self.global_matches(context.token) 

2630 # TODO: maybe distinguish between functions, modules and just "variables" 

2631 return SimpleMatcherResult( 

2632 completions=[ 

2633 SimpleCompletion(text=match, type="variable") for match in matches 

2634 ], 

2635 suppress=False, 

2636 ) 

2637 

2638 @completion_matcher(api_version=1) 

2639 def python_matches(self, text: str) -> Iterable[str]: 

2640 """Match attributes or global python names. 

2641 

2642 .. deprecated:: 8.27 

2643 You can use :meth:`python_matcher` instead.""" 

2644 if "." in text: 

2645 try: 

2646 matches = self.attr_matches(text) 

2647 if text.endswith('.') and self.omit__names: 

2648 if self.omit__names == 1: 

2649 # true if txt is _not_ a __ name, false otherwise: 

2650 no__name = (lambda txt: 

2651 re.match(r'.*\.__.*?__',txt) is None) 

2652 else: 

2653 # true if txt is _not_ a _ name, false otherwise: 

2654 no__name = (lambda txt: 

2655 re.match(r'\._.*?',txt[txt.rindex('.'):]) is None) 

2656 matches = filter(no__name, matches) 

2657 except NameError: 

2658 # catches <undefined attributes>.<tab> 

2659 matches = [] 

2660 else: 

2661 matches = self.global_matches(text) 

2662 return matches 

2663 

2664 def _default_arguments_from_docstring(self, doc): 

2665 """Parse the first line of docstring for call signature. 

2666 

2667 Docstring should be of the form 'min(iterable[, key=func])\n'. 

2668 It can also parse cython docstring of the form 

2669 'Minuit.migrad(self, int ncall=10000, resume=True, int nsplit=1)'. 

2670 """ 

2671 if doc is None: 

2672 return [] 

2673 

2674 #care only the firstline 

2675 line = doc.lstrip().splitlines()[0] 

2676 

2677 #p = re.compile(r'^[\w|\s.]+\(([^)]*)\).*') 

2678 #'min(iterable[, key=func])\n' -> 'iterable[, key=func]' 

2679 sig = self.docstring_sig_re.search(line) 

2680 if sig is None: 

2681 return [] 

2682 # iterable[, key=func]' -> ['iterable[' ,' key=func]'] 

2683 sig = sig.groups()[0].split(',') 

2684 ret = [] 

2685 for s in sig: 

2686 #re.compile(r'[\s|\[]*(\w+)(?:\s*=\s*.*)') 

2687 ret += self.docstring_kwd_re.findall(s) 

2688 return ret 

2689 

2690 def _default_arguments(self, obj): 

2691 """Return the list of default arguments of obj if it is callable, 

2692 or empty list otherwise.""" 

2693 call_obj = obj 

2694 ret = [] 

2695 if inspect.isbuiltin(obj): 

2696 pass 

2697 elif not (inspect.isfunction(obj) or inspect.ismethod(obj)): 

2698 if inspect.isclass(obj): 

2699 #for cython embedsignature=True the constructor docstring 

2700 #belongs to the object itself not __init__ 

2701 ret += self._default_arguments_from_docstring( 

2702 getattr(obj, '__doc__', '')) 

2703 # for classes, check for __init__,__new__ 

2704 call_obj = (getattr(obj, '__init__', None) or 

2705 getattr(obj, '__new__', None)) 

2706 # for all others, check if they are __call__able 

2707 elif hasattr(obj, '__call__'): 

2708 call_obj = obj.__call__ 

2709 ret += self._default_arguments_from_docstring( 

2710 getattr(call_obj, '__doc__', '')) 

2711 

2712 _keeps = (inspect.Parameter.KEYWORD_ONLY, 

2713 inspect.Parameter.POSITIONAL_OR_KEYWORD) 

2714 

2715 try: 

2716 sig = inspect.signature(obj) 

2717 ret.extend(k for k, v in sig.parameters.items() if 

2718 v.kind in _keeps) 

2719 except ValueError: 

2720 pass 

2721 

2722 return list(set(ret)) 

2723 

2724 @context_matcher() 

2725 def python_func_kw_matcher(self, context: CompletionContext) -> SimpleMatcherResult: 

2726 """Match named parameters (kwargs) of the last open function.""" 

2727 matches = self.python_func_kw_matches(context.token) 

2728 return _convert_matcher_v1_result_to_v2_no_no(matches, type="param") 

2729 

2730 def python_func_kw_matches(self, text): 

2731 """Match named parameters (kwargs) of the last open function. 

2732 

2733 .. deprecated:: 8.6 

2734 You can use :meth:`python_func_kw_matcher` instead. 

2735 """ 

2736 

2737 if "." in text: # a parameter cannot be dotted 

2738 return [] 

2739 try: regexp = self.__funcParamsRegex 

2740 except AttributeError: 

2741 regexp = self.__funcParamsRegex = re.compile(r''' 

2742 '.*?(?<!\\)' | # single quoted strings or 

2743 ".*?(?<!\\)" | # double quoted strings or 

2744 \w+ | # identifier 

2745 \S # other characters 

2746 ''', re.VERBOSE | re.DOTALL) 

2747 # 1. find the nearest identifier that comes before an unclosed 

2748 # parenthesis before the cursor 

2749 # e.g. for "foo (1+bar(x), pa<cursor>,a=1)", the candidate is "foo" 

2750 tokens = regexp.findall(self.text_until_cursor) 

2751 iterTokens = reversed(tokens) 

2752 openPar = 0 

2753 

2754 for token in iterTokens: 

2755 if token == ')': 

2756 openPar -= 1 

2757 elif token == '(': 

2758 openPar += 1 

2759 if openPar > 0: 

2760 # found the last unclosed parenthesis 

2761 break 

2762 else: 

2763 return [] 

2764 # 2. Concatenate dotted names ("foo.bar" for "foo.bar(x, pa" ) 

2765 ids = [] 

2766 isId = re.compile(r'\w+$').match 

2767 

2768 while True: 

2769 try: 

2770 ids.append(next(iterTokens)) 

2771 if not isId(ids[-1]): 

2772 ids.pop() 

2773 break 

2774 if not next(iterTokens) == '.': 

2775 break 

2776 except StopIteration: 

2777 break 

2778 

2779 # Find all named arguments already assigned to, as to avoid suggesting 

2780 # them again 

2781 usedNamedArgs = set() 

2782 par_level = -1 

2783 for token, next_token in zip(tokens, tokens[1:]): 

2784 if token == '(': 

2785 par_level += 1 

2786 elif token == ')': 

2787 par_level -= 1 

2788 

2789 if par_level != 0: 

2790 continue 

2791 

2792 if next_token != '=': 

2793 continue 

2794 

2795 usedNamedArgs.add(token) 

2796 

2797 argMatches = [] 

2798 try: 

2799 callableObj = '.'.join(ids[::-1]) 

2800 namedArgs = self._default_arguments(eval(callableObj, 

2801 self.namespace)) 

2802 

2803 # Remove used named arguments from the list, no need to show twice 

2804 for namedArg in set(namedArgs) - usedNamedArgs: 

2805 if namedArg.startswith(text): 

2806 argMatches.append("%s=" %namedArg) 

2807 except: 

2808 pass 

2809 

2810 return argMatches 

2811 

2812 @staticmethod 

2813 def _get_keys(obj: Any) -> list[Any]: 

2814 # Objects can define their own completions by defining an 

2815 # _ipy_key_completions_() method. 

2816 method = get_real_method(obj, '_ipython_key_completions_') 

2817 if method is not None: 

2818 return method() 

2819 

2820 # Special case some common in-memory dict-like types 

2821 if isinstance(obj, dict) or _safe_isinstance(obj, "pandas", "DataFrame"): 

2822 try: 

2823 return list(obj.keys()) 

2824 except Exception: 

2825 return [] 

2826 elif _safe_isinstance(obj, "pandas", "core", "indexing", "_LocIndexer"): 

2827 try: 

2828 return list(obj.obj.keys()) 

2829 except Exception: 

2830 return [] 

2831 elif _safe_isinstance(obj, 'numpy', 'ndarray') or\ 

2832 _safe_isinstance(obj, 'numpy', 'void'): 

2833 return obj.dtype.names or [] 

2834 return [] 

2835 

2836 @context_matcher() 

2837 def dict_key_matcher(self, context: CompletionContext) -> SimpleMatcherResult: 

2838 """Match string keys in a dictionary, after e.g. ``foo[``.""" 

2839 matches = self.dict_key_matches(context.token) 

2840 return _convert_matcher_v1_result_to_v2( 

2841 matches, type="dict key", suppress_if_matches=True 

2842 ) 

2843 

2844 def dict_key_matches(self, text: str) -> list[str]: 

2845 """Match string keys in a dictionary, after e.g. ``foo[``. 

2846 

2847 .. deprecated:: 8.6 

2848 You can use :meth:`dict_key_matcher` instead. 

2849 """ 

2850 

2851 # Short-circuit on closed dictionary (regular expression would 

2852 # not match anyway, but would take quite a while). 

2853 if self.text_until_cursor.strip().endswith("]"): 

2854 return [] 

2855 

2856 match = DICT_MATCHER_REGEX.search(self.text_until_cursor) 

2857 

2858 if match is None: 

2859 return [] 

2860 

2861 expr, prior_tuple_keys, key_prefix = match.groups() 

2862 

2863 obj = self._evaluate_expr(expr) 

2864 

2865 if obj is not_found: 

2866 return [] 

2867 

2868 keys = self._get_keys(obj) 

2869 if not keys: 

2870 return keys 

2871 

2872 tuple_prefix = guarded_eval( 

2873 prior_tuple_keys, 

2874 EvaluationContext( 

2875 globals=self.global_namespace, 

2876 locals=self.namespace, 

2877 evaluation=self.evaluation, # type: ignore 

2878 in_subscript=True, 

2879 auto_import=self._auto_import, 

2880 policy_overrides=self.policy_overrides, 

2881 ), 

2882 ) 

2883 

2884 closing_quote, token_offset, matches = match_dict_keys( 

2885 keys, key_prefix, self.splitter.delims, extra_prefix=tuple_prefix 

2886 ) 

2887 if not matches: 

2888 return [] 

2889 

2890 # get the cursor position of 

2891 # - the text being completed 

2892 # - the start of the key text 

2893 # - the start of the completion 

2894 text_start = len(self.text_until_cursor) - len(text) 

2895 if key_prefix: 

2896 key_start = match.start(3) 

2897 completion_start = key_start + token_offset 

2898 else: 

2899 key_start = completion_start = match.end() 

2900 

2901 # grab the leading prefix, to make sure all completions start with `text` 

2902 if text_start > key_start: 

2903 leading = '' 

2904 else: 

2905 leading = text[text_start:completion_start] 

2906 

2907 # append closing quote and bracket as appropriate 

2908 # this is *not* appropriate if the opening quote or bracket is outside 

2909 # the text given to this method, e.g. `d["""a\nt 

2910 can_close_quote = False 

2911 can_close_bracket = False 

2912 

2913 continuation = self.line_buffer[len(self.text_until_cursor) :].strip() 

2914 

2915 if continuation.startswith(closing_quote): 

2916 # do not close if already closed, e.g. `d['a<tab>'` 

2917 continuation = continuation[len(closing_quote) :] 

2918 else: 

2919 can_close_quote = True 

2920 

2921 continuation = continuation.strip() 

2922 

2923 # e.g. `pandas.DataFrame` has different tuple indexer behaviour, 

2924 # handling it is out of scope, so let's avoid appending suffixes. 

2925 has_known_tuple_handling = isinstance(obj, dict) 

2926 

2927 can_close_bracket = ( 

2928 not continuation.startswith("]") and self.auto_close_dict_keys 

2929 ) 

2930 can_close_tuple_item = ( 

2931 not continuation.startswith(",") 

2932 and has_known_tuple_handling 

2933 and self.auto_close_dict_keys 

2934 ) 

2935 can_close_quote = can_close_quote and self.auto_close_dict_keys 

2936 

2937 # fast path if closing quote should be appended but not suffix is allowed 

2938 if not can_close_quote and not can_close_bracket and closing_quote: 

2939 return [leading + k for k in matches] 

2940 

2941 results = [] 

2942 

2943 end_of_tuple_or_item = _DictKeyState.END_OF_TUPLE | _DictKeyState.END_OF_ITEM 

2944 

2945 for k, state_flag in matches.items(): 

2946 result = leading + k 

2947 if can_close_quote and closing_quote: 

2948 result += closing_quote 

2949 

2950 if state_flag == end_of_tuple_or_item: 

2951 # We do not know which suffix to add, 

2952 # e.g. both tuple item and string 

2953 # match this item. 

2954 pass 

2955 

2956 if state_flag in end_of_tuple_or_item and can_close_bracket: 

2957 result += "]" 

2958 if state_flag == _DictKeyState.IN_TUPLE and can_close_tuple_item: 

2959 result += ", " 

2960 results.append(result) 

2961 return results 

2962 

2963 @context_matcher() 

2964 def unicode_name_matcher(self, context: CompletionContext) -> SimpleMatcherResult: 

2965 """Match Latex-like syntax for unicode characters base 

2966 on the name of the character. 

2967 

2968 This does ``\\GREEK SMALL LETTER ETA`` -> ``η`` 

2969 

2970 Works only on valid python 3 identifier, or on combining characters that 

2971 will combine to form a valid identifier. 

2972 """ 

2973 

2974 text = context.text_until_cursor 

2975 

2976 slashpos = text.rfind('\\') 

2977 if slashpos > -1: 

2978 s = text[slashpos+1:] 

2979 try : 

2980 unic = unicodedata.lookup(s) 

2981 # allow combining chars 

2982 if ('a'+unic).isidentifier(): 

2983 return { 

2984 "completions": [SimpleCompletion(text=unic, type="unicode")], 

2985 "suppress": True, 

2986 "matched_fragment": "\\" + s, 

2987 } 

2988 except KeyError: 

2989 pass 

2990 return { 

2991 "completions": [], 

2992 "suppress": False, 

2993 } 

2994 

2995 @context_matcher() 

2996 def latex_name_matcher(self, context: CompletionContext): 

2997 """Match Latex syntax for unicode characters. 

2998 

2999 This does both ``\\alp`` -> ``\\alpha`` and ``\\alpha`` -> ``α`` 

3000 """ 

3001 fragment, matches = self.latex_matches(context.text_until_cursor) 

3002 return _convert_matcher_v1_result_to_v2( 

3003 matches, type="latex", fragment=fragment, suppress_if_matches=True 

3004 ) 

3005 

3006 def latex_matches(self, text: str) -> tuple[str, Sequence[str]]: 

3007 """Match Latex syntax for unicode characters. 

3008 

3009 This does both ``\\alp`` -> ``\\alpha`` and ``\\alpha`` -> ``α`` 

3010 

3011 .. deprecated:: 8.6 

3012 You can use :meth:`latex_name_matcher` instead. 

3013 """ 

3014 slashpos = text.rfind('\\') 

3015 if slashpos > -1: 

3016 s = text[slashpos:] 

3017 if s in latex_symbols: 

3018 # Try to complete a full latex symbol to unicode 

3019 # \\alpha -> α 

3020 return s, [latex_symbols[s]] 

3021 else: 

3022 # If a user has partially typed a latex symbol, give them 

3023 # a full list of options \al -> [\aleph, \alpha] 

3024 matches = [k for k in latex_symbols if k.startswith(s)] 

3025 if matches: 

3026 return s, matches 

3027 return '', () 

3028 

3029 @context_matcher() 

3030 def custom_completer_matcher(self, context): 

3031 """Dispatch custom completer. 

3032 

3033 If a match is found, suppresses all other matchers except for Jedi. 

3034 """ 

3035 matches = self.dispatch_custom_completer(context.token) or [] 

3036 result = _convert_matcher_v1_result_to_v2( 

3037 matches, type=_UNKNOWN_TYPE, suppress_if_matches=True 

3038 ) 

3039 result["ordered"] = True 

3040 result["do_not_suppress"] = {_get_matcher_id(self._jedi_matcher)} 

3041 return result 

3042 

3043 def dispatch_custom_completer(self, text): 

3044 """ 

3045 .. deprecated:: 8.6 

3046 You can use :meth:`custom_completer_matcher` instead. 

3047 """ 

3048 if not self.custom_completers: 

3049 return 

3050 

3051 line = self.line_buffer 

3052 if not line.strip(): 

3053 return None 

3054 

3055 # Create a little structure to pass all the relevant information about 

3056 # the current completion to any custom completer. 

3057 event = SimpleNamespace() 

3058 event.line = line 

3059 event.symbol = text 

3060 cmd = line.split(None,1)[0] 

3061 event.command = cmd 

3062 event.text_until_cursor = self.text_until_cursor 

3063 

3064 # for foo etc, try also to find completer for %foo 

3065 if not cmd.startswith(self.magic_escape): 

3066 try_magic = self.custom_completers.s_matches( 

3067 self.magic_escape + cmd) 

3068 else: 

3069 try_magic = [] 

3070 

3071 for c in itertools.chain(self.custom_completers.s_matches(cmd), 

3072 try_magic, 

3073 self.custom_completers.flat_matches(self.text_until_cursor)): 

3074 try: 

3075 res = c(event) 

3076 if res: 

3077 # first, try case sensitive match 

3078 withcase = [r for r in res if r.startswith(text)] 

3079 if withcase: 

3080 return withcase 

3081 # if none, then case insensitive ones are ok too 

3082 text_low = text.lower() 

3083 return [r for r in res if r.lower().startswith(text_low)] 

3084 except TryNext: 

3085 pass 

3086 except KeyboardInterrupt: 

3087 """ 

3088 If custom completer take too long, 

3089 let keyboard interrupt abort and return nothing. 

3090 """ 

3091 break 

3092 

3093 return None 

3094 

3095 def completions(self, text: str, offset: int)->Iterator[Completion]: 

3096 """ 

3097 Returns an iterator over the possible completions 

3098 

3099 .. warning:: 

3100 

3101 Unstable 

3102 

3103 This function is unstable, API may change without warning. 

3104 It will also raise unless use in proper context manager. 

3105 

3106 Parameters 

3107 ---------- 

3108 text : str 

3109 Full text of the current input, multi line string. 

3110 offset : int 

3111 Integer representing the position of the cursor in ``text``. Offset 

3112 is 0-based indexed. 

3113 

3114 Yields 

3115 ------ 

3116 Completion 

3117 

3118 Notes 

3119 ----- 

3120 The cursor on a text can either be seen as being "in between" 

3121 characters or "On" a character depending on the interface visible to 

3122 the user. For consistency the cursor being on "in between" characters X 

3123 and Y is equivalent to the cursor being "on" character Y, that is to say 

3124 the character the cursor is on is considered as being after the cursor. 

3125 

3126 Combining characters may span more that one position in the 

3127 text. 

3128 

3129 .. note:: 

3130 

3131 If ``IPCompleter.debug`` is :any:`True` will yield a ``--jedi/ipython--`` 

3132 fake Completion token to distinguish completion returned by Jedi 

3133 and usual IPython completion. 

3134 

3135 .. note:: 

3136 

3137 Completions are not completely deduplicated yet. If identical 

3138 completions are coming from different sources this function does not 

3139 ensure that each completion object will only be present once. 

3140 """ 

3141 warnings.warn("_complete is a provisional API (as of IPython 6.0). " 

3142 "It may change without warnings. " 

3143 "Use in corresponding context manager.", 

3144 category=ProvisionalCompleterWarning, stacklevel=2) 

3145 

3146 seen = set() 

3147 profiler:Optional[cProfile.Profile] 

3148 try: 

3149 if self.profile_completions: 

3150 import cProfile 

3151 profiler = cProfile.Profile() 

3152 profiler.enable() 

3153 else: 

3154 profiler = None 

3155 

3156 for c in self._completions(text, offset, _timeout=self.jedi_compute_type_timeout/1000): 

3157 if c and (c in seen): 

3158 continue 

3159 yield c 

3160 seen.add(c) 

3161 except KeyboardInterrupt: 

3162 """if completions take too long and users send keyboard interrupt, 

3163 do not crash and return ASAP. """ 

3164 pass 

3165 finally: 

3166 if profiler is not None: 

3167 profiler.disable() 

3168 ensure_dir_exists(self.profiler_output_dir) 

3169 output_path = os.path.join(self.profiler_output_dir, str(uuid.uuid4())) 

3170 print("Writing profiler output to", output_path) 

3171 profiler.dump_stats(output_path) 

3172 

3173 def _completions(self, full_text: str, offset: int, *, _timeout) -> Iterator[Completion]: 

3174 """ 

3175 Core completion module.Same signature as :any:`completions`, with the 

3176 extra `timeout` parameter (in seconds). 

3177 

3178 Computing jedi's completion ``.type`` can be quite expensive (it is a 

3179 lazy property) and can require some warm-up, more warm up than just 

3180 computing the ``name`` of a completion. The warm-up can be : 

3181 

3182 - Long warm-up the first time a module is encountered after 

3183 install/update: actually build parse/inference tree. 

3184 

3185 - first time the module is encountered in a session: load tree from 

3186 disk. 

3187 

3188 We don't want to block completions for tens of seconds so we give the 

3189 completer a "budget" of ``_timeout`` seconds per invocation to compute 

3190 completions types, the completions that have not yet been computed will 

3191 be marked as "unknown" an will have a chance to be computed next round 

3192 are things get cached. 

3193 

3194 Keep in mind that Jedi is not the only thing treating the completion so 

3195 keep the timeout short-ish as if we take more than 0.3 second we still 

3196 have lots of processing to do. 

3197 

3198 """ 

3199 deadline = time.monotonic() + _timeout 

3200 

3201 before = full_text[:offset] 

3202 cursor_line, cursor_column = position_to_cursor(full_text, offset) 

3203 

3204 jedi_matcher_id = _get_matcher_id(self._jedi_matcher) 

3205 

3206 def is_non_jedi_result( 

3207 result: MatcherResult, identifier: str 

3208 ) -> TypeGuard[SimpleMatcherResult]: 

3209 return identifier != jedi_matcher_id 

3210 

3211 results = self._complete( 

3212 full_text=full_text, cursor_line=cursor_line, cursor_pos=cursor_column 

3213 ) 

3214 

3215 non_jedi_results: dict[str, SimpleMatcherResult] = { 

3216 identifier: result 

3217 for identifier, result in results.items() 

3218 if is_non_jedi_result(result, identifier) 

3219 } 

3220 

3221 jedi_matches = ( 

3222 cast(_JediMatcherResult, results[jedi_matcher_id])["completions"] 

3223 if jedi_matcher_id in results 

3224 else () 

3225 ) 

3226 

3227 iter_jm = iter(jedi_matches) 

3228 if _timeout: 

3229 for jm in iter_jm: 

3230 try: 

3231 type_ = jm.type 

3232 except Exception: 

3233 if self.debug: 

3234 print("Error in Jedi getting type of ", jm) 

3235 type_ = None 

3236 delta = len(jm.name_with_symbols) - len(jm.complete) 

3237 if type_ == 'function': 

3238 signature = _make_signature(jm) 

3239 else: 

3240 signature = '' 

3241 yield Completion(start=offset - delta, 

3242 end=offset, 

3243 text=jm.name_with_symbols, 

3244 type=type_, 

3245 signature=signature, 

3246 _origin='jedi') 

3247 

3248 if time.monotonic() > deadline: 

3249 break 

3250 

3251 for jm in iter_jm: 

3252 delta = len(jm.name_with_symbols) - len(jm.complete) 

3253 yield Completion( 

3254 start=offset - delta, 

3255 end=offset, 

3256 text=jm.name_with_symbols, 

3257 type=_UNKNOWN_TYPE, # don't compute type for speed 

3258 _origin="jedi", 

3259 signature="", 

3260 ) 

3261 

3262 # TODO: 

3263 # Suppress this, right now just for debug. 

3264 if jedi_matches and non_jedi_results and self.debug: 

3265 some_start_offset = before.rfind( 

3266 next(iter(non_jedi_results.values()))["matched_fragment"] 

3267 ) 

3268 yield Completion( 

3269 start=some_start_offset, 

3270 end=offset, 

3271 text="--jedi/ipython--", 

3272 _origin="debug", 

3273 type="none", 

3274 signature="", 

3275 ) 

3276 

3277 ordered: list[Completion] = [] 

3278 sortable: list[Completion] = [] 

3279 

3280 for origin, result in non_jedi_results.items(): 

3281 matched_text = result["matched_fragment"] 

3282 start_offset = before.rfind(matched_text) 

3283 is_ordered = result.get("ordered", False) 

3284 container = ordered if is_ordered else sortable 

3285 

3286 # I'm unsure if this is always true, so let's assert and see if it 

3287 # crash 

3288 assert before.endswith(matched_text) 

3289 

3290 for simple_completion in result["completions"]: 

3291 completion = Completion( 

3292 start=start_offset, 

3293 end=offset, 

3294 text=simple_completion.text, 

3295 _origin=origin, 

3296 signature="", 

3297 type=simple_completion.type or _UNKNOWN_TYPE, 

3298 ) 

3299 container.append(completion) 

3300 

3301 yield from list(self._deduplicate(ordered + self._sort(sortable)))[ 

3302 :MATCHES_LIMIT 

3303 ] 

3304 

3305 def complete( 

3306 self, text=None, line_buffer=None, cursor_pos=None 

3307 ) -> tuple[str, Sequence[str]]: 

3308 """Find completions for the given text and line context. 

3309 

3310 Note that both the text and the line_buffer are optional, but at least 

3311 one of them must be given. 

3312 

3313 Parameters 

3314 ---------- 

3315 text : string, optional 

3316 Text to perform the completion on. If not given, the line buffer 

3317 is split using the instance's CompletionSplitter object. 

3318 line_buffer : string, optional 

3319 If not given, the completer attempts to obtain the current line 

3320 buffer via readline. This keyword allows clients which are 

3321 requesting for text completions in non-readline contexts to inform 

3322 the completer of the entire text. 

3323 cursor_pos : int, optional 

3324 Index of the cursor in the full line buffer. Should be provided by 

3325 remote frontends where kernel has no access to frontend state. 

3326 

3327 Returns 

3328 ------- 

3329 Tuple of two items: 

3330 text : str 

3331 Text that was actually used in the completion. 

3332 matches : list 

3333 A list of completion matches. 

3334 

3335 Notes 

3336 ----- 

3337 This API is likely to be deprecated and replaced by 

3338 :any:`IPCompleter.completions` in the future. 

3339 

3340 """ 

3341 warnings.warn('`Completer.complete` is pending deprecation since ' 

3342 'IPython 6.0 and will be replaced by `Completer.completions`.', 

3343 PendingDeprecationWarning) 

3344 # potential todo, FOLD the 3rd throw away argument of _complete 

3345 # into the first 2 one. 

3346 # TODO: Q: does the above refer to jedi completions (i.e. 0-indexed?) 

3347 # TODO: should we deprecate now, or does it stay? 

3348 

3349 results = self._complete( 

3350 line_buffer=line_buffer, cursor_pos=cursor_pos, text=text, cursor_line=0 

3351 ) 

3352 

3353 jedi_matcher_id = _get_matcher_id(self._jedi_matcher) 

3354 

3355 return self._arrange_and_extract( 

3356 results, 

3357 # TODO: can we confirm that excluding Jedi here was a deliberate choice in previous version? 

3358 skip_matchers={jedi_matcher_id}, 

3359 # this API does not support different start/end positions (fragments of token). 

3360 abort_if_offset_changes=True, 

3361 ) 

3362 

3363 def _arrange_and_extract( 

3364 self, 

3365 results: dict[str, MatcherResult], 

3366 skip_matchers: set[str], 

3367 abort_if_offset_changes: bool, 

3368 ): 

3369 sortable: list[AnyMatcherCompletion] = [] 

3370 ordered: list[AnyMatcherCompletion] = [] 

3371 most_recent_fragment = None 

3372 for identifier, result in results.items(): 

3373 if identifier in skip_matchers: 

3374 continue 

3375 if not result["completions"]: 

3376 continue 

3377 if not most_recent_fragment: 

3378 most_recent_fragment = result["matched_fragment"] 

3379 if ( 

3380 abort_if_offset_changes 

3381 and result["matched_fragment"] != most_recent_fragment 

3382 ): 

3383 break 

3384 if result.get("ordered", False): 

3385 ordered.extend(result["completions"]) 

3386 else: 

3387 sortable.extend(result["completions"]) 

3388 

3389 if not most_recent_fragment: 

3390 most_recent_fragment = "" # to satisfy typechecker (and just in case) 

3391 

3392 return most_recent_fragment, [ 

3393 m.text for m in self._deduplicate(ordered + self._sort(sortable)) 

3394 ] 

3395 

3396 def _complete(self, *, cursor_line, cursor_pos, line_buffer=None, text=None, 

3397 full_text=None) -> _CompleteResult: 

3398 """ 

3399 Like complete but can also returns raw jedi completions as well as the 

3400 origin of the completion text. This could (and should) be made much 

3401 cleaner but that will be simpler once we drop the old (and stateful) 

3402 :any:`complete` API. 

3403 

3404 With current provisional API, cursor_pos act both (depending on the 

3405 caller) as the offset in the ``text`` or ``line_buffer``, or as the 

3406 ``column`` when passing multiline strings this could/should be renamed 

3407 but would add extra noise. 

3408 

3409 Parameters 

3410 ---------- 

3411 cursor_line 

3412 Index of the line the cursor is on. 0 indexed. 

3413 cursor_pos 

3414 Position of the cursor in the current line/line_buffer/text. 0 

3415 indexed. 

3416 line_buffer : optional, str 

3417 The current line the cursor is in, this is mostly due to legacy 

3418 reason that readline could only give a us the single current line. 

3419 Prefer `full_text`. 

3420 text : str 

3421 The current "token" the cursor is in, mostly also for historical 

3422 reasons. as the completer would trigger only after the current line 

3423 was parsed. 

3424 full_text : str 

3425 Full text of the current cell. 

3426 

3427 Returns 

3428 ------- 

3429 An ordered dictionary where keys are identifiers of completion 

3430 matchers and values are ``MatcherResult``s. 

3431 """ 

3432 

3433 # if the cursor position isn't given, the only sane assumption we can 

3434 # make is that it's at the end of the line (the common case) 

3435 if cursor_pos is None: 

3436 cursor_pos = len(line_buffer) if text is None else len(text) 

3437 

3438 if self.use_main_ns: 

3439 self.namespace = __main__.__dict__ 

3440 

3441 # if text is either None or an empty string, rely on the line buffer 

3442 if (not line_buffer) and full_text: 

3443 line_buffer = full_text.split('\n')[cursor_line] 

3444 if not text: # issue #11508: check line_buffer before calling split_line 

3445 text = ( 

3446 self.splitter.split_line(line_buffer, cursor_pos) if line_buffer else "" 

3447 ) 

3448 

3449 # If no line buffer is given, assume the input text is all there was 

3450 if line_buffer is None: 

3451 line_buffer = text 

3452 

3453 # deprecated - do not use `line_buffer` in new code. 

3454 self.line_buffer = line_buffer 

3455 self.text_until_cursor = self.line_buffer[:cursor_pos] 

3456 

3457 if not full_text: 

3458 full_text = line_buffer 

3459 

3460 context = CompletionContext( 

3461 full_text=full_text, 

3462 cursor_position=cursor_pos, 

3463 cursor_line=cursor_line, 

3464 token=text, 

3465 limit=MATCHES_LIMIT, 

3466 ) 

3467 

3468 # Start with a clean slate of completions 

3469 results: dict[str, MatcherResult] = {} 

3470 

3471 jedi_matcher_id = _get_matcher_id(self._jedi_matcher) 

3472 

3473 suppressed_matchers: set[str] = set() 

3474 

3475 matchers = { 

3476 _get_matcher_id(matcher): matcher 

3477 for matcher in sorted( 

3478 self.matchers, key=_get_matcher_priority, reverse=True 

3479 ) 

3480 } 

3481 

3482 for matcher_id, matcher in matchers.items(): 

3483 matcher_id = _get_matcher_id(matcher) 

3484 

3485 if matcher_id in self.disable_matchers: 

3486 continue 

3487 

3488 if matcher_id in results: 

3489 warnings.warn(f"Duplicate matcher ID: {matcher_id}.") 

3490 

3491 if matcher_id in suppressed_matchers: 

3492 continue 

3493 

3494 result: MatcherResult 

3495 try: 

3496 if _is_matcher_v1(matcher): 

3497 result = _convert_matcher_v1_result_to_v2_no_no( 

3498 matcher(text), type=_UNKNOWN_TYPE 

3499 ) 

3500 elif _is_matcher_v2(matcher): 

3501 result = matcher(context) 

3502 else: 

3503 api_version = _get_matcher_api_version(matcher) 

3504 raise ValueError(f"Unsupported API version {api_version}") 

3505 except BaseException: 

3506 # Show the ugly traceback if the matcher causes an 

3507 # exception, but do NOT crash the kernel! 

3508 sys.excepthook(*sys.exc_info()) 

3509 continue 

3510 

3511 # set default value for matched fragment if suffix was not selected. 

3512 result["matched_fragment"] = result.get("matched_fragment", context.token) 

3513 

3514 if not suppressed_matchers: 

3515 suppression_recommended: Union[bool, set[str]] = result.get( 

3516 "suppress", False 

3517 ) 

3518 

3519 suppression_config = ( 

3520 self.suppress_competing_matchers.get(matcher_id, None) 

3521 if isinstance(self.suppress_competing_matchers, dict) 

3522 else self.suppress_competing_matchers 

3523 ) 

3524 should_suppress = ( 

3525 (suppression_config is True) 

3526 or (suppression_recommended and (suppression_config is not False)) 

3527 ) and has_any_completions(result) 

3528 

3529 if should_suppress: 

3530 suppression_exceptions: set[str] = result.get( 

3531 "do_not_suppress", set() 

3532 ) 

3533 if isinstance(suppression_recommended, Iterable): 

3534 to_suppress = set(suppression_recommended) 

3535 else: 

3536 to_suppress = set(matchers) 

3537 suppressed_matchers = to_suppress - suppression_exceptions 

3538 

3539 new_results = {} 

3540 for previous_matcher_id, previous_result in results.items(): 

3541 if previous_matcher_id not in suppressed_matchers: 

3542 new_results[previous_matcher_id] = previous_result 

3543 results = new_results 

3544 

3545 results[matcher_id] = result 

3546 

3547 _, matches = self._arrange_and_extract( 

3548 results, 

3549 # TODO Jedi completions non included in legacy stateful API; was this deliberate or omission? 

3550 # if it was omission, we can remove the filtering step, otherwise remove this comment. 

3551 skip_matchers={jedi_matcher_id}, 

3552 abort_if_offset_changes=False, 

3553 ) 

3554 

3555 # populate legacy stateful API 

3556 self.matches = matches 

3557 

3558 return results 

3559 

3560 @staticmethod 

3561 def _deduplicate( 

3562 matches: Sequence[AnyCompletion], 

3563 ) -> Iterable[AnyCompletion]: 

3564 filtered_matches: dict[str, AnyCompletion] = {} 

3565 for match in matches: 

3566 text = match.text 

3567 if ( 

3568 text not in filtered_matches 

3569 or filtered_matches[text].type == _UNKNOWN_TYPE 

3570 ): 

3571 filtered_matches[text] = match 

3572 

3573 return filtered_matches.values() 

3574 

3575 @staticmethod 

3576 def _sort(matches: Sequence[AnyCompletion]): 

3577 return sorted(matches, key=lambda x: completions_sorting_key(x.text)) 

3578 

3579 @context_matcher() 

3580 def fwd_unicode_matcher(self, context: CompletionContext): 

3581 """Same as :any:`fwd_unicode_match`, but adopted to new Matcher API.""" 

3582 # TODO: use `context.limit` to terminate early once we matched the maximum 

3583 # number that will be used downstream; can be added as an optional to 

3584 # `fwd_unicode_match(text: str, limit: int = None)` or we could re-implement here. 

3585 fragment, matches = self.fwd_unicode_match(context.text_until_cursor) 

3586 return _convert_matcher_v1_result_to_v2( 

3587 matches, type="unicode", fragment=fragment, suppress_if_matches=True 

3588 ) 

3589 

3590 def fwd_unicode_match(self, text: str) -> tuple[str, Sequence[str]]: 

3591 """ 

3592 Forward match a string starting with a backslash with a list of 

3593 potential Unicode completions. 

3594 

3595 Will compute list of Unicode character names on first call and cache it. 

3596 

3597 .. deprecated:: 8.6 

3598 You can use :meth:`fwd_unicode_matcher` instead. 

3599 

3600 Returns 

3601 ------- 

3602 At tuple with: 

3603 - matched text (empty if no matches) 

3604 - list of potential completions, empty tuple otherwise) 

3605 """ 

3606 # TODO: self.unicode_names is here a list we traverse each time with ~100k elements. 

3607 # We could do a faster match using a Trie. 

3608 

3609 # Using pygtrie the following seem to work: 

3610 

3611 # s = PrefixSet() 

3612 

3613 # for c in range(0,0x10FFFF + 1): 

3614 # try: 

3615 # s.add(unicodedata.name(chr(c))) 

3616 # except ValueError: 

3617 # pass 

3618 # [''.join(k) for k in s.iter(prefix)] 

3619 

3620 # But need to be timed and adds an extra dependency. 

3621 

3622 slashpos = text.rfind('\\') 

3623 # if text starts with slash 

3624 if slashpos > -1: 

3625 # PERF: It's important that we don't access self._unicode_names 

3626 # until we're inside this if-block. _unicode_names is lazily 

3627 # initialized, and it takes a user-noticeable amount of time to 

3628 # initialize it, so we don't want to initialize it unless we're 

3629 # actually going to use it. 

3630 s = text[slashpos + 1 :] 

3631 sup = s.upper() 

3632 candidates = [x for x in self.unicode_names if x.startswith(sup)] 

3633 if candidates: 

3634 return s, candidates 

3635 candidates = [x for x in self.unicode_names if sup in x] 

3636 if candidates: 

3637 return s, candidates 

3638 splitsup = sup.split(" ") 

3639 candidates = [ 

3640 x for x in self.unicode_names if all(u in x for u in splitsup) 

3641 ] 

3642 if candidates: 

3643 return s, candidates 

3644 

3645 return "", () 

3646 

3647 # if text does not start with slash 

3648 else: 

3649 return '', () 

3650 

3651 @property 

3652 def unicode_names(self) -> list[str]: 

3653 """List of names of unicode code points that can be completed. 

3654 

3655 The list is lazily initialized on first access. 

3656 """ 

3657 if self._unicode_names is None: 

3658 names = [] 

3659 for c in range(0,0x10FFFF + 1): 

3660 try: 

3661 names.append(unicodedata.name(chr(c))) 

3662 except ValueError: 

3663 pass 

3664 self._unicode_names = _unicode_name_compute(_UNICODE_RANGES) 

3665 

3666 return self._unicode_names 

3667 

3668 

3669def _unicode_name_compute(ranges: list[tuple[int, int]]) -> list[str]: 

3670 names = [] 

3671 for start,stop in ranges: 

3672 for c in range(start, stop) : 

3673 try: 

3674 names.append(unicodedata.name(chr(c))) 

3675 except ValueError: 

3676 pass 

3677 return names