Coverage for /pythoncovmergedfiles/medio/medio/usr/local/lib/python3.11/site-packages/IPython/core/completer.py: 20%


1"""Completion for IPython. 

2 

3This module started as fork of the rlcompleter module in the Python standard 

4library. The original enhancements made to rlcompleter have been sent 

5upstream and were accepted as of Python 2.3, 

6 

7This module now support a wide variety of completion mechanism both available 

8for normal classic Python code, as well as completer for IPython specific 

9Syntax like magics. 

10 

11Latex and Unicode completion 

12============================ 

13 

14IPython and compatible frontends not only can complete your code, but can help 

15you to input a wide range of characters. In particular we allow you to insert 

16a unicode character using the tab completion mechanism. 

17 

18Forward latex/unicode completion 

19-------------------------------- 

20 

21Forward completion allows you to easily type a unicode character using its latex 

22name, or unicode long description. To do so type a backslash follow by the 

23relevant name and press tab: 

24 

25 

26Using latex completion: 

27 

28.. code:: 

29 

30 \\alpha<tab> 

31 α 

32 

33or using unicode completion: 

34 

35 

36.. code:: 

37 

38 \\GREEK SMALL LETTER ALPHA<tab> 

39 α 

40 

41 

42Only valid Python identifiers will complete. Combining characters (like arrow or 

43dots) are also available, unlike latex they need to be put after the their 

44counterpart that is to say, ``F\\\\vec<tab>`` is correct, not ``\\\\vec<tab>F``. 

45 

46Some browsers are known to display combining characters incorrectly. 

47 

48Backward latex completion 

49------------------------- 

50 

51It is sometime challenging to know how to type a character, if you are using 

52IPython, or any compatible frontend you can prepend backslash to the character 

53and press :kbd:`Tab` to expand it to its latex form. 

54 

55.. code:: 

56 

57 \\α<tab> 

58 \\alpha 

59 

60 

61Both forward and backward completions can be deactivated by setting the 

62:std:configtrait:`Completer.backslash_combining_completions` option to 

63``False``. 

64 

65 

66Experimental 

67============ 

68 

69Starting with IPython 6.0, this module can make use of the Jedi library to 

70generate completions both using static analysis of the code, and dynamically 

71inspecting multiple namespaces. Jedi is an autocompletion and static analysis 

72for Python. The APIs attached to this new mechanism is unstable and will 

73raise unless use in an :any:`provisionalcompleter` context manager. 

74 

75You will find that the following are experimental: 

76 

77 - :any:`provisionalcompleter` 

78 - :any:`IPCompleter.completions` 

79 - :any:`Completion` 

80 - :any:`rectify_completions` 

81 

82.. note:: 

83 

84 better name for :any:`rectify_completions` ? 

85 

86We welcome any feedback on these new API, and we also encourage you to try this 

87module in debug mode (start IPython with ``--Completer.debug=True``) in order 

88to have extra logging information if :any:`jedi` is crashing, or if current 

89IPython completer pending deprecations are returning results not yet handled 

90by :any:`jedi` 

91 

92Using Jedi for tab completion allow snippets like the following to work without 

93having to execute any code: 

94 

95 >>> myvar = ['hello', 42] 

96 ... myvar[1].bi<tab> 

97 

98Tab completion will be able to infer that ``myvar[1]`` is a real number without 

99executing almost any code unlike the deprecated :any:`IPCompleter.greedy` 

100option. 

101 

102Be sure to update :any:`jedi` to the latest stable version or to try the 

103current development version to get better completions. 

104 

105Matchers 

106======== 

107 

108All completions routines are implemented using unified *Matchers* API. 

109The matchers API is provisional and subject to change without notice. 

110 

111The built-in matchers include: 

112 

113- :any:`IPCompleter.dict_key_matcher`: dictionary key completions, 

114- :any:`IPCompleter.magic_matcher`: completions for magics, 

115- :any:`IPCompleter.unicode_name_matcher`, 

116 :any:`IPCompleter.fwd_unicode_matcher` 

117 and :any:`IPCompleter.latex_name_matcher`: see `Forward latex/unicode completion`_, 

118- :any:`back_unicode_name_matcher` and :any:`back_latex_name_matcher`: see `Backward latex completion`_, 

119- :any:`IPCompleter.file_matcher`: paths to files and directories, 

120- :any:`IPCompleter.python_func_kw_matcher` - function keywords, 

121- :any:`IPCompleter.python_matches` - globals and attributes (v1 API), 

122- ``IPCompleter.jedi_matcher`` - static analysis with Jedi, 

123- :any:`IPCompleter.custom_completer_matcher` - pluggable completer with a default 

124 implementation in :any:`InteractiveShell` which uses IPython hooks system 

125 (`complete_command`) with string dispatch (including regular expressions). 

126 Differently to other matchers, ``custom_completer_matcher`` will not suppress 

127 Jedi results to match behaviour in earlier IPython versions. 

128 

129Custom matchers can be added by appending to ``IPCompleter.custom_matchers`` list. 

130 

131Matcher API 

132----------- 

133 

134Simplifying some details, the ``Matcher`` interface can described as 

135 

136.. code-block:: 

137 

138 MatcherAPIv1 = Callable[[str], list[str]] 

139 MatcherAPIv2 = Callable[[CompletionContext], SimpleMatcherResult] 

140 

141 Matcher = MatcherAPIv1 | MatcherAPIv2 

142 

143The ``MatcherAPIv1`` reflects the matcher API as available prior to IPython 8.6.0 

144and remains supported as a simplest way for generating completions. This is also 

145currently the only API supported by the IPython hooks system `complete_command`. 

146 

147To distinguish between matcher versions ``matcher_api_version`` attribute is used. 

148More precisely, the API allows to omit ``matcher_api_version`` for v1 Matchers, 

149and requires a literal ``2`` for v2 Matchers. 

150 

151Once the API stabilises future versions may relax the requirement for specifying 

152``matcher_api_version`` by switching to :any:`functools.singledispatch`, therefore 

153please do not rely on the presence of ``matcher_api_version`` for any purposes. 

154 
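
For illustration only, a minimal v2 matcher might look roughly like the sketch
below. It relies on the ``context_matcher`` decorator and the
``SimpleCompletion``/``SimpleMatcherResult`` types defined later in this module;
the matcher itself (``shout_matcher``) and the registration line are hypothetical
examples, not part of IPython:

.. code-block::

    @context_matcher()
    def shout_matcher(context: CompletionContext) -> SimpleMatcherResult:
        # complete the current token with its upper-case variant
        word = context.token
        return {
            "completions": [SimpleCompletion(text=word.upper(), type="example")],
            "suppress": False,
        }

    # registration (assumption): get_ipython().Completer.custom_matchers.append(shout_matcher)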


Suppression of competing matchers
---------------------------------

By default results from all matchers are combined, in the order determined by
their priority. Matchers can request to suppress results from subsequent
matchers by setting ``suppress`` to ``True`` in the ``MatcherResult``.

When multiple matchers simultaneously request suppression, the results from
the matcher with higher priority will be returned.

Sometimes it is desirable to suppress most but not all other matchers;
this can be achieved by adding a set of identifiers of matchers which
should not be suppressed to the ``MatcherResult`` under the ``do_not_suppress``
key, as sketched below.
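
For illustration only (the identifier shown is an assumption based on the
default ``__qualname__``-derived matcher names), a result suppressing every
other matcher except the magic matcher could look roughly like:

.. code-block::

    return {
        "completions": completions,
        "suppress": True,
        "do_not_suppress": {"IPCompleter.magic_matcher"},
    }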


The suppression behaviour is user-configurable via
:std:configtrait:`IPCompleter.suppress_competing_matchers`.
"""

172 

173 

174# Copyright (c) IPython Development Team. 

175# Distributed under the terms of the Modified BSD License. 

176# 

177# Some of this code originated from rlcompleter in the Python standard library 

178# Copyright (C) 2001 Python Software Foundation, www.python.org 

179 

180from __future__ import annotations 

181import builtins as builtin_mod 

182import enum 

183import glob 

184import inspect 

185import itertools 

186import keyword 

187import ast 

188import os 

189import re 

190import string 

191import sys 

192import tokenize 

193import time 

194import unicodedata 

195import uuid 

196import warnings 

197from ast import literal_eval 

198from collections import defaultdict 

199from contextlib import contextmanager 

200from dataclasses import dataclass 

201from functools import cached_property, partial 

202from types import SimpleNamespace 

from typing import (
    Callable,
    Iterable,
    Iterator,
    Union,
    Any,
    Sequence,
    Optional,
    TYPE_CHECKING,
    Sized,
    TypeVar,
    Literal,
)

215 

216from IPython.core.guarded_eval import ( 

217 guarded_eval, 

218 EvaluationContext, 

219 _validate_policy_overrides, 

220) 

221from IPython.core.error import TryNext 

222from IPython.core.inputtransformer2 import ESC_MAGIC 

223from IPython.core.latex_symbols import latex_symbols, reverse_latex_symbol 

224from IPython.testing.skipdoctest import skip_doctest 

225from IPython.utils import generics 

226from IPython.utils.PyColorize import theme_table 

227from IPython.utils.decorators import sphinx_options 

228from IPython.utils.dir2 import dir2, get_real_method 

229from IPython.utils.path import ensure_dir_exists 

230from IPython.utils.process import arg_split 

231from traitlets import ( 

232 Bool, 

233 Enum, 

234 Int, 

235 List as ListTrait, 

236 Unicode, 

237 Dict as DictTrait, 

238 DottedObjectName, 

239 Union as UnionTrait, 

240 observe, 

241) 

242from traitlets.config.configurable import Configurable 

243from traitlets.utils.importstring import import_item 

244 

245import __main__ 

246 

247from typing import cast 

248 

249if sys.version_info < (3, 12): 

250 from typing_extensions import TypedDict, NotRequired, Protocol, TypeAlias, TypeGuard 

251else: 

252 from typing import TypedDict, NotRequired, Protocol, TypeAlias, TypeGuard 

253 

254 

# skip module doctests
__skip_doctest__ = True

257 

258 

259try: 

260 import jedi 

261 jedi.settings.case_insensitive_completion = False 

262 import jedi.api.helpers 

263 import jedi.api.classes 

264 JEDI_INSTALLED = True 

265except ImportError: 

266 JEDI_INSTALLED = False 

267 

268 

269# ----------------------------------------------------------------------------- 

270# Globals 

271#----------------------------------------------------------------------------- 

272 

# ranges where we have most of the valid unicode names. We could be more finely
# grained but is it worth it for performance? While unicode has characters in the
# range 0, 0x110000, we seem to have names for about 10% of those (131808 as I
# write this). With the ranges below we cover them all, with a density of ~67%;
# the biggest next gap we consider would only add about 1% density and there are
# 600 gaps that would need hard coding.

279_UNICODE_RANGES = [(32, 0x323B0), (0xE0001, 0xE01F0)] 

280 

281# Public API 

282__all__ = ["Completer", "IPCompleter"] 

283 

284if sys.platform == 'win32': 

285 PROTECTABLES = ' ' 

286else: 

287 PROTECTABLES = ' ()[]{}?=\\|;:\'#*"^&' 

288 

289# Protect against returning an enormous number of completions which the frontend 

290# may have trouble processing. 

291MATCHES_LIMIT = 500 

292 

293# Completion type reported when no type can be inferred. 

294_UNKNOWN_TYPE = "<unknown>" 

295 

296# sentinel value to signal lack of a match 

297not_found = object() 

298 

class ProvisionalCompleterWarning(FutureWarning):
    """
    Exception raised by an experimental feature in this module.

    Wrap code in the :any:`provisionalcompleter` context manager if you
    are certain you want to use an unstable feature.
    """
    pass

307 

308warnings.filterwarnings('error', category=ProvisionalCompleterWarning) 

309 

310 

311@skip_doctest 

312@contextmanager 

313def provisionalcompleter(action='ignore'): 

314 """ 

315 This context manager has to be used in any place where unstable completer 

316 behavior and API may be called. 

317 

318 >>> with provisionalcompleter(): 

319 ... completer.do_experimental_things() # works 

320 

321 >>> completer.do_experimental_things() # raises. 

322 

323 .. note:: 

324 

325 Unstable 

326 

327 By using this context manager you agree that the API in use may change 

328 without warning, and that you won't complain if they do so. 

329 

330 You also understand that, if the API is not to your liking, you should report 

331 a bug to explain your use case upstream. 

332 

333 We'll be happy to get your feedback, feature requests, and improvements on 

334 any of the unstable APIs! 

335 """ 

336 with warnings.catch_warnings(): 

337 warnings.filterwarnings(action, category=ProvisionalCompleterWarning) 

338 yield 

339 

340 

def has_open_quotes(s: str) -> Union[str, bool]:
    """Return whether a string has open quotes.

    This simply counts whether the number of quote characters of either type in
    the string is odd.

    Returns
    -------
    If there is an open quote, the quote character is returned. Else, return
    False.
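
    Examples
    --------
    Illustrative (module doctests are skipped via ``__skip_doctest__``)::

        >>> has_open_quotes('hello "world')
        '"'
        >>> has_open_quotes("it's")
        "'"
        >>> has_open_quotes('"closed"')
        False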

    """

352 # We check " first, then ', so complex cases with nested quotes will get 

353 # the " to take precedence. 

354 if s.count('"') % 2: 

355 return '"' 

356 elif s.count("'") % 2: 

357 return "'" 

358 else: 

359 return False 

360 

361 

362def protect_filename(s: str, protectables: str = PROTECTABLES) -> str: 

363 """Escape a string to protect certain characters.""" 

364 if set(s) & set(protectables): 

365 if sys.platform == "win32": 

366 return '"' + s + '"' 

367 else: 

368 return "".join(("\\" + c if c in protectables else c) for c in s) 

369 else: 

370 return s 

371 

372 

373def expand_user(path: str) -> tuple[str, bool, str]: 

374 """Expand ``~``-style usernames in strings. 

375 

376 This is similar to :func:`os.path.expanduser`, but it computes and returns 

377 extra information that will be useful if the input was being used in 

378 computing completions, and you wish to return the completions with the 

379 original '~' instead of its expanded value. 

380 

381 Parameters 

382 ---------- 

383 path : str 

384 String to be expanded. If no ~ is present, the output is the same as the 

385 input. 

386 

    Returns
    -------
    newpath : str
        Result of ~ expansion in the input path.
    tilde_expand : bool
        Whether any expansion was performed or not.
    tilde_val : str
        The value that ~ was replaced with.
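
    Examples
    --------
    Illustrative only; the expanded value depends on the local home directory::

        expand_user('~/file.txt')   # -> ('/home/<user>/file.txt', True, '/home/<user>')
        expand_user('file.txt')     # -> ('file.txt', False, '')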

    """

396 # Default values 

397 tilde_expand = False 

398 tilde_val = '' 

399 newpath = path 

400 

401 if path.startswith('~'): 

402 tilde_expand = True 

403 rest = len(path)-1 

404 newpath = os.path.expanduser(path) 

405 if rest: 

406 tilde_val = newpath[:-rest] 

407 else: 

408 tilde_val = newpath 

409 

410 return newpath, tilde_expand, tilde_val 

411 

412 

413def compress_user(path:str, tilde_expand:bool, tilde_val:str) -> str: 

414 """Does the opposite of expand_user, with its outputs. 

415 """ 

416 if tilde_expand: 

417 return path.replace(tilde_val, '~') 

418 else: 

419 return path 

420 

421 

def completions_sorting_key(word):
    """key for sorting completions

    This does several things:

    - Demote any completions starting with underscores to the end
    - Insert any %magic and %%cellmagic completions in the alphabetical order
      by their name
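
    For example (illustrative; module doctests are skipped)::

        >>> sorted(["_x", "a", "%%time", "%time"], key=completions_sorting_key)
        ['a', '%time', '%%time', '_x']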

    """

431 prio1, prio2 = 0, 0 

432 

433 if word.startswith('__'): 

434 prio1 = 2 

435 elif word.startswith('_'): 

436 prio1 = 1 

437 

438 if word.endswith('='): 

439 prio1 = -1 

440 

441 if word.startswith('%%'): 

442 # If there's another % in there, this is something else, so leave it alone 

443 if "%" not in word[2:]: 

444 word = word[2:] 

445 prio2 = 2 

446 elif word.startswith('%'): 

447 if "%" not in word[1:]: 

448 word = word[1:] 

449 prio2 = 1 

450 

451 return prio1, word, prio2 

452 

453 

class _FakeJediCompletion:
    """
    This is a workaround to communicate to the UI that Jedi has crashed and to
    report a bug. Will be used only if :any:`IPCompleter.debug` is set to true.

    Added in IPython 6.0 so should likely be removed for 7.0

    """

462 

463 def __init__(self, name): 

464 

465 self.name = name 

466 self.complete = name 

467 self.type = 'crashed' 

468 self.name_with_symbols = name 

469 self.signature = "" 

470 self._origin = "fake" 

471 self.text = "crashed" 

472 

473 def __repr__(self): 

474 return '<Fake completion object jedi has crashed>' 

475 

476 

477_JediCompletionLike = Union["jedi.api.Completion", _FakeJediCompletion] 

478 

479 

class Completion:
    """
    Completion object used and returned by IPython completers.

    .. warning::

        Unstable

        This class is unstable; the API may change without warning.
        It will also raise unless used in the proper context manager.

    This acts as a middle ground :any:`Completion` object between the
    :any:`jedi.api.classes.Completion` object and the Prompt Toolkit completion
    object. While Jedi needs a lot of information about the evaluator and how the
    code should be run/inspected, Prompt Toolkit (and other frontends) mostly
    need user-facing information.

    - Which range should be replaced by what.
    - Some metadata (like completion type), or meta information to be displayed to
      the user.

    For debugging purposes we can also store the origin of the completion (``jedi``,
    ``IPython.python_matches``, ``IPython.magics_matches``...).
    """

504 

505 __slots__ = ['start', 'end', 'text', 'type', 'signature', '_origin'] 

506 

507 def __init__( 

508 self, 

509 start: int, 

510 end: int, 

511 text: str, 

512 *, 

513 type: Optional[str] = None, 

514 _origin="", 

515 signature="", 

516 ) -> None: 

517 warnings.warn( 

518 "``Completion`` is a provisional API (as of IPython 6.0). " 

519 "It may change without warnings. " 

520 "Use in corresponding context manager.", 

521 category=ProvisionalCompleterWarning, 

522 stacklevel=2, 

523 ) 

524 

525 self.start = start 

526 self.end = end 

527 self.text = text 

528 self.type = type 

529 self.signature = signature 

530 self._origin = _origin 

531 

532 def __repr__(self): 

533 return '<Completion start=%s end=%s text=%r type=%r, signature=%r,>' % \ 

534 (self.start, self.end, self.text, self.type or '?', self.signature or '?') 

535 

    def __eq__(self, other) -> bool:
        """
        Equality and hash do not hash the type (as some completers may not be
        able to infer it), but are used to (partially) de-duplicate
        completions.

        Completely de-duplicating completions is a bit trickier than just
        comparing them, as it depends on surrounding text, which Completions
        are not aware of.
        """

546 return self.start == other.start and \ 

547 self.end == other.end and \ 

548 self.text == other.text 

549 

550 def __hash__(self): 

551 return hash((self.start, self.end, self.text)) 

552 

553 

554class SimpleCompletion: 

555 """Completion item to be included in the dictionary returned by new-style Matcher (API v2). 

556 

557 .. warning:: 

558 

559 Provisional 

560 

561 This class is used to describe the currently supported attributes of 

562 simple completion items, and any additional implementation details 

563 should not be relied on. Additional attributes may be included in 

        future versions, and the meaning of ``text`` disambiguated from its current
        dual meaning of "text to insert" and "text to use as a label".

566 """ 

567 

568 __slots__ = ["text", "type"] 

569 

570 def __init__(self, text: str, *, type: Optional[str] = None): 

571 self.text = text 

572 self.type = type 

573 

574 def __repr__(self): 

575 return f"<SimpleCompletion text={self.text!r} type={self.type!r}>" 

576 

577 

578class _MatcherResultBase(TypedDict): 

579 """Definition of dictionary to be returned by new-style Matcher (API v2).""" 

580 

581 #: Suffix of the provided ``CompletionContext.token``, if not given defaults to full token. 

582 matched_fragment: NotRequired[str] 

583 

584 #: Whether to suppress results from all other matchers (True), some 

585 #: matchers (set of identifiers) or none (False); default is False. 

586 suppress: NotRequired[Union[bool, set[str]]] 

587 

588 #: Identifiers of matchers which should NOT be suppressed when this matcher 

589 #: requests to suppress all other matchers; defaults to an empty set. 

590 do_not_suppress: NotRequired[set[str]] 

591 

592 #: Are completions already ordered and should be left as-is? default is False. 

593 ordered: NotRequired[bool] 

594 

595 

596@sphinx_options(show_inherited_members=True, exclude_inherited_from=["dict"]) 

597class SimpleMatcherResult(_MatcherResultBase, TypedDict): 

598 """Result of new-style completion matcher.""" 

599 

600 # note: TypedDict is added again to the inheritance chain 

601 # in order to get __orig_bases__ for documentation 

602 

603 #: List of candidate completions 

604 completions: Sequence[SimpleCompletion] | Iterator[SimpleCompletion] 

605 

606 

607class _JediMatcherResult(_MatcherResultBase): 

608 """Matching result returned by Jedi (will be processed differently)""" 

609 

610 #: list of candidate completions 

611 completions: Iterator[_JediCompletionLike] 

612 

613 

614AnyMatcherCompletion = Union[_JediCompletionLike, SimpleCompletion] 

615AnyCompletion = TypeVar("AnyCompletion", AnyMatcherCompletion, Completion) 

616 

617 

618@dataclass 

619class CompletionContext: 

620 """Completion context provided as an argument to matchers in the Matcher API v2.""" 

621 

622 # rationale: many legacy matchers relied on completer state (`self.text_until_cursor`) 

623 # which was not explicitly visible as an argument of the matcher, making any refactor 

624 # prone to errors; by explicitly passing `cursor_position` we can decouple the matchers 

625 # from the completer, and make substituting them in sub-classes easier. 

626 

627 #: Relevant fragment of code directly preceding the cursor. 

628 #: The extraction of token is implemented via splitter heuristic 

629 #: (following readline behaviour for legacy reasons), which is user configurable 

630 #: (by switching the greedy mode). 

631 token: str 

632 

633 #: The full available content of the editor or buffer 

634 full_text: str 

635 

636 #: Cursor position in the line (the same for ``full_text`` and ``text``). 

637 cursor_position: int 

638 

639 #: Cursor line in ``full_text``. 

640 cursor_line: int 

641 

642 #: The maximum number of completions that will be used downstream. 

643 #: Matchers can use this information to abort early. 

644 #: The built-in Jedi matcher is currently excepted from this limit. 

645 # If not given, return all possible completions. 

646 limit: Optional[int] 

647 

648 @cached_property 

649 def text_until_cursor(self) -> str: 

650 return self.line_with_cursor[: self.cursor_position] 

651 

652 @cached_property 

653 def line_with_cursor(self) -> str: 

654 return self.full_text.split("\n")[self.cursor_line] 

655 

656 

657#: Matcher results for API v2. 

658MatcherResult = Union[SimpleMatcherResult, _JediMatcherResult] 

659 

660 

661class _MatcherAPIv1Base(Protocol): 

662 def __call__(self, text: str) -> list[str]: 

663 """Call signature.""" 

664 ... 

665 

666 #: Used to construct the default matcher identifier 

667 __qualname__: str 

668 

669 

670class _MatcherAPIv1Total(_MatcherAPIv1Base, Protocol): 

671 #: API version 

672 matcher_api_version: Optional[Literal[1]] 

673 

674 def __call__(self, text: str) -> list[str]: 

675 """Call signature.""" 

676 ... 

677 

678 

679#: Protocol describing Matcher API v1. 

680MatcherAPIv1: TypeAlias = Union[_MatcherAPIv1Base, _MatcherAPIv1Total] 

681 

682 

683class MatcherAPIv2(Protocol): 

684 """Protocol describing Matcher API v2.""" 

685 

686 #: API version 

687 matcher_api_version: Literal[2] = 2 

688 

689 def __call__(self, context: CompletionContext) -> MatcherResult: 

690 """Call signature.""" 

691 ... 

692 

693 #: Used to construct the default matcher identifier 

694 __qualname__: str 

695 

696 

697Matcher: TypeAlias = Union[MatcherAPIv1, MatcherAPIv2] 

698 

699 

700def _is_matcher_v1(matcher: Matcher) -> TypeGuard[MatcherAPIv1]: 

701 api_version = _get_matcher_api_version(matcher) 

702 return api_version == 1 

703 

704 

705def _is_matcher_v2(matcher: Matcher) -> TypeGuard[MatcherAPIv2]: 

706 api_version = _get_matcher_api_version(matcher) 

707 return api_version == 2 

708 

709 

710def _is_sizable(value: Any) -> TypeGuard[Sized]: 

711 """Determines whether objects is sizable""" 

712 return hasattr(value, "__len__") 

713 

714 

715def _is_iterator(value: Any) -> TypeGuard[Iterator]: 

716 """Determines whether objects is sizable""" 

717 return hasattr(value, "__next__") 

718 

719 

720def has_any_completions(result: MatcherResult) -> bool: 

721 """Check if any result includes any completions.""" 

722 completions = result["completions"] 

723 if _is_sizable(completions): 

724 return len(completions) != 0 

725 if _is_iterator(completions): 

726 try: 

727 old_iterator = completions 

728 first = next(old_iterator) 

729 result["completions"] = cast( 

730 Iterator[SimpleCompletion], 

731 itertools.chain([first], old_iterator), 

732 ) 

733 return True 

734 except StopIteration: 

735 return False 

736 raise ValueError( 

737 "Completions returned by matcher need to be an Iterator or a Sizable" 

738 ) 

739 

740 

741def completion_matcher( 

742 *, 

743 priority: Optional[float] = None, 

744 identifier: Optional[str] = None, 

745 api_version: int = 1, 

746) -> Callable[[Matcher], Matcher]: 

747 """Adds attributes describing the matcher. 

748 

749 Parameters 

750 ---------- 

751 priority : Optional[float] 

752 The priority of the matcher, determines the order of execution of matchers. 

753 Higher priority means that the matcher will be executed first. Defaults to 0. 

    identifier : Optional[str]
        identifier of the matcher allowing users to modify the behaviour via traitlets,
        and also used for debugging (will be passed as ``origin`` with the completions).

        Defaults to the matcher function's ``__qualname__`` (for example,
        ``IPCompleter.file_matcher`` for the built-in matcher defined
        as a ``file_matcher`` method of the ``IPCompleter`` class).
    api_version : int
        version of the Matcher API used by this matcher.
        Currently supported values are 1 and 2.
        Defaults to 1.

765 """ 

766 

767 def wrapper(func: Matcher): 

768 func.matcher_priority = priority or 0 # type: ignore 

769 func.matcher_identifier = identifier or func.__qualname__ # type: ignore 

770 func.matcher_api_version = api_version # type: ignore 

771 if TYPE_CHECKING: 

772 if api_version == 1: 

773 func = cast(MatcherAPIv1, func) 

774 elif api_version == 2: 

775 func = cast(MatcherAPIv2, func) 

776 return func 

777 

778 return wrapper 

779 

780 

781def _get_matcher_priority(matcher: Matcher): 

782 return getattr(matcher, "matcher_priority", 0) 

783 

784 

785def _get_matcher_id(matcher: Matcher): 

786 return getattr(matcher, "matcher_identifier", matcher.__qualname__) 

787 

788 

789def _get_matcher_api_version(matcher): 

790 return getattr(matcher, "matcher_api_version", 1) 

791 

792 

793context_matcher = partial(completion_matcher, api_version=2) 

794 

795 

796_IC = Iterable[Completion] 

797 

798 

799def _deduplicate_completions(text: str, completions: _IC)-> _IC: 

800 """ 

801 Deduplicate a set of completions. 

802 

803 .. warning:: 

804 

805 Unstable 

806 

807 This function is unstable, API may change without warning. 

808 

809 Parameters 

810 ---------- 

811 text : str 

812 text that should be completed. 

813 completions : Iterator[Completion] 

814 iterator over the completions to deduplicate 

815 

816 Yields 

817 ------ 

818 `Completions` objects 

        Completions coming from multiple sources may be different but end up having
        the same effect when applied to ``text``. If this is the case, this will
        consider completions as equal and only emit the first encountered.
        Not folded into `completions()` yet for debugging purposes, and to detect when
        the IPython completer does return things that Jedi does not, but should be
        at some point.

825 """ 

826 completions = list(completions) 

827 if not completions: 

828 return 

829 

830 new_start = min(c.start for c in completions) 

831 new_end = max(c.end for c in completions) 

832 

833 seen = set() 

834 for c in completions: 

835 new_text = text[new_start:c.start] + c.text + text[c.end:new_end] 

836 if new_text not in seen: 

837 yield c 

838 seen.add(new_text) 

839 

840 

841def rectify_completions(text: str, completions: _IC, *, _debug: bool = False) -> _IC: 

842 """ 

843 Rectify a set of completions to all have the same ``start`` and ``end`` 

844 

845 .. warning:: 

846 

847 Unstable 

848 

849 This function is unstable, API may change without warning. 

        It will also raise unless used in the proper context manager.

851 

852 Parameters 

853 ---------- 

854 text : str 

855 text that should be completed. 

856 completions : Iterator[Completion] 

857 iterator over the completions to rectify 

858 _debug : bool 

859 Log failed completion 

860 

861 Notes 

862 ----- 

    :any:`jedi.api.classes.Completion` objects returned by Jedi may not have the
    same start and end, even though the Jupyter Protocol requires them to. This
    will readjust the completions to have the same ``start`` and ``end`` by
    padding both extremities with surrounding text.

    During stabilisation this should support a ``_debug`` option to log which
    completions are returned by the IPython completer but not found by Jedi, in
    order to make upstream bug reports.
    """

872 warnings.warn("`rectify_completions` is a provisional API (as of IPython 6.0). " 

873 "It may change without warnings. " 

874 "Use in corresponding context manager.", 

875 category=ProvisionalCompleterWarning, stacklevel=2) 

876 

877 completions = list(completions) 

878 if not completions: 

879 return 

880 starts = (c.start for c in completions) 

881 ends = (c.end for c in completions) 

882 

883 new_start = min(starts) 

884 new_end = max(ends) 

885 

886 seen_jedi = set() 

887 seen_python_matches = set() 

888 for c in completions: 

889 new_text = text[new_start:c.start] + c.text + text[c.end:new_end] 

890 if c._origin == 'jedi': 

891 seen_jedi.add(new_text) 

892 elif c._origin == "IPCompleter.python_matcher": 

893 seen_python_matches.add(new_text) 

894 yield Completion(new_start, new_end, new_text, type=c.type, _origin=c._origin, signature=c.signature) 

895 diff = seen_python_matches.difference(seen_jedi) 

896 if diff and _debug: 

897 print('IPython.python matches have extras:', diff) 

898 

899 

900if sys.platform == 'win32': 

901 DELIMS = ' \t\n`!@#$^&*()=+[{]}|;\'",<>?' 

902else: 

903 DELIMS = ' \t\n`!@#$^&*()=+[{]}\\|;:\'",<>?' 

904 

905GREEDY_DELIMS = ' =\r\n' 

906 

907 

908class CompletionSplitter: 

909 """An object to split an input line in a manner similar to readline. 

910 

911 By having our own implementation, we can expose readline-like completion in 

912 a uniform manner to all frontends. This object only needs to be given the 

913 line of text to be split and the cursor position on said line, and it 

914 returns the 'word' to be completed on at the cursor after splitting the 

915 entire line. 

916 

917 What characters are used as splitting delimiters can be controlled by 

918 setting the ``delims`` attribute (this is a property that internally 

919 automatically builds the necessary regular expression)""" 

920 

921 # Private interface 

922 

923 # A string of delimiter characters. The default value makes sense for 

924 # IPython's most typical usage patterns. 

925 _delims = DELIMS 

926 

927 # The expression (a normal string) to be compiled into a regular expression 

928 # for actual splitting. We store it as an attribute mostly for ease of 

929 # debugging, since this type of code can be so tricky to debug. 

930 _delim_expr = None 

931 

932 # The regular expression that does the actual splitting 

933 _delim_re = None 

934 

935 def __init__(self, delims=None): 

936 delims = CompletionSplitter._delims if delims is None else delims 

937 self.delims = delims 

938 

939 @property 

940 def delims(self): 

941 """Return the string of delimiter characters.""" 

942 return self._delims 

943 

944 @delims.setter 

945 def delims(self, delims): 

946 """Set the delimiters for line splitting.""" 

947 expr = '[' + ''.join('\\'+ c for c in delims) + ']' 

948 self._delim_re = re.compile(expr) 

949 self._delims = delims 

950 self._delim_expr = expr 

951 

952 def split_line(self, line, cursor_pos=None): 

953 """Split a line of text with a cursor at the given position. 

954 """ 

955 cut_line = line if cursor_pos is None else line[:cursor_pos] 

956 return self._delim_re.split(cut_line)[-1] 

957 

958 

959class Completer(Configurable): 

960 

961 greedy = Bool( 

962 False, 

963 help="""Activate greedy completion. 

964 

965 .. deprecated:: 8.8 

966 Use :std:configtrait:`Completer.evaluation` and :std:configtrait:`Completer.auto_close_dict_keys` instead. 

967 

968 When enabled in IPython 8.8 or newer, changes configuration as follows: 

969 

970 - ``Completer.evaluation = 'unsafe'`` 

971 - ``Completer.auto_close_dict_keys = True`` 

972 """, 

973 ).tag(config=True) 

974 

975 evaluation = Enum( 

976 ("forbidden", "minimal", "limited", "unsafe", "dangerous"), 

977 default_value="limited", 

978 help="""Policy for code evaluation under completion. 

979 

        Successive options allow enabling more eager evaluation for better

981 completion suggestions, including for nested dictionaries, nested lists, 

982 or even results of function calls. 

983 Setting ``unsafe`` or higher can lead to evaluation of arbitrary user 

984 code on :kbd:`Tab` with potentially unwanted or dangerous side effects. 

985 

986 Allowed values are: 

987 

988 - ``forbidden``: no evaluation of code is permitted, 

989 - ``minimal``: evaluation of literals and access to built-in namespace; 

990 no item/attribute evaluation, no access to locals/globals, 

991 no evaluation of any operations or comparisons. 

992 - ``limited``: access to all namespaces, evaluation of hard-coded methods 

993 (for example: :any:`dict.keys`, :any:`object.__getattr__`, 

994 :any:`object.__getitem__`) on allow-listed objects (for example: 

995 :any:`dict`, :any:`list`, :any:`tuple`, ``pandas.Series``), 

996 - ``unsafe``: evaluation of all methods and function calls but not of 

997 syntax with side-effects like `del x`, 

998 - ``dangerous``: completely arbitrary evaluation; does not support auto-import. 

999 

1000 To override specific elements of the policy, you can use ``policy_overrides`` trait. 

1001 """, 

1002 ).tag(config=True) 

1003 

1004 use_jedi = Bool(default_value=JEDI_INSTALLED, 

1005 help="Experimental: Use Jedi to generate autocompletions. " 

1006 "Default to True if jedi is installed.").tag(config=True) 

1007 

1008 jedi_compute_type_timeout = Int(default_value=400, 

1009 help="""Experimental: restrict time (in milliseconds) during which Jedi can compute types. 

        Set to 0 to stop computing types. A non-zero value lower than 100ms may hurt
        performance by preventing jedi from building its cache.

1012 """).tag(config=True) 

1013 

1014 debug = Bool(default_value=False, 

1015 help='Enable debug for the Completer. Mostly print extra ' 

1016 'information for experimental jedi integration.')\ 

1017 .tag(config=True) 

1018 

1019 backslash_combining_completions = Bool(True, 

1020 help="Enable unicode completions, e.g. \\alpha<tab> . " 

1021 "Includes completion of latex commands, unicode names, and expanding " 

1022 "unicode characters back to latex commands.").tag(config=True) 

1023 

1024 auto_close_dict_keys = Bool( 

1025 False, 

1026 help=""" 

1027 Enable auto-closing dictionary keys. 

1028 

1029 When enabled string keys will be suffixed with a final quote 

1030 (matching the opening quote), tuple keys will also receive a 

1031 separating comma if needed, and keys which are final will 

1032 receive a closing bracket (``]``). 

1033 """, 

1034 ).tag(config=True) 

1035 

1036 policy_overrides = DictTrait( 

1037 default_value={}, 

1038 key_trait=Unicode(), 

1039 help="""Overrides for policy evaluation. 

1040 

1041 For example, to enable auto-import on completion specify: 

1042 

1043 .. code-block:: 

1044 

1045 ipython --Completer.policy_overrides='{"allow_auto_import": True}' --Completer.use_jedi=False 

1046 

1047 """, 

1048 ).tag(config=True) 

1049 

1050 @observe("evaluation") 

1051 def _evaluation_changed(self, _change): 

1052 _validate_policy_overrides( 

1053 policy_name=self.evaluation, policy_overrides=self.policy_overrides 

1054 ) 

1055 

1056 @observe("policy_overrides") 

1057 def _policy_overrides_changed(self, _change): 

1058 _validate_policy_overrides( 

1059 policy_name=self.evaluation, policy_overrides=self.policy_overrides 

1060 ) 

1061 

1062 auto_import_method = DottedObjectName( 

1063 default_value="importlib.import_module", 

1064 allow_none=True, 

1065 help="""\ 

1066 Provisional: 

1067 This is a provisional API in IPython 9.3, it may change without warnings. 

1068 

1069 A fully qualified path to an auto-import method for use by completer. 

1070 The function should take a single string and return `ModuleType` and 

1071 can raise `ImportError` exception if module is not found. 

1072 

1073 The default auto-import implementation does not populate the user namespace with the imported module. 

1074 """, 

1075 ).tag(config=True) 

1076 

1077 def __init__(self, namespace=None, global_namespace=None, **kwargs): 

1078 """Create a new completer for the command line. 

1079 

1080 Completer(namespace=ns, global_namespace=ns2) -> completer instance. 

1081 

1082 If unspecified, the default namespace where completions are performed 

1083 is __main__ (technically, __main__.__dict__). Namespaces should be 

1084 given as dictionaries. 

1085 

1086 An optional second namespace can be given. This allows the completer 

1087 to handle cases where both the local and global scopes need to be 

1088 distinguished. 

1089 """ 

1090 

1091 # Don't bind to namespace quite yet, but flag whether the user wants a 

1092 # specific namespace or to use __main__.__dict__. This will allow us 

1093 # to bind to __main__.__dict__ at completion time, not now. 

1094 if namespace is None: 

1095 self.use_main_ns = True 

1096 else: 

1097 self.use_main_ns = False 

1098 self.namespace = namespace 

1099 

1100 # The global namespace, if given, can be bound directly 

1101 if global_namespace is None: 

1102 self.global_namespace = {} 

1103 else: 

1104 self.global_namespace = global_namespace 

1105 

1106 self.custom_matchers = [] 

1107 

1108 super(Completer, self).__init__(**kwargs) 

1109 

1110 def complete(self, text, state): 

1111 """Return the next possible completion for 'text'. 

1112 

1113 This is called successively with state == 0, 1, 2, ... until it 

1114 returns None. The completion should begin with 'text'. 

1115 

1116 """ 

1117 if self.use_main_ns: 

1118 self.namespace = __main__.__dict__ 

1119 

1120 if state == 0: 

1121 if "." in text: 

1122 self.matches = self.attr_matches(text) 

1123 else: 

1124 self.matches = self.global_matches(text) 

1125 try: 

1126 return self.matches[state] 

1127 except IndexError: 

1128 return None 

1129 

1130 def global_matches(self, text): 

1131 """Compute matches when text is a simple name. 

1132 

1133 Return a list of all keywords, built-in functions and names currently 

1134 defined in self.namespace or self.global_namespace that match. 

1135 

1136 """ 

1137 matches = [] 

1138 match_append = matches.append 

1139 n = len(text) 

1140 for lst in [ 

1141 keyword.kwlist, 

1142 builtin_mod.__dict__.keys(), 

1143 list(self.namespace.keys()), 

1144 list(self.global_namespace.keys()), 

1145 ]: 

1146 for word in lst: 

1147 if word[:n] == text and word != "__builtins__": 

1148 match_append(word) 

1149 
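        # Additionally offer abbreviated snake_case matches: typing the first
        # letter of each underscore-separated part (e.g. "f_b") can complete
        # to the full name (e.g. "foo_bar").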

1150 snake_case_re = re.compile(r"[^_]+(_[^_]+)+?\Z") 

1151 for lst in [list(self.namespace.keys()), list(self.global_namespace.keys())]: 

1152 shortened = { 

1153 "_".join([sub[0] for sub in word.split("_")]): word 

1154 for word in lst 

1155 if snake_case_re.match(word) 

1156 } 

1157 for word in shortened.keys(): 

1158 if word[:n] == text and word != "__builtins__": 

1159 match_append(shortened[word]) 

1160 return matches 

1161 

1162 def attr_matches(self, text): 

1163 """Compute matches when text contains a dot. 

1164 

1165 Assuming the text is of the form NAME.NAME....[NAME], and is 

1166 evaluatable in self.namespace or self.global_namespace, it will be 

1167 evaluated and its attributes (as revealed by dir()) are used as 

1168 possible completions. (For class instances, class members are 

1169 also considered.) 

1170 

1171 WARNING: this can still invoke arbitrary C code, if an object 

1172 with a __getattr__ hook is evaluated. 

1173 

1174 """ 

1175 return self._attr_matches(text)[0] 

1176 

    # we do simple attribute matching with normal identifiers.

1178 _ATTR_MATCH_RE = re.compile(r"(.+)\.(\w*)$") 

1179 

1180 def _strip_code_before_operator(self, code: str) -> str: 
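        """Drop everything before the last top-level operator.

        For example (illustrative), ``"a + b.c"`` is reduced to ``"b.c"`` so that
        attribute completion evaluates only the trailing expression; code without
        a top-level operator is returned unchanged. Parentheses, brackets and
        braces are tracked so operators nested inside a call or index do not
        count as top-level.
        """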

1181 o_parens = {"(", "[", "{"} 

1182 c_parens = {")", "]", "}"} 

1183 

1184 # Dry-run tokenize to catch errors 

1185 try: 

1186 _ = list(tokenize.generate_tokens(iter(code.splitlines()).__next__)) 

1187 except tokenize.TokenError: 

1188 # Try trimming the expression and retrying 

1189 trimmed_code = self._trim_expr(code) 

1190 try: 

1191 _ = list( 

1192 tokenize.generate_tokens(iter(trimmed_code.splitlines()).__next__) 

1193 ) 

1194 code = trimmed_code 

1195 except tokenize.TokenError: 

1196 return code 

1197 

1198 tokens = _parse_tokens(code) 

1199 encountered_operator = False 

1200 after_operator = [] 

1201 nesting_level = 0 

1202 

1203 for t in tokens: 

1204 if t.type == tokenize.OP: 

1205 if t.string in o_parens: 

1206 nesting_level += 1 

1207 elif t.string in c_parens: 

1208 nesting_level -= 1 

1209 elif t.string != "." and nesting_level == 0: 

1210 encountered_operator = True 

1211 after_operator = [] 

1212 continue 

1213 

1214 if encountered_operator: 

1215 after_operator.append(t.string) 

1216 

1217 if encountered_operator: 

1218 return "".join(after_operator) 

1219 else: 

1220 return code 

1221 

1222 def _attr_matches( 

1223 self, text: str, include_prefix: bool = True 

1224 ) -> tuple[Sequence[str], str]: 

1225 m2 = self._ATTR_MATCH_RE.match(text) 

1226 if not m2: 

1227 return [], "" 

1228 expr, attr = m2.group(1, 2) 

1229 try: 

1230 expr = self._strip_code_before_operator(expr) 

1231 except tokenize.TokenError: 

1232 pass 

1233 

1234 obj = self._evaluate_expr(expr) 

1235 if obj is not_found: 

1236 return [], "" 

1237 

1238 if self.limit_to__all__ and hasattr(obj, '__all__'): 

1239 words = get__all__entries(obj) 

1240 else: 

1241 words = dir2(obj) 

1242 

1243 try: 

1244 words = generics.complete_object(obj, words) 

1245 except TryNext: 

1246 pass 

1247 except AssertionError: 

1248 raise 

1249 except Exception: 

1250 # Silence errors from completion function 

1251 pass 

1252 # Build match list to return 

1253 n = len(attr) 

1254 

1255 # Note: ideally we would just return words here and the prefix 

1256 # reconciliator would know that we intend to append to rather than 

1257 # replace the input text; this requires refactoring to return range 

1258 # which ought to be replaced (as does jedi). 

1259 if include_prefix: 

1260 tokens = _parse_tokens(expr) 

1261 rev_tokens = reversed(tokens) 

1262 skip_over = {tokenize.ENDMARKER, tokenize.NEWLINE} 

1263 name_turn = True 

1264 

1265 parts = [] 

1266 for token in rev_tokens: 

1267 if token.type in skip_over: 

1268 continue 

1269 if token.type == tokenize.NAME and name_turn: 

1270 parts.append(token.string) 

1271 name_turn = False 

1272 elif ( 

1273 token.type == tokenize.OP and token.string == "." and not name_turn 

1274 ): 

1275 parts.append(token.string) 

1276 name_turn = True 

1277 else: 

1278 # short-circuit if not empty nor name token 

1279 break 

1280 

1281 prefix_after_space = "".join(reversed(parts)) 

1282 else: 

1283 prefix_after_space = "" 

1284 

1285 return ( 

1286 ["%s.%s" % (prefix_after_space, w) for w in words if w[:n] == attr], 

1287 "." + attr, 

1288 ) 

1289 

1290 def _trim_expr(self, code: str) -> str: 

        """
        Trim the code until it is a valid expression and not a tuple;

        return the trimmed expression for guarded_eval.
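
        For example (illustrative), trimming the unfinished call ``fun(a,b``
        yields ``"b"``, since ``a,b`` alone would parse as an implicit tuple::

            completer._trim_expr("fun(a,b")   # -> "b"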

        """

1296 while code: 

1297 code = code[1:] 

1298 try: 

1299 res = ast.parse(code) 

1300 except SyntaxError: 

1301 continue 

1302 

1303 assert res is not None 

1304 if len(res.body) != 1: 

1305 continue 

1306 expr = res.body[0].value 

1307 if isinstance(expr, ast.Tuple) and not code[-1] == ")": 

1308 # we skip implicit tuple, like when trimming `fun(a,b`<completion> 

1309 # as `a,b` would be a tuple, and we actually expect to get only `b` 

1310 continue 

1311 return code 

1312 return "" 

1313 

1314 def _evaluate_expr(self, expr): 

1315 obj = not_found 

1316 done = False 

1317 while not done and expr: 

1318 try: 

1319 obj = guarded_eval( 

1320 expr, 

1321 EvaluationContext( 

1322 globals=self.global_namespace, 

1323 locals=self.namespace, 

1324 evaluation=self.evaluation, 

1325 auto_import=self._auto_import, 

1326 policy_overrides=self.policy_overrides, 

1327 ), 

1328 ) 

1329 done = True 

1330 except (SyntaxError, TypeError): 

1331 # TypeError can show up with something like `+ d` 

1332 # where `d` is a dictionary. 

1333 

1334 # trim the expression to remove any invalid prefix 

1335 # e.g. user starts `(d[`, so we get `expr = '(d'`, 

1336 # where parenthesis is not closed. 

1337 # TODO: make this faster by reusing parts of the computation? 

1338 expr = self._trim_expr(expr) 

1339 except Exception as e: 

1340 if self.debug: 

1341 print("Evaluation exception", e) 

1342 done = True 

1343 return obj 

1344 

1345 @property 

1346 def _auto_import(self): 

1347 if self.auto_import_method is None: 

1348 return None 

1349 if not hasattr(self, "_auto_import_func"): 

1350 self._auto_import_func = import_item(self.auto_import_method) 

1351 return self._auto_import_func 

1352 

1353 

1354def get__all__entries(obj): 

1355 """returns the strings in the __all__ attribute""" 

1356 try: 

1357 words = getattr(obj, '__all__') 

1358 except Exception: 

1359 return [] 

1360 

1361 return [w for w in words if isinstance(w, str)] 

1362 

1363 

1364class _DictKeyState(enum.Flag): 

1365 """Represent state of the key match in context of other possible matches. 

1366 

1367 - given `d1 = {'a': 1}` completion on `d1['<tab>` will yield `{'a': END_OF_ITEM}` as there is no tuple. 

    - given `d2 = {('a', 'b'): 1}`: `d2['a', '<tab>` will yield `{'b': END_OF_TUPLE}` as there are no tuple members to add beyond `'b'`.
    - given `d3 = {('a', 'b'): 1}`: `d3['<tab>` will yield `{'a': IN_TUPLE}` as `'a'` can be added.
    - given `d4 = {'a': 1, ('a', 'b'): 2}`: `d4['<tab>` will yield `{'a': END_OF_ITEM | IN_TUPLE}`

1371 """ 

1372 

1373 BASELINE = 0 

1374 END_OF_ITEM = enum.auto() 

1375 END_OF_TUPLE = enum.auto() 

1376 IN_TUPLE = enum.auto() 

1377 

1378 

1379def _parse_tokens(c): 

1380 """Parse tokens even if there is an error.""" 

1381 tokens = [] 

1382 token_generator = tokenize.generate_tokens(iter(c.splitlines()).__next__) 

1383 while True: 

1384 try: 

1385 tokens.append(next(token_generator)) 

1386 except tokenize.TokenError: 

1387 return tokens 

1388 except StopIteration: 

1389 return tokens 

1390 

1391 

1392def _match_number_in_dict_key_prefix(prefix: str) -> Union[str, None]: 

1393 """Match any valid Python numeric literal in a prefix of dictionary keys. 

1394 

1395 References: 

1396 - https://docs.python.org/3/reference/lexical_analysis.html#numeric-literals 

1397 - https://docs.python.org/3/library/tokenize.html 

1398 """ 

1399 if prefix[-1].isspace(): 

1400 # if user typed a space we do not have anything to complete 

1401 # even if there was a valid number token before 

1402 return None 

1403 tokens = _parse_tokens(prefix) 

1404 rev_tokens = reversed(tokens) 

1405 skip_over = {tokenize.ENDMARKER, tokenize.NEWLINE} 

1406 number = None 

1407 for token in rev_tokens: 

1408 if token.type in skip_over: 

1409 continue 

1410 if number is None: 

1411 if token.type == tokenize.NUMBER: 

1412 number = token.string 

1413 continue 

1414 else: 

1415 # we did not match a number 

1416 return None 

1417 if token.type == tokenize.OP: 

1418 if token.string == ",": 

1419 break 

1420 if token.string in {"+", "-"}: 

1421 number = token.string + number 

1422 else: 

1423 return None 

1424 return number 

1425 

1426 

1427_INT_FORMATS = { 

1428 "0b": bin, 

1429 "0o": oct, 

1430 "0x": hex, 

1431} 

1432 

1433 

1434def match_dict_keys( 

1435 keys: list[Union[str, bytes, tuple[Union[str, bytes], ...]]], 

1436 prefix: str, 

1437 delims: str, 

1438 extra_prefix: Optional[tuple[Union[str, bytes], ...]] = None, 

1439) -> tuple[str, int, dict[str, _DictKeyState]]: 

1440 """Used by dict_key_matches, matching the prefix to a list of keys 

1441 

1442 Parameters 

1443 ---------- 

1444 keys 

1445 list of keys in dictionary currently being completed. 

1446 prefix 

1447 Part of the text already typed by the user. E.g. `mydict[b'fo` 

1448 delims 

1449 String of delimiters to consider when finding the current key. 

1450 extra_prefix : optional 

1451 Part of the text already typed in multi-key index cases. E.g. for 

1452 `mydict['foo', "bar", 'b`, this would be `('foo', 'bar')`. 

1453 

    Returns
    -------
    A tuple of three elements: ``quote``, ``token_start``, ``matched``, with
    ``quote`` being the quote that needs to be used to close the current string,
    ``token_start`` the position where the replacement should start occurring,
    and ``matched`` a dictionary mapping replacement/completion strings to values
    indicating their match state (a `_DictKeyState`).
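
    For example (illustrative), completing the prefix ``'fo`` against keys
    ``['foo', 'food', 'bar']``::

        match_dict_keys(['foo', 'food', 'bar'], "'fo", delims=' ')
        # -> ("'", 0, {"'foo": <END_OF_ITEM>, "'food": <END_OF_ITEM>})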

    """

1462 prefix_tuple = extra_prefix if extra_prefix else () 

1463 

1464 prefix_tuple_size = sum( 

1465 [ 

1466 # for pandas, do not count slices as taking space 

1467 not isinstance(k, slice) 

1468 for k in prefix_tuple 

1469 ] 

1470 ) 

1471 text_serializable_types = (str, bytes, int, float, slice) 

1472 

1473 def filter_prefix_tuple(key): 

1474 # Reject too short keys 

1475 if len(key) <= prefix_tuple_size: 

1476 return False 

1477 # Reject keys which cannot be serialised to text 

1478 for k in key: 

1479 if not isinstance(k, text_serializable_types): 

1480 return False 

1481 # Reject keys that do not match the prefix 

1482 for k, pt in zip(key, prefix_tuple): 

1483 if k != pt and not isinstance(pt, slice): 

1484 return False 

1485 # All checks passed! 

1486 return True 

1487 

1488 filtered_key_is_final: dict[Union[str, bytes, int, float], _DictKeyState] = ( 

1489 defaultdict(lambda: _DictKeyState.BASELINE) 

1490 ) 

1491 

1492 for k in keys: 

1493 # If at least one of the matches is not final, mark as undetermined. 

1494 # This can happen with `d = {111: 'b', (111, 222): 'a'}` where 

1495 # `111` appears final on first match but is not final on the second. 

1496 

1497 if isinstance(k, tuple): 

1498 if filter_prefix_tuple(k): 

1499 key_fragment = k[prefix_tuple_size] 

1500 filtered_key_is_final[key_fragment] |= ( 

1501 _DictKeyState.END_OF_TUPLE 

1502 if len(k) == prefix_tuple_size + 1 

1503 else _DictKeyState.IN_TUPLE 

1504 ) 

1505 elif prefix_tuple_size > 0: 

1506 # we are completing a tuple but this key is not a tuple, 

1507 # so we should ignore it 

1508 pass 

1509 else: 

1510 if isinstance(k, text_serializable_types): 

1511 filtered_key_is_final[k] |= _DictKeyState.END_OF_ITEM 

1512 

1513 filtered_keys = filtered_key_is_final.keys() 

1514 

1515 if not prefix: 

1516 return "", 0, {repr(k): v for k, v in filtered_key_is_final.items()} 

1517 

1518 quote_match = re.search("(?:\"|')", prefix) 

1519 is_user_prefix_numeric = False 

1520 

1521 if quote_match: 

1522 quote = quote_match.group() 

1523 valid_prefix = prefix + quote 

1524 try: 

1525 prefix_str = literal_eval(valid_prefix) 

1526 except Exception: 

1527 return "", 0, {} 

1528 else: 

1529 # If it does not look like a string, let's assume 

1530 # we are dealing with a number or variable. 

1531 number_match = _match_number_in_dict_key_prefix(prefix) 

1532 

1533 # We do not want the key matcher to suggest variable names so we yield: 

1534 if number_match is None: 

            # The alternative would be to assume that the user forgot the quote
            # and if the substring matches, suggest adding it at the start.

1537 return "", 0, {} 

1538 

1539 prefix_str = number_match 

1540 is_user_prefix_numeric = True 

1541 quote = "" 

1542 

1543 pattern = '[^' + ''.join('\\' + c for c in delims) + ']*$' 

1544 token_match = re.search(pattern, prefix, re.UNICODE) 

1545 assert token_match is not None # silence mypy 

1546 token_start = token_match.start() 

1547 token_prefix = token_match.group() 

1548 

1549 matched: dict[str, _DictKeyState] = {} 

1550 

1551 str_key: Union[str, bytes] 

1552 

1553 for key in filtered_keys: 

1554 if isinstance(key, (int, float)): 

1555 # User typed a number but this key is not a number. 

1556 if not is_user_prefix_numeric: 

1557 continue 

1558 str_key = str(key) 

1559 if isinstance(key, int): 

1560 int_base = prefix_str[:2].lower() 

1561 # if user typed integer using binary/oct/hex notation: 

1562 if int_base in _INT_FORMATS: 

1563 int_format = _INT_FORMATS[int_base] 

1564 str_key = int_format(key) 

1565 else: 

1566 # User typed a string but this key is a number. 

1567 if is_user_prefix_numeric: 

1568 continue 

1569 str_key = key 

1570 try: 

1571 if not str_key.startswith(prefix_str): 

1572 continue 

1573 except (AttributeError, TypeError, UnicodeError): 

1574 # Python 3+ TypeError on b'a'.startswith('a') or vice-versa 

1575 continue 

1576 

1577 # reformat remainder of key to begin with prefix 

1578 rem = str_key[len(prefix_str) :] 

1579 # force repr wrapped in ' 

1580 rem_repr = repr(rem + '"') if isinstance(rem, str) else repr(rem + b'"') 

1581 rem_repr = rem_repr[1 + rem_repr.index("'"):-2] 

1582 if quote == '"': 

1583 # The entered prefix is quoted with ", 

1584 # but the match is quoted with '. 

1585 # A contained " hence needs escaping for comparison: 

1586 rem_repr = rem_repr.replace('"', '\\"') 

1587 

1588 # then reinsert prefix from start of token 

1589 match = "%s%s" % (token_prefix, rem_repr) 

1590 

1591 matched[match] = filtered_key_is_final[key] 

1592 return quote, token_start, matched 

1593 

1594 

1595def cursor_to_position(text:str, line:int, column:int)->int: 

1596 """ 

1597 Convert the (line,column) position of the cursor in text to an offset in a 

1598 string. 

1599 

1600 Parameters 

1601 ---------- 

1602 text : str 

1603 The text in which to calculate the cursor offset 

1604 line : int 

1605 Line of the cursor; 0-indexed 

1606 column : int 

1607 Column of the cursor 0-indexed 

1608 

1609 Returns 

1610 ------- 

1611 Position of the cursor in ``text``, 0-indexed. 

1612 

1613 See Also 

1614 -------- 

1615 position_to_cursor : reciprocal of this function 
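
    Examples
    --------
    For ``text`` made of the two lines ``"ab"`` and ``"cd"``, the cursor at
    line 1, column 1 (just before ``"d"``) maps to offset
    ``len("ab") + 1 + 1 == 4``.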

1616 

1617 """ 

1618 lines = text.split('\n') 

1619 assert line <= len(lines), '{} <= {}'.format(str(line), str(len(lines))) 

1620 

1621 return sum(len(line) + 1 for line in lines[:line]) + column 

1622 

1623 

1624def position_to_cursor(text: str, offset: int) -> tuple[int, int]: 

1625 """ 

1626 Convert the position of the cursor in text (0 indexed) to a line 

1627 number(0-indexed) and a column number (0-indexed) pair 

1628 

1629 Position should be a valid position in ``text``. 

1630 

1631 Parameters 

1632 ---------- 

1633 text : str 

1634 The text in which to calculate the cursor offset 

1635 offset : int 

1636 Position of the cursor in ``text``, 0-indexed. 

1637 

1638 Returns 

1639 ------- 

1640 (line, column) : (int, int) 

1641 Line of the cursor; 0-indexed, column of the cursor 0-indexed 

1642 

1643 See Also 

1644 -------- 

1645 cursor_to_position : reciprocal of this function 

1646 

1647 """ 

1648 

1649 assert 0 <= offset <= len(text) , "0 <= %s <= %s" % (offset , len(text)) 

1650 

1651 before = text[:offset] 

1652 blines = before.split('\n') # note: splitlines() trims a trailing \n, split('\n') does not 

1653 line = before.count('\n') 

1654 col = len(blines[-1]) 

1655 return line, col 
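# --- Illustrative example, not part of the original source ---
# position_to_cursor() is the inverse of cursor_to_position(): the line is the
# number of newlines before the offset, the column is the length of the last
# partial line. Round-tripping should give the offset back:
#
#     text = "ab\ncd"
#     position_to_cursor(text, 3)                              # -> (1, 0)
#     cursor_to_position(text, *position_to_cursor(text, 3))   # -> 3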

1656 

1657 

1658def _safe_isinstance(obj, module, class_name, *attrs): 

1659 """Checks if obj is an instance of module.class_name if loaded 

1660 """ 

1661 if module in sys.modules: 

1662 m = sys.modules[module] 

1663 for attr in [class_name, *attrs]: 

1664 m = getattr(m, attr) 

1665 return isinstance(obj, m) 
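# --- Illustrative example, not part of the original source ---
# _safe_isinstance() only looks the class up when its module is already in
# sys.modules, so completion never imports anything as a side effect:
#
#     _safe_isinstance({}, "pandas", "DataFrame")
#     # -> False when pandas is loaded; None (falsy) when it was never imported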

1666 

1667 

1668@context_matcher() 

1669def back_unicode_name_matcher(context: CompletionContext): 

1670 """Match Unicode characters back to Unicode name 

1671 

1672 Same as :any:`back_unicode_name_matches`, but adapted to the new Matcher API. 

1673 """ 

1674 fragment, matches = back_unicode_name_matches(context.text_until_cursor) 

1675 return _convert_matcher_v1_result_to_v2( 

1676 matches, type="unicode", fragment=fragment, suppress_if_matches=True 

1677 ) 

1678 

1679 

1680def back_unicode_name_matches(text: str) -> tuple[str, Sequence[str]]: 

1681 """Match Unicode characters back to Unicode name 

1682 

1683 This does ``☃`` -> ``\\snowman`` 

1684 

1685 Note that snowman is not a valid Python 3 combining character, but it will still be expanded. 

1686 It will not, however, be recombined back into the snowman character by the completion machinery. 

1687 

1688 Standard escape sequences such as \\n or \\b are not back-completed either. 

1689 

1690 .. deprecated:: 8.6 

1691 You can use :meth:`back_unicode_name_matcher` instead. 

1692 

1693 Returns 

1694 ======= 

1695 

1696 Return a tuple with two elements: 

1697 

1698 - The Unicode character that was matched (preceded by a backslash), or an 

1699 empty string, 

1700 - a sequence of length one containing the name of the matched Unicode 

1701 character, preceded by a backslash, or an empty sequence if there is no match. 

1702 """ 

1703 if len(text)<2: 

1704 return '', () 

1705 maybe_slash = text[-2] 

1706 if maybe_slash != '\\': 

1707 return '', () 

1708 

1709 char = text[-1] 

1710 # no expand on quote for completion in strings. 

1711 # nor backcomplete standard ascii keys 

1712 if char in string.ascii_letters or char in ('"',"'"): 

1713 return '', () 

1714 try : 

1715 unic = unicodedata.name(char) 

1716 return '\\'+char,('\\'+unic,) 

1717 except KeyError: 

1718 pass 

1719 return '', () 
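# --- Illustrative example, not part of the original source ---
# back_unicode_name_matches() only inspects the last two characters: a
# backslash followed by a non-ASCII character expands to its Unicode name.
#
#     back_unicode_name_matches("foo \\☃")   # -> ('\\☃', ('\\SNOWMAN',))
#     back_unicode_name_matches("foo \\n")   # -> ('', ())   ASCII letters are skipped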

1720 

1721 

1722@context_matcher() 

1723def back_latex_name_matcher(context: CompletionContext) -> SimpleMatcherResult: 

1724 """Match latex characters back to unicode name 

1725 

1726 This does ``\\ℵ`` -> ``\\aleph`` 

1727 """ 

1728 

1729 text = context.text_until_cursor 

1730 no_match = { 

1731 "completions": [], 

1732 "suppress": False, 

1733 } 

1734 

1735 if len(text)<2: 

1736 return no_match 

1737 maybe_slash = text[-2] 

1738 if maybe_slash != '\\': 

1739 return no_match 

1740 

1741 char = text[-1] 

1742 # no expand on quote for completion in strings. 

1743 # nor backcomplete standard ascii keys 

1744 if char in string.ascii_letters or char in ('"',"'"): 

1745 return no_match 

1746 try : 

1747 latex = reverse_latex_symbol[char] 

1748 # the '\\' replaces the backslash as well 

1749 return { 

1750 "completions": [SimpleCompletion(text=latex, type="latex")], 

1751 "suppress": True, 

1752 "matched_fragment": "\\" + char, 

1753 } 

1754 except KeyError: 

1755 pass 

1756 

1757 return no_match 

1758 

1759def _formatparamchildren(parameter) -> str: 

1760 """ 

1761 Get parameter name and value from Jedi Private API 

1762 

1763 Jedi does not expose a simple way to get `param=value` from its API. 

1764 

1765 Parameters 

1766 ---------- 

1767 parameter 

1768 Jedi's function `Param` 

1769 

1770 Returns 

1771 ------- 

1772 A string like 'a', 'b=1', '*args', '**kwargs' 

1773 

1774 """ 

1775 description = parameter.description 

1776 if not description.startswith('param '): 

1777 raise ValueError('Jedi function parameter description has changed format. ' 

1778 'Expected "param ...", found %r.' % description) 

1779 return description[6:] 

1780 

1781def _make_signature(completion)-> str: 

1782 """ 

1783 Make the signature from a jedi completion 

1784 

1785 Parameters 

1786 ---------- 

1787 completion : jedi.Completion 

1788 object does not complete a function type 

1789 

1790 Returns 

1791 ------- 

1792 a string consisting of the function signature, with the parentheses but 

1793 without the function name. example: 

1794 `(a, *args, b=1, **kwargs)` 

1795 

1796 """ 

1797 

1798 # it looks like this might work on jedi 0.17 

1799 if hasattr(completion, 'get_signatures'): 

1800 signatures = completion.get_signatures() 

1801 if not signatures: 

1802 return '(?)' 

1803 

1804 c0 = completion.get_signatures()[0] 

1805 return '('+c0.to_string().split('(', maxsplit=1)[1] 

1806 

1807 return '(%s)'% ', '.join([f for f in (_formatparamchildren(p) for signature in completion.get_signatures() 

1808 for p in signature.defined_names()) if f]) 

1809 

1810 

1811_CompleteResult = dict[str, MatcherResult] 

1812 

1813 

1814DICT_MATCHER_REGEX = re.compile( 

1815 r"""(?x) 

1816( # match dict-referring - or any get item object - expression 

1817 .+ 

1818) 

1819\[ # open bracket 

1820\s* # and optional whitespace 

1821# Capture any number of serializable objects (e.g. "a", "b", 'c') 

1822# and slices 

1823((?:(?: 

1824 (?: # closed string 

1825 [uUbB]? # string prefix (r not handled) 

1826 (?: 

1827 '(?:[^']|(?<!\\)\\')*' 

1828 | 

1829 "(?:[^"]|(?<!\\)\\")*" 

1830 ) 

1831 ) 

1832 | 

1833 # capture integers and slices 

1834 (?:[-+]?\d+)?(?::(?:[-+]?\d+)?){0,2} 

1835 | 

1836 # integer in bin/hex/oct notation 

1837 0[bBxXoO]_?(?:\w|\d)+ 

1838 ) 

1839 \s*,\s* 

1840)*) 

1841((?: 

1842 (?: # unclosed string 

1843 [uUbB]? # string prefix (r not handled) 

1844 (?: 

1845 '(?:[^']|(?<!\\)\\')* 

1846 | 

1847 "(?:[^"]|(?<!\\)\\")* 

1848 ) 

1849 ) 

1850 | 

1851 # unfinished integer 

1852 (?:[-+]?\d+) 

1853 | 

1854 # integer in bin/hex/oct notation 

1855 0[bBxXoO]_?(?:\w|\d)+ 

1856 ) 

1857)? 

1858$ 

1859""" 

1860) 
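# --- Illustrative example, not part of the original source ---
# DICT_MATCHER_REGEX splits the text before the cursor into the subscripted
# expression, any already-closed tuple keys, and the partially typed key.
# A rough sketch of the groups it is expected to capture:
#
#     DICT_MATCHER_REGEX.search("data['pre").groups()
#     # -> ("data", "", "'pre")
#     DICT_MATCHER_REGEX.search("data['a', 'b").groups()
#     # -> ("data", "'a', ", "'b")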

1861 

1862 

1863def _convert_matcher_v1_result_to_v2_no_no( 

1864 matches: Sequence[str], 

1865 type: str, 

1866) -> SimpleMatcherResult: 

1867 """same as _convert_matcher_v1_result_to_v2 but fragment=None, and suppress_if_matches is False by construction""" 

1868 return SimpleMatcherResult( 

1869 completions=[SimpleCompletion(text=match, type=type) for match in matches], 

1870 suppress=False, 

1871 ) 

1872 

1873 

1874def _convert_matcher_v1_result_to_v2( 

1875 matches: Sequence[str], 

1876 type: str, 

1877 fragment: Optional[str] = None, 

1878 suppress_if_matches: bool = False, 

1879) -> SimpleMatcherResult: 

1880 """Utility to help with transition""" 

1881 result = { 

1882 "completions": [SimpleCompletion(text=match, type=type) for match in matches], 

1883 "suppress": (True if matches else False) if suppress_if_matches else False, 

1884 } 

1885 if fragment is not None: 

1886 result["matched_fragment"] = fragment 

1887 return cast(SimpleMatcherResult, result) 
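# --- Illustrative example, not part of the original source ---
# The v1->v2 helpers wrap the plain list of strings returned by legacy
# matchers into the dict-based SimpleMatcherResult of the new API:
#
#     _convert_matcher_v1_result_to_v2(["foo", "foobar"], type="magic", fragment="fo")
#     # -> {"completions": [SimpleCompletion(text="foo", type="magic"),
#     #                     SimpleCompletion(text="foobar", type="magic")],
#     #     "suppress": False,
#     #     "matched_fragment": "fo"}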

1888 

1889 

1890class IPCompleter(Completer): 

1891 """Extension of the completer class with IPython-specific features""" 

1892 

1893 @observe('greedy') 

1894 def _greedy_changed(self, change): 

1895 """update the splitter and readline delims when greedy is changed""" 

1896 if change["new"]: 

1897 self.evaluation = "unsafe" 

1898 self.auto_close_dict_keys = True 

1899 self.splitter.delims = GREEDY_DELIMS 

1900 else: 

1901 self.evaluation = "limited" 

1902 self.auto_close_dict_keys = False 

1903 self.splitter.delims = DELIMS 

1904 

1905 dict_keys_only = Bool( 

1906 False, 

1907 help=""" 

1908 Whether to show dict key matches only. 

1909 

1910 (disables all matchers except for `IPCompleter.dict_key_matcher`). 

1911 """, 

1912 ) 

1913 

1914 suppress_competing_matchers = UnionTrait( 

1915 [Bool(allow_none=True), DictTrait(Bool(None, allow_none=True))], 

1916 default_value=None, 

1917 help=""" 

1918 Whether to suppress completions from other *Matchers*. 

1919 

1920 When set to ``None`` (default) the matchers will attempt to auto-detect 

1921 whether suppression of other matchers is desirable. For example, at 

1922 the beginning of a line followed by `%` we expect a magic completion 

1923 to be the only applicable option, and after ``my_dict['`` we usually 

1924 expect a completion with an existing dictionary key. 

1925 

1926 If you want to disable this heuristic and see completions from all matchers, 

1927 set ``IPCompleter.suppress_competing_matchers = False``. 

1928 To disable the heuristic for specific matchers provide a dictionary mapping: 

1929 ``IPCompleter.suppress_competing_matchers = {'IPCompleter.dict_key_matcher': False}``. 

1930 

1931 Set ``IPCompleter.suppress_competing_matchers = True`` to limit 

1932 completions to the set of matchers with the highest priority; 

1933 this is equivalent to ``IPCompleter.merge_completions`` and 

1934 can be beneficial for performance, but will sometimes omit relevant 

1935 candidates from matchers further down the priority list. 

1936 """, 

1937 ).tag(config=True) 

1938 

1939 merge_completions = Bool( 

1940 True, 

1941 help="""Whether to merge completion results into a single list 

1942 

1943 If False, only the completion results from the first non-empty 

1944 completer will be returned. 

1945 

1946 As of version 8.6.0, setting the value to ``False`` is an alias for: 

1947 ``IPCompleter.suppress_competing_matchers = True``. 

1948 """, 

1949 ).tag(config=True) 

1950 

1951 disable_matchers = ListTrait( 

1952 Unicode(), 

1953 help="""List of matchers to disable. 

1954 

1955 The list should contain matcher identifiers (see :any:`completion_matcher`). 

1956 """, 

1957 ).tag(config=True) 

1958 

1959 omit__names = Enum( 

1960 (0, 1, 2), 

1961 default_value=2, 

1962 help="""Instruct the completer to omit private method names 

1963 

1964 Specifically, when completing on ``object.<tab>``. 

1965 

1966 When 2 [default]: all names that start with '_' will be excluded. 

1967 

1968 When 1: all 'magic' names (``__foo__``) will be excluded. 

1969 

1970 When 0: nothing will be excluded. 

1971 """ 

1972 ).tag(config=True) 

1973 limit_to__all__ = Bool(False, 

1974 help=""" 

1975 DEPRECATED as of version 5.0. 

1976 

1977 Instruct the completer to use __all__ for the completion 

1978 

1979 Specifically, when completing on ``object.<tab>``. 

1980 

1981 When True: only those names in obj.__all__ will be included. 

1982 

1983 When False [default]: the __all__ attribute is ignored 

1984 """, 

1985 ).tag(config=True) 

1986 

1987 profile_completions = Bool( 

1988 default_value=False, 

1989 help="If True, emit profiling data for completion subsystem using cProfile." 

1990 ).tag(config=True) 

1991 

1992 profiler_output_dir = Unicode( 

1993 default_value=".completion_profiles", 

1994 help="Template for path at which to output profile data for completions." 

1995 ).tag(config=True) 

1996 

1997 @observe('limit_to__all__') 

1998 def _limit_to_all_changed(self, change): 

1999 warnings.warn('`IPython.core.IPCompleter.limit_to__all__` configuration ' 

2000 'value has been deprecated since IPython 5.0, will be made to have ' 

2001 'no effect and then be removed in a future version of IPython.', 

2002 UserWarning) 

2003 

2004 def __init__( 

2005 self, shell=None, namespace=None, global_namespace=None, config=None, **kwargs 

2006 ): 

2007 """IPCompleter() -> completer 

2008 

2009 Return a completer object. 

2010 

2011 Parameters 

2012 ---------- 

2013 shell 

2014 a pointer to the ipython shell itself. This is needed 

2015 because this completer knows about magic functions, and those can 

2016 only be accessed via the ipython instance. 

2017 namespace : dict, optional 

2018 an optional dict where completions are performed. 

2019 global_namespace : dict, optional 

2020 secondary optional dict for completions, to 

2021 handle cases (such as IPython embedded inside functions) where 

2022 both Python scopes are visible. 

2023 config : Config 

2024 traitlet's config object 

2025 **kwargs 

2026 passed to super class unmodified. 

2027 """ 

2028 

2029 self.magic_escape = ESC_MAGIC 

2030 self.splitter = CompletionSplitter() 

2031 

2032 # _greedy_changed() depends on splitter and readline being defined: 

2033 super().__init__( 

2034 namespace=namespace, 

2035 global_namespace=global_namespace, 

2036 config=config, 

2037 **kwargs, 

2038 ) 

2039 

2040 # List where completion matches will be stored 

2041 self.matches = [] 

2042 self.shell = shell 

2043 # Regexp to split filenames with spaces in them 

2044 self.space_name_re = re.compile(r'([^\\] )') 

2045 # Hold a local ref. to glob.glob for speed 

2046 self.glob = glob.glob 

2047 

2048 # Determine if we are running on 'dumb' terminals, like (X)Emacs 

2049 # buffers, to avoid completion problems. 

2050 term = os.environ.get('TERM','xterm') 

2051 self.dumb_terminal = term in ['dumb','emacs'] 

2052 

2053 # Special handling of backslashes needed in win32 platforms 

2054 if sys.platform == "win32": 

2055 self.clean_glob = self._clean_glob_win32 

2056 else: 

2057 self.clean_glob = self._clean_glob 

2058 

2059 #regexp to parse docstring for function signature 

2060 self.docstring_sig_re = re.compile(r'^[\w|\s.]+\(([^)]*)\).*') 

2061 self.docstring_kwd_re = re.compile(r'[\s|\[]*(\w+)(?:\s*=\s*.*)') 

2062 #use this if positional argument name is also needed 

2063 #= re.compile(r'[\s|\[]*(\w+)(?:\s*=?\s*.*)') 

2064 

2065 self.magic_arg_matchers = [ 

2066 self.magic_config_matcher, 

2067 self.magic_color_matcher, 

2068 ] 

2069 

2070 # This is set externally by InteractiveShell 

2071 self.custom_completers = None 

2072 

2073 # This is a list of names of unicode characters that can be completed 

2074 # into their corresponding unicode value. The list is large, so we 

2075 # lazily initialize it on first use. Consuming code should access this 

2076 # attribute through the `@unicode_names` property. 

2077 self._unicode_names = None 

2078 

2079 self._backslash_combining_matchers = [ 

2080 self.latex_name_matcher, 

2081 self.unicode_name_matcher, 

2082 back_latex_name_matcher, 

2083 back_unicode_name_matcher, 

2084 self.fwd_unicode_matcher, 

2085 ] 

2086 

2087 if not self.backslash_combining_completions: 

2088 for matcher in self._backslash_combining_matchers: 

2089 self.disable_matchers.append(_get_matcher_id(matcher)) 

2090 

2091 if not self.merge_completions: 

2092 self.suppress_competing_matchers = True 

2093 

2094 @property 

2095 def matchers(self) -> list[Matcher]: 

2096 """All active matcher routines for completion""" 

2097 if self.dict_keys_only: 

2098 return [self.dict_key_matcher] 

2099 

2100 if self.use_jedi: 

2101 return [ 

2102 *self.custom_matchers, 

2103 *self._backslash_combining_matchers, 

2104 *self.magic_arg_matchers, 

2105 self.custom_completer_matcher, 

2106 self.magic_matcher, 

2107 self._jedi_matcher, 

2108 self.dict_key_matcher, 

2109 self.file_matcher, 

2110 ] 

2111 else: 

2112 return [ 

2113 *self.custom_matchers, 

2114 *self._backslash_combining_matchers, 

2115 *self.magic_arg_matchers, 

2116 self.custom_completer_matcher, 

2117 self.dict_key_matcher, 

2118 self.magic_matcher, 

2119 self.python_matcher, 

2120 self.file_matcher, 

2121 self.python_func_kw_matcher, 

2122 ] 

2123 

2124 def all_completions(self, text: str) -> list[str]: 

2125 """ 

2126 Wrapper around the completion methods for the benefit of emacs. 

2127 """ 

2128 prefix = text.rpartition('.')[0] 

2129 with provisionalcompleter(): 

2130 return ['.'.join([prefix, c.text]) if prefix and self.use_jedi else c.text 

2131 for c in self.completions(text, len(text))] 

2132 

2133 return self.complete(text)[1] 

2134 

2135 def _clean_glob(self, text:str): 

2136 return self.glob("%s*" % text) 

2137 

2138 def _clean_glob_win32(self, text:str): 

2139 return [f.replace("\\","/") 

2140 for f in self.glob("%s*" % text)] 

2141 

2142 @context_matcher() 

2143 def file_matcher(self, context: CompletionContext) -> SimpleMatcherResult: 

2144 """Match filenames, expanding ~USER type strings. 

2145 

2146 Most of the seemingly convoluted logic in this completer is an 

2147 attempt to handle filenames with spaces in them. And yet it's not 

2148 quite perfect, because Python's readline doesn't expose all of the 

2149 GNU readline details needed for this to be done correctly. 

2150 

2151 For a filename with a space in it, the printed completions will be 

2152 only the parts after what's already been typed (instead of the 

2153 full completions, as is normally done). I don't think with the 

2154 current (as of Python 2.3) Python readline it's possible to do 

2155 better. 

2156 """ 

2157 # TODO: add a heuristic for suppressing (e.g. if it has OS-specific delimiter, 

2158 # starts with `/home/`, `C:\`, etc) 

2159 

2160 text = context.token 

2161 

2162 # chars that require escaping with backslash - i.e. chars 

2163 # that readline treats incorrectly as delimiters, but we 

2164 # don't want to treat as delimiters in filename matching 

2165 # when escaped with backslash 

2166 if text.startswith('!'): 

2167 text = text[1:] 

2168 text_prefix = u'!' 

2169 else: 

2170 text_prefix = u'' 

2171 

2172 text_until_cursor = self.text_until_cursor 

2173 # track strings with open quotes 

2174 open_quotes = has_open_quotes(text_until_cursor) 

2175 

2176 if '(' in text_until_cursor or '[' in text_until_cursor: 

2177 lsplit = text 

2178 else: 

2179 try: 

2180 # arg_split ~ shlex.split, but with unicode bugs fixed by us 

2181 lsplit = arg_split(text_until_cursor)[-1] 

2182 except ValueError: 

2183 # typically an unmatched ", or backslash without escaped char. 

2184 if open_quotes: 

2185 lsplit = text_until_cursor.split(open_quotes)[-1] 

2186 else: 

2187 return { 

2188 "completions": [], 

2189 "suppress": False, 

2190 } 

2191 except IndexError: 

2192 # tab pressed on empty line 

2193 lsplit = "" 

2194 

2195 if not open_quotes and lsplit != protect_filename(lsplit): 

2196 # if protectables are found, do matching on the whole escaped name 

2197 has_protectables = True 

2198 text0,text = text,lsplit 

2199 else: 

2200 has_protectables = False 

2201 text = os.path.expanduser(text) 

2202 

2203 if text == "": 

2204 return { 

2205 "completions": [ 

2206 SimpleCompletion( 

2207 text=text_prefix + protect_filename(f), type="path" 

2208 ) 

2209 for f in self.glob("*") 

2210 ], 

2211 "suppress": False, 

2212 } 

2213 

2214 # Compute the matches from the filesystem 

2215 if sys.platform == 'win32': 

2216 m0 = self.clean_glob(text) 

2217 else: 

2218 m0 = self.clean_glob(text.replace('\\', '')) 

2219 

2220 if has_protectables: 

2221 # If we had protectables, we need to revert our changes to the 

2222 # beginning of filename so that we don't double-write the part 

2223 # of the filename we have so far 

2224 len_lsplit = len(lsplit) 

2225 matches = [text_prefix + text0 + 

2226 protect_filename(f[len_lsplit:]) for f in m0] 

2227 else: 

2228 if open_quotes: 

2229 # if we have a string with an open quote, we don't need to 

2230 # protect the names beyond the quote (and we _shouldn't_, as 

2231 # it would cause bugs when the filesystem call is made). 

2232 matches = m0 if sys.platform == "win32" else\ 

2233 [protect_filename(f, open_quotes) for f in m0] 

2234 else: 

2235 matches = [text_prefix + 

2236 protect_filename(f) for f in m0] 

2237 

2238 # Mark directories in input list by appending '/' to their names. 

2239 return { 

2240 "completions": [ 

2241 SimpleCompletion(text=x + "/" if os.path.isdir(x) else x, type="path") 

2242 for x in matches 

2243 ], 

2244 "suppress": False, 

2245 } 

2246 

2247 @context_matcher() 

2248 def magic_matcher(self, context: CompletionContext) -> SimpleMatcherResult: 

2249 """Match magics.""" 

2250 

2251 # Get all shell magics now rather than statically, so magics loaded at 

2252 # runtime show up too. 

2253 text = context.token 

2254 lsm = self.shell.magics_manager.lsmagic() 

2255 line_magics = lsm['line'] 

2256 cell_magics = lsm['cell'] 

2257 pre = self.magic_escape 

2258 pre2 = pre+pre 

2259 

2260 explicit_magic = text.startswith(pre) 

2261 

2262 # Completion logic: 

2263 # - user gives %%: only do cell magics 

2264 # - user gives %: do both line and cell magics 

2265 # - no prefix: do both 

2266 # In other words, line magics are skipped if the user gives %% explicitly 

2267 # 

2268 # We also exclude magics that match any currently visible names: 

2269 # https://github.com/ipython/ipython/issues/4877, unless the user has 

2270 # typed a %: 

2271 # https://github.com/ipython/ipython/issues/10754 

2272 bare_text = text.lstrip(pre) 

2273 global_matches = self.global_matches(bare_text) 

2274 if not explicit_magic: 

2275 def matches(magic): 

2276 """ 

2277 Filter magics, in particular remove magics that match 

2278 a name present in global namespace. 

2279 """ 

2280 return ( magic.startswith(bare_text) and 

2281 magic not in global_matches ) 

2282 else: 

2283 def matches(magic): 

2284 return magic.startswith(bare_text) 

2285 

2286 completions = [pre2 + m for m in cell_magics if matches(m)] 

2287 if not text.startswith(pre2): 

2288 completions += [pre + m for m in line_magics if matches(m)] 

2289 

2290 is_magic_prefix = len(text) > 0 and text[0] == "%" 

2291 

2292 return { 

2293 "completions": [ 

2294 SimpleCompletion(text=comp, type="magic") for comp in completions 

2295 ], 

2296 "suppress": is_magic_prefix and len(completions) > 0, 

2297 } 

2298 

2299 @context_matcher() 

2300 def magic_config_matcher(self, context: CompletionContext) -> SimpleMatcherResult: 

2301 """Match class names and attributes for %config magic.""" 

2302 # NOTE: uses `line_buffer` equivalent for compatibility 

2303 matches = self.magic_config_matches(context.line_with_cursor) 

2304 return _convert_matcher_v1_result_to_v2_no_no(matches, type="param") 

2305 

2306 def magic_config_matches(self, text: str) -> list[str]: 

2307 """Match class names and attributes for %config magic. 

2308 

2309 .. deprecated:: 8.6 

2310 You can use :meth:`magic_config_matcher` instead. 

2311 """ 

2312 texts = text.strip().split() 

2313 

2314 if len(texts) > 0 and (texts[0] == 'config' or texts[0] == '%config'): 

2315 # get all configuration classes 

2316 classes = sorted(set([ c for c in self.shell.configurables 

2317 if c.__class__.class_traits(config=True) 

2318 ]), key=lambda x: x.__class__.__name__) 

2319 classnames = [ c.__class__.__name__ for c in classes ] 

2320 

2321 # return all classnames if config or %config is given 

2322 if len(texts) == 1: 

2323 return classnames 

2324 

2325 # match classname 

2326 classname_texts = texts[1].split('.') 

2327 classname = classname_texts[0] 

2328 classname_matches = [ c for c in classnames 

2329 if c.startswith(classname) ] 

2330 

2331 # return matched classes or the matched class with attributes 

2332 if texts[1].find('.') < 0: 

2333 return classname_matches 

2334 elif len(classname_matches) == 1 and \ 

2335 classname_matches[0] == classname: 

2336 cls = classes[classnames.index(classname)].__class__ 

2337 help = cls.class_get_help() 

2338 # strip leading '--' from cl-args: 

2339 help = re.sub(re.compile(r'^--', re.MULTILINE), '', help) 

2340 return [ attr.split('=')[0] 

2341 for attr in help.strip().splitlines() 

2342 if attr.startswith(texts[1]) ] 

2343 return [] 

2344 

2345 @context_matcher() 

2346 def magic_color_matcher(self, context: CompletionContext) -> SimpleMatcherResult: 

2347 """Match color schemes for %colors magic.""" 

2348 text = context.line_with_cursor 

2349 texts = text.split() 

2350 if text.endswith(' '): 

2351 # .split() strips off the trailing whitespace. Add '' back 

2352 # so that: '%colors ' -> ['%colors', ''] 

2353 texts.append('') 

2354 

2355 if len(texts) == 2 and (texts[0] == 'colors' or texts[0] == '%colors'): 

2356 prefix = texts[1] 

2357 return SimpleMatcherResult( 

2358 completions=[ 

2359 SimpleCompletion(color, type="param") 

2360 for color in theme_table.keys() 

2361 if color.startswith(prefix) 

2362 ], 

2363 suppress=False, 

2364 ) 

2365 return SimpleMatcherResult( 

2366 completions=[], 

2367 suppress=False, 

2368 ) 

2369 

2370 @context_matcher(identifier="IPCompleter.jedi_matcher") 

2371 def _jedi_matcher(self, context: CompletionContext) -> _JediMatcherResult: 

2372 matches = self._jedi_matches( 

2373 cursor_column=context.cursor_position, 

2374 cursor_line=context.cursor_line, 

2375 text=context.full_text, 

2376 ) 

2377 return { 

2378 "completions": matches, 

2379 # static analysis should not suppress other matchers 

2380 "suppress": False, 

2381 } 

2382 

2383 def _jedi_matches( 

2384 self, cursor_column: int, cursor_line: int, text: str 

2385 ) -> Iterator[_JediCompletionLike]: 

2386 """ 

2387 Return a list of :any:`jedi.api.Completion`\\s object from a ``text`` and 

2388 cursor position. 

2389 

2390 Parameters 

2391 ---------- 

2392 cursor_column : int 

2393 column position of the cursor in ``text``, 0-indexed. 

2394 cursor_line : int 

2395 line position of the cursor in ``text``, 0-indexed 

2396 text : str 

2397 text to complete 

2398 

2399 Notes 

2400 ----- 

2401 If ``IPCompleter.debug`` is ``True``, this may return a :any:`_FakeJediCompletion` 

2402 object containing a string with the Jedi debug information attached. 

2403 

2404 .. deprecated:: 8.6 

2405 You can use :meth:`_jedi_matcher` instead. 

2406 """ 

2407 namespaces = [self.namespace] 

2408 if self.global_namespace is not None: 

2409 namespaces.append(self.global_namespace) 

2410 

2411 completion_filter = lambda x:x 

2412 offset = cursor_to_position(text, cursor_line, cursor_column) 

2413 # filter output if we are completing for object members 

2414 if offset: 

2415 pre = text[offset-1] 

2416 if pre == '.': 

2417 if self.omit__names == 2: 

2418 completion_filter = lambda c:not c.name.startswith('_') 

2419 elif self.omit__names == 1: 

2420 completion_filter = lambda c:not (c.name.startswith('__') and c.name.endswith('__')) 

2421 elif self.omit__names == 0: 

2422 completion_filter = lambda x:x 

2423 else: 

2424 raise ValueError("Don't understand self.omit__names == {}".format(self.omit__names)) 

2425 

2426 interpreter = jedi.Interpreter(text[:offset], namespaces) 

2427 try_jedi = True 

2428 

2429 try: 

2430 # find the first token in the current tree -- if it is a ' or " then we are in a string 

2431 completing_string = False 

2432 try: 

2433 first_child = next(c for c in interpreter._get_module().tree_node.children if hasattr(c, 'value')) 

2434 except StopIteration: 

2435 pass 

2436 else: 

2437 # note the value may be ', ", or it may also be ''' or """, or 

2438 # in some cases, """what/you/typed..., but all of these are 

2439 # strings. 

2440 completing_string = len(first_child.value) > 0 and first_child.value[0] in {"'", '"'} 

2441 

2442 # if we are in a string jedi is likely not the right candidate for 

2443 # now. Skip it. 

2444 try_jedi = not completing_string 

2445 except Exception as e: 

2446 # many things can go wrong; we are using a private API, so just don't crash. 

2447 if self.debug: 

2448 print("Error detecting if completing a non-finished string :", e, '|') 

2449 

2450 if not try_jedi: 

2451 return iter([]) 

2452 try: 

2453 return filter(completion_filter, interpreter.complete(column=cursor_column, line=cursor_line + 1)) 

2454 except Exception as e: 

2455 if self.debug: 

2456 return iter( 

2457 [ 

2458 _FakeJediCompletion( 

2459 'Oops Jedi has crashed, please report a bug with the following:\n"""\n%s\n"""' 

2460 % (e) 

2461 ) 

2462 ] 

2463 ) 

2464 else: 

2465 return iter([]) 

2466 

2467 class _CompletionContextType(enum.Enum): 

2468 ATTRIBUTE = "attribute" # For attribute completion 

2469 GLOBAL = "global" # For global completion 

2470 

2471 def _determine_completion_context(self, line): 

2472 """ 

2473 Determine whether the cursor is in an attribute or global completion context. 

2474 """ 

2475 # Cursor in string/comment → GLOBAL. 

2476 is_string, is_in_expression = self._is_in_string_or_comment(line) 

2477 if is_string and not is_in_expression: 

2478 return self._CompletionContextType.GLOBAL 

2479 

2480 # If we're in a template string expression, handle specially 

2481 if is_string and is_in_expression: 

2482 # Extract the expression part - look for the last { that isn't closed 

2483 expr_start = line.rfind("{") 

2484 if expr_start >= 0: 

2485 # We're looking at the expression inside a template string 

2486 expr = line[expr_start + 1 :] 

2487 # Recursively determine the context of the expression 

2488 return self._determine_completion_context(expr) 

2489 

2490 # Handle plain number literals - should be global context 

2491 # Ex: 3. -42.14 but not 3.1. 

2492 if re.search(r"(?<!\w)(?<!\d\.)([-+]?\d+\.(\d+)?)(?!\w)$", line): 

2493 return self._CompletionContextType.GLOBAL 

2494 

2495 # Handle all other attribute matches np.ran, d[0].k, (a,b).count 

2496 chain_match = re.search(r".*(.+\.(?:[a-zA-Z]\w*)?)$", line) 

2497 if chain_match: 

2498 return self._CompletionContextType.ATTRIBUTE 

2499 

2500 return self._CompletionContextType.GLOBAL 
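# --- Illustrative example, not part of the original source ---
# Rough sketch of how the heuristic above is expected to classify the text
# before the cursor:
#
#     "np.ran"      -> ATTRIBUTE   (dotted access)
#     "d[0].k"      -> ATTRIBUTE   (attribute access after a subscript)
#     "3."          -> GLOBAL      (number literal, not attribute access)
#     "print('hi"   -> GLOBAL      (cursor inside an unterminated string)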

2501 

2502 def _is_in_string_or_comment(self, text): 

2503 """ 

2504 Determine if the cursor is inside a string or comment. 

2505 Returns (is_string, is_in_expression) tuple: 

2506 - is_string: True if in any kind of string 

2507 - is_in_expression: True if inside an f-string/t-string expression 

2508 """ 

2509 in_single_quote = False 

2510 in_double_quote = False 

2511 in_triple_single = False 

2512 in_triple_double = False 

2513 in_template_string = False # Covers both f-strings and t-strings 

2514 in_expression = False # For expressions in f/t-strings 

2515 expression_depth = 0 # Track nested braces in expressions 

2516 i = 0 

2517 

2518 while i < len(text): 

2519 # Check for f-string or t-string start 

2520 if ( 

2521 i + 1 < len(text) 

2522 and text[i] in ("f", "t") 

2523 and (text[i + 1] == '"' or text[i + 1] == "'") 

2524 and not ( 

2525 in_single_quote 

2526 or in_double_quote 

2527 or in_triple_single 

2528 or in_triple_double 

2529 ) 

2530 ): 

2531 in_template_string = True 

2532 i += 1 # Skip the 'f' or 't' 

2533 

2534 # Handle triple quotes 

2535 if i + 2 < len(text): 

2536 if ( 

2537 text[i : i + 3] == '"""' 

2538 and not in_single_quote 

2539 and not in_triple_single 

2540 ): 

2541 in_triple_double = not in_triple_double 

2542 if not in_triple_double: 

2543 in_template_string = False 

2544 i += 3 

2545 continue 

2546 if ( 

2547 text[i : i + 3] == "'''" 

2548 and not in_double_quote 

2549 and not in_triple_double 

2550 ): 

2551 in_triple_single = not in_triple_single 

2552 if not in_triple_single: 

2553 in_template_string = False 

2554 i += 3 

2555 continue 

2556 

2557 # Handle escapes 

2558 if text[i] == "\\" and i + 1 < len(text): 

2559 i += 2 

2560 continue 

2561 

2562 # Handle nested braces within f-strings 

2563 if in_template_string: 

2564 # Special handling for consecutive opening braces 

2565 if i + 1 < len(text) and text[i : i + 2] == "{{": 

2566 i += 2 

2567 continue 

2568 

2569 # Detect start of an expression 

2570 if text[i] == "{": 

2571 # Only increment depth and mark as expression if not already in an expression 

2572 # or if we're at a top-level nested brace 

2573 if not in_expression or (in_expression and expression_depth == 0): 

2574 in_expression = True 

2575 expression_depth += 1 

2576 i += 1 

2577 continue 

2578 

2579 # Detect end of an expression 

2580 if text[i] == "}": 

2581 expression_depth -= 1 

2582 if expression_depth <= 0: 

2583 in_expression = False 

2584 expression_depth = 0 

2585 i += 1 

2586 continue 

2587 

2588 in_triple_quote = in_triple_single or in_triple_double 

2589 

2590 # Handle quotes - also reset template string when closing quotes are encountered 

2591 if text[i] == '"' and not in_single_quote and not in_triple_quote: 

2592 in_double_quote = not in_double_quote 

2593 if not in_double_quote and not in_triple_quote: 

2594 in_template_string = False 

2595 elif text[i] == "'" and not in_double_quote and not in_triple_quote: 

2596 in_single_quote = not in_single_quote 

2597 if not in_single_quote and not in_triple_quote: 

2598 in_template_string = False 

2599 

2600 # Check for comment 

2601 if text[i] == "#" and not ( 

2602 in_single_quote or in_double_quote or in_triple_quote 

2603 ): 

2604 return True, False 

2605 

2606 i += 1 

2607 

2608 is_string = ( 

2609 in_single_quote or in_double_quote or in_triple_single or in_triple_double 

2610 ) 

2611 

2612 # Return tuple (is_string, is_in_expression) 

2613 return ( 

2614 is_string or (in_template_string and not in_expression), 

2615 in_expression and expression_depth > 0, 

2616 ) 

2617 

2618 @context_matcher() 

2619 def python_matcher(self, context: CompletionContext) -> SimpleMatcherResult: 

2620 """Match attributes or global python names""" 

2621 text = context.text_until_cursor 

2622 completion_type = self._determine_completion_context(text) 

2623 if completion_type == self._CompletionContextType.ATTRIBUTE: 

2624 try: 

2625 matches, fragment = self._attr_matches(text, include_prefix=False) 

2626 if text.endswith(".") and self.omit__names: 

2627 if self.omit__names == 1: 

2628 # true if txt is _not_ a __ name, false otherwise: 

2629 no__name = lambda txt: re.match(r".*\.__.*?__", txt) is None 

2630 else: 

2631 # true if txt is _not_ a _ name, false otherwise: 

2632 no__name = ( 

2633 lambda txt: re.match(r"\._.*?", txt[txt.rindex(".") :]) 

2634 is None 

2635 ) 

2636 matches = filter(no__name, matches) 

2637 return _convert_matcher_v1_result_to_v2( 

2638 matches, type="attribute", fragment=fragment 

2639 ) 

2640 except NameError: 

2641 # catches <undefined attributes>.<tab> 

2642 return SimpleMatcherResult(completions=[], suppress=False) 

2643 else: 

2644 matches = self.global_matches(context.token) 

2645 # TODO: maybe distinguish between functions, modules and just "variables" 

2646 return SimpleMatcherResult( 

2647 completions=[ 

2648 SimpleCompletion(text=match, type="variable") for match in matches 

2649 ], 

2650 suppress=False, 

2651 ) 

2652 

2653 @completion_matcher(api_version=1) 

2654 def python_matches(self, text: str) -> Iterable[str]: 

2655 """Match attributes or global python names. 

2656 

2657 .. deprecated:: 8.27 

2658 You can use :meth:`python_matcher` instead.""" 

2659 if "." in text: 

2660 try: 

2661 matches = self.attr_matches(text) 

2662 if text.endswith('.') and self.omit__names: 

2663 if self.omit__names == 1: 

2664 # true if txt is _not_ a __ name, false otherwise: 

2665 no__name = (lambda txt: 

2666 re.match(r'.*\.__.*?__',txt) is None) 

2667 else: 

2668 # true if txt is _not_ a _ name, false otherwise: 

2669 no__name = (lambda txt: 

2670 re.match(r'\._.*?',txt[txt.rindex('.'):]) is None) 

2671 matches = filter(no__name, matches) 

2672 except NameError: 

2673 # catches <undefined attributes>.<tab> 

2674 matches = [] 

2675 else: 

2676 matches = self.global_matches(text) 

2677 return matches 

2678 

2679 def _default_arguments_from_docstring(self, doc): 

2680 """Parse the first line of docstring for call signature. 

2681 

2682 Docstring should be of the form 'min(iterable[, key=func])\n'. 

2683 It can also parse cython docstring of the form 

2684 'Minuit.migrad(self, int ncall=10000, resume=True, int nsplit=1)'. 

2685 """ 

2686 if doc is None: 

2687 return [] 

2688 

2689 # care only about the first line 

2690 line = doc.lstrip().splitlines()[0] 

2691 

2692 #p = re.compile(r'^[\w|\s.]+\(([^)]*)\).*') 

2693 #'min(iterable[, key=func])\n' -> 'iterable[, key=func]' 

2694 sig = self.docstring_sig_re.search(line) 

2695 if sig is None: 

2696 return [] 

2697 # iterable[, key=func]' -> ['iterable[' ,' key=func]'] 

2698 sig = sig.groups()[0].split(',') 

2699 ret = [] 

2700 for s in sig: 

2701 #re.compile(r'[\s|\[]*(\w+)(?:\s*=\s*.*)') 

2702 ret += self.docstring_kwd_re.findall(s) 

2703 return ret 
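# --- Illustrative example, not part of the original source ---
# The parser above pulls keyword-looking names out of the first docstring
# line; `completer` below stands for any IPCompleter instance:
#
#     completer._default_arguments_from_docstring('min(iterable[, key=func])\n')
#     # -> ['key']   (only 'key=' looks like a keyword argument)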

2704 

2705 def _default_arguments(self, obj): 

2706 """Return the list of default arguments of obj if it is callable, 

2707 or empty list otherwise.""" 

2708 call_obj = obj 

2709 ret = [] 

2710 if inspect.isbuiltin(obj): 

2711 pass 

2712 elif not (inspect.isfunction(obj) or inspect.ismethod(obj)): 

2713 if inspect.isclass(obj): 

2714 #for cython embedsignature=True the constructor docstring 

2715 #belongs to the object itself not __init__ 

2716 ret += self._default_arguments_from_docstring( 

2717 getattr(obj, '__doc__', '')) 

2718 # for classes, check for __init__,__new__ 

2719 call_obj = (getattr(obj, '__init__', None) or 

2720 getattr(obj, '__new__', None)) 

2721 # for all others, check if they are __call__able 

2722 elif hasattr(obj, '__call__'): 

2723 call_obj = obj.__call__ 

2724 ret += self._default_arguments_from_docstring( 

2725 getattr(call_obj, '__doc__', '')) 

2726 

2727 _keeps = (inspect.Parameter.KEYWORD_ONLY, 

2728 inspect.Parameter.POSITIONAL_OR_KEYWORD) 

2729 

2730 try: 

2731 sig = inspect.signature(obj) 

2732 ret.extend(k for k, v in sig.parameters.items() if 

2733 v.kind in _keeps) 

2734 except ValueError: 

2735 pass 

2736 

2737 return list(set(ret)) 

2738 

2739 @context_matcher() 

2740 def python_func_kw_matcher(self, context: CompletionContext) -> SimpleMatcherResult: 

2741 """Match named parameters (kwargs) of the last open function.""" 

2742 matches = self.python_func_kw_matches(context.token) 

2743 return _convert_matcher_v1_result_to_v2_no_no(matches, type="param") 

2744 

2745 def python_func_kw_matches(self, text): 

2746 """Match named parameters (kwargs) of the last open function. 

2747 

2748 .. deprecated:: 8.6 

2749 You can use :meth:`python_func_kw_matcher` instead. 

2750 """ 

2751 

2752 if "." in text: # a parameter cannot be dotted 

2753 return [] 

2754 try: regexp = self.__funcParamsRegex 

2755 except AttributeError: 

2756 regexp = self.__funcParamsRegex = re.compile(r''' 

2757 '.*?(?<!\\)' | # single quoted strings or 

2758 ".*?(?<!\\)" | # double quoted strings or 

2759 \w+ | # identifier 

2760 \S # other characters 

2761 ''', re.VERBOSE | re.DOTALL) 

2762 # 1. find the nearest identifier that comes before an unclosed 

2763 # parenthesis before the cursor 

2764 # e.g. for "foo (1+bar(x), pa<cursor>,a=1)", the candidate is "foo" 

2765 tokens = regexp.findall(self.text_until_cursor) 

2766 iterTokens = reversed(tokens) 

2767 openPar = 0 

2768 

2769 for token in iterTokens: 

2770 if token == ')': 

2771 openPar -= 1 

2772 elif token == '(': 

2773 openPar += 1 

2774 if openPar > 0: 

2775 # found the last unclosed parenthesis 

2776 break 

2777 else: 

2778 return [] 

2779 # 2. Concatenate dotted names ("foo.bar" for "foo.bar(x, pa" ) 

2780 ids = [] 

2781 isId = re.compile(r'\w+$').match 

2782 

2783 while True: 

2784 try: 

2785 ids.append(next(iterTokens)) 

2786 if not isId(ids[-1]): 

2787 ids.pop() 

2788 break 

2789 if not next(iterTokens) == '.': 

2790 break 

2791 except StopIteration: 

2792 break 

2793 

2794 # Find all named arguments already assigned to, so as to avoid suggesting 

2795 # them again 

2796 usedNamedArgs = set() 

2797 par_level = -1 

2798 for token, next_token in zip(tokens, tokens[1:]): 

2799 if token == '(': 

2800 par_level += 1 

2801 elif token == ')': 

2802 par_level -= 1 

2803 

2804 if par_level != 0: 

2805 continue 

2806 

2807 if next_token != '=': 

2808 continue 

2809 

2810 usedNamedArgs.add(token) 

2811 

2812 argMatches = [] 

2813 try: 

2814 callableObj = '.'.join(ids[::-1]) 

2815 namedArgs = self._default_arguments(eval(callableObj, 

2816 self.namespace)) 

2817 

2818 # Remove used named arguments from the list, no need to show twice 

2819 for namedArg in set(namedArgs) - usedNamedArgs: 

2820 if namedArg.startswith(text): 

2821 argMatches.append("%s=" %namedArg) 

2822 except: 

2823 pass 

2824 

2825 return argMatches 
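# --- Illustrative example, not part of the original source ---
# Sketch of the keyword-argument matcher, assuming `completer` is an
# IPCompleter whose namespace defines `def foo(x, padding=0): ...` and whose
# text_until_cursor is "foo(1+bar(x), pa":
#
#     completer.python_func_kw_matches("pa")
#     # -> ['padding=']   (keyword arguments already assigned are filtered out)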

2826 

2827 @staticmethod 

2828 def _get_keys(obj: Any) -> list[Any]: 

2829 # Objects can define their own completions by defining an 

2830 # _ipython_key_completions_() method. 

2831 method = get_real_method(obj, '_ipython_key_completions_') 

2832 if method is not None: 

2833 return method() 

2834 

2835 # Special case some common in-memory dict-like types 

2836 if isinstance(obj, dict) or _safe_isinstance(obj, "pandas", "DataFrame"): 

2837 try: 

2838 return list(obj.keys()) 

2839 except Exception: 

2840 return [] 

2841 elif _safe_isinstance(obj, "pandas", "core", "indexing", "_LocIndexer"): 

2842 try: 

2843 return list(obj.obj.keys()) 

2844 except Exception: 

2845 return [] 

2846 elif _safe_isinstance(obj, 'numpy', 'ndarray') or\ 

2847 _safe_isinstance(obj, 'numpy', 'void'): 

2848 return obj.dtype.names or [] 

2849 return [] 
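# --- Illustrative example, not part of the original source ---
# Any object can opt into dict-key completion by providing the
# _ipython_key_completions_ hook that _get_keys() looks up above.
# A minimal sketch with a hypothetical Config class:
#
#     class Config:
#         def __init__(self):
#             self._data = {"host": "localhost", "port": 8080}
#         def __getitem__(self, key):
#             return self._data[key]
#         def _ipython_key_completions_(self):
#             return list(self._data)   # keys offered after cfg["<tab>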

2850 

2851 @context_matcher() 

2852 def dict_key_matcher(self, context: CompletionContext) -> SimpleMatcherResult: 

2853 """Match string keys in a dictionary, after e.g. ``foo[``.""" 

2854 matches = self.dict_key_matches(context.token) 

2855 return _convert_matcher_v1_result_to_v2( 

2856 matches, type="dict key", suppress_if_matches=True 

2857 ) 

2858 

2859 def dict_key_matches(self, text: str) -> list[str]: 

2860 """Match string keys in a dictionary, after e.g. ``foo[``. 

2861 

2862 .. deprecated:: 8.6 

2863 You can use :meth:`dict_key_matcher` instead. 

2864 """ 

2865 

2866 # Short-circuit on closed dictionary (regular expression would 

2867 # not match anyway, but would take quite a while). 

2868 if self.text_until_cursor.strip().endswith("]"): 

2869 return [] 

2870 

2871 match = DICT_MATCHER_REGEX.search(self.text_until_cursor) 

2872 

2873 if match is None: 

2874 return [] 

2875 

2876 expr, prior_tuple_keys, key_prefix = match.groups() 

2877 

2878 obj = self._evaluate_expr(expr) 

2879 

2880 if obj is not_found: 

2881 return [] 

2882 

2883 keys = self._get_keys(obj) 

2884 if not keys: 

2885 return keys 

2886 

2887 tuple_prefix = guarded_eval( 

2888 prior_tuple_keys, 

2889 EvaluationContext( 

2890 globals=self.global_namespace, 

2891 locals=self.namespace, 

2892 evaluation=self.evaluation, # type: ignore 

2893 in_subscript=True, 

2894 auto_import=self._auto_import, 

2895 policy_overrides=self.policy_overrides, 

2896 ), 

2897 ) 

2898 

2899 closing_quote, token_offset, matches = match_dict_keys( 

2900 keys, key_prefix, self.splitter.delims, extra_prefix=tuple_prefix 

2901 ) 

2902 if not matches: 

2903 return [] 

2904 

2905 # get the cursor position of 

2906 # - the text being completed 

2907 # - the start of the key text 

2908 # - the start of the completion 

2909 text_start = len(self.text_until_cursor) - len(text) 

2910 if key_prefix: 

2911 key_start = match.start(3) 

2912 completion_start = key_start + token_offset 

2913 else: 

2914 key_start = completion_start = match.end() 

2915 

2916 # grab the leading prefix, to make sure all completions start with `text` 

2917 if text_start > key_start: 

2918 leading = '' 

2919 else: 

2920 leading = text[text_start:completion_start] 

2921 

2922 # append closing quote and bracket as appropriate 

2923 # this is *not* appropriate if the opening quote or bracket is outside 

2924 # the text given to this method, e.g. `d["""a\nt 

2925 can_close_quote = False 

2926 can_close_bracket = False 

2927 

2928 continuation = self.line_buffer[len(self.text_until_cursor) :].strip() 

2929 

2930 if continuation.startswith(closing_quote): 

2931 # do not close if already closed, e.g. `d['a<tab>'` 

2932 continuation = continuation[len(closing_quote) :] 

2933 else: 

2934 can_close_quote = True 

2935 

2936 continuation = continuation.strip() 

2937 

2938 # e.g. `pandas.DataFrame` has different tuple indexer behaviour, 

2939 # handling it is out of scope, so let's avoid appending suffixes. 

2940 has_known_tuple_handling = isinstance(obj, dict) 

2941 

2942 can_close_bracket = ( 

2943 not continuation.startswith("]") and self.auto_close_dict_keys 

2944 ) 

2945 can_close_tuple_item = ( 

2946 not continuation.startswith(",") 

2947 and has_known_tuple_handling 

2948 and self.auto_close_dict_keys 

2949 ) 

2950 can_close_quote = can_close_quote and self.auto_close_dict_keys 

2951 

2952 # fast path if a closing quote should be appended but no suffix is allowed 

2953 if not can_close_quote and not can_close_bracket and closing_quote: 

2954 return [leading + k for k in matches] 

2955 

2956 results = [] 

2957 

2958 end_of_tuple_or_item = _DictKeyState.END_OF_TUPLE | _DictKeyState.END_OF_ITEM 

2959 

2960 for k, state_flag in matches.items(): 

2961 result = leading + k 

2962 if can_close_quote and closing_quote: 

2963 result += closing_quote 

2964 

2965 if state_flag == end_of_tuple_or_item: 

2966 # We do not know which suffix to add, 

2967 # e.g. both tuple item and string 

2968 # match this item. 

2969 pass 

2970 

2971 if state_flag in end_of_tuple_or_item and can_close_bracket: 

2972 result += "]" 

2973 if state_flag == _DictKeyState.IN_TUPLE and can_close_tuple_item: 

2974 result += ", " 

2975 results.append(result) 

2976 return results 

2977 

2978 @context_matcher() 

2979 def unicode_name_matcher(self, context: CompletionContext) -> SimpleMatcherResult: 

2980 """Match Latex-like syntax for unicode characters base 

2981 on the name of the character. 

2982 

2983 This does ``\\GREEK SMALL LETTER ETA`` -> ``η`` 

2984 

2985 Works only on valid python 3 identifiers, or on combining characters that 

2986 will combine to form a valid identifier. 

2987 """ 

2988 

2989 text = context.text_until_cursor 

2990 

2991 slashpos = text.rfind('\\') 

2992 if slashpos > -1: 

2993 s = text[slashpos+1:] 

2994 try : 

2995 unic = unicodedata.lookup(s) 

2996 # allow combining chars 

2997 if ('a'+unic).isidentifier(): 

2998 return { 

2999 "completions": [SimpleCompletion(text=unic, type="unicode")], 

3000 "suppress": True, 

3001 "matched_fragment": "\\" + s, 

3002 } 

3003 except KeyError: 

3004 pass 

3005 return { 

3006 "completions": [], 

3007 "suppress": False, 

3008 } 

3009 

3010 @context_matcher() 

3011 def latex_name_matcher(self, context: CompletionContext): 

3012 """Match Latex syntax for unicode characters. 

3013 

3014 This does both ``\\alp`` -> ``\\alpha`` and ``\\alpha`` -> ``α`` 

3015 """ 

3016 fragment, matches = self.latex_matches(context.text_until_cursor) 

3017 return _convert_matcher_v1_result_to_v2( 

3018 matches, type="latex", fragment=fragment, suppress_if_matches=True 

3019 ) 

3020 

3021 def latex_matches(self, text: str) -> tuple[str, Sequence[str]]: 

3022 """Match Latex syntax for unicode characters. 

3023 

3024 This does both ``\\alp`` -> ``\\alpha`` and ``\\alpha`` -> ``α`` 

3025 

3026 .. deprecated:: 8.6 

3027 You can use :meth:`latex_name_matcher` instead. 

3028 """ 

3029 slashpos = text.rfind('\\') 

3030 if slashpos > -1: 

3031 s = text[slashpos:] 

3032 if s in latex_symbols: 

3033 # Try to complete a full latex symbol to unicode 

3034 # \\alpha -> α 

3035 return s, [latex_symbols[s]] 

3036 else: 

3037 # If a user has partially typed a latex symbol, give them 

3038 # a full list of options \al -> [\aleph, \alpha] 

3039 matches = [k for k in latex_symbols if k.startswith(s)] 

3040 if matches: 

3041 return s, matches 

3042 return '', () 
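# --- Illustrative example, not part of the original source ---
# latex_matches() either expands a complete LaTeX name to its character or
# lists the names sharing the typed prefix; `completer` stands for any
# IPCompleter instance:
#
#     completer.latex_matches("\\alpha")   # -> ('\\alpha', ['α'])
#     completer.latex_matches("\\alp")     # -> ('\\alp', ['\\alpha', ...])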

3043 

3044 @context_matcher() 

3045 def custom_completer_matcher(self, context): 

3046 """Dispatch custom completer. 

3047 

3048 If a match is found, suppresses all other matchers except for Jedi. 

3049 """ 

3050 matches = self.dispatch_custom_completer(context.token) or [] 

3051 result = _convert_matcher_v1_result_to_v2( 

3052 matches, type=_UNKNOWN_TYPE, suppress_if_matches=True 

3053 ) 

3054 result["ordered"] = True 

3055 result["do_not_suppress"] = {_get_matcher_id(self._jedi_matcher)} 

3056 return result 

3057 

3058 def dispatch_custom_completer(self, text): 

3059 """ 

3060 .. deprecated:: 8.6 

3061 You can use :meth:`custom_completer_matcher` instead. 

3062 """ 

3063 if not self.custom_completers: 

3064 return 

3065 

3066 line = self.line_buffer 

3067 if not line.strip(): 

3068 return None 

3069 

3070 # Create a little structure to pass all the relevant information about 

3071 # the current completion to any custom completer. 

3072 event = SimpleNamespace() 

3073 event.line = line 

3074 event.symbol = text 

3075 cmd = line.split(None,1)[0] 

3076 event.command = cmd 

3077 event.text_until_cursor = self.text_until_cursor 

3078 

3079 # for foo etc, try also to find completer for %foo 

3080 if not cmd.startswith(self.magic_escape): 

3081 try_magic = self.custom_completers.s_matches( 

3082 self.magic_escape + cmd) 

3083 else: 

3084 try_magic = [] 

3085 

3086 for c in itertools.chain(self.custom_completers.s_matches(cmd), 

3087 try_magic, 

3088 self.custom_completers.flat_matches(self.text_until_cursor)): 

3089 try: 

3090 res = c(event) 

3091 if res: 

3092 # first, try case sensitive match 

3093 withcase = [r for r in res if r.startswith(text)] 

3094 if withcase: 

3095 return withcase 

3096 # if none, then case insensitive ones are ok too 

3097 text_low = text.lower() 

3098 return [r for r in res if r.lower().startswith(text_low)] 

3099 except TryNext: 

3100 pass 

3101 except KeyboardInterrupt: 

3102 """ 

3103 If a custom completer takes too long, 

3104 let keyboard interrupt abort and return nothing. 

3105 """ 

3106 break 

3107 

3108 return None 

3109 

3110 def completions(self, text: str, offset: int)->Iterator[Completion]: 

3111 """ 

3112 Returns an iterator over the possible completions 

3113 

3114 .. warning:: 

3115 

3116 Unstable 

3117 

3118 This function is unstable, API may change without warning. 

3119 It will also raise unless used in the proper context manager. 

3120 

3121 Parameters 

3122 ---------- 

3123 text : str 

3124 Full text of the current input, multi line string. 

3125 offset : int 

3126 Integer representing the position of the cursor in ``text``. Offset 

3127 is 0-based indexed. 

3128 

3129 Yields 

3130 ------ 

3131 Completion 

3132 

3133 Notes 

3134 ----- 

3135 The cursor on a text can either be seen as being "in between" 

3136 characters or "On" a character depending on the interface visible to 

3137 the user. For consistency, the cursor being "in between" characters X 

3138 and Y is equivalent to the cursor being "on" character Y, that is to say 

3139 the character the cursor is on is considered as being after the cursor. 

3140 

3141 Combining characters may span more than one position in the 

3142 text. 

3143 

3144 .. note:: 

3145 

3146 If ``IPCompleter.debug`` is :any:`True`, this will yield a ``--jedi/ipython--`` 

3147 fake Completion token to distinguish completion returned by Jedi 

3148 and usual IPython completion. 

3149 

3150 .. note:: 

3151 

3152 Completions are not completely deduplicated yet. If identical 

3153 completions are coming from different sources this function does not 

3154 ensure that each completion object will only be present once. 

3155 """ 

3156 warnings.warn("_complete is a provisional API (as of IPython 6.0). " 

3157 "It may change without warnings. " 

3158 "Use in corresponding context manager.", 

3159 category=ProvisionalCompleterWarning, stacklevel=2) 

3160 

3161 seen = set() 

3162 profiler:Optional[cProfile.Profile] 

3163 try: 

3164 if self.profile_completions: 

3165 import cProfile 

3166 profiler = cProfile.Profile() 

3167 profiler.enable() 

3168 else: 

3169 profiler = None 

3170 

3171 for c in self._completions(text, offset, _timeout=self.jedi_compute_type_timeout/1000): 

3172 if c and (c in seen): 

3173 continue 

3174 yield c 

3175 seen.add(c) 

3176 except KeyboardInterrupt: 

3177 """if completions take too long and users send keyboard interrupt, 

3178 do not crash and return ASAP. """ 

3179 pass 

3180 finally: 

3181 if profiler is not None: 

3182 profiler.disable() 

3183 ensure_dir_exists(self.profiler_output_dir) 

3184 output_path = os.path.join(self.profiler_output_dir, str(uuid.uuid4())) 

3185 print("Writing profiler output to", output_path) 

3186 profiler.dump_stats(output_path) 
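# --- Illustrative usage sketch, not part of the original source ---
# completions() is provisional, so it must be called inside the
# provisionalcompleter() context manager; `completer` stands for an existing
# IPCompleter instance:
#
#     with provisionalcompleter():
#         for c in completer.completions("mylist.app", 10):
#             print(c.text, c.type, c.start, c.end)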

3187 

3188 def _completions(self, full_text: str, offset: int, *, _timeout) -> Iterator[Completion]: 

3189 """ 

3190 Core completion method. Same signature as :any:`completions`, with the 

3191 extra ``_timeout`` parameter (in seconds). 

3192 

3193 Computing jedi's completion ``.type`` can be quite expensive (it is a 

3194 lazy property) and can require some warm-up, more warm up than just 

3195 computing the ``name`` of a completion. The warm-up can be : 

3196 

3197 - Long warm-up the first time a module is encountered after 

3198 install/update: actually build parse/inference tree. 

3199 

3200 - first time the module is encountered in a session: load tree from 

3201 disk. 

3202 

3203 We don't want to block completions for tens of seconds so we give the 

3204 completer a "budget" of ``_timeout`` seconds per invocation to compute 

3205 completion types; the completions that have not yet been computed will 

3206 be marked as "unknown" and will have a chance to be computed next round 

3207 as things get cached. 

3208 

3209 Keep in mind that Jedi is not the only thing processing the completion, so 

3210 keep the timeout short-ish: if we take more than 0.3 seconds we still 

3211 have lots of processing to do. 

3212 

3213 """ 

3214 deadline = time.monotonic() + _timeout 

3215 

3216 before = full_text[:offset] 

3217 cursor_line, cursor_column = position_to_cursor(full_text, offset) 

3218 

3219 jedi_matcher_id = _get_matcher_id(self._jedi_matcher) 

3220 

3221 def is_non_jedi_result( 

3222 result: MatcherResult, identifier: str 

3223 ) -> TypeGuard[SimpleMatcherResult]: 

3224 return identifier != jedi_matcher_id 

3225 

3226 results = self._complete( 

3227 full_text=full_text, cursor_line=cursor_line, cursor_pos=cursor_column 

3228 ) 

3229 

3230 non_jedi_results: dict[str, SimpleMatcherResult] = { 

3231 identifier: result 

3232 for identifier, result in results.items() 

3233 if is_non_jedi_result(result, identifier) 

3234 } 

3235 

3236 jedi_matches = ( 

3237 cast(_JediMatcherResult, results[jedi_matcher_id])["completions"] 

3238 if jedi_matcher_id in results 

3239 else () 

3240 ) 

3241 

3242 iter_jm = iter(jedi_matches) 

3243 if _timeout: 

3244 for jm in iter_jm: 

3245 try: 

3246 type_ = jm.type 

3247 except Exception: 

3248 if self.debug: 

3249 print("Error in Jedi getting type of ", jm) 

3250 type_ = None 

3251 delta = len(jm.name_with_symbols) - len(jm.complete) 

3252 if type_ == 'function': 

3253 signature = _make_signature(jm) 

3254 else: 

3255 signature = '' 

3256 yield Completion(start=offset - delta, 

3257 end=offset, 

3258 text=jm.name_with_symbols, 

3259 type=type_, 

3260 signature=signature, 

3261 _origin='jedi') 

3262 

3263 if time.monotonic() > deadline: 

3264 break 

3265 

3266 for jm in iter_jm: 

3267 delta = len(jm.name_with_symbols) - len(jm.complete) 

3268 yield Completion( 

3269 start=offset - delta, 

3270 end=offset, 

3271 text=jm.name_with_symbols, 

3272 type=_UNKNOWN_TYPE, # don't compute type for speed 

3273 _origin="jedi", 

3274 signature="", 

3275 ) 

3276 

3277 # TODO: 

3278 # Suppress this, right now just for debug. 

3279 if jedi_matches and non_jedi_results and self.debug: 

3280 some_start_offset = before.rfind( 

3281 next(iter(non_jedi_results.values()))["matched_fragment"] 

3282 ) 

3283 yield Completion( 

3284 start=some_start_offset, 

3285 end=offset, 

3286 text="--jedi/ipython--", 

3287 _origin="debug", 

3288 type="none", 

3289 signature="", 

3290 ) 

3291 

3292 ordered: list[Completion] = [] 

3293 sortable: list[Completion] = [] 

3294 

3295 for origin, result in non_jedi_results.items(): 

3296 matched_text = result["matched_fragment"] 

3297 start_offset = before.rfind(matched_text) 

3298 is_ordered = result.get("ordered", False) 

3299 container = ordered if is_ordered else sortable 

3300 

3301 # I'm unsure if this is always true, so let's assert and see if it 

3302 # crashes 

3303 assert before.endswith(matched_text) 

3304 

3305 for simple_completion in result["completions"]: 

3306 completion = Completion( 

3307 start=start_offset, 

3308 end=offset, 

3309 text=simple_completion.text, 

3310 _origin=origin, 

3311 signature="", 

3312 type=simple_completion.type or _UNKNOWN_TYPE, 

3313 ) 

3314 container.append(completion) 

3315 

3316 yield from list(self._deduplicate(ordered + self._sort(sortable)))[ 

3317 :MATCHES_LIMIT 

3318 ] 
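# Illustrative sketch, not part of completer.py: one way a frontend might
# consume the generator above, assuming the experimental
# ``IPCompleter.completions(text, offset)`` entry point and an interactive
# session providing ``ip = get_ipython()``:
#
#     with provisionalcompleter():
#         for comp in ip.Completer.completions("list.app", 8):
#             print(comp.text, comp.type, comp.start, comp.end)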

3319 

3320 def complete( 

3321 self, text=None, line_buffer=None, cursor_pos=None 

3322 ) -> tuple[str, Sequence[str]]: 

3323 """Find completions for the given text and line context. 

3324 

3325 Note that both the text and the line_buffer are optional, but at least 

3326 one of them must be given. 

3327 

3328 Parameters 

3329 ---------- 

3330 text : string, optional 

3331 Text to perform the completion on. If not given, the line buffer 

3332 is split using the instance's CompletionSplitter object. 

3333 line_buffer : string, optional 

3334 If not given, the completer attempts to obtain the current line 

3335 buffer via readline. This keyword allows clients which are 

3336 requesting text completions in non-readline contexts to inform 

3337 the completer of the entire text. 

3338 cursor_pos : int, optional 

3339 Index of the cursor in the full line buffer. Should be provided by 

3340 remote frontends where the kernel has no access to frontend state. 

3341 

3342 Returns 

3343 ------- 

3344 Tuple of two items: 

3345 text : str 

3346 Text that was actually used in the completion. 

3347 matches : list 

3348 A list of completion matches. 

3349 

3350 Notes 

3351 ----- 

3352 This API is likely to be deprecated and replaced by 

3353 :any:`IPCompleter.completions` in the future. 

3354 

3355 """ 

3356 warnings.warn('`Completer.complete` is pending deprecation since ' 

3357 'IPython 6.0 and will be replaced by `Completer.completions`.', 

3358 PendingDeprecationWarning) 

3359 # potential todo: fold the 3rd throw-away argument of _complete 

3360 # into the first two. 

3361 # TODO: Q: does the above refer to jedi completions (i.e. 0-indexed?) 

3362 # TODO: should we deprecate now, or does it stay? 

3363 

3364 results = self._complete( 

3365 line_buffer=line_buffer, cursor_pos=cursor_pos, text=text, cursor_line=0 

3366 ) 

3367 

3368 jedi_matcher_id = _get_matcher_id(self._jedi_matcher) 

3369 

3370 return self._arrange_and_extract( 

3371 results, 

3372 # TODO: can we confirm that excluding Jedi here was a deliberate choice in the previous version? 

3373 skip_matchers={jedi_matcher_id}, 

3374 # this API does not support different start/end positions (fragments of token). 

3375 abort_if_offset_changes=True, 

3376 ) 
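# Illustrative sketch, not part of completer.py: the legacy API above
# returns the matched text and a flat list of matches; ``ip`` is assumed to
# be the running InteractiveShell (e.g. from ``get_ipython()``):
#
#     text, matches = ip.Completer.complete(line_buffer="imp", cursor_pos=3)
#     # text == "imp"; matches is a plain list of strings such as "import"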

3377 

3378 def _arrange_and_extract( 

3379 self, 

3380 results: dict[str, MatcherResult], 

3381 skip_matchers: set[str], 

3382 abort_if_offset_changes: bool, 

3383 ): 
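# Collect completions from every matcher not in `skip_matchers`: results
# flagged as "ordered" keep their submission order, the rest are sorted.
# When `abort_if_offset_changes` is set, stop at the first matcher whose
# matched fragment differs from the first one seen, since the legacy API
# cannot represent completions replacing different spans of text.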

3384 sortable: list[AnyMatcherCompletion] = [] 

3385 ordered: list[AnyMatcherCompletion] = [] 

3386 most_recent_fragment = None 

3387 for identifier, result in results.items(): 

3388 if identifier in skip_matchers: 

3389 continue 

3390 if not result["completions"]: 

3391 continue 

3392 if not most_recent_fragment: 

3393 most_recent_fragment = result["matched_fragment"] 

3394 if ( 

3395 abort_if_offset_changes 

3396 and result["matched_fragment"] != most_recent_fragment 

3397 ): 

3398 break 

3399 if result.get("ordered", False): 

3400 ordered.extend(result["completions"]) 

3401 else: 

3402 sortable.extend(result["completions"]) 

3403 

3404 if not most_recent_fragment: 

3405 most_recent_fragment = "" # to satisfy typechecker (and just in case) 

3406 

3407 return most_recent_fragment, [ 

3408 m.text for m in self._deduplicate(ordered + self._sort(sortable)) 

3409 ] 

3410 

3411 def _complete(self, *, cursor_line, cursor_pos, line_buffer=None, text=None, 

3412 full_text=None) -> _CompleteResult: 

3413 """ 

3414 Like complete but can also return raw Jedi completions as well as the 

3415 origin of the completion text. This could (and should) be made much 

3416 cleaner but that will be simpler once we drop the old (and stateful) 

3417 :any:`complete` API. 

3418 

3419 With the current provisional API, cursor_pos acts both (depending on the 

3420 caller) as the offset in the ``text`` or ``line_buffer``, or as the 

3421 ``column`` when passing multiline strings; this could/should be renamed, 

3422 but renaming would add extra noise. 

3423 

3424 Parameters 

3425 ---------- 

3426 cursor_line 

3427 Index of the line the cursor is on. 0 indexed. 

3428 cursor_pos 

3429 Position of the cursor in the current line/line_buffer/text. 0 

3430 indexed. 

3431 line_buffer : str, optional 

3432 The current line the cursor is in; this is mostly kept for legacy 

3433 reasons, as readline could only give us the single current line. 

3434 Prefer `full_text`. 

3435 text : str 

3436 The current "token" the cursor is in, also mostly for historical 

3437 reasons, as the completer would trigger only after the current line 

3438 was parsed. 

3439 full_text : str 

3440 Full text of the current cell. 

3441 

3442 Returns 

3443 ------- 

3444 An ordered dictionary where keys are identifiers of completion 

3445 matchers and values are ``MatcherResult``s. 

3446 """ 

3447 

3448 # if the cursor position isn't given, the only sane assumption we can 

3449 # make is that it's at the end of the line (the common case) 

3450 if cursor_pos is None: 

3451 cursor_pos = len(line_buffer) if text is None else len(text) 

3452 

3453 if self.use_main_ns: 

3454 self.namespace = __main__.__dict__ 

3455 

3456 # if text is either None or an empty string, rely on the line buffer 

3457 if (not line_buffer) and full_text: 

3458 line_buffer = full_text.split('\n')[cursor_line] 

3459 if not text: # issue #11508: check line_buffer before calling split_line 

3460 text = ( 

3461 self.splitter.split_line(line_buffer, cursor_pos) if line_buffer else "" 

3462 ) 

3463 

3464 # If no line buffer is given, assume the input text is all there was 

3465 if line_buffer is None: 

3466 line_buffer = text 

3467 

3468 # deprecated - do not use `line_buffer` in new code. 

3469 self.line_buffer = line_buffer 

3470 self.text_until_cursor = self.line_buffer[:cursor_pos] 

3471 

3472 if not full_text: 

3473 full_text = line_buffer 

3474 

3475 context = CompletionContext( 

3476 full_text=full_text, 

3477 cursor_position=cursor_pos, 

3478 cursor_line=cursor_line, 

3479 token=text, 

3480 limit=MATCHES_LIMIT, 

3481 ) 

3482 

3483 # Start with a clean slate of completions 

3484 results: dict[str, MatcherResult] = {} 

3485 

3486 jedi_matcher_id = _get_matcher_id(self._jedi_matcher) 

3487 

3488 suppressed_matchers: set[str] = set() 

3489 
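# Matchers run from highest to lowest priority (see `_get_matcher_priority`),
# so higher-priority matchers get the first chance to suppress the others.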

3490 matchers = { 

3491 _get_matcher_id(matcher): matcher 

3492 for matcher in sorted( 

3493 self.matchers, key=_get_matcher_priority, reverse=True 

3494 ) 

3495 } 

3496 

3497 for matcher_id, matcher in matchers.items(): 

3498 matcher_id = _get_matcher_id(matcher) 

3499 

3500 if matcher_id in self.disable_matchers: 

3501 continue 

3502 

3503 if matcher_id in results: 

3504 warnings.warn(f"Duplicate matcher ID: {matcher_id}.") 

3505 

3506 if matcher_id in suppressed_matchers: 

3507 continue 

3508 

3509 result: MatcherResult 

3510 try: 

3511 if _is_matcher_v1(matcher): 

3512 result = _convert_matcher_v1_result_to_v2( 

3513 matcher(text), type=_UNKNOWN_TYPE 

3514 ) 

3515 elif _is_matcher_v2(matcher): 

3516 result = matcher(context) 

3517 else: 

3518 api_version = _get_matcher_api_version(matcher) 

3519 raise ValueError(f"Unsupported API version {api_version}") 

3520 except BaseException: 

3521 # Show the ugly traceback if the matcher causes an 

3522 # exception, but do NOT crash the kernel! 

3523 sys.excepthook(*sys.exc_info()) 

3524 continue 

3525 

3526 # set a default value for the matched fragment if the matcher did not provide one. 

3527 result["matched_fragment"] = result.get("matched_fragment", context.token) 

3528 
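# A matcher may ask to suppress competing matchers (all of them or an
# explicit set). Suppression only happens when this matcher actually
# produced completions; the `suppress_competing_matchers` configuration can
# force or veto it, and matchers listed under "do_not_suppress" are kept.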

3529 if not suppressed_matchers: 

3530 suppression_recommended: Union[bool, set[str]] = result.get( 

3531 "suppress", False 

3532 ) 

3533 

3534 suppression_config = ( 

3535 self.suppress_competing_matchers.get(matcher_id, None) 

3536 if isinstance(self.suppress_competing_matchers, dict) 

3537 else self.suppress_competing_matchers 

3538 ) 

3539 should_suppress = ( 

3540 (suppression_config is True) 

3541 or (suppression_recommended and (suppression_config is not False)) 

3542 ) and has_any_completions(result) 

3543 

3544 if should_suppress: 

3545 suppression_exceptions: set[str] = result.get( 

3546 "do_not_suppress", set() 

3547 ) 

3548 if isinstance(suppression_recommended, Iterable): 

3549 to_suppress = set(suppression_recommended) 

3550 else: 

3551 to_suppress = set(matchers) 

3552 suppressed_matchers = to_suppress - suppression_exceptions 

3553 

3554 new_results = {} 

3555 for previous_matcher_id, previous_result in results.items(): 

3556 if previous_matcher_id not in suppressed_matchers: 

3557 new_results[previous_matcher_id] = previous_result 

3558 results = new_results 

3559 

3560 results[matcher_id] = result 

3561 

3562 _, matches = self._arrange_and_extract( 

3563 results, 

3564 # TODO: Jedi completions are not included in the legacy stateful API; was this deliberate or an omission? 

3565 # If it was an omission, we can remove the filtering step; otherwise remove this comment. 

3566 skip_matchers={jedi_matcher_id}, 

3567 abort_if_offset_changes=False, 

3568 ) 

3569 

3570 # populate legacy stateful API 

3571 self.matches = matches 

3572 

3573 return results 

3574 

3575 @staticmethod 

3576 def _deduplicate( 

3577 matches: Sequence[AnyCompletion], 

3578 ) -> Iterable[AnyCompletion]: 
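# Keep the first completion seen for each text; a later duplicate replaces
# it only when the kept one has an unknown type, so typed completions win.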

3579 filtered_matches: dict[str, AnyCompletion] = {} 

3580 for match in matches: 

3581 text = match.text 

3582 if ( 

3583 text not in filtered_matches 

3584 or filtered_matches[text].type == _UNKNOWN_TYPE 

3585 ): 

3586 filtered_matches[text] = match 

3587 

3588 return filtered_matches.values() 

3589 

3590 @staticmethod 

3591 def _sort(matches: Sequence[AnyCompletion]): 

3592 return sorted(matches, key=lambda x: completions_sorting_key(x.text)) 

3593 

3594 @context_matcher() 

3595 def fwd_unicode_matcher(self, context: CompletionContext): 

3596 """Same as :any:`fwd_unicode_match`, but adopted to new Matcher API.""" 

3597 # TODO: use `context.limit` to terminate early once we matched the maximum 

3598 # number that will be used downstream; can be added as an optional to 

3599 # `fwd_unicode_match(text: str, limit: int = None)` or we could re-implement here. 

3600 fragment, matches = self.fwd_unicode_match(context.text_until_cursor) 

3601 return _convert_matcher_v1_result_to_v2( 

3602 matches, type="unicode", fragment=fragment, suppress_if_matches=True 

3603 ) 

3604 

3605 def fwd_unicode_match(self, text: str) -> tuple[str, Sequence[str]]: 

3606 """ 

3607 Forward match a string starting with a backslash with a list of 

3608 potential Unicode completions. 

3609 

3610 Will compute the list of Unicode character names on first call and cache it. 

3611 

3612 .. deprecated:: 8.6 

3613 You can use :meth:`fwd_unicode_matcher` instead. 

3614 

3615 Returns 

3616 ------- 

3617 A tuple with: 

3618 - the matched text (empty if there are no matches) 

3619 - a list of potential completions (an empty tuple otherwise) 

3620 """ 

3621 # TODO: self.unicode_names is a list of ~100k elements that we traverse on each call. 

3622 # We could do a faster match using a Trie. 

3623 

3624 # Using pygtrie the following seems to work: 

3625 

3626 # s = PrefixSet() 

3627 

3628 # for c in range(0,0x10FFFF + 1): 

3629 # try: 

3630 # s.add(unicodedata.name(chr(c))) 

3631 # except ValueError: 

3632 # pass 

3633 # [''.join(k) for k in s.iter(prefix)] 

3634 

3635 # But this needs to be timed, and it adds an extra dependency. 

3636 

3637 slashpos = text.rfind('\\') 

3638 # if the text contains a backslash 

3639 if slashpos > -1: 

3640 # PERF: It's important that we don't access self._unicode_names 

3641 # until we're inside this if-block. _unicode_names is lazily 

3642 # initialized, and it takes a user-noticeable amount of time to 

3643 # initialize it, so we don't want to initialize it unless we're 

3644 # actually going to use it. 

3645 s = text[slashpos + 1 :] 

3646 sup = s.upper() 

3647 candidates = [x for x in self.unicode_names if x.startswith(sup)] 

3648 if candidates: 

3649 return s, candidates 

3650 candidates = [x for x in self.unicode_names if sup in x] 

3651 if candidates: 

3652 return s, candidates 

3653 splitsup = sup.split(" ") 

3654 candidates = [ 

3655 x for x in self.unicode_names if all(u in x for u in splitsup) 

3656 ] 

3657 if candidates: 

3658 return s, candidates 

3659 

3660 return "", () 

3661 

3662 # if the text contains no backslash 

3663 else: 

3664 return '', () 

3665 
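# Illustrative sketch, not part of completer.py; the exact candidate list
# depends on the Unicode database shipped with Python:
#
#     >>> ip.Completer.fwd_unicode_match("\\GREEK SMALL LETTER ALP")
#     ('GREEK SMALL LETTER ALP', ['GREEK SMALL LETTER ALPHA', ...])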

3666 @property 

3667 def unicode_names(self) -> list[str]: 

3668 """List of names of unicode code points that can be completed. 

3669 

3670 The list is lazily initialized on first access. 

3671 """ 

3672 if self._unicode_names is None: 

3673 # Computing names for the full code-point range would be slow; delegate 

3674 # to _unicode_name_compute, which only scans ranges known to contain named code points. 

3679 self._unicode_names = _unicode_name_compute(_UNICODE_RANGES) 

3680 

3681 return self._unicode_names 

3682 

3683 

3684def _unicode_name_compute(ranges: list[tuple[int, int]]) -> list[str]: 

3685 names = [] 

3686 for start, stop in ranges: 

3687 for c in range(start, stop): 

3688 try: 

3689 names.append(unicodedata.name(chr(c))) 

3690 except ValueError: 

3691 pass 

3692 return names
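# Illustrative sketch, not part of completer.py: over a small range the
# helper above simply collects the standard character names, e.g.
#
#     >>> _unicode_name_compute([(0x3B1, 0x3B4)])
#     ['GREEK SMALL LETTER ALPHA', 'GREEK SMALL LETTER BETA', 'GREEK SMALL LETTER GAMMA']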