Coverage for /pythoncovmergedfiles/medio/medio/usr/local/lib/python3.11/site-packages/IPython/core/completer.py: 20%


1"""Completion for IPython. 

2 

3This module started as a fork of the rlcompleter module in the Python standard

4library. The original enhancements made to rlcompleter have been sent

5upstream and were accepted as of Python 2.3.

6 

7This module now supports a wide variety of completion mechanisms, both

8for normal classic Python code and for IPython-specific

9syntax such as magics.

10 

11Latex and Unicode completion 

12============================ 

13 

14IPython and compatible frontends can not only complete your code, but can also

15help you input a wide range of characters. In particular, we allow you to insert

16a unicode character using the tab completion mechanism.

17 

18Forward latex/unicode completion 

19-------------------------------- 

20 

21Forward completion allows you to easily type a unicode character using its latex

22name or unicode long description. To do so, type a backslash followed by the

23relevant name and press :kbd:`Tab`:

24 

25 

26Using latex completion: 

27 

28.. code:: 

29 

30 \\alpha<tab> 

31 α 

32 

33or using unicode completion: 

34 

35 

36.. code:: 

37 

38 \\GREEK SMALL LETTER ALPHA<tab> 

39 α 

40 

41 

42Only valid Python identifiers will complete. Combining characters (like arrows or

43dots) are also available; unlike latex, they need to be put after their

44counterpart, that is to say, ``F\\\\vec<tab>`` is correct, not ``\\\\vec<tab>F``.

45 

46Some browsers are known to display combining characters incorrectly. 

47 

48Backward latex completion 

49------------------------- 

50 

51It is sometimes challenging to know how to type a character. If you are using

52IPython or any compatible frontend, you can prepend a backslash to the character

53and press :kbd:`Tab` to expand it to its latex form.

54 

55.. code:: 

56 

57 \\α<tab> 

58 \\alpha 

59 

60 

61Both forward and backward completions can be deactivated by setting the 

62:std:configtrait:`Completer.backslash_combining_completions` option to 

63``False``. 

64 

65 

66Experimental 

67============ 

68 

69Starting with IPython 6.0, this module can make use of the Jedi library to

70generate completions, both using static analysis of the code and by dynamically

71inspecting multiple namespaces. Jedi is an autocompletion and static analysis

72library for Python. The APIs attached to this new mechanism are unstable and will

73raise unless used in a :any:`provisionalcompleter` context manager.

74 

75You will find that the following are experimental: 

76 

77 - :any:`provisionalcompleter` 

78 - :any:`IPCompleter.completions` 

79 - :any:`Completion` 

80 - :any:`rectify_completions` 

81 

82.. note:: 

83 

84 better name for :any:`rectify_completions` ? 

85 

86We welcome any feedback on these new APIs, and we also encourage you to try this

87module in debug mode (start IPython with ``--Completer.debug=True``) in order

88to have extra logging information if :any:`jedi` is crashing, or if the current

89IPython completer pending deprecations are returning results not yet handled

90by :any:`jedi`.

91 

92Using Jedi for tab completion allows snippets like the following to work without

93having to execute any code: 

94 

95 >>> myvar = ['hello', 42] 

96 ... myvar[1].bi<tab> 

97 

98Tab completion will be able to infer that ``myvar[1]`` is an integer without

99executing almost any code, unlike the deprecated :any:`IPCompleter.greedy`

100option.

101 

102Be sure to update :any:`jedi` to the latest stable version or to try the 

103current development version to get better completions. 

104 

105Matchers 

106======== 

107 

108All completion routines are implemented using the unified *Matchers* API.

109The matchers API is provisional and subject to change without notice. 

110 

111The built-in matchers include: 

112 

113- :any:`IPCompleter.dict_key_matcher`: dictionary key completions, 

114- :any:`IPCompleter.magic_matcher`: completions for magics, 

115- :any:`IPCompleter.unicode_name_matcher`, 

116 :any:`IPCompleter.fwd_unicode_matcher` 

117 and :any:`IPCompleter.latex_name_matcher`: see `Forward latex/unicode completion`_, 

118- :any:`back_unicode_name_matcher` and :any:`back_latex_name_matcher`: see `Backward latex completion`_, 

119- :any:`IPCompleter.file_matcher`: paths to files and directories, 

120- :any:`IPCompleter.python_func_kw_matcher` - function keywords, 

121- :any:`IPCompleter.python_matches` - globals and attributes (v1 API), 

122- ``IPCompleter.jedi_matcher`` - static analysis with Jedi, 

123- :any:`IPCompleter.custom_completer_matcher` - pluggable completer with a default

124 implementation in :any:`InteractiveShell` which uses the IPython hooks system

125 (`complete_command`) with string dispatch (including regular expressions).

126 Unlike other matchers, ``custom_completer_matcher`` will not suppress

127 Jedi results, to match behaviour in earlier IPython versions.

128 

129Custom matchers can be added by appending to the ``IPCompleter.custom_matchers`` list, as in the sketch below.

130 
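For example, a minimal v1-style matcher (a callable taking the token as a string
and returning a list of candidate strings) could be registered from within an
IPython session roughly as in this sketch; ``my_matcher`` and the candidate
names are made up for illustration:

.. code-block:: python

    def my_matcher(text: str) -> list[str]:
        # return the hypothetical candidates that start with the current token
        candidates = ["my_table", "my_view"]
        return [c for c in candidates if c.startswith(text)]

    get_ipython().Completer.custom_matchers.append(my_matcher)
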

131Matcher API 

132----------- 

133 

134Simplifying some details, the ``Matcher`` interface can be described as

135 

136.. code-block:: 

137 

138 MatcherAPIv1 = Callable[[str], list[str]] 

139 MatcherAPIv2 = Callable[[CompletionContext], SimpleMatcherResult] 

140 

141 Matcher = MatcherAPIv1 | MatcherAPIv2 

142 

143The ``MatcherAPIv1`` reflects the matcher API as available prior to IPython 8.6.0 

144and remains supported as the simplest way of generating completions. This is also

145currently the only API supported by the IPython hooks system `complete_command`. 

146 

147To distinguish between matcher versions, the ``matcher_api_version`` attribute is used.

148More precisely, the API allows ``matcher_api_version`` to be omitted for v1 Matchers,

149and requires a literal ``2`` for v2 Matchers. 

150 

151Once the API stabilises, future versions may relax the requirement for specifying

152``matcher_api_version`` by switching to :any:`functools.singledispatch`, therefore 

153please do not rely on the presence of ``matcher_api_version`` for any purposes. 

154 
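As an illustrative sketch only, a v2 matcher can be written with the
:any:`context_matcher` decorator defined in this module and return a dictionary
of the ``SimpleMatcherResult`` shape; the matcher name and candidate below are
invented for the example:

.. code-block:: python

    @context_matcher()
    def shout_matcher(context: CompletionContext) -> SimpleMatcherResult:
        # offer the upper-cased token as a single (made-up) completion candidate
        token = context.token
        return {
            "completions": [SimpleCompletion(token.upper(), type="example")],
            "suppress": False,
        }
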

155Suppression of competing matchers 

156--------------------------------- 

157 

158By default results from all matchers are combined, in the order determined by 

159their priority. Matchers can request to suppress results from subsequent 

160matchers by setting ``suppress`` to ``True`` in the ``MatcherResult``. 

161 

162When multiple matchers simultaneously request suppression, the results from

163the matcher with the higher priority will be returned.

164 

165Sometimes it is desirable to suppress most but not all other matchers; 

166this can be achieved by adding a set of identifiers of matchers which 

167should not be suppressed to the ``MatcherResult`` under the ``do_not_suppress`` key.

168 

169The suppression behaviour is user-configurable via

170:std:configtrait:`IPCompleter.suppress_competing_matchers`.
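
For illustration, a result dictionary requesting suppression of every other
matcher except the built-in file matcher might look like the following sketch
(the completion candidate is an example only):

.. code-block:: python

    {
        "completions": [SimpleCompletion("example", type="keyword")],
        "suppress": True,
        "do_not_suppress": {"IPCompleter.file_matcher"},
    }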

171""" 

172 

173 

174# Copyright (c) IPython Development Team. 

175# Distributed under the terms of the Modified BSD License. 

176# 

177# Some of this code originated from rlcompleter in the Python standard library 

178# Copyright (C) 2001 Python Software Foundation, www.python.org 

179 

180from __future__ import annotations 

181import builtins as builtin_mod 

182import enum 

183import glob 

184import inspect 

185import itertools 

186import keyword 

187import ast 

188import os 

189import re 

190import string 

191import sys 

192import tokenize 

193import time 

194import unicodedata 

195import uuid 

196import warnings 

197from ast import literal_eval 

198from collections import defaultdict 

199from contextlib import contextmanager 

200from dataclasses import dataclass 

201from functools import cached_property, partial 

202from types import SimpleNamespace 

203from typing import ( 

204 Iterable, 

205 Iterator, 

206 Union, 

207 Any, 

208 Sequence, 

209 Optional, 

210 TYPE_CHECKING, 

211 Sized, 

212 TypeVar, 

213 Literal, 

214) 

215 

216from IPython.core.guarded_eval import guarded_eval, EvaluationContext 

217from IPython.core.error import TryNext 

218from IPython.core.inputtransformer2 import ESC_MAGIC 

219from IPython.core.latex_symbols import latex_symbols, reverse_latex_symbol 

220from IPython.testing.skipdoctest import skip_doctest 

221from IPython.utils import generics 

222from IPython.utils.PyColorize import theme_table 

223from IPython.utils.decorators import sphinx_options 

224from IPython.utils.dir2 import dir2, get_real_method 

225from IPython.utils.docs import GENERATING_DOCUMENTATION 

226from IPython.utils.path import ensure_dir_exists 

227from IPython.utils.process import arg_split 

228from traitlets import ( 

229 Bool, 

230 Enum, 

231 Int, 

232 List as ListTrait, 

233 Unicode, 

234 Dict as DictTrait, 

235 DottedObjectName, 

236 Union as UnionTrait, 

237 observe, 

238) 

239from traitlets.config.configurable import Configurable 

240from traitlets.utils.importstring import import_item 

241 

242import __main__ 

243 

244from typing import cast 

245 

246if sys.version_info < (3, 12): 

247 from typing_extensions import TypedDict, NotRequired, Protocol, TypeAlias, TypeGuard 

248else: 

249 from typing import TypedDict, NotRequired, Protocol, TypeAlias, TypeGuard 

250 

251 

252# skip module doctests

253__skip_doctest__ = True 

254 

255 

256try: 

257 import jedi 

258 jedi.settings.case_insensitive_completion = False 

259 import jedi.api.helpers 

260 import jedi.api.classes 

261 JEDI_INSTALLED = True 

262except ImportError: 

263 JEDI_INSTALLED = False 

264 

265 

266# ----------------------------------------------------------------------------- 

267# Globals 

268#----------------------------------------------------------------------------- 

269 

270# Ranges where we have most of the valid unicode names. We could be finer

271# grained, but is it worth it for performance? While unicode has characters in the

272# range 0..0x110000, we seem to have names for only about 10% of those (131808 as I

273# write this). With the ranges below we cover them all, with a density of ~67%; the

274# biggest next gap we could consider only adds about 1% density and there are 600

275# gaps that would need hard coding.

276_UNICODE_RANGES = [(32, 0x323B0), (0xE0001, 0xE01F0)] 

277 

278# Public API 

279__all__ = ["Completer", "IPCompleter"] 

280 

281if sys.platform == 'win32': 

282 PROTECTABLES = ' ' 

283else: 

284 PROTECTABLES = ' ()[]{}?=\\|;:\'#*"^&' 

285 

286# Protect against returning an enormous number of completions which the frontend 

287# may have trouble processing. 

288MATCHES_LIMIT = 500 

289 

290# Completion type reported when no type can be inferred. 

291_UNKNOWN_TYPE = "<unknown>" 

292 

293# sentinel value to signal lack of a match 

294not_found = object() 

295 

296class ProvisionalCompleterWarning(FutureWarning): 

297 """ 

298 Exception raised by an experimental feature in this module.

299 

300 Wrap code in :any:`provisionalcompleter` context manager if you 

301 are certain you want to use an unstable feature. 

302 """ 

303 pass 

304 

305warnings.filterwarnings('error', category=ProvisionalCompleterWarning) 

306 

307 

308@skip_doctest 

309@contextmanager 

310def provisionalcompleter(action='ignore'): 

311 """ 

312 This context manager has to be used in any place where unstable completer 

313 behavior and API may be called. 

314 

315 >>> with provisionalcompleter(): 

316 ... completer.do_experimental_things() # works 

317 

318 >>> completer.do_experimental_things() # raises. 

319 

320 .. note:: 

321 

322 Unstable 

323 

324 By using this context manager you agree that the API in use may change 

325 without warning, and that you won't complain if it does so.

326 

327 You also understand that, if the API is not to your liking, you should report 

328 a bug to explain your use case upstream. 

329 

330 We'll be happy to get your feedback, feature requests, and improvements on 

331 any of the unstable APIs! 

332 """ 

333 with warnings.catch_warnings(): 

334 warnings.filterwarnings(action, category=ProvisionalCompleterWarning) 

335 yield 

336 

337 

338def has_open_quotes(s: str) -> Union[str, bool]: 

339 """Return whether a string has open quotes. 

340 

341 This simply checks whether the number of quote characters of either type in

342 the string is odd. 

343 

344 Returns 

345 ------- 

346 If there is an open quote, the quote character is returned. Else, return 

347 False. 
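
 Examples
 --------
 A quick sketch of the expected behaviour::

     >>> has_open_quotes("print('hello")
     "'"
     >>> has_open_quotes("print('hello')")
     False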

348 """ 

349 # We check " first, then ', so complex cases with nested quotes will get 

350 # the " to take precedence. 

351 if s.count('"') % 2: 

352 return '"' 

353 elif s.count("'") % 2: 

354 return "'" 

355 else: 

356 return False 

357 

358 

359def protect_filename(s: str, protectables: str = PROTECTABLES) -> str: 

360 """Escape a string to protect certain characters.""" 

361 if set(s) & set(protectables): 

362 if sys.platform == "win32": 

363 return '"' + s + '"' 

364 else: 

365 return "".join(("\\" + c if c in protectables else c) for c in s) 

366 else: 

367 return s 

368 

369 

370def expand_user(path: str) -> tuple[str, bool, str]: 

371 """Expand ``~``-style usernames in strings. 

372 

373 This is similar to :func:`os.path.expanduser`, but it computes and returns 

374 extra information that will be useful if the input was being used in 

375 computing completions, and you wish to return the completions with the 

376 original '~' instead of its expanded value. 

377 

378 Parameters 

379 ---------- 

380 path : str 

381 String to be expanded. If no ~ is present, the output is the same as the 

382 input. 

383 

384 Returns 

385 ------- 

386 newpath : str 

387 Result of ~ expansion in the input path. 

388 tilde_expand : bool 

389 Whether any expansion was performed or not. 

390 tilde_val : str 

391 The value that ~ was replaced with. 
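
 Examples
 --------
 A rough sketch; the expanded value depends on the current user's home directory::

     >>> expand_user("~/notebooks")   # doctest: +SKIP
     ('/home/someuser/notebooks', True, '/home/someuser')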

392 """ 

393 # Default values 

394 tilde_expand = False 

395 tilde_val = '' 

396 newpath = path 

397 

398 if path.startswith('~'): 

399 tilde_expand = True 

400 rest = len(path)-1 

401 newpath = os.path.expanduser(path) 

402 if rest: 

403 tilde_val = newpath[:-rest] 

404 else: 

405 tilde_val = newpath 

406 

407 return newpath, tilde_expand, tilde_val 

408 

409 

410def compress_user(path:str, tilde_expand:bool, tilde_val:str) -> str: 

411 """Does the opposite of expand_user, with its outputs. 

412 """ 

413 if tilde_expand: 

414 return path.replace(tilde_val, '~') 

415 else: 

416 return path 

417 

418 

419def completions_sorting_key(word): 

420 """key for sorting completions 

421 

422 This does several things: 

423 

424 - Demote any completions starting with underscores to the end 

425 - Insert any %magic and %%cellmagic completions in the alphabetical order 

426 by their name 
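
 For example, ``sorted(['_private', 'zeta', '%%time', 'alpha'], key=completions_sorting_key)``

 gives ``['alpha', '%%time', 'zeta', '_private']``.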

427 """ 

428 prio1, prio2 = 0, 0 

429 

430 if word.startswith('__'): 

431 prio1 = 2 

432 elif word.startswith('_'): 

433 prio1 = 1 

434 

435 if word.endswith('='): 

436 prio1 = -1 

437 

438 if word.startswith('%%'): 

439 # If there's another % in there, this is something else, so leave it alone 

440 if "%" not in word[2:]: 

441 word = word[2:] 

442 prio2 = 2 

443 elif word.startswith('%'): 

444 if "%" not in word[1:]: 

445 word = word[1:] 

446 prio2 = 1 

447 

448 return prio1, word, prio2 

449 

450 

451class _FakeJediCompletion: 

452 """ 

453 This is a workaround to communicate to the UI that Jedi has crashed and to 

454 report a bug. Will be used only if :any:`IPCompleter.debug` is set to true.

455 

456 Added in IPython 6.0 so should likely be removed for 7.0 

457 

458 """ 

459 

460 def __init__(self, name): 

461 

462 self.name = name 

463 self.complete = name 

464 self.type = 'crashed' 

465 self.name_with_symbols = name 

466 self.signature = "" 

467 self._origin = "fake" 

468 self.text = "crashed" 

469 

470 def __repr__(self): 

471 return '<Fake completion object jedi has crashed>' 

472 

473 

474_JediCompletionLike = Union["jedi.api.Completion", _FakeJediCompletion] 

475 

476 

477class Completion: 

478 """ 

479 Completion object used and returned by IPython completers. 

480 

481 .. warning:: 

482 

483 Unstable 

484 

485 This class is unstable: the API may change without warning.

486 It will also raise unless used in the proper context manager.

487 

488 This acts as a middle-ground :any:`Completion` object between the

489 :any:`jedi.api.classes.Completion` object and the Prompt Toolkit completion

490 object. While Jedi needs a lot of information about the evaluator and how the

491 code should be run/inspected, Prompt Toolkit (and other frontends) mostly

492 needs user-facing information.

493 

494 - Which range should be replaced by what.

495 - Some metadata (like the completion type), or meta information to be displayed

496 to the user.

497 

498 For debugging purposes we can also store the origin of the completion (``jedi``,

499 ``IPython.python_matches``, ``IPython.magics_matches``...). 

500 """ 

501 

502 __slots__ = ['start', 'end', 'text', 'type', 'signature', '_origin'] 

503 

504 def __init__( 

505 self, 

506 start: int, 

507 end: int, 

508 text: str, 

509 *, 

510 type: Optional[str] = None, 

511 _origin="", 

512 signature="", 

513 ) -> None: 

514 warnings.warn( 

515 "``Completion`` is a provisional API (as of IPython 6.0). " 

516 "It may change without warnings. " 

517 "Use in corresponding context manager.", 

518 category=ProvisionalCompleterWarning, 

519 stacklevel=2, 

520 ) 

521 

522 self.start = start 

523 self.end = end 

524 self.text = text 

525 self.type = type 

526 self.signature = signature 

527 self._origin = _origin 

528 

529 def __repr__(self): 

530 return '<Completion start=%s end=%s text=%r type=%r, signature=%r,>' % \ 

531 (self.start, self.end, self.text, self.type or '?', self.signature or '?') 

532 

533 def __eq__(self, other) -> bool: 

534 """ 

535 Equality and hash do not take the type into account (as some completers may not

536 be able to infer the type), but are used to (partially) de-duplicate

537 completions.

538 

539 Completely de-duplicating completions is a bit trickier than just

540 comparing, as it depends on surrounding text, which Completions are not

541 aware of.

542 """ 

543 return self.start == other.start and \ 

544 self.end == other.end and \ 

545 self.text == other.text 

546 

547 def __hash__(self): 

548 return hash((self.start, self.end, self.text)) 

549 

550 

551class SimpleCompletion: 

552 """Completion item to be included in the dictionary returned by new-style Matcher (API v2). 

553 

554 .. warning:: 

555 

556 Provisional 

557 

558 This class is used to describe the currently supported attributes of 

559 simple completion items, and any additional implementation details 

560 should not be relied on. Additional attributes may be included in 

561 future versions, and the meaning of ``text`` disambiguated from the current

562 dual meaning of "text to insert" and "text to use as a label".

563 """ 

564 

565 __slots__ = ["text", "type"] 

566 

567 def __init__(self, text: str, *, type: Optional[str] = None): 

568 self.text = text 

569 self.type = type 

570 

571 def __repr__(self): 

572 return f"<SimpleCompletion text={self.text!r} type={self.type!r}>" 

573 

574 

575class _MatcherResultBase(TypedDict): 

576 """Definition of dictionary to be returned by new-style Matcher (API v2).""" 

577 

578 #: Suffix of the provided ``CompletionContext.token``, if not given defaults to full token. 

579 matched_fragment: NotRequired[str] 

580 

581 #: Whether to suppress results from all other matchers (True), some 

582 #: matchers (set of identifiers) or none (False); default is False. 

583 suppress: NotRequired[Union[bool, set[str]]] 

584 

585 #: Identifiers of matchers which should NOT be suppressed when this matcher 

586 #: requests to suppress all other matchers; defaults to an empty set. 

587 do_not_suppress: NotRequired[set[str]] 

588 

589 #: Are completions already ordered and should be left as-is? default is False. 

590 ordered: NotRequired[bool] 

591 

592 

593@sphinx_options(show_inherited_members=True, exclude_inherited_from=["dict"]) 

594class SimpleMatcherResult(_MatcherResultBase, TypedDict): 

595 """Result of new-style completion matcher.""" 

596 

597 # note: TypedDict is added again to the inheritance chain 

598 # in order to get __orig_bases__ for documentation 

599 

600 #: List of candidate completions 

601 completions: Sequence[SimpleCompletion] | Iterator[SimpleCompletion] 

602 

603 

604class _JediMatcherResult(_MatcherResultBase): 

605 """Matching result returned by Jedi (will be processed differently)""" 

606 

607 #: list of candidate completions 

608 completions: Iterator[_JediCompletionLike] 

609 

610 

611AnyMatcherCompletion = Union[_JediCompletionLike, SimpleCompletion] 

612AnyCompletion = TypeVar("AnyCompletion", AnyMatcherCompletion, Completion) 

613 

614 

615@dataclass 

616class CompletionContext: 

617 """Completion context provided as an argument to matchers in the Matcher API v2.""" 

618 

619 # rationale: many legacy matchers relied on completer state (`self.text_until_cursor`) 

620 # which was not explicitly visible as an argument of the matcher, making any refactor 

621 # prone to errors; by explicitly passing `cursor_position` we can decouple the matchers 

622 # from the completer, and make substituting them in sub-classes easier. 

623 

624 #: Relevant fragment of code directly preceding the cursor. 

625 #: The extraction of token is implemented via splitter heuristic 

626 #: (following readline behaviour for legacy reasons), which is user configurable 

627 #: (by switching the greedy mode). 

628 token: str 

629 

630 #: The full available content of the editor or buffer 

631 full_text: str 

632 

633 #: Cursor position in the line (the same for ``full_text`` and ``text``). 

634 cursor_position: int 

635 

636 #: Cursor line in ``full_text``. 

637 cursor_line: int 

638 

639 #: The maximum number of completions that will be used downstream. 

640 #: Matchers can use this information to abort early. 

641 #: The built-in Jedi matcher is currently exempt from this limit.

642 #: If not given, all possible completions are returned.

643 limit: Optional[int] 

644 

645 @cached_property 

646 def text_until_cursor(self) -> str: 

647 return self.line_with_cursor[: self.cursor_position] 

648 

649 @cached_property 

650 def line_with_cursor(self) -> str: 

651 return self.full_text.split("\n")[self.cursor_line] 

652 

653 

654#: Matcher results for API v2. 

655MatcherResult = Union[SimpleMatcherResult, _JediMatcherResult] 

656 

657 

658class _MatcherAPIv1Base(Protocol): 

659 def __call__(self, text: str) -> list[str]: 

660 """Call signature.""" 

661 ... 

662 

663 #: Used to construct the default matcher identifier 

664 __qualname__: str 

665 

666 

667class _MatcherAPIv1Total(_MatcherAPIv1Base, Protocol): 

668 #: API version 

669 matcher_api_version: Optional[Literal[1]] 

670 

671 def __call__(self, text: str) -> list[str]: 

672 """Call signature.""" 

673 ... 

674 

675 

676#: Protocol describing Matcher API v1. 

677MatcherAPIv1: TypeAlias = Union[_MatcherAPIv1Base, _MatcherAPIv1Total] 

678 

679 

680class MatcherAPIv2(Protocol): 

681 """Protocol describing Matcher API v2.""" 

682 

683 #: API version 

684 matcher_api_version: Literal[2] = 2 

685 

686 def __call__(self, context: CompletionContext) -> MatcherResult: 

687 """Call signature.""" 

688 ... 

689 

690 #: Used to construct the default matcher identifier 

691 __qualname__: str 

692 

693 

694Matcher: TypeAlias = Union[MatcherAPIv1, MatcherAPIv2] 

695 

696 

697def _is_matcher_v1(matcher: Matcher) -> TypeGuard[MatcherAPIv1]: 

698 api_version = _get_matcher_api_version(matcher) 

699 return api_version == 1 

700 

701 

702def _is_matcher_v2(matcher: Matcher) -> TypeGuard[MatcherAPIv2]: 

703 api_version = _get_matcher_api_version(matcher) 

704 return api_version == 2 

705 

706 

707def _is_sizable(value: Any) -> TypeGuard[Sized]: 

708 """Determines whether objects is sizable""" 

709 return hasattr(value, "__len__") 

710 

711 

712def _is_iterator(value: Any) -> TypeGuard[Iterator]: 

713 """Determines whether objects is sizable""" 

714 return hasattr(value, "__next__") 

715 

716 

717def has_any_completions(result: MatcherResult) -> bool: 

718 """Check if any result includes any completions.""" 

719 completions = result["completions"] 

720 if _is_sizable(completions): 

721 return len(completions) != 0 

722 if _is_iterator(completions): 

723 try: 

724 old_iterator = completions 

725 first = next(old_iterator) 

726 result["completions"] = cast( 

727 Iterator[SimpleCompletion], 

728 itertools.chain([first], old_iterator), 

729 ) 

730 return True 

731 except StopIteration: 

732 return False 

733 raise ValueError( 

734 "Completions returned by matcher need to be an Iterator or a Sizable" 

735 ) 

736 

737 

738def completion_matcher( 

739 *, 

740 priority: Optional[float] = None, 

741 identifier: Optional[str] = None, 

742 api_version: int = 1, 

743) -> Callable[[Matcher], Matcher]: 

744 """Adds attributes describing the matcher. 

745 

746 Parameters 

747 ---------- 

748 priority : Optional[float] 

749 The priority of the matcher, determines the order of execution of matchers. 

750 Higher priority means that the matcher will be executed first. Defaults to 0. 

751 identifier : Optional[str] 

752 identifier of the matcher allowing users to modify the behaviour via traitlets, 

753 and also used for debugging (will be passed as ``origin`` with the completions).

754 

755 Defaults to matcher function's ``__qualname__`` (for example, 

756 ``IPCompleter.file_matcher`` for the built-in matcher defined

757 as a ``file_matcher`` method of the ``IPCompleter`` class). 

758 api_version: Optional[int] 

759 version of the Matcher API used by this matcher. 

760 Currently supported values are 1 and 2. 

761 Defaults to 1. 
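
 Examples
 --------
 A minimal sketch of decorating a v1-style matcher; the identifier and the
 candidate names below are made up for illustration::

     @completion_matcher(identifier="color_matcher", priority=50)
     def color_matcher(text: str) -> list[str]:
         colors = ["red", "green", "blue"]
         return [c for c in colors if c.startswith(text)]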

762 """ 

763 

764 def wrapper(func: Matcher): 

765 func.matcher_priority = priority or 0 # type: ignore 

766 func.matcher_identifier = identifier or func.__qualname__ # type: ignore 

767 func.matcher_api_version = api_version # type: ignore 

768 if TYPE_CHECKING: 

769 if api_version == 1: 

770 func = cast(MatcherAPIv1, func) 

771 elif api_version == 2: 

772 func = cast(MatcherAPIv2, func) 

773 return func 

774 

775 return wrapper 

776 

777 

778def _get_matcher_priority(matcher: Matcher): 

779 return getattr(matcher, "matcher_priority", 0) 

780 

781 

782def _get_matcher_id(matcher: Matcher): 

783 return getattr(matcher, "matcher_identifier", matcher.__qualname__) 

784 

785 

786def _get_matcher_api_version(matcher): 

787 return getattr(matcher, "matcher_api_version", 1) 

788 

789 

790context_matcher = partial(completion_matcher, api_version=2) 

791 

792 

793_IC = Iterable[Completion] 

794 

795 

796def _deduplicate_completions(text: str, completions: _IC)-> _IC: 

797 """ 

798 Deduplicate a set of completions. 

799 

800 .. warning:: 

801 

802 Unstable 

803 

804 This function is unstable, API may change without warning. 

805 

806 Parameters 

807 ---------- 

808 text : str 

809 text that should be completed. 

810 completions : Iterator[Completion] 

811 iterator over the completions to deduplicate 

812 

813 Yields 

814 ------ 

815 `Completions` objects 

816 Completions coming from multiple sources may be different but end up having

817 the same effect when applied to ``text``. If this is the case, this will 

818 consider completions as equal and only emit the first encountered. 

819 Not folded in `completions()` yet for debugging purposes, and to detect when

820 the IPython completer does return things that Jedi does not, but should be 

821 at some point. 

822 """ 

823 completions = list(completions) 

824 if not completions: 

825 return 

826 

827 new_start = min(c.start for c in completions) 

828 new_end = max(c.end for c in completions) 

829 

830 seen = set() 

831 for c in completions: 

832 new_text = text[new_start:c.start] + c.text + text[c.end:new_end] 

833 if new_text not in seen: 

834 yield c 

835 seen.add(new_text) 

836 

837 

838def rectify_completions(text: str, completions: _IC, *, _debug: bool = False) -> _IC: 

839 """ 

840 Rectify a set of completions to all have the same ``start`` and ``end`` 

841 

842 .. warning:: 

843 

844 Unstable 

845 

846 This function is unstable, API may change without warning. 

847 It will also raise unless used in the proper context manager.

848 

849 Parameters 

850 ---------- 

851 text : str 

852 text that should be completed. 

853 completions : Iterator[Completion] 

854 iterator over the completions to rectify 

855 _debug : bool 

856 Log failed completion 

857 

858 Notes 

859 ----- 

860 :any:`jedi.api.classes.Completion` s returned by Jedi may not have the same start and end, though 

861 the Jupyter Protocol requires them to behave like so. This will readjust 

862 the completion to have the same ``start`` and ``end`` by padding both 

863 extremities with surrounding text. 

864 

865 During stabilisation this should support a ``_debug`` option to log which

866 completions are returned by the IPython completer and not found in Jedi, in

867 order to make upstream bug reports.

868 """ 

869 warnings.warn("`rectify_completions` is a provisional API (as of IPython 6.0). " 

870 "It may change without warnings. " 

871 "Use in corresponding context manager.", 

872 category=ProvisionalCompleterWarning, stacklevel=2) 

873 

874 completions = list(completions) 

875 if not completions: 

876 return 

877 starts = (c.start for c in completions) 

878 ends = (c.end for c in completions) 

879 

880 new_start = min(starts) 

881 new_end = max(ends) 

882 

883 seen_jedi = set() 

884 seen_python_matches = set() 

885 for c in completions: 

886 new_text = text[new_start:c.start] + c.text + text[c.end:new_end] 

887 if c._origin == 'jedi': 

888 seen_jedi.add(new_text) 

889 elif c._origin == "IPCompleter.python_matcher": 

890 seen_python_matches.add(new_text) 

891 yield Completion(new_start, new_end, new_text, type=c.type, _origin=c._origin, signature=c.signature) 

892 diff = seen_python_matches.difference(seen_jedi) 

893 if diff and _debug: 

894 print('IPython.python matches have extras:', diff) 

895 

896 

897if sys.platform == 'win32': 

898 DELIMS = ' \t\n`!@#$^&*()=+[{]}|;\'",<>?' 

899else: 

900 DELIMS = ' \t\n`!@#$^&*()=+[{]}\\|;:\'",<>?' 

901 

902GREEDY_DELIMS = ' =\r\n' 

903 

904 

905class CompletionSplitter: 

906 """An object to split an input line in a manner similar to readline. 

907 

908 By having our own implementation, we can expose readline-like completion in 

909 a uniform manner to all frontends. This object only needs to be given the 

910 line of text to be split and the cursor position on said line, and it 

911 returns the 'word' to be completed on at the cursor after splitting the 

912 entire line. 

913 

914 What characters are used as splitting delimiters can be controlled by 

915 setting the ``delims`` attribute (this is a property that internally 

916 automatically builds the necessary regular expression)""" 

917 

918 # Private interface 

919 

920 # A string of delimiter characters. The default value makes sense for 

921 # IPython's most typical usage patterns. 

922 _delims = DELIMS 

923 

924 # The expression (a normal string) to be compiled into a regular expression 

925 # for actual splitting. We store it as an attribute mostly for ease of 

926 # debugging, since this type of code can be so tricky to debug. 

927 _delim_expr = None 

928 

929 # The regular expression that does the actual splitting 

930 _delim_re = None 

931 

932 def __init__(self, delims=None): 

933 delims = CompletionSplitter._delims if delims is None else delims 

934 self.delims = delims 

935 

936 @property 

937 def delims(self): 

938 """Return the string of delimiter characters.""" 

939 return self._delims 

940 

941 @delims.setter 

942 def delims(self, delims): 

943 """Set the delimiters for line splitting.""" 

944 expr = '[' + ''.join('\\'+ c for c in delims) + ']' 

945 self._delim_re = re.compile(expr) 

946 self._delims = delims 

947 self._delim_expr = expr 

948 

949 def split_line(self, line, cursor_pos=None): 

950 """Split a line of text with a cursor at the given position. 

951 """ 

952 cut_line = line if cursor_pos is None else line[:cursor_pos] 

953 return self._delim_re.split(cut_line)[-1] 

954 

955 

956 

957class Completer(Configurable): 

958 

959 greedy = Bool( 

960 False, 

961 help="""Activate greedy completion. 

962 

963 .. deprecated:: 8.8 

964 Use :std:configtrait:`Completer.evaluation` and :std:configtrait:`Completer.auto_close_dict_keys` instead. 

965 

966 When enabled in IPython 8.8 or newer, changes configuration as follows: 

967 

968 - ``Completer.evaluation = 'unsafe'`` 

969 - ``Completer.auto_close_dict_keys = True`` 

970 """, 

971 ).tag(config=True) 

972 

973 evaluation = Enum( 

974 ("forbidden", "minimal", "limited", "unsafe", "dangerous"), 

975 default_value="limited", 

976 help="""Policy for code evaluation under completion. 

977 

978 Successive options enable more eager evaluation for better

979 completion suggestions, including for nested dictionaries, nested lists, 

980 or even results of function calls. 

981 Setting ``unsafe`` or higher can lead to evaluation of arbitrary user 

982 code on :kbd:`Tab` with potentially unwanted or dangerous side effects. 

983 

984 Allowed values are: 

985 

986 - ``forbidden``: no evaluation of code is permitted, 

987 - ``minimal``: evaluation of literals and access to built-in namespace; 

988 no item/attribute evaluation, no access to locals/globals, 

989 no evaluation of any operations or comparisons. 

990 - ``limited``: access to all namespaces, evaluation of hard-coded methods 

991 (for example: :any:`dict.keys`, :any:`object.__getattr__`, 

992 :any:`object.__getitem__`) on allow-listed objects (for example: 

993 :any:`dict`, :any:`list`, :any:`tuple`, ``pandas.Series``), 

994 - ``unsafe``: evaluation of all methods and function calls but not of 

995 syntax with side-effects like `del x`, 

996 - ``dangerous``: completely arbitrary evaluation; does not support auto-import. 

997 

998 To override specific elements of the policy, you can use ``policy_overrides`` trait. 

999 """, 

1000 ).tag(config=True) 

1001 

1002 use_jedi = Bool(default_value=JEDI_INSTALLED, 

1003 help="Experimental: Use Jedi to generate autocompletions. " 

1004 "Default to True if jedi is installed.").tag(config=True) 

1005 

1006 jedi_compute_type_timeout = Int(default_value=400, 

1007 help="""Experimental: restrict time (in milliseconds) during which Jedi can compute types. 

1008 Set to 0 to stop computing types. A non-zero value lower than 100ms may hurt

1009 performance by preventing jedi from building its cache.

1010 """).tag(config=True) 

1011 

1012 debug = Bool(default_value=False, 

1013 help='Enable debug for the Completer. Mostly print extra ' 

1014 'information for experimental jedi integration.')\ 

1015 .tag(config=True) 

1016 

1017 backslash_combining_completions = Bool(True, 

1018 help="Enable unicode completions, e.g. \\alpha<tab> . " 

1019 "Includes completion of latex commands, unicode names, and expanding " 

1020 "unicode characters back to latex commands.").tag(config=True) 

1021 

1022 auto_close_dict_keys = Bool( 

1023 False, 

1024 help=""" 

1025 Enable auto-closing dictionary keys. 

1026 

1027 When enabled string keys will be suffixed with a final quote 

1028 (matching the opening quote), tuple keys will also receive a 

1029 separating comma if needed, and keys which are final will 

1030 receive a closing bracket (``]``). 

1031 """, 

1032 ).tag(config=True) 

1033 

1034 policy_overrides = DictTrait( 

1035 default_value={}, 

1036 key_trait=Unicode(), 

1037 help="""Overrides for policy evaluation. 

1038 

1039 For example, to enable auto-import on completion specify: 

1040 

1041 .. code-block:: 

1042 

1043 ipython --Completer.policy_overrides='{"allow_auto_import": True}' --Completer.use_jedi=False 

1044 

1045 """, 

1046 ).tag(config=True) 

1047 

1048 auto_import_method = DottedObjectName( 

1049 default_value="importlib.import_module", 

1050 allow_none=True, 

1051 help="""\ 

1052 Provisional: 

1053 This is a provisional API in IPython 9.3, it may change without warnings. 

1054 

1055 A fully qualified path to an auto-import method for use by completer. 

1056 The function should take a single string and return a `ModuleType`, and

1057 can raise an `ImportError` exception if the module is not found.

1058 

1059 The default auto-import implementation does not populate the user namespace with the imported module. 

1060 """, 

1061 ).tag(config=True) 

1062 

1063 def __init__(self, namespace=None, global_namespace=None, **kwargs): 

1064 """Create a new completer for the command line. 

1065 

1066 Completer(namespace=ns, global_namespace=ns2) -> completer instance. 

1067 

1068 If unspecified, the default namespace where completions are performed 

1069 is __main__ (technically, __main__.__dict__). Namespaces should be 

1070 given as dictionaries. 

1071 

1072 An optional second namespace can be given. This allows the completer 

1073 to handle cases where both the local and global scopes need to be 

1074 distinguished. 

1075 """ 

1076 

1077 # Don't bind to namespace quite yet, but flag whether the user wants a 

1078 # specific namespace or to use __main__.__dict__. This will allow us 

1079 # to bind to __main__.__dict__ at completion time, not now. 

1080 if namespace is None: 

1081 self.use_main_ns = True 

1082 else: 

1083 self.use_main_ns = False 

1084 self.namespace = namespace 

1085 

1086 # The global namespace, if given, can be bound directly 

1087 if global_namespace is None: 

1088 self.global_namespace = {} 

1089 else: 

1090 self.global_namespace = global_namespace 

1091 

1092 self.custom_matchers = [] 

1093 

1094 super(Completer, self).__init__(**kwargs) 

1095 

1096 def complete(self, text, state): 

1097 """Return the next possible completion for 'text'. 

1098 

1099 This is called successively with state == 0, 1, 2, ... until it 

1100 returns None. The completion should begin with 'text'. 

1101 
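 A sketch of the readline-style protocol (the namespace below is invented)::

     >>> c = Completer(namespace={'spam': 1, 'spaghetti': 2})
     >>> c.complete('spa', 0)
     'spam'
     >>> c.complete('spa', 1)
     'spaghetti'
     >>> c.complete('spa', 2) is None
     True
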

1102 """ 

1103 if self.use_main_ns: 

1104 self.namespace = __main__.__dict__ 

1105 

1106 if state == 0: 

1107 if "." in text: 

1108 self.matches = self.attr_matches(text) 

1109 else: 

1110 self.matches = self.global_matches(text) 

1111 try: 

1112 return self.matches[state] 

1113 except IndexError: 

1114 return None 

1115 

1116 def global_matches(self, text): 

1117 """Compute matches when text is a simple name. 

1118 

1119 Return a list of all keywords, built-in functions and names currently 

1120 defined in self.namespace or self.global_namespace that match. 

1121 

1122 """ 

1123 matches = [] 

1124 match_append = matches.append 

1125 n = len(text) 

1126 for lst in [ 

1127 keyword.kwlist, 

1128 builtin_mod.__dict__.keys(), 

1129 list(self.namespace.keys()), 

1130 list(self.global_namespace.keys()), 

1131 ]: 

1132 for word in lst: 

1133 if word[:n] == text and word != "__builtins__": 

1134 match_append(word) 

1135 

1136 snake_case_re = re.compile(r"[^_]+(_[^_]+)+?\Z") 

1137 for lst in [list(self.namespace.keys()), list(self.global_namespace.keys())]: 

1138 shortened = { 

1139 "_".join([sub[0] for sub in word.split("_")]): word 

1140 for word in lst 

1141 if snake_case_re.match(word) 

1142 } 

1143 for word in shortened.keys(): 

1144 if word[:n] == text and word != "__builtins__": 

1145 match_append(shortened[word]) 

1146 return matches 

1147 

1148 def attr_matches(self, text): 

1149 """Compute matches when text contains a dot. 

1150 

1151 Assuming the text is of the form NAME.NAME....[NAME], and is 

1152 evaluatable in self.namespace or self.global_namespace, it will be 

1153 evaluated and its attributes (as revealed by dir()) are used as 

1154 possible completions. (For class instances, class members are 

1155 also considered.) 

1156 

1157 WARNING: this can still invoke arbitrary C code, if an object 

1158 with a __getattr__ hook is evaluated. 

1159 

1160 """ 

1161 return self._attr_matches(text)[0] 

1162 

1163 # we do simple attribute matching with normal identifiers.

1164 _ATTR_MATCH_RE = re.compile(r"(.+)\.(\w*)$") 

1165 

1166 def _strip_code_before_operator(self, code: str) -> str: 

1167 o_parens = {"(", "[", "{"} 

1168 c_parens = {")", "]", "}"} 

1169 

1170 # Dry-run tokenize to catch errors 

1171 try: 

1172 _ = list(tokenize.generate_tokens(iter(code.splitlines()).__next__)) 

1173 except tokenize.TokenError: 

1174 # Try trimming the expression and retrying 

1175 trimmed_code = self._trim_expr(code) 

1176 try: 

1177 _ = list( 

1178 tokenize.generate_tokens(iter(trimmed_code.splitlines()).__next__) 

1179 ) 

1180 code = trimmed_code 

1181 except tokenize.TokenError: 

1182 return code 

1183 

1184 tokens = _parse_tokens(code) 

1185 encountered_operator = False 

1186 after_operator = [] 

1187 nesting_level = 0 

1188 

1189 for t in tokens: 

1190 if t.type == tokenize.OP: 

1191 if t.string in o_parens: 

1192 nesting_level += 1 

1193 elif t.string in c_parens: 

1194 nesting_level -= 1 

1195 elif t.string != "." and nesting_level == 0: 

1196 encountered_operator = True 

1197 after_operator = [] 

1198 continue 

1199 

1200 if encountered_operator: 

1201 after_operator.append(t.string) 

1202 

1203 if encountered_operator: 

1204 return "".join(after_operator) 

1205 else: 

1206 return code 

1207 

1208 def _attr_matches( 

1209 self, text: str, include_prefix: bool = True 

1210 ) -> tuple[Sequence[str], str]: 

1211 m2 = self._ATTR_MATCH_RE.match(text) 

1212 if not m2: 

1213 return [], "" 

1214 expr, attr = m2.group(1, 2) 

1215 try: 

1216 expr = self._strip_code_before_operator(expr) 

1217 except tokenize.TokenError: 

1218 pass 

1219 

1220 obj = self._evaluate_expr(expr) 

1221 if obj is not_found: 

1222 return [], "" 

1223 

1224 if self.limit_to__all__ and hasattr(obj, '__all__'): 

1225 words = get__all__entries(obj) 

1226 else: 

1227 words = dir2(obj) 

1228 

1229 try: 

1230 words = generics.complete_object(obj, words) 

1231 except TryNext: 

1232 pass 

1233 except AssertionError: 

1234 raise 

1235 except Exception: 

1236 # Silence errors from completion function 

1237 pass 

1238 # Build match list to return 

1239 n = len(attr) 

1240 

1241 # Note: ideally we would just return words here and the prefix 

1242 # reconciliator would know that we intend to append to rather than 

1243 # replace the input text; this requires refactoring to return range 

1244 # which ought to be replaced (as does jedi). 

1245 if include_prefix: 

1246 tokens = _parse_tokens(expr) 

1247 rev_tokens = reversed(tokens) 

1248 skip_over = {tokenize.ENDMARKER, tokenize.NEWLINE} 

1249 name_turn = True 

1250 

1251 parts = [] 

1252 for token in rev_tokens: 

1253 if token.type in skip_over: 

1254 continue 

1255 if token.type == tokenize.NAME and name_turn: 

1256 parts.append(token.string) 

1257 name_turn = False 

1258 elif ( 

1259 token.type == tokenize.OP and token.string == "." and not name_turn 

1260 ): 

1261 parts.append(token.string) 

1262 name_turn = True 

1263 else: 

1264 # short-circuit if not empty nor name token 

1265 break 

1266 

1267 prefix_after_space = "".join(reversed(parts)) 

1268 else: 

1269 prefix_after_space = "" 

1270 

1271 return ( 

1272 ["%s.%s" % (prefix_after_space, w) for w in words if w[:n] == attr], 

1273 "." + attr, 

1274 ) 

1275 

1276 def _trim_expr(self, code: str) -> str: 

1277 """ 

1278 Trim the code until it is a valid expression and not a tuple; 

1279 

1280 return the trimmed expression for guarded_eval. 

1281 """ 

1282 while code: 

1283 code = code[1:] 

1284 try: 

1285 res = ast.parse(code) 

1286 except SyntaxError: 

1287 continue 

1288 

1289 assert res is not None 

1290 if len(res.body) != 1: 

1291 continue 

1292 expr = res.body[0].value 

1293 if isinstance(expr, ast.Tuple) and not code[-1] == ")": 

1294 # we skip implicit tuple, like when trimming `fun(a,b`<completion> 

1295 # as `a,b` would be a tuple, and we actually expect to get only `b` 

1296 continue 

1297 return code 

1298 return "" 

1299 

1300 def _evaluate_expr(self, expr): 

1301 obj = not_found 

1302 done = False 

1303 while not done and expr: 

1304 try: 

1305 obj = guarded_eval( 

1306 expr, 

1307 EvaluationContext( 

1308 globals=self.global_namespace, 

1309 locals=self.namespace, 

1310 evaluation=self.evaluation, 

1311 auto_import=self._auto_import, 

1312 policy_overrides=self.policy_overrides, 

1313 ), 

1314 ) 

1315 done = True 

1316 except Exception as e: 

1317 if self.debug: 

1318 print("Evaluation exception", e) 

1319 # trim the expression to remove any invalid prefix 

1320 # e.g. user starts `(d[`, so we get `expr = '(d'`, 

1321 # where parenthesis is not closed. 

1322 # TODO: make this faster by reusing parts of the computation? 

1323 expr = self._trim_expr(expr) 

1324 return obj 

1325 

1326 @property 

1327 def _auto_import(self): 

1328 if self.auto_import_method is None: 

1329 return None 

1330 if not hasattr(self, "_auto_import_func"): 

1331 self._auto_import_func = import_item(self.auto_import_method) 

1332 return self._auto_import_func 

1333 

1334def get__all__entries(obj): 

1335 """returns the strings in the __all__ attribute""" 

1336 try: 

1337 words = getattr(obj, '__all__') 

1338 except Exception: 

1339 return [] 

1340 

1341 return [w for w in words if isinstance(w, str)] 

1342 

1343 

1344class _DictKeyState(enum.Flag): 

1345 """Represent state of the key match in context of other possible matches. 

1346 

1347 - given `d1 = {'a': 1}` completion on `d1['<tab>` will yield `{'a': END_OF_ITEM}` as there is no tuple. 

1348 - given `d2 = {('a', 'b'): 1}`: `d2['a', '<tab>` will yield `{'b': END_OF_TUPLE}` as there are no tuple members to add beyond `'b'`.

1349 - given `d3 = {('a', 'b'): 1}`: `d3['<tab>` will yield `{'a': IN_TUPLE}` as `'a'` can be added. 

1350 - given `d4 = {'a': 1, ('a', 'b'): 2}`: `d4['<tab>` will yield `{'a': END_OF_ITEM & END_OF_TUPLE}` 

1351 """ 

1352 

1353 BASELINE = 0 

1354 END_OF_ITEM = enum.auto() 

1355 END_OF_TUPLE = enum.auto() 

1356 IN_TUPLE = enum.auto() 

1357 

1358 

1359def _parse_tokens(c): 

1360 """Parse tokens even if there is an error.""" 

1361 tokens = [] 

1362 token_generator = tokenize.generate_tokens(iter(c.splitlines()).__next__) 

1363 while True: 

1364 try: 

1365 tokens.append(next(token_generator)) 

1366 except tokenize.TokenError: 

1367 return tokens 

1368 except StopIteration: 

1369 return tokens 

1370 

1371 

1372def _match_number_in_dict_key_prefix(prefix: str) -> Union[str, None]: 

1373 """Match any valid Python numeric literal in a prefix of dictionary keys. 

1374 

1375 References: 

1376 - https://docs.python.org/3/reference/lexical_analysis.html#numeric-literals 

1377 - https://docs.python.org/3/library/tokenize.html 

1378 """ 

1379 if prefix[-1].isspace(): 

1380 # if user typed a space we do not have anything to complete 

1381 # even if there was a valid number token before 

1382 return None 

1383 tokens = _parse_tokens(prefix) 

1384 rev_tokens = reversed(tokens) 

1385 skip_over = {tokenize.ENDMARKER, tokenize.NEWLINE} 

1386 number = None 

1387 for token in rev_tokens: 

1388 if token.type in skip_over: 

1389 continue 

1390 if number is None: 

1391 if token.type == tokenize.NUMBER: 

1392 number = token.string 

1393 continue 

1394 else: 

1395 # we did not match a number 

1396 return None 

1397 if token.type == tokenize.OP: 

1398 if token.string == ",": 

1399 break 

1400 if token.string in {"+", "-"}: 

1401 number = token.string + number 

1402 else: 

1403 return None 

1404 return number 

1405 

1406 

1407_INT_FORMATS = { 

1408 "0b": bin, 

1409 "0o": oct, 

1410 "0x": hex, 

1411} 

1412 

1413 

1414def match_dict_keys( 

1415 keys: list[Union[str, bytes, tuple[Union[str, bytes], ...]]], 

1416 prefix: str, 

1417 delims: str, 

1418 extra_prefix: Optional[tuple[Union[str, bytes], ...]] = None, 

1419) -> tuple[str, int, dict[str, _DictKeyState]]: 

1420 """Used by dict_key_matches, matching the prefix to a list of keys 

1421 

1422 Parameters 

1423 ---------- 

1424 keys 

1425 list of keys in dictionary currently being completed. 

1426 prefix 

1427 Part of the text already typed by the user. E.g. `mydict[b'fo` 

1428 delims 

1429 String of delimiters to consider when finding the current key. 

1430 extra_prefix : optional 

1431 Part of the text already typed in multi-key index cases. E.g. for 

1432 `mydict['foo', "bar", 'b`, this would be `('foo', 'bar')`. 

1433 

1434 Returns 

1435 ------- 

1436 A tuple of three elements: ``quote``, ``token_start``, ``matched``, with

1437 ``quote`` being the quote that needs to be used to close the current string,

1438 ``token_start`` the position where the replacement should start occurring, and

1439 ``matched`` a dictionary of replacement/completion keys, with values

1440 indicating the state of each key.

1441 """ 

1442 prefix_tuple = extra_prefix if extra_prefix else () 

1443 

1444 prefix_tuple_size = sum( 

1445 [ 

1446 # for pandas, do not count slices as taking space 

1447 not isinstance(k, slice) 

1448 for k in prefix_tuple 

1449 ] 

1450 ) 

1451 text_serializable_types = (str, bytes, int, float, slice) 

1452 

1453 def filter_prefix_tuple(key): 

1454 # Reject too short keys 

1455 if len(key) <= prefix_tuple_size: 

1456 return False 

1457 # Reject keys which cannot be serialised to text 

1458 for k in key: 

1459 if not isinstance(k, text_serializable_types): 

1460 return False 

1461 # Reject keys that do not match the prefix 

1462 for k, pt in zip(key, prefix_tuple): 

1463 if k != pt and not isinstance(pt, slice): 

1464 return False 

1465 # All checks passed! 

1466 return True 

1467 

1468 filtered_key_is_final: dict[Union[str, bytes, int, float], _DictKeyState] = ( 

1469 defaultdict(lambda: _DictKeyState.BASELINE) 

1470 ) 

1471 

1472 for k in keys: 

1473 # If at least one of the matches is not final, mark as undetermined. 

1474 # This can happen with `d = {111: 'b', (111, 222): 'a'}` where 

1475 # `111` appears final on first match but is not final on the second. 

1476 

1477 if isinstance(k, tuple): 

1478 if filter_prefix_tuple(k): 

1479 key_fragment = k[prefix_tuple_size] 

1480 filtered_key_is_final[key_fragment] |= ( 

1481 _DictKeyState.END_OF_TUPLE 

1482 if len(k) == prefix_tuple_size + 1 

1483 else _DictKeyState.IN_TUPLE 

1484 ) 

1485 elif prefix_tuple_size > 0: 

1486 # we are completing a tuple but this key is not a tuple, 

1487 # so we should ignore it 

1488 pass 

1489 else: 

1490 if isinstance(k, text_serializable_types): 

1491 filtered_key_is_final[k] |= _DictKeyState.END_OF_ITEM 

1492 

1493 filtered_keys = filtered_key_is_final.keys() 

1494 

1495 if not prefix: 

1496 return "", 0, {repr(k): v for k, v in filtered_key_is_final.items()} 

1497 

1498 quote_match = re.search("(?:\"|')", prefix) 

1499 is_user_prefix_numeric = False 

1500 

1501 if quote_match: 

1502 quote = quote_match.group() 

1503 valid_prefix = prefix + quote 

1504 try: 

1505 prefix_str = literal_eval(valid_prefix) 

1506 except Exception: 

1507 return "", 0, {} 

1508 else: 

1509 # If it does not look like a string, let's assume 

1510 # we are dealing with a number or variable. 

1511 number_match = _match_number_in_dict_key_prefix(prefix) 

1512 

1513 # We do not want the key matcher to suggest variable names so we yield: 

1514 if number_match is None: 

1515 # The alternative would be to assume that the user forgot the quote

1516 # and if the substring matches, suggest adding it at the start. 

1517 return "", 0, {} 

1518 

1519 prefix_str = number_match 

1520 is_user_prefix_numeric = True 

1521 quote = "" 

1522 

1523 pattern = '[^' + ''.join('\\' + c for c in delims) + ']*$' 

1524 token_match = re.search(pattern, prefix, re.UNICODE) 

1525 assert token_match is not None # silence mypy 

1526 token_start = token_match.start() 

1527 token_prefix = token_match.group() 

1528 

1529 matched: dict[str, _DictKeyState] = {} 

1530 

1531 str_key: Union[str, bytes] 

1532 

1533 for key in filtered_keys: 

1534 if isinstance(key, (int, float)): 

1535 # User typed a number but this key is not a number. 

1536 if not is_user_prefix_numeric: 

1537 continue 

1538 str_key = str(key) 

1539 if isinstance(key, int): 

1540 int_base = prefix_str[:2].lower() 

1541 # if user typed integer using binary/oct/hex notation: 

1542 if int_base in _INT_FORMATS: 

1543 int_format = _INT_FORMATS[int_base] 

1544 str_key = int_format(key) 

1545 else: 

1546 # User typed a string but this key is a number. 

1547 if is_user_prefix_numeric: 

1548 continue 

1549 str_key = key 

1550 try: 

1551 if not str_key.startswith(prefix_str): 

1552 continue 

1553 except (AttributeError, TypeError, UnicodeError): 

1554 # Python 3+ TypeError on b'a'.startswith('a') or vice-versa 

1555 continue 

1556 

1557 # reformat remainder of key to begin with prefix 

1558 rem = str_key[len(prefix_str) :] 

1559 # force repr wrapped in ' 

1560 rem_repr = repr(rem + '"') if isinstance(rem, str) else repr(rem + b'"') 

1561 rem_repr = rem_repr[1 + rem_repr.index("'"):-2] 

1562 if quote == '"': 

1563 # The entered prefix is quoted with ", 

1564 # but the match is quoted with '. 

1565 # A contained " hence needs escaping for comparison: 

1566 rem_repr = rem_repr.replace('"', '\\"') 

1567 

1568 # then reinsert prefix from start of token 

1569 match = "%s%s" % (token_prefix, rem_repr) 

1570 

1571 matched[match] = filtered_key_is_final[key] 

1572 return quote, token_start, matched 

1573 

1574 

1575def cursor_to_position(text:str, line:int, column:int)->int: 

1576 """ 

1577 Convert the (line,column) position of the cursor in text to an offset in a 

1578 string. 

1579 

1580 Parameters 

1581 ---------- 

1582 text : str 

1583 The text in which to calculate the cursor offset 

1584 line : int 

1585 Line of the cursor; 0-indexed 

1586 column : int 

1587 Column of the cursor 0-indexed 

1588 

1589 Returns 

1590 ------- 

1591 Position of the cursor in ``text``, 0-indexed. 

1592 

1593 See Also 

1594 -------- 

1595 position_to_cursor : reciprocal of this function 

1596 
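 Examples
 --------
 A small sketch::

     >>> cursor_to_position("ab\\ncd", 1, 1)
     4
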

1597 """ 

1598 lines = text.split('\n') 

1599 assert line <= len(lines), '{} <= {}'.format(str(line), str(len(lines))) 

1600 

1601 return sum(len(line) + 1 for line in lines[:line]) + column 

1602 

1603 

1604def position_to_cursor(text: str, offset: int) -> tuple[int, int]: 

1605 """ 

1606 Convert the position of the cursor in text (0-indexed) to a line

1607 number (0-indexed) and a column number (0-indexed) pair.

1608 

1609 Position should be a valid position in ``text``. 

1610 

1611 Parameters 

1612 ---------- 

1613 text : str 

1614 The text in which to calculate the cursor offset 

1615 offset : int 

1616 Position of the cursor in ``text``, 0-indexed. 

1617 

1618 Returns 

1619 ------- 

1620 (line, column) : (int, int) 

1621 Line of the cursor; 0-indexed, column of the cursor 0-indexed 

1622 

1623 See Also 

1624 -------- 

1625 cursor_to_position : reciprocal of this function 

1626 

1627 """ 

1628 

1629 assert 0 <= offset <= len(text) , "0 <= %s <= %s" % (offset , len(text)) 

1630 

1631 before = text[:offset] 

1632 blines = before.split('\n') # ! splitlines trims the trailing \n

1633 line = before.count('\n') 

1634 col = len(blines[-1]) 

1635 return line, col 

1636 

1637 

1638def _safe_isinstance(obj, module, class_name, *attrs): 

1639 """Checks if obj is an instance of module.class_name if loaded 

1640 """ 

1641 if module in sys.modules: 

1642 m = sys.modules[module] 

1643 for attr in [class_name, *attrs]: 

1644 m = getattr(m, attr) 

1645 return isinstance(obj, m) 

1646 

1647 

1648@context_matcher() 

1649def back_unicode_name_matcher(context: CompletionContext): 

1650 """Match Unicode characters back to Unicode name 

1651 

1652 Same as :any:`back_unicode_name_matches`, but adopted to new Matcher API. 

1653 """ 

1654 fragment, matches = back_unicode_name_matches(context.text_until_cursor) 

1655 return _convert_matcher_v1_result_to_v2( 

1656 matches, type="unicode", fragment=fragment, suppress_if_matches=True 

1657 ) 

1658 

1659 

1660def back_unicode_name_matches(text: str) -> tuple[str, Sequence[str]]: 

1661 """Match Unicode characters back to Unicode name 

1662 

1663 This does ``☃`` -> ``\\snowman`` 

1664 

1665    Note that snowman is not a valid python3 combining character, but it will still be expanded. 

1666    It will, however, not recombine back into the snowman character via the completion machinery. 

1667 

1668    Standard escape sequences such as \\n or \\b will not be back-completed either. 

1669 

1670 .. deprecated:: 8.6 

1671 You can use :meth:`back_unicode_name_matcher` instead. 

1672 

1673 Returns 

1674 ======= 

1675 

1676 Return a tuple with two elements: 

1677 

1678    - The Unicode character that was matched (preceded by a backslash), or an 

1679      empty string, 

1680    - a sequence of one element: the name of the matched Unicode character, 

1681      preceded by a backslash, or an empty sequence if there is no match. 

1682 """ 

1683 if len(text)<2: 

1684 return '', () 

1685 maybe_slash = text[-2] 

1686 if maybe_slash != '\\': 

1687 return '', () 

1688 

1689 char = text[-1] 

1690 # no expand on quote for completion in strings. 

1691 # nor backcomplete standard ascii keys 

1692 if char in string.ascii_letters or char in ('"',"'"): 

1693 return '', () 

1694 try : 

1695 unic = unicodedata.name(char) 

1696 return '\\'+char,('\\'+unic,) 

1697 except KeyError: 

1698 pass 

1699 return '', () 

1700 

1701 
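# Example (illustrative sketch): with the cursor just after a backslash-prefixed
# character, the character is mapped back to its Unicode name.
#
#   back_unicode_name_matches("x = '\\☃")   # -> ('\\☃', ('\\SNOWMAN',))
#   back_unicode_name_matches("x = '\\a")   # -> ('', ())  ascii letters are skipped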

1702@context_matcher() 

1703def back_latex_name_matcher(context: CompletionContext) -> SimpleMatcherResult: 

1704 """Match latex characters back to unicode name 

1705 

1706 This does ``\\ℵ`` -> ``\\aleph`` 

1707 """ 

1708 

1709 text = context.text_until_cursor 

1710 no_match = { 

1711 "completions": [], 

1712 "suppress": False, 

1713 } 

1714 

1715 if len(text)<2: 

1716 return no_match 

1717 maybe_slash = text[-2] 

1718 if maybe_slash != '\\': 

1719 return no_match 

1720 

1721 char = text[-1] 

1722 # no expand on quote for completion in strings. 

1723 # nor backcomplete standard ascii keys 

1724 if char in string.ascii_letters or char in ('"',"'"): 

1725 return no_match 

1726 try : 

1727 latex = reverse_latex_symbol[char] 

1728 # '\\' replace the \ as well 

1729 return { 

1730 "completions": [SimpleCompletion(text=latex, type="latex")], 

1731 "suppress": True, 

1732 "matched_fragment": "\\" + char, 

1733 } 

1734 except KeyError: 

1735 pass 

1736 

1737 return no_match 

1738 
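# Sketch (illustrative) of the result when the text until the cursor ends with
# ``\ℵ``: the latex name is proposed, other matchers are suppressed, and the
# backslash + character fragment is what gets replaced.
#
#   {
#       "completions": [SimpleCompletion(text="\\aleph", type="latex")],
#       "suppress": True,
#       "matched_fragment": "\\ℵ",
#   }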

1739def _formatparamchildren(parameter) -> str: 

1740 """ 

1741 Get parameter name and value from Jedi Private API 

1742 

1743 Jedi does not expose a simple way to get `param=value` from its API. 

1744 

1745 Parameters 

1746 ---------- 

1747 parameter 

1748 Jedi's function `Param` 

1749 

1750 Returns 

1751 ------- 

1752 A string like 'a', 'b=1', '*args', '**kwargs' 

1753 

1754 """ 

1755 description = parameter.description 

1756 if not description.startswith('param '): 

1757        raise ValueError('Jedi function parameter description has changed format. ' 

1758                         'Expected "param ...", found %r.' % description) 

1759 return description[6:] 

1760 

1761def _make_signature(completion)-> str: 

1762 """ 

1763 Make the signature from a jedi completion 

1764 

1765 Parameters 

1766 ---------- 

1767 completion : jedi.Completion 

1768 object does not complete a function type 

1769 

1770 Returns 

1771 ------- 

1772    a string consisting of the function signature, with the parentheses but 

1773 without the function name. example: 

1774 `(a, *args, b=1, **kwargs)` 

1775 

1776 """ 

1777 

1778 # it looks like this might work on jedi 0.17 

1779 if hasattr(completion, 'get_signatures'): 

1780 signatures = completion.get_signatures() 

1781 if not signatures: 

1782 return '(?)' 

1783 

1784 c0 = completion.get_signatures()[0] 

1785 return '('+c0.to_string().split('(', maxsplit=1)[1] 

1786 

1787 return '(%s)'% ', '.join([f for f in (_formatparamchildren(p) for signature in completion.get_signatures() 

1788 for p in signature.defined_names()) if f]) 

1789 

1790 

1791_CompleteResult = dict[str, MatcherResult] 

1792 

1793 

1794DICT_MATCHER_REGEX = re.compile( 

1795 r"""(?x) 

1796( # match dict-referring - or any get item object - expression 

1797 .+ 

1798) 

1799\[ # open bracket 

1800\s* # and optional whitespace 

1801# Capture any number of serializable objects (e.g. "a", "b", 'c') 

1802# and slices 

1803((?:(?: 

1804 (?: # closed string 

1805 [uUbB]? # string prefix (r not handled) 

1806 (?: 

1807 '(?:[^']|(?<!\\)\\')*' 

1808 | 

1809 "(?:[^"]|(?<!\\)\\")*" 

1810 ) 

1811 ) 

1812 | 

1813 # capture integers and slices 

1814 (?:[-+]?\d+)?(?::(?:[-+]?\d+)?){0,2} 

1815 | 

1816 # integer in bin/hex/oct notation 

1817 0[bBxXoO]_?(?:\w|\d)+ 

1818 ) 

1819 \s*,\s* 

1820)*) 

1821((?: 

1822 (?: # unclosed string 

1823 [uUbB]? # string prefix (r not handled) 

1824 (?: 

1825 '(?:[^']|(?<!\\)\\')* 

1826 | 

1827 "(?:[^"]|(?<!\\)\\")* 

1828 ) 

1829 ) 

1830 | 

1831 # unfinished integer 

1832 (?:[-+]?\d+) 

1833 | 

1834 # integer in bin/hex/oct notation 

1835 0[bBxXoO]_?(?:\w|\d)+ 

1836 ) 

1837)? 

1838$ 

1839""" 

1840) 

1841 

1842 
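# Example (illustrative sketch): the three capture groups are the subscripted
# expression, any already-closed keys of a tuple key (with their separators),
# and the still-open key prefix being completed.
#
#   DICT_MATCHER_REGEX.search("data['a', 'b").groups()
#   # -> ("data", "'a', ", "'b")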

1843def _convert_matcher_v1_result_to_v2_no_no( 

1844 matches: Sequence[str], 

1845 type: str, 

1846) -> SimpleMatcherResult: 

1847 """same as _convert_matcher_v1_result_to_v2 but fragment=None, and suppress_if_matches is False by construction""" 

1848 return SimpleMatcherResult( 

1849 completions=[SimpleCompletion(text=match, type=type) for match in matches], 

1850 suppress=False, 

1851 ) 

1852 

1853 

1854def _convert_matcher_v1_result_to_v2( 

1855 matches: Sequence[str], 

1856 type: str, 

1857 fragment: Optional[str] = None, 

1858 suppress_if_matches: bool = False, 

1859) -> SimpleMatcherResult: 

1860 """Utility to help with transition""" 

1861 result = { 

1862 "completions": [SimpleCompletion(text=match, type=type) for match in matches], 

1863 "suppress": (True if matches else False) if suppress_if_matches else False, 

1864 } 

1865 if fragment is not None: 

1866 result["matched_fragment"] = fragment 

1867 return cast(SimpleMatcherResult, result) 

1868 

1869 
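# Example (illustrative sketch): adapting a legacy list-of-strings matcher
# result to the dict-based v2 protocol.
#
#   _convert_matcher_v1_result_to_v2(
#       ["alpha", "beta"], type="latex", fragment="\\al", suppress_if_matches=True
#   )
#   # -> {"completions": [SimpleCompletion(text="alpha", type="latex"),
#   #                     SimpleCompletion(text="beta", type="latex")],
#   #     "suppress": True,
#   #     "matched_fragment": "\\al"}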

1870class IPCompleter(Completer): 

1871 """Extension of the completer class with IPython-specific features""" 

1872 

1873 @observe('greedy') 

1874 def _greedy_changed(self, change): 

1875 """update the splitter and readline delims when greedy is changed""" 

1876 if change["new"]: 

1877 self.evaluation = "unsafe" 

1878 self.auto_close_dict_keys = True 

1879 self.splitter.delims = GREEDY_DELIMS 

1880 else: 

1881 self.evaluation = "limited" 

1882 self.auto_close_dict_keys = False 

1883 self.splitter.delims = DELIMS 

1884 

1885 dict_keys_only = Bool( 

1886 False, 

1887 help=""" 

1888 Whether to show dict key matches only. 

1889 

1890 (disables all matchers except for `IPCompleter.dict_key_matcher`). 

1891 """, 

1892 ) 

1893 

1894 suppress_competing_matchers = UnionTrait( 

1895 [Bool(allow_none=True), DictTrait(Bool(None, allow_none=True))], 

1896 default_value=None, 

1897 help=""" 

1898 Whether to suppress completions from other *Matchers*. 

1899 

1900 When set to ``None`` (default) the matchers will attempt to auto-detect 

1901 whether suppression of other matchers is desirable. For example, at 

1902 the beginning of a line followed by `%` we expect a magic completion 

1903 to be the only applicable option, and after ``my_dict['`` we usually 

1904 expect a completion with an existing dictionary key. 

1905 

1906 If you want to disable this heuristic and see completions from all matchers, 

1907 set ``IPCompleter.suppress_competing_matchers = False``. 

1908 To disable the heuristic for specific matchers provide a dictionary mapping: 

1909 ``IPCompleter.suppress_competing_matchers = {'IPCompleter.dict_key_matcher': False}``. 

1910 

1911 Set ``IPCompleter.suppress_competing_matchers = True`` to limit 

1912 completions to the set of matchers with the highest priority; 

1913 this is equivalent to ``IPCompleter.merge_completions`` and 

1914 can be beneficial for performance, but will sometimes omit relevant 

1915 candidates from matchers further down the priority list. 

1916 """, 

1917 ).tag(config=True) 

1918 

1919 merge_completions = Bool( 

1920 True, 

1921 help="""Whether to merge completion results into a single list 

1922 

1923 If False, only the completion results from the first non-empty 

1924 completer will be returned. 

1925 

1926 As of version 8.6.0, setting the value to ``False`` is an alias for: 

1927        ``IPCompleter.suppress_competing_matchers = True``. 

1928 """, 

1929 ).tag(config=True) 

1930 

1931 disable_matchers = ListTrait( 

1932 Unicode(), 

1933 help="""List of matchers to disable. 

1934 

1935 The list should contain matcher identifiers (see :any:`completion_matcher`). 

1936 """, 

1937 ).tag(config=True) 

1938 

1939 omit__names = Enum( 

1940 (0, 1, 2), 

1941 default_value=2, 

1942 help="""Instruct the completer to omit private method names 

1943 

1944 Specifically, when completing on ``object.<tab>``. 

1945 

1946 When 2 [default]: all names that start with '_' will be excluded. 

1947 

1948 When 1: all 'magic' names (``__foo__``) will be excluded. 

1949 

1950 When 0: nothing will be excluded. 

1951 """ 

1952 ).tag(config=True) 

1953 limit_to__all__ = Bool(False, 

1954 help=""" 

1955 DEPRECATED as of version 5.0. 

1956 

1957 Instruct the completer to use __all__ for the completion 

1958 

1959 Specifically, when completing on ``object.<tab>``. 

1960 

1961 When True: only those names in obj.__all__ will be included. 

1962 

1963 When False [default]: the __all__ attribute is ignored 

1964 """, 

1965 ).tag(config=True) 

1966 

1967 profile_completions = Bool( 

1968 default_value=False, 

1969 help="If True, emit profiling data for completion subsystem using cProfile." 

1970 ).tag(config=True) 

1971 

1972 profiler_output_dir = Unicode( 

1973 default_value=".completion_profiles", 

1974 help="Template for path at which to output profile data for completions." 

1975 ).tag(config=True) 

1976 

1977 @observe('limit_to__all__') 

1978 def _limit_to_all_changed(self, change): 

1979 warnings.warn('`IPython.core.IPCompleter.limit_to__all__` configuration ' 

1980                      'value has been deprecated since IPython 5.0, will be made to have ' 

1981                      'no effect and then removed in a future version of IPython.', 

1982 UserWarning) 

1983 

1984 def __init__( 

1985 self, shell=None, namespace=None, global_namespace=None, config=None, **kwargs 

1986 ): 

1987 """IPCompleter() -> completer 

1988 

1989 Return a completer object. 

1990 

1991 Parameters 

1992 ---------- 

1993 shell 

1994 a pointer to the ipython shell itself. This is needed 

1995 because this completer knows about magic functions, and those can 

1996 only be accessed via the ipython instance. 

1997 namespace : dict, optional 

1998 an optional dict where completions are performed. 

1999 global_namespace : dict, optional 

2000 secondary optional dict for completions, to 

2001 handle cases (such as IPython embedded inside functions) where 

2002 both Python scopes are visible. 

2003 config : Config 

2004 traitlet's config object 

2005 **kwargs 

2006 passed to super class unmodified. 

2007 """ 

2008 

2009 self.magic_escape = ESC_MAGIC 

2010 self.splitter = CompletionSplitter() 

2011 

2012 # _greedy_changed() depends on splitter and readline being defined: 

2013 super().__init__( 

2014 namespace=namespace, 

2015 global_namespace=global_namespace, 

2016 config=config, 

2017 **kwargs, 

2018 ) 

2019 

2020 # List where completion matches will be stored 

2021 self.matches = [] 

2022 self.shell = shell 

2023 # Regexp to split filenames with spaces in them 

2024 self.space_name_re = re.compile(r'([^\\] )') 

2025 # Hold a local ref. to glob.glob for speed 

2026 self.glob = glob.glob 

2027 

2028 # Determine if we are running on 'dumb' terminals, like (X)Emacs 

2029 # buffers, to avoid completion problems. 

2030 term = os.environ.get('TERM','xterm') 

2031 self.dumb_terminal = term in ['dumb','emacs'] 

2032 

2033 # Special handling of backslashes needed in win32 platforms 

2034 if sys.platform == "win32": 

2035 self.clean_glob = self._clean_glob_win32 

2036 else: 

2037 self.clean_glob = self._clean_glob 

2038 

2039 #regexp to parse docstring for function signature 

2040 self.docstring_sig_re = re.compile(r'^[\w|\s.]+\(([^)]*)\).*') 

2041 self.docstring_kwd_re = re.compile(r'[\s|\[]*(\w+)(?:\s*=\s*.*)') 

2042 #use this if positional argument name is also needed 

2043 #= re.compile(r'[\s|\[]*(\w+)(?:\s*=?\s*.*)') 

2044 

2045 self.magic_arg_matchers = [ 

2046 self.magic_config_matcher, 

2047 self.magic_color_matcher, 

2048 ] 

2049 

2050 # This is set externally by InteractiveShell 

2051 self.custom_completers = None 

2052 

2053 # This is a list of names of unicode characters that can be completed 

2054 # into their corresponding unicode value. The list is large, so we 

2055 # lazily initialize it on first use. Consuming code should access this 

2056 # attribute through the `@unicode_names` property. 

2057 self._unicode_names = None 

2058 

2059 self._backslash_combining_matchers = [ 

2060 self.latex_name_matcher, 

2061 self.unicode_name_matcher, 

2062 back_latex_name_matcher, 

2063 back_unicode_name_matcher, 

2064 self.fwd_unicode_matcher, 

2065 ] 

2066 

2067 if not self.backslash_combining_completions: 

2068 for matcher in self._backslash_combining_matchers: 

2069 self.disable_matchers.append(_get_matcher_id(matcher)) 

2070 

2071 if not self.merge_completions: 

2072 self.suppress_competing_matchers = True 

2073 

2074 @property 

2075 def matchers(self) -> list[Matcher]: 

2076 """All active matcher routines for completion""" 

2077 if self.dict_keys_only: 

2078 return [self.dict_key_matcher] 

2079 

2080 if self.use_jedi: 

2081 return [ 

2082 *self.custom_matchers, 

2083 *self._backslash_combining_matchers, 

2084 *self.magic_arg_matchers, 

2085 self.custom_completer_matcher, 

2086 self.magic_matcher, 

2087 self._jedi_matcher, 

2088 self.dict_key_matcher, 

2089 self.file_matcher, 

2090 ] 

2091 else: 

2092 return [ 

2093 *self.custom_matchers, 

2094 *self._backslash_combining_matchers, 

2095 *self.magic_arg_matchers, 

2096 self.custom_completer_matcher, 

2097 self.dict_key_matcher, 

2098 self.magic_matcher, 

2099 self.python_matcher, 

2100 self.file_matcher, 

2101 self.python_func_kw_matcher, 

2102 ] 

2103 

2104 def all_completions(self, text: str) -> list[str]: 

2105 """ 

2106 Wrapper around the completion methods for the benefit of emacs. 

2107 """ 

2108 prefix = text.rpartition('.')[0] 

2109 with provisionalcompleter(): 

2110 return ['.'.join([prefix, c.text]) if prefix and self.use_jedi else c.text 

2111 for c in self.completions(text, len(text))] 

2112 

2113 return self.complete(text)[1] 

2114 

2115 def _clean_glob(self, text:str): 

2116 return self.glob("%s*" % text) 

2117 

2118 def _clean_glob_win32(self, text:str): 

2119 return [f.replace("\\","/") 

2120 for f in self.glob("%s*" % text)] 

2121 

2122 @context_matcher() 

2123 def file_matcher(self, context: CompletionContext) -> SimpleMatcherResult: 

2124 """Match filenames, expanding ~USER type strings. 

2125 

2126 Most of the seemingly convoluted logic in this completer is an 

2127 attempt to handle filenames with spaces in them. And yet it's not 

2128 quite perfect, because Python's readline doesn't expose all of the 

2129 GNU readline details needed for this to be done correctly. 

2130 

2131 For a filename with a space in it, the printed completions will be 

2132 only the parts after what's already been typed (instead of the 

2133 full completions, as is normally done). I don't think with the 

2134 current (as of Python 2.3) Python readline it's possible to do 

2135 better. 

2136 """ 

2137 # TODO: add a heuristic for suppressing (e.g. if it has OS-specific delimiter, 

2138 # starts with `/home/`, `C:\`, etc) 

2139 

2140 text = context.token 

2141 

2142 # chars that require escaping with backslash - i.e. chars 

2143 # that readline treats incorrectly as delimiters, but we 

2144 # don't want to treat as delimiters in filename matching 

2145 # when escaped with backslash 

2146 if text.startswith('!'): 

2147 text = text[1:] 

2148 text_prefix = u'!' 

2149 else: 

2150 text_prefix = u'' 

2151 

2152 text_until_cursor = self.text_until_cursor 

2153 # track strings with open quotes 

2154 open_quotes = has_open_quotes(text_until_cursor) 

2155 

2156 if '(' in text_until_cursor or '[' in text_until_cursor: 

2157 lsplit = text 

2158 else: 

2159 try: 

2160 # arg_split ~ shlex.split, but with unicode bugs fixed by us 

2161 lsplit = arg_split(text_until_cursor)[-1] 

2162 except ValueError: 

2163 # typically an unmatched ", or backslash without escaped char. 

2164 if open_quotes: 

2165 lsplit = text_until_cursor.split(open_quotes)[-1] 

2166 else: 

2167 return { 

2168 "completions": [], 

2169 "suppress": False, 

2170 } 

2171 except IndexError: 

2172 # tab pressed on empty line 

2173 lsplit = "" 

2174 

2175 if not open_quotes and lsplit != protect_filename(lsplit): 

2176 # if protectables are found, do matching on the whole escaped name 

2177 has_protectables = True 

2178 text0,text = text,lsplit 

2179 else: 

2180 has_protectables = False 

2181 text = os.path.expanduser(text) 

2182 

2183 if text == "": 

2184 return { 

2185 "completions": [ 

2186 SimpleCompletion( 

2187 text=text_prefix + protect_filename(f), type="path" 

2188 ) 

2189 for f in self.glob("*") 

2190 ], 

2191 "suppress": False, 

2192 } 

2193 

2194 # Compute the matches from the filesystem 

2195 if sys.platform == 'win32': 

2196 m0 = self.clean_glob(text) 

2197 else: 

2198 m0 = self.clean_glob(text.replace('\\', '')) 

2199 

2200 if has_protectables: 

2201 # If we had protectables, we need to revert our changes to the 

2202 # beginning of filename so that we don't double-write the part 

2203 # of the filename we have so far 

2204 len_lsplit = len(lsplit) 

2205 matches = [text_prefix + text0 + 

2206 protect_filename(f[len_lsplit:]) for f in m0] 

2207 else: 

2208 if open_quotes: 

2209 # if we have a string with an open quote, we don't need to 

2210 # protect the names beyond the quote (and we _shouldn't_, as 

2211 # it would cause bugs when the filesystem call is made). 

2212 matches = m0 if sys.platform == "win32" else\ 

2213 [protect_filename(f, open_quotes) for f in m0] 

2214 else: 

2215 matches = [text_prefix + 

2216 protect_filename(f) for f in m0] 

2217 

2218 # Mark directories in input list by appending '/' to their names. 

2219 return { 

2220 "completions": [ 

2221 SimpleCompletion(text=x + "/" if os.path.isdir(x) else x, type="path") 

2222 for x in matches 

2223 ], 

2224 "suppress": False, 

2225 } 

2226 

2227 @context_matcher() 

2228 def magic_matcher(self, context: CompletionContext) -> SimpleMatcherResult: 

2229 """Match magics.""" 

2230 

2231 # Get all shell magics now rather than statically, so magics loaded at 

2232 # runtime show up too. 

2233 text = context.token 

2234 lsm = self.shell.magics_manager.lsmagic() 

2235 line_magics = lsm['line'] 

2236 cell_magics = lsm['cell'] 

2237 pre = self.magic_escape 

2238 pre2 = pre+pre 

2239 

2240 explicit_magic = text.startswith(pre) 

2241 

2242 # Completion logic: 

2243 # - user gives %%: only do cell magics 

2244 # - user gives %: do both line and cell magics 

2245 # - no prefix: do both 

2246 # In other words, line magics are skipped if the user gives %% explicitly 

2247 # 

2248 # We also exclude magics that match any currently visible names: 

2249 # https://github.com/ipython/ipython/issues/4877, unless the user has 

2250 # typed a %: 

2251 # https://github.com/ipython/ipython/issues/10754 

2252 bare_text = text.lstrip(pre) 

2253 global_matches = self.global_matches(bare_text) 

2254 if not explicit_magic: 

2255 def matches(magic): 

2256 """ 

2257 Filter magics, in particular remove magics that match 

2258 a name present in global namespace. 

2259 """ 

2260 return ( magic.startswith(bare_text) and 

2261 magic not in global_matches ) 

2262 else: 

2263 def matches(magic): 

2264 return magic.startswith(bare_text) 

2265 

2266 completions = [pre2 + m for m in cell_magics if matches(m)] 

2267 if not text.startswith(pre2): 

2268 completions += [pre + m for m in line_magics if matches(m)] 

2269 

2270 is_magic_prefix = len(text) > 0 and text[0] == "%" 

2271 

2272 return { 

2273 "completions": [ 

2274 SimpleCompletion(text=comp, type="magic") for comp in completions 

2275 ], 

2276 "suppress": is_magic_prefix and len(completions) > 0, 

2277 } 

2278 

2279 @context_matcher() 

2280 def magic_config_matcher(self, context: CompletionContext) -> SimpleMatcherResult: 

2281 """Match class names and attributes for %config magic.""" 

2282 # NOTE: uses `line_buffer` equivalent for compatibility 

2283 matches = self.magic_config_matches(context.line_with_cursor) 

2284 return _convert_matcher_v1_result_to_v2_no_no(matches, type="param") 

2285 

2286 def magic_config_matches(self, text: str) -> list[str]: 

2287 """Match class names and attributes for %config magic. 

2288 

2289 .. deprecated:: 8.6 

2290 You can use :meth:`magic_config_matcher` instead. 

2291 """ 

2292 texts = text.strip().split() 

2293 

2294 if len(texts) > 0 and (texts[0] == 'config' or texts[0] == '%config'): 

2295 # get all configuration classes 

2296 classes = sorted(set([ c for c in self.shell.configurables 

2297 if c.__class__.class_traits(config=True) 

2298 ]), key=lambda x: x.__class__.__name__) 

2299 classnames = [ c.__class__.__name__ for c in classes ] 

2300 

2301 # return all classnames if config or %config is given 

2302 if len(texts) == 1: 

2303 return classnames 

2304 

2305 # match classname 

2306 classname_texts = texts[1].split('.') 

2307 classname = classname_texts[0] 

2308 classname_matches = [ c for c in classnames 

2309 if c.startswith(classname) ] 

2310 

2311 # return matched classes or the matched class with attributes 

2312 if texts[1].find('.') < 0: 

2313 return classname_matches 

2314 elif len(classname_matches) == 1 and \ 

2315 classname_matches[0] == classname: 

2316 cls = classes[classnames.index(classname)].__class__ 

2317 help = cls.class_get_help() 

2318 # strip leading '--' from cl-args: 

2319 help = re.sub(re.compile(r'^--', re.MULTILINE), '', help) 

2320 return [ attr.split('=')[0] 

2321 for attr in help.strip().splitlines() 

2322 if attr.startswith(texts[1]) ] 

2323 return [] 

2324 
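    # Example (illustrative sketch; the exact results depend on the
    # configurables registered on the shell):
    #
    #   self.magic_config_matches("%config IPComp")
    #   # -> ['IPCompleter']                 (matching class names)
    #   self.magic_config_matches("%config IPCompleter.gre")
    #   # -> ['IPCompleter.greedy']          (attributes of the resolved class)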

2325 @context_matcher() 

2326 def magic_color_matcher(self, context: CompletionContext) -> SimpleMatcherResult: 

2327 """Match color schemes for %colors magic.""" 

2328 text = context.line_with_cursor 

2329 texts = text.split() 

2330 if text.endswith(' '): 

2331 # .split() strips off the trailing whitespace. Add '' back 

2332 # so that: '%colors ' -> ['%colors', ''] 

2333 texts.append('') 

2334 

2335 if len(texts) == 2 and (texts[0] == 'colors' or texts[0] == '%colors'): 

2336 prefix = texts[1] 

2337 return SimpleMatcherResult( 

2338 completions=[ 

2339 SimpleCompletion(color, type="param") 

2340 for color in theme_table.keys() 

2341 if color.startswith(prefix) 

2342 ], 

2343 suppress=False, 

2344 ) 

2345 return SimpleMatcherResult( 

2346 completions=[], 

2347 suppress=False, 

2348 ) 

2349 

2350 @context_matcher(identifier="IPCompleter.jedi_matcher") 

2351 def _jedi_matcher(self, context: CompletionContext) -> _JediMatcherResult: 

2352 matches = self._jedi_matches( 

2353 cursor_column=context.cursor_position, 

2354 cursor_line=context.cursor_line, 

2355 text=context.full_text, 

2356 ) 

2357 return { 

2358 "completions": matches, 

2359 # static analysis should not suppress other matchers 

2360 "suppress": False, 

2361 } 

2362 

2363 def _jedi_matches( 

2364 self, cursor_column: int, cursor_line: int, text: str 

2365 ) -> Iterator[_JediCompletionLike]: 

2366 """ 

2367 Return a list of :any:`jedi.api.Completion`\\s object from a ``text`` and 

2368 cursor position. 

2369 

2370 Parameters 

2371 ---------- 

2372 cursor_column : int 

2373 column position of the cursor in ``text``, 0-indexed. 

2374 cursor_line : int 

2375 line position of the cursor in ``text``, 0-indexed 

2376 text : str 

2377 text to complete 

2378 

2379 Notes 

2380 ----- 

2381        If ``IPCompleter.debug`` is ``True``, this may return a :any:`_FakeJediCompletion` 

2382 object containing a string with the Jedi debug information attached. 

2383 

2384 .. deprecated:: 8.6 

2385 You can use :meth:`_jedi_matcher` instead. 

2386 """ 

2387 namespaces = [self.namespace] 

2388 if self.global_namespace is not None: 

2389 namespaces.append(self.global_namespace) 

2390 

2391 completion_filter = lambda x:x 

2392 offset = cursor_to_position(text, cursor_line, cursor_column) 

2393 # filter output if we are completing for object members 

2394 if offset: 

2395 pre = text[offset-1] 

2396 if pre == '.': 

2397 if self.omit__names == 2: 

2398 completion_filter = lambda c:not c.name.startswith('_') 

2399 elif self.omit__names == 1: 

2400 completion_filter = lambda c:not (c.name.startswith('__') and c.name.endswith('__')) 

2401 elif self.omit__names == 0: 

2402 completion_filter = lambda x:x 

2403 else: 

2404 raise ValueError("Don't understand self.omit__names == {}".format(self.omit__names)) 

2405 

2406 interpreter = jedi.Interpreter(text[:offset], namespaces) 

2407 try_jedi = True 

2408 

2409 try: 

2410 # find the first token in the current tree -- if it is a ' or " then we are in a string 

2411 completing_string = False 

2412 try: 

2413 first_child = next(c for c in interpreter._get_module().tree_node.children if hasattr(c, 'value')) 

2414 except StopIteration: 

2415 pass 

2416 else: 

2417 # note the value may be ', ", or it may also be ''' or """, or 

2418 # in some cases, """what/you/typed..., but all of these are 

2419 # strings. 

2420 completing_string = len(first_child.value) > 0 and first_child.value[0] in {"'", '"'} 

2421 

2422 # if we are in a string jedi is likely not the right candidate for 

2423 # now. Skip it. 

2424 try_jedi = not completing_string 

2425 except Exception as e: 

2426            # many things can go wrong; we are using a private API, so just don't crash. 

2427 if self.debug: 

2428 print("Error detecting if completing a non-finished string :", e, '|') 

2429 

2430 if not try_jedi: 

2431 return iter([]) 

2432 try: 

2433 return filter(completion_filter, interpreter.complete(column=cursor_column, line=cursor_line + 1)) 

2434 except Exception as e: 

2435 if self.debug: 

2436 return iter( 

2437 [ 

2438 _FakeJediCompletion( 

2439                        'Oops Jedi has crashed, please report a bug with the following:\n"""\n%s\n"""' 

2440 % (e) 

2441 ) 

2442 ] 

2443 ) 

2444 else: 

2445 return iter([]) 

2446 

2447 class _CompletionContextType(enum.Enum): 

2448 ATTRIBUTE = "attribute" # For attribute completion 

2449 GLOBAL = "global" # For global completion 

2450 

2451 def _determine_completion_context(self, line): 

2452 """ 

2453 Determine whether the cursor is in an attribute or global completion context. 

2454 """ 

2455 # Cursor in string/comment → GLOBAL. 

2456 is_string, is_in_expression = self._is_in_string_or_comment(line) 

2457 if is_string and not is_in_expression: 

2458 return self._CompletionContextType.GLOBAL 

2459 

2460 # If we're in a template string expression, handle specially 

2461 if is_string and is_in_expression: 

2462 # Extract the expression part - look for the last { that isn't closed 

2463 expr_start = line.rfind("{") 

2464 if expr_start >= 0: 

2465 # We're looking at the expression inside a template string 

2466 expr = line[expr_start + 1 :] 

2467 # Recursively determine the context of the expression 

2468 return self._determine_completion_context(expr) 

2469 

2470 # Handle plain number literals - should be global context 

2471 # Ex: 3. -42.14 but not 3.1. 

2472 if re.search(r"(?<!\w)(?<!\d\.)([-+]?\d+\.(\d+)?)(?!\w)$", line): 

2473 return self._CompletionContextType.GLOBAL 

2474 

2475 # Handle all other attribute matches np.ran, d[0].k, (a,b).count 

2476 chain_match = re.search(r".*(.+\.(?:[a-zA-Z]\w*)?)$", line) 

2477 if chain_match: 

2478 return self._CompletionContextType.ATTRIBUTE 

2479 

2480 return self._CompletionContextType.GLOBAL 

2481 

2482 def _is_in_string_or_comment(self, text): 

2483 """ 

2484 Determine if the cursor is inside a string or comment. 

2485 Returns (is_string, is_in_expression) tuple: 

2486 - is_string: True if in any kind of string 

2487 - is_in_expression: True if inside an f-string/t-string expression 

2488 """ 

2489 in_single_quote = False 

2490 in_double_quote = False 

2491 in_triple_single = False 

2492 in_triple_double = False 

2493 in_template_string = False # Covers both f-strings and t-strings 

2494 in_expression = False # For expressions in f/t-strings 

2495 expression_depth = 0 # Track nested braces in expressions 

2496 i = 0 

2497 

2498 while i < len(text): 

2499 # Check for f-string or t-string start 

2500 if ( 

2501 i + 1 < len(text) 

2502 and text[i] in ("f", "t") 

2503 and (text[i + 1] == '"' or text[i + 1] == "'") 

2504 and not ( 

2505 in_single_quote 

2506 or in_double_quote 

2507 or in_triple_single 

2508 or in_triple_double 

2509 ) 

2510 ): 

2511 in_template_string = True 

2512 i += 1 # Skip the 'f' or 't' 

2513 

2514 # Handle triple quotes 

2515 if i + 2 < len(text): 

2516 if ( 

2517 text[i : i + 3] == '"""' 

2518 and not in_single_quote 

2519 and not in_triple_single 

2520 ): 

2521 in_triple_double = not in_triple_double 

2522 if not in_triple_double: 

2523 in_template_string = False 

2524 i += 3 

2525 continue 

2526 if ( 

2527 text[i : i + 3] == "'''" 

2528 and not in_double_quote 

2529 and not in_triple_double 

2530 ): 

2531 in_triple_single = not in_triple_single 

2532 if not in_triple_single: 

2533 in_template_string = False 

2534 i += 3 

2535 continue 

2536 

2537 # Handle escapes 

2538 if text[i] == "\\" and i + 1 < len(text): 

2539 i += 2 

2540 continue 

2541 

2542 # Handle nested braces within f-strings 

2543 if in_template_string: 

2544 # Special handling for consecutive opening braces 

2545 if i + 1 < len(text) and text[i : i + 2] == "{{": 

2546 i += 2 

2547 continue 

2548 

2549 # Detect start of an expression 

2550 if text[i] == "{": 

2551 # Only increment depth and mark as expression if not already in an expression 

2552 # or if we're at a top-level nested brace 

2553 if not in_expression or (in_expression and expression_depth == 0): 

2554 in_expression = True 

2555 expression_depth += 1 

2556 i += 1 

2557 continue 

2558 

2559 # Detect end of an expression 

2560 if text[i] == "}": 

2561 expression_depth -= 1 

2562 if expression_depth <= 0: 

2563 in_expression = False 

2564 expression_depth = 0 

2565 i += 1 

2566 continue 

2567 

2568 in_triple_quote = in_triple_single or in_triple_double 

2569 

2570 # Handle quotes - also reset template string when closing quotes are encountered 

2571 if text[i] == '"' and not in_single_quote and not in_triple_quote: 

2572 in_double_quote = not in_double_quote 

2573 if not in_double_quote and not in_triple_quote: 

2574 in_template_string = False 

2575 elif text[i] == "'" and not in_double_quote and not in_triple_quote: 

2576 in_single_quote = not in_single_quote 

2577 if not in_single_quote and not in_triple_quote: 

2578 in_template_string = False 

2579 

2580 # Check for comment 

2581 if text[i] == "#" and not ( 

2582 in_single_quote or in_double_quote or in_triple_quote 

2583 ): 

2584 return True, False 

2585 

2586 i += 1 

2587 

2588 is_string = ( 

2589 in_single_quote or in_double_quote or in_triple_single or in_triple_double 

2590 ) 

2591 

2592 # Return tuple (is_string, is_in_expression) 

2593 return ( 

2594 is_string or (in_template_string and not in_expression), 

2595 in_expression and expression_depth > 0, 

2596 ) 

2597 

2598 @context_matcher() 

2599 def python_matcher(self, context: CompletionContext) -> SimpleMatcherResult: 

2600 """Match attributes or global python names""" 

2601 text = context.text_until_cursor 

2602 completion_type = self._determine_completion_context(text) 

2603 if completion_type == self._CompletionContextType.ATTRIBUTE: 

2604 try: 

2605 matches, fragment = self._attr_matches(text, include_prefix=False) 

2606 if text.endswith(".") and self.omit__names: 

2607 if self.omit__names == 1: 

2608 # true if txt is _not_ a __ name, false otherwise: 

2609 no__name = lambda txt: re.match(r".*\.__.*?__", txt) is None 

2610 else: 

2611 # true if txt is _not_ a _ name, false otherwise: 

2612 no__name = ( 

2613 lambda txt: re.match(r"\._.*?", txt[txt.rindex(".") :]) 

2614 is None 

2615 ) 

2616 matches = filter(no__name, matches) 

2617 return _convert_matcher_v1_result_to_v2( 

2618 matches, type="attribute", fragment=fragment 

2619 ) 

2620 except NameError: 

2621 # catches <undefined attributes>.<tab> 

2622 return SimpleMatcherResult(completions=[], suppress=False) 

2623 else: 

2624 matches = self.global_matches(context.token) 

2625 # TODO: maybe distinguish between functions, modules and just "variables" 

2626 return SimpleMatcherResult( 

2627 completions=[ 

2628 SimpleCompletion(text=match, type="variable") for match in matches 

2629 ], 

2630 suppress=False, 

2631 ) 

2632 

2633 @completion_matcher(api_version=1) 

2634 def python_matches(self, text: str) -> Iterable[str]: 

2635 """Match attributes or global python names. 

2636 

2637 .. deprecated:: 8.27 

2638 You can use :meth:`python_matcher` instead.""" 

2639 if "." in text: 

2640 try: 

2641 matches = self.attr_matches(text) 

2642 if text.endswith('.') and self.omit__names: 

2643 if self.omit__names == 1: 

2644 # true if txt is _not_ a __ name, false otherwise: 

2645 no__name = (lambda txt: 

2646 re.match(r'.*\.__.*?__',txt) is None) 

2647 else: 

2648 # true if txt is _not_ a _ name, false otherwise: 

2649 no__name = (lambda txt: 

2650 re.match(r'\._.*?',txt[txt.rindex('.'):]) is None) 

2651 matches = filter(no__name, matches) 

2652 except NameError: 

2653 # catches <undefined attributes>.<tab> 

2654 matches = [] 

2655 else: 

2656 matches = self.global_matches(text) 

2657 return matches 

2658 

2659 def _default_arguments_from_docstring(self, doc): 

2660 """Parse the first line of docstring for call signature. 

2661 

2662 Docstring should be of the form 'min(iterable[, key=func])\n'. 

2663 It can also parse cython docstring of the form 

2664 'Minuit.migrad(self, int ncall=10000, resume=True, int nsplit=1)'. 

2665 """ 

2666 if doc is None: 

2667 return [] 

2668 

2669        # care only about the first line 

2670 line = doc.lstrip().splitlines()[0] 

2671 

2672 #p = re.compile(r'^[\w|\s.]+\(([^)]*)\).*') 

2673 #'min(iterable[, key=func])\n' -> 'iterable[, key=func]' 

2674 sig = self.docstring_sig_re.search(line) 

2675 if sig is None: 

2676 return [] 

2677        # 'iterable[, key=func]' -> ['iterable[', ' key=func]'] 

2678 sig = sig.groups()[0].split(',') 

2679 ret = [] 

2680 for s in sig: 

2681 #re.compile(r'[\s|\[]*(\w+)(?:\s*=\s*.*)') 

2682 ret += self.docstring_kwd_re.findall(s) 

2683 return ret 

2684 
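    # Example (illustrative sketch): only parameters written in ``name=value``
    # form survive the default regexes; bare positional names are dropped.
    #
    #   self._default_arguments_from_docstring('min(iterable[, key=func])\n')
    #   # -> ['key']
    #   self._default_arguments_from_docstring(
    #       'Minuit.migrad(self, int ncall=10000, resume=True, int nsplit=1)')
    #   # -> ['ncall', 'resume', 'nsplit']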

2685 def _default_arguments(self, obj): 

2686 """Return the list of default arguments of obj if it is callable, 

2687 or empty list otherwise.""" 

2688 call_obj = obj 

2689 ret = [] 

2690 if inspect.isbuiltin(obj): 

2691 pass 

2692 elif not (inspect.isfunction(obj) or inspect.ismethod(obj)): 

2693 if inspect.isclass(obj): 

2694 #for cython embedsignature=True the constructor docstring 

2695 #belongs to the object itself not __init__ 

2696 ret += self._default_arguments_from_docstring( 

2697 getattr(obj, '__doc__', '')) 

2698 # for classes, check for __init__,__new__ 

2699 call_obj = (getattr(obj, '__init__', None) or 

2700 getattr(obj, '__new__', None)) 

2701 # for all others, check if they are __call__able 

2702 elif hasattr(obj, '__call__'): 

2703 call_obj = obj.__call__ 

2704 ret += self._default_arguments_from_docstring( 

2705 getattr(call_obj, '__doc__', '')) 

2706 

2707 _keeps = (inspect.Parameter.KEYWORD_ONLY, 

2708 inspect.Parameter.POSITIONAL_OR_KEYWORD) 

2709 

2710 try: 

2711 sig = inspect.signature(obj) 

2712 ret.extend(k for k, v in sig.parameters.items() if 

2713 v.kind in _keeps) 

2714 except ValueError: 

2715 pass 

2716 

2717 return list(set(ret)) 

2718 

2719 @context_matcher() 

2720 def python_func_kw_matcher(self, context: CompletionContext) -> SimpleMatcherResult: 

2721 """Match named parameters (kwargs) of the last open function.""" 

2722 matches = self.python_func_kw_matches(context.token) 

2723 return _convert_matcher_v1_result_to_v2_no_no(matches, type="param") 

2724 

2725 def python_func_kw_matches(self, text): 

2726 """Match named parameters (kwargs) of the last open function. 

2727 

2728 .. deprecated:: 8.6 

2729 You can use :meth:`python_func_kw_matcher` instead. 

2730 """ 

2731 

2732 if "." in text: # a parameter cannot be dotted 

2733 return [] 

2734 try: regexp = self.__funcParamsRegex 

2735 except AttributeError: 

2736 regexp = self.__funcParamsRegex = re.compile(r''' 

2737 '.*?(?<!\\)' | # single quoted strings or 

2738 ".*?(?<!\\)" | # double quoted strings or 

2739 \w+ | # identifier 

2740 \S # other characters 

2741 ''', re.VERBOSE | re.DOTALL) 

2742 # 1. find the nearest identifier that comes before an unclosed 

2743 # parenthesis before the cursor 

2744 # e.g. for "foo (1+bar(x), pa<cursor>,a=1)", the candidate is "foo" 

2745 tokens = regexp.findall(self.text_until_cursor) 

2746 iterTokens = reversed(tokens) 

2747 openPar = 0 

2748 

2749 for token in iterTokens: 

2750 if token == ')': 

2751 openPar -= 1 

2752 elif token == '(': 

2753 openPar += 1 

2754 if openPar > 0: 

2755 # found the last unclosed parenthesis 

2756 break 

2757 else: 

2758 return [] 

2759 # 2. Concatenate dotted names ("foo.bar" for "foo.bar(x, pa" ) 

2760 ids = [] 

2761 isId = re.compile(r'\w+$').match 

2762 

2763 while True: 

2764 try: 

2765 ids.append(next(iterTokens)) 

2766 if not isId(ids[-1]): 

2767 ids.pop() 

2768 break 

2769 if not next(iterTokens) == '.': 

2770 break 

2771 except StopIteration: 

2772 break 

2773 

2774 # Find all named arguments already assigned to, as to avoid suggesting 

2775 # them again 

2776 usedNamedArgs = set() 

2777 par_level = -1 

2778 for token, next_token in zip(tokens, tokens[1:]): 

2779 if token == '(': 

2780 par_level += 1 

2781 elif token == ')': 

2782 par_level -= 1 

2783 

2784 if par_level != 0: 

2785 continue 

2786 

2787 if next_token != '=': 

2788 continue 

2789 

2790 usedNamedArgs.add(token) 

2791 

2792 argMatches = [] 

2793 try: 

2794 callableObj = '.'.join(ids[::-1]) 

2795 namedArgs = self._default_arguments(eval(callableObj, 

2796 self.namespace)) 

2797 

2798 # Remove used named arguments from the list, no need to show twice 

2799 for namedArg in set(namedArgs) - usedNamedArgs: 

2800 if namedArg.startswith(text): 

2801 argMatches.append("%s=" %namedArg) 

2802 except: 

2803 pass 

2804 

2805 return argMatches 

2806 

2807 @staticmethod 

2808 def _get_keys(obj: Any) -> list[Any]: 

2809 # Objects can define their own completions by defining an 

2810        # _ipython_key_completions_() method. 

2811 method = get_real_method(obj, '_ipython_key_completions_') 

2812 if method is not None: 

2813 return method() 

2814 

2815 # Special case some common in-memory dict-like types 

2816 if isinstance(obj, dict) or _safe_isinstance(obj, "pandas", "DataFrame"): 

2817 try: 

2818 return list(obj.keys()) 

2819 except Exception: 

2820 return [] 

2821 elif _safe_isinstance(obj, "pandas", "core", "indexing", "_LocIndexer"): 

2822 try: 

2823 return list(obj.obj.keys()) 

2824 except Exception: 

2825 return [] 

2826 elif _safe_isinstance(obj, 'numpy', 'ndarray') or\ 

2827 _safe_isinstance(obj, 'numpy', 'void'): 

2828 return obj.dtype.names or [] 

2829 return [] 

2830 
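    # Example (illustrative sketch): keys offered for ``obj[<tab>`` completion.
    #
    #   IPCompleter._get_keys({"alpha": 1, "beta": 2})   # -> ['alpha', 'beta']
    #   IPCompleter._get_keys(np.zeros(3, dtype=[('x', int), ('y', int)]))
    #   # -> ('x', 'y')   structured-array field names, assuming numpy is imported as np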

2831 @context_matcher() 

2832 def dict_key_matcher(self, context: CompletionContext) -> SimpleMatcherResult: 

2833 """Match string keys in a dictionary, after e.g. ``foo[``.""" 

2834 matches = self.dict_key_matches(context.token) 

2835 return _convert_matcher_v1_result_to_v2( 

2836 matches, type="dict key", suppress_if_matches=True 

2837 ) 

2838 

2839 def dict_key_matches(self, text: str) -> list[str]: 

2840 """Match string keys in a dictionary, after e.g. ``foo[``. 

2841 

2842 .. deprecated:: 8.6 

2843 You can use :meth:`dict_key_matcher` instead. 

2844 """ 

2845 

2846 # Short-circuit on closed dictionary (regular expression would 

2847 # not match anyway, but would take quite a while). 

2848 if self.text_until_cursor.strip().endswith("]"): 

2849 return [] 

2850 

2851 match = DICT_MATCHER_REGEX.search(self.text_until_cursor) 

2852 

2853 if match is None: 

2854 return [] 

2855 

2856 expr, prior_tuple_keys, key_prefix = match.groups() 

2857 

2858 obj = self._evaluate_expr(expr) 

2859 

2860 if obj is not_found: 

2861 return [] 

2862 

2863 keys = self._get_keys(obj) 

2864 if not keys: 

2865 return keys 

2866 

2867 tuple_prefix = guarded_eval( 

2868 prior_tuple_keys, 

2869 EvaluationContext( 

2870 globals=self.global_namespace, 

2871 locals=self.namespace, 

2872 evaluation=self.evaluation, # type: ignore 

2873 in_subscript=True, 

2874 auto_import=self._auto_import, 

2875 policy_overrides=self.policy_overrides, 

2876 ), 

2877 ) 

2878 

2879 closing_quote, token_offset, matches = match_dict_keys( 

2880 keys, key_prefix, self.splitter.delims, extra_prefix=tuple_prefix 

2881 ) 

2882 if not matches: 

2883 return [] 

2884 

2885 # get the cursor position of 

2886 # - the text being completed 

2887 # - the start of the key text 

2888 # - the start of the completion 

2889 text_start = len(self.text_until_cursor) - len(text) 

2890 if key_prefix: 

2891 key_start = match.start(3) 

2892 completion_start = key_start + token_offset 

2893 else: 

2894 key_start = completion_start = match.end() 

2895 

2896 # grab the leading prefix, to make sure all completions start with `text` 

2897 if text_start > key_start: 

2898 leading = '' 

2899 else: 

2900 leading = text[text_start:completion_start] 

2901 

2902 # append closing quote and bracket as appropriate 

2903 # this is *not* appropriate if the opening quote or bracket is outside 

2904 # the text given to this method, e.g. `d["""a\nt 

2905 can_close_quote = False 

2906 can_close_bracket = False 

2907 

2908 continuation = self.line_buffer[len(self.text_until_cursor) :].strip() 

2909 

2910 if continuation.startswith(closing_quote): 

2911 # do not close if already closed, e.g. `d['a<tab>'` 

2912 continuation = continuation[len(closing_quote) :] 

2913 else: 

2914 can_close_quote = True 

2915 

2916 continuation = continuation.strip() 

2917 

2918 # e.g. `pandas.DataFrame` has different tuple indexer behaviour, 

2919 # handling it is out of scope, so let's avoid appending suffixes. 

2920 has_known_tuple_handling = isinstance(obj, dict) 

2921 

2922 can_close_bracket = ( 

2923 not continuation.startswith("]") and self.auto_close_dict_keys 

2924 ) 

2925 can_close_tuple_item = ( 

2926 not continuation.startswith(",") 

2927 and has_known_tuple_handling 

2928 and self.auto_close_dict_keys 

2929 ) 

2930 can_close_quote = can_close_quote and self.auto_close_dict_keys 

2931 

2932        # fast path if a closing quote should be appended but no suffix is allowed 

2933 if not can_close_quote and not can_close_bracket and closing_quote: 

2934 return [leading + k for k in matches] 

2935 

2936 results = [] 

2937 

2938 end_of_tuple_or_item = _DictKeyState.END_OF_TUPLE | _DictKeyState.END_OF_ITEM 

2939 

2940 for k, state_flag in matches.items(): 

2941 result = leading + k 

2942 if can_close_quote and closing_quote: 

2943 result += closing_quote 

2944 

2945 if state_flag == end_of_tuple_or_item: 

2946 # We do not know which suffix to add, 

2947 # e.g. both tuple item and string 

2948 # match this item. 

2949 pass 

2950 

2951 if state_flag in end_of_tuple_or_item and can_close_bracket: 

2952 result += "]" 

2953 if state_flag == _DictKeyState.IN_TUPLE and can_close_tuple_item: 

2954 result += ", " 

2955 results.append(result) 

2956 return results 

2957 

2958 @context_matcher() 

2959 def unicode_name_matcher(self, context: CompletionContext) -> SimpleMatcherResult: 

2960        """Match Latex-like syntax for unicode characters based 

2961        on the name of the character. 

2962 

2963 This does ``\\GREEK SMALL LETTER ETA`` -> ``η`` 

2964 

2965        Works only on valid python 3 identifiers, or on combining characters that 

2966 will combine to form a valid identifier. 

2967 """ 

2968 

2969 text = context.text_until_cursor 

2970 

2971 slashpos = text.rfind('\\') 

2972 if slashpos > -1: 

2973 s = text[slashpos+1:] 

2974 try : 

2975 unic = unicodedata.lookup(s) 

2976 # allow combining chars 

2977 if ('a'+unic).isidentifier(): 

2978 return { 

2979 "completions": [SimpleCompletion(text=unic, type="unicode")], 

2980 "suppress": True, 

2981 "matched_fragment": "\\" + s, 

2982 } 

2983 except KeyError: 

2984 pass 

2985 return { 

2986 "completions": [], 

2987 "suppress": False, 

2988 } 

2989 

2990 @context_matcher() 

2991 def latex_name_matcher(self, context: CompletionContext): 

2992 """Match Latex syntax for unicode characters. 

2993 

2994 This does both ``\\alp`` -> ``\\alpha`` and ``\\alpha`` -> ``α`` 

2995 """ 

2996 fragment, matches = self.latex_matches(context.text_until_cursor) 

2997 return _convert_matcher_v1_result_to_v2( 

2998 matches, type="latex", fragment=fragment, suppress_if_matches=True 

2999 ) 

3000 

3001 def latex_matches(self, text: str) -> tuple[str, Sequence[str]]: 

3002 """Match Latex syntax for unicode characters. 

3003 

3004 This does both ``\\alp`` -> ``\\alpha`` and ``\\alpha`` -> ``α`` 

3005 

3006 .. deprecated:: 8.6 

3007 You can use :meth:`latex_name_matcher` instead. 

3008 """ 

3009 slashpos = text.rfind('\\') 

3010 if slashpos > -1: 

3011 s = text[slashpos:] 

3012 if s in latex_symbols: 

3013 # Try to complete a full latex symbol to unicode 

3014 # \\alpha -> α 

3015 return s, [latex_symbols[s]] 

3016 else: 

3017 # If a user has partially typed a latex symbol, give them 

3018 # a full list of options \al -> [\aleph, \alpha] 

3019 matches = [k for k in latex_symbols if k.startswith(s)] 

3020 if matches: 

3021 return s, matches 

3022 return '', () 

3023 
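    # Example (illustrative sketch), called with the text up to the cursor:
    #
    #   self.latex_matches("\\alpha")   # -> ("\\alpha", ["α"])
    #   self.latex_matches("\\al")      # -> ("\\al", ["\\aleph", "\\alpha", ...])
    #   self.latex_matches("plain")     # -> ("", ())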

3024 @context_matcher() 

3025 def custom_completer_matcher(self, context): 

3026 """Dispatch custom completer. 

3027 

3028 If a match is found, suppresses all other matchers except for Jedi. 

3029 """ 

3030 matches = self.dispatch_custom_completer(context.token) or [] 

3031 result = _convert_matcher_v1_result_to_v2( 

3032 matches, type=_UNKNOWN_TYPE, suppress_if_matches=True 

3033 ) 

3034 result["ordered"] = True 

3035 result["do_not_suppress"] = {_get_matcher_id(self._jedi_matcher)} 

3036 return result 

3037 

3038 def dispatch_custom_completer(self, text): 

3039 """ 

3040 .. deprecated:: 8.6 

3041 You can use :meth:`custom_completer_matcher` instead. 

3042 """ 

3043 if not self.custom_completers: 

3044 return 

3045 

3046 line = self.line_buffer 

3047 if not line.strip(): 

3048 return None 

3049 

3050 # Create a little structure to pass all the relevant information about 

3051 # the current completion to any custom completer. 

3052 event = SimpleNamespace() 

3053 event.line = line 

3054 event.symbol = text 

3055 cmd = line.split(None,1)[0] 

3056 event.command = cmd 

3057 event.text_until_cursor = self.text_until_cursor 

3058 

3059 # for foo etc, try also to find completer for %foo 

3060 if not cmd.startswith(self.magic_escape): 

3061 try_magic = self.custom_completers.s_matches( 

3062 self.magic_escape + cmd) 

3063 else: 

3064 try_magic = [] 

3065 

3066 for c in itertools.chain(self.custom_completers.s_matches(cmd), 

3067 try_magic, 

3068 self.custom_completers.flat_matches(self.text_until_cursor)): 

3069 try: 

3070 res = c(event) 

3071 if res: 

3072 # first, try case sensitive match 

3073 withcase = [r for r in res if r.startswith(text)] 

3074 if withcase: 

3075 return withcase 

3076 # if none, then case insensitive ones are ok too 

3077 text_low = text.lower() 

3078 return [r for r in res if r.lower().startswith(text_low)] 

3079 except TryNext: 

3080 pass 

3081 except KeyboardInterrupt: 

3082 """ 

3083                If a custom completer takes too long, 

3084                let the keyboard interrupt abort it and return nothing. 

3085 """ 

3086 break 

3087 

3088 return None 

3089 

3090 def completions(self, text: str, offset: int)->Iterator[Completion]: 

3091 """ 

3092 Returns an iterator over the possible completions 

3093 

3094 .. warning:: 

3095 

3096 Unstable 

3097 

3098 This function is unstable, API may change without warning. 

3099            It will also raise unless used in the proper context manager. 

3100 

3101 Parameters 

3102 ---------- 

3103 text : str 

3104 Full text of the current input, multi line string. 

3105 offset : int 

3106 Integer representing the position of the cursor in ``text``. Offset 

3107 is 0-based indexed. 

3108 

3109 Yields 

3110 ------ 

3111 Completion 

3112 

3113 Notes 

3114 ----- 

3115 The cursor on a text can either be seen as being "in between" 

3116        characters or "on" a character, depending on the interface visible to 

3117        the user. For consistency, the cursor being "in between" characters X 

3118 and Y is equivalent to the cursor being "on" character Y, that is to say 

3119 the character the cursor is on is considered as being after the cursor. 

3120 

3121        Combining characters may span more than one position in the 

3122 text. 

3123 

3124 .. note:: 

3125 

3126            If ``IPCompleter.debug`` is :any:`True`, this will yield a ``--jedi/ipython--`` 

3127 fake Completion token to distinguish completion returned by Jedi 

3128 and usual IPython completion. 

3129 

3130 .. note:: 

3131 

3132 Completions are not completely deduplicated yet. If identical 

3133 completions are coming from different sources this function does not 

3134 ensure that each completion object will only be present once. 

3135 """ 

3136 warnings.warn("_complete is a provisional API (as of IPython 6.0). " 

3137                      "It may change without warning. " 

3138                      "Use it in the corresponding context manager.", 

3139 category=ProvisionalCompleterWarning, stacklevel=2) 

3140 

3141 seen = set() 

3142 profiler:Optional[cProfile.Profile] 

3143 try: 

3144 if self.profile_completions: 

3145 import cProfile 

3146 profiler = cProfile.Profile() 

3147 profiler.enable() 

3148 else: 

3149 profiler = None 

3150 

3151 for c in self._completions(text, offset, _timeout=self.jedi_compute_type_timeout/1000): 

3152 if c and (c in seen): 

3153 continue 

3154 yield c 

3155 seen.add(c) 

3156 except KeyboardInterrupt: 

3157 """if completions take too long and users send keyboard interrupt, 

3158 do not crash and return ASAP. """ 

3159 pass 

3160 finally: 

3161 if profiler is not None: 

3162 profiler.disable() 

3163 ensure_dir_exists(self.profiler_output_dir) 

3164 output_path = os.path.join(self.profiler_output_dir, str(uuid.uuid4())) 

3165 print("Writing profiler output to", output_path) 

3166 profiler.dump_stats(output_path) 

3167 
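    # Usage sketch (illustrative), assuming ``ip`` is the running
    # InteractiveShell instance (e.g. ``ip = get_ipython()``):
    #
    #   with provisionalcompleter():
    #       for completion in ip.Completer.completions("ran", 3):
    #           print(completion.text, completion.type)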

3168 def _completions(self, full_text: str, offset: int, *, _timeout) -> Iterator[Completion]: 

3169 """ 

3170        Core completion routine. Same signature as :any:`completions`, with the 

3171        extra ``_timeout`` parameter (in seconds). 

3172 

3173 Computing jedi's completion ``.type`` can be quite expensive (it is a 

3174        lazy property) and can require some warm-up, more warm-up than just 

3175        computing the ``name`` of a completion. The warm-up can be: 

3176 

3177 - Long warm-up the first time a module is encountered after 

3178 install/update: actually build parse/inference tree. 

3179 

3180 - first time the module is encountered in a session: load tree from 

3181 disk. 

3182 

3183        We don't want to block completions for tens of seconds, so we give the 

3184        completer a "budget" of ``_timeout`` seconds per invocation to compute 

3185        completion types; the completions whose type has not yet been computed 

3186        will be marked as "unknown" and will have a chance to be computed on the 

3187        next round, as things get cached. 

3188 

3189        Keep in mind that Jedi is not the only thing processing the completions, 

3190        so keep the timeout short-ish: if we take more than 0.3 seconds we still 

3191        have lots of processing to do. 

3192 

3193 """ 

3194 deadline = time.monotonic() + _timeout 

3195 

3196 before = full_text[:offset] 

3197 cursor_line, cursor_column = position_to_cursor(full_text, offset) 

3198 

3199 jedi_matcher_id = _get_matcher_id(self._jedi_matcher) 

3200 

3201 def is_non_jedi_result( 

3202 result: MatcherResult, identifier: str 

3203 ) -> TypeGuard[SimpleMatcherResult]: 

3204 return identifier != jedi_matcher_id 

3205 

3206 results = self._complete( 

3207 full_text=full_text, cursor_line=cursor_line, cursor_pos=cursor_column 

3208 ) 

3209 

3210 non_jedi_results: dict[str, SimpleMatcherResult] = { 

3211 identifier: result 

3212 for identifier, result in results.items() 

3213 if is_non_jedi_result(result, identifier) 

3214 } 

3215 

3216 jedi_matches = ( 

3217 cast(_JediMatcherResult, results[jedi_matcher_id])["completions"] 

3218 if jedi_matcher_id in results 

3219 else () 

3220 ) 

3221 

3222 iter_jm = iter(jedi_matches) 

3223 if _timeout: 

3224 for jm in iter_jm: 

3225 try: 

3226 type_ = jm.type 

3227 except Exception: 

3228 if self.debug: 

3229 print("Error in Jedi getting type of ", jm) 

3230 type_ = None 

3231 delta = len(jm.name_with_symbols) - len(jm.complete) 

3232 if type_ == 'function': 

3233 signature = _make_signature(jm) 

3234 else: 

3235 signature = '' 

3236 yield Completion(start=offset - delta, 

3237 end=offset, 

3238 text=jm.name_with_symbols, 

3239 type=type_, 

3240 signature=signature, 

3241 _origin='jedi') 

3242 

3243 if time.monotonic() > deadline: 

3244 break 

3245 

3246 for jm in iter_jm: 

3247 delta = len(jm.name_with_symbols) - len(jm.complete) 

3248 yield Completion( 

3249 start=offset - delta, 

3250 end=offset, 

3251 text=jm.name_with_symbols, 

3252 type=_UNKNOWN_TYPE, # don't compute type for speed 

3253 _origin="jedi", 

3254 signature="", 

3255 ) 

3256 

3257 # TODO: 

3258 # Suppress this, right now just for debug. 

3259 if jedi_matches and non_jedi_results and self.debug: 

3260 some_start_offset = before.rfind( 

3261 next(iter(non_jedi_results.values()))["matched_fragment"] 

3262 ) 

3263 yield Completion( 

3264 start=some_start_offset, 

3265 end=offset, 

3266 text="--jedi/ipython--", 

3267 _origin="debug", 

3268 type="none", 

3269 signature="", 

3270 ) 

3271 

3272 ordered: list[Completion] = [] 

3273 sortable: list[Completion] = [] 

3274 

3275 for origin, result in non_jedi_results.items(): 

3276 matched_text = result["matched_fragment"] 

3277 start_offset = before.rfind(matched_text) 

3278 is_ordered = result.get("ordered", False) 

3279 container = ordered if is_ordered else sortable 

3280 

3281 # I'm unsure if this is always true, so let's assert and see if it 

3282            # crashes 

3283 assert before.endswith(matched_text) 

3284 

3285 for simple_completion in result["completions"]: 

3286 completion = Completion( 

3287 start=start_offset, 

3288 end=offset, 

3289 text=simple_completion.text, 

3290 _origin=origin, 

3291 signature="", 

3292 type=simple_completion.type or _UNKNOWN_TYPE, 

3293 ) 

3294 container.append(completion) 

3295 

3296 yield from list(self._deduplicate(ordered + self._sort(sortable)))[ 

3297 :MATCHES_LIMIT 

3298 ] 

3299 

3300 def complete( 

3301 self, text=None, line_buffer=None, cursor_pos=None 

3302 ) -> tuple[str, Sequence[str]]: 

3303 """Find completions for the given text and line context. 

3304 

3305 Note that both the text and the line_buffer are optional, but at least 

3306 one of them must be given. 

3307 

3308 Parameters 

3309 ---------- 

3310 text : string, optional 

3311 Text to perform the completion on. If not given, the line buffer 

3312 is split using the instance's CompletionSplitter object. 

3313 line_buffer : string, optional 

3314 If not given, the completer attempts to obtain the current line 

3315 buffer via readline. This keyword allows clients that are 

3316 requesting text completions in non-readline contexts to inform 

3317 the completer of the entire text. 

3318 cursor_pos : int, optional 

3319 Index of the cursor in the full line buffer. Should be provided by 

3320 remote frontends where the kernel has no access to frontend state. 

3321 

3322 Returns 

3323 ------- 

3324 Tuple of two items: 

3325 text : str 

3326 Text that was actually used in the completion. 

3327 matches : list 

3328 A list of completion matches. 

3329 

3330 Notes 

3331 ----- 

3332 This API is likely to be deprecated and replaced by 

3333 :any:`IPCompleter.completions` in the future. 

3334 

3335 """ 

3336 warnings.warn('`Completer.complete` is pending deprecation since ' 

3337 'IPython 6.0 and will be replaced by `Completer.completions`.', 

3338 PendingDeprecationWarning) 

3339 # potential todo: fold the 3rd throw-away argument of _complete 

3340 # into the first two. 

3341 # TODO: Q: does the above refer to jedi completions (i.e. 0-indexed?) 

3342 # TODO: should we deprecate now, or does it stay? 

3343 

3344 results = self._complete( 

3345 line_buffer=line_buffer, cursor_pos=cursor_pos, text=text, cursor_line=0 

3346 ) 

3347 

3348 jedi_matcher_id = _get_matcher_id(self._jedi_matcher) 

3349 

3350 return self._arrange_and_extract( 

3351 results, 

3352 # TODO: can we confirm that excluding Jedi here was a deliberate choice in the previous version? 

3353 skip_matchers={jedi_matcher_id}, 

3354 # this API does not support different start/end positions (fragments of token). 

3355 abort_if_offset_changes=True, 

3356 ) 
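# --- Illustrative sketch (editorial, not part of completer.py) ---
# Minimal use of the legacy API documented above. ``ipc`` is an assumed
# IPCompleter instance whose namespace already has ``json`` imported.
#
# text, matches = ipc.complete(line_buffer="json.du", cursor_pos=7)
# # ``text`` is the token that was completed on; ``matches`` would typically
# # include candidates such as "json.dump" and "json.dumps".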

3357 

3358 def _arrange_and_extract( 

3359 self, 

3360 results: dict[str, MatcherResult], 

3361 skip_matchers: set[str], 

3362 abort_if_offset_changes: bool, 

3363 ): 

3364 sortable: list[AnyMatcherCompletion] = [] 

3365 ordered: list[AnyMatcherCompletion] = [] 

3366 most_recent_fragment = None 

3367 for identifier, result in results.items(): 

3368 if identifier in skip_matchers: 

3369 continue 

3370 if not result["completions"]: 

3371 continue 

3372 if not most_recent_fragment: 

3373 most_recent_fragment = result["matched_fragment"] 

3374 if ( 

3375 abort_if_offset_changes 

3376 and result["matched_fragment"] != most_recent_fragment 

3377 ): 

3378 break 

3379 if result.get("ordered", False): 

3380 ordered.extend(result["completions"]) 

3381 else: 

3382 sortable.extend(result["completions"]) 

3383 

3384 if not most_recent_fragment: 

3385 most_recent_fragment = "" # to satisfy typechecker (and just in case) 

3386 

3387 return most_recent_fragment, [ 

3388 m.text for m in self._deduplicate(ordered + self._sort(sortable)) 

3389 ] 
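# --- Illustrative sketch (editorial, not part of completer.py) ---
# Shape of the input this helper consumes and of what it returns; the matcher
# identifiers and the elided completion lists are hypothetical.
#
# results = {
#     "magic_matcher": {"completions": [...], "matched_fragment": "%ti", "ordered": True},
#     "python_matcher": {"completions": [...], "matched_fragment": "%ti"},
# }
# fragment, texts = self._arrange_and_extract(
#     results, skip_matchers=set(), abort_if_offset_changes=False
# )
# # fragment == "%ti"; ``texts`` keeps the ordered matcher's entries first and
# # appends the sorted, de-duplicated entries from the remaining matchers.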

3390 

3391 def _complete(self, *, cursor_line, cursor_pos, line_buffer=None, text=None, 

3392 full_text=None) -> _CompleteResult: 

3393 """ 

3394 Like complete but can also return raw jedi completions, as well as the 

3395 origin of the completion text. This could (and should) be made much 

3396 cleaner but that will be simpler once we drop the old (and stateful) 

3397 :any:`complete` API. 

3398 

3399 With the current provisional API, cursor_pos acts (depending on the 

3400 caller) either as the offset in the ``text`` or ``line_buffer``, or as 

3401 the ``column`` when passing multiline strings. This could/should be 

3402 renamed, but that would add extra noise. 

3403 

3404 Parameters 

3405 ---------- 

3406 cursor_line 

3407 Index of the line the cursor is on. 0 indexed. 

3408 cursor_pos 

3409 Position of the cursor in the current line/line_buffer/text. 0 

3410 indexed. 

3411 line_buffer : str, optional 

3412 The current line the cursor is in; this is mostly kept for legacy 

3413 reasons, as readline could only give us the single current line. 

3414 Prefer `full_text`. 

3415 text : str 

3416 The current "token" the cursor is in, also mostly for historical 

3417 reasons, as the completer would trigger only after the current line 

3418 was parsed. 

3419 full_text : str 

3420 Full text of the current cell. 

3421 

3422 Returns 

3423 ------- 

3424 An ordered dictionary where keys are identifiers of completion 

3425 matchers and values are ``MatcherResult``s. 

3426 """ 

3427 

3428 # if the cursor position isn't given, the only sane assumption we can 

3429 # make is that it's at the end of the line (the common case) 

3430 if cursor_pos is None: 

3431 cursor_pos = len(line_buffer) if text is None else len(text) 

3432 

3433 if self.use_main_ns: 

3434 self.namespace = __main__.__dict__ 

3435 

3436 # if text is either None or an empty string, rely on the line buffer 

3437 if (not line_buffer) and full_text: 

3438 line_buffer = full_text.split('\n')[cursor_line] 

3439 if not text: # issue #11508: check line_buffer before calling split_line 

3440 text = ( 

3441 self.splitter.split_line(line_buffer, cursor_pos) if line_buffer else "" 

3442 ) 

3443 

3444 # If no line buffer is given, assume the input text is all there was 

3445 if line_buffer is None: 

3446 line_buffer = text 

3447 

3448 # deprecated - do not use `line_buffer` in new code. 

3449 self.line_buffer = line_buffer 

3450 self.text_until_cursor = self.line_buffer[:cursor_pos] 

3451 

3452 if not full_text: 

3453 full_text = line_buffer 

3454 

3455 context = CompletionContext( 

3456 full_text=full_text, 

3457 cursor_position=cursor_pos, 

3458 cursor_line=cursor_line, 

3459 token=text, 

3460 limit=MATCHES_LIMIT, 

3461 ) 

3462 

3463 # Start with a clean slate of completions 

3464 results: dict[str, MatcherResult] = {} 

3465 

3466 jedi_matcher_id = _get_matcher_id(self._jedi_matcher) 

3467 

3468 suppressed_matchers: set[str] = set() 

3469 

3470 matchers = { 

3471 _get_matcher_id(matcher): matcher 

3472 for matcher in sorted( 

3473 self.matchers, key=_get_matcher_priority, reverse=True 

3474 ) 

3475 } 

3476 

3477 for matcher_id, matcher in matchers.items(): 

3478 matcher_id = _get_matcher_id(matcher) 

3479 

3480 if matcher_id in self.disable_matchers: 

3481 continue 

3482 

3483 if matcher_id in results: 

3484 warnings.warn(f"Duplicate matcher ID: {matcher_id}.") 

3485 

3486 if matcher_id in suppressed_matchers: 

3487 continue 

3488 

3489 result: MatcherResult 

3490 try: 

3491 if _is_matcher_v1(matcher): 

3492 result = _convert_matcher_v1_result_to_v2_no_no( 

3493 matcher(text), type=_UNKNOWN_TYPE 

3494 ) 

3495 elif _is_matcher_v2(matcher): 

3496 result = matcher(context) 

3497 else: 

3498 api_version = _get_matcher_api_version(matcher) 

3499 raise ValueError(f"Unsupported API version {api_version}") 

3500 except BaseException: 

3501 # Show the ugly traceback if the matcher causes an 

3502 # exception, but do NOT crash the kernel! 

3503 sys.excepthook(*sys.exc_info()) 

3504 continue 

3505 

3506 # set default value for matched fragment if the matcher did not provide one. 

3507 result["matched_fragment"] = result.get("matched_fragment", context.token) 

3508 

3509 if not suppressed_matchers: 

3510 suppression_recommended: Union[bool, set[str]] = result.get( 

3511 "suppress", False 

3512 ) 

3513 

3514 suppression_config = ( 

3515 self.suppress_competing_matchers.get(matcher_id, None) 

3516 if isinstance(self.suppress_competing_matchers, dict) 

3517 else self.suppress_competing_matchers 

3518 ) 

3519 should_suppress = ( 

3520 (suppression_config is True) 

3521 or (suppression_recommended and (suppression_config is not False)) 

3522 ) and has_any_completions(result) 

3523 

3524 if should_suppress: 

3525 suppression_exceptions: set[str] = result.get( 

3526 "do_not_suppress", set() 

3527 ) 

3528 if isinstance(suppression_recommended, Iterable): 

3529 to_suppress = set(suppression_recommended) 

3530 else: 

3531 to_suppress = set(matchers) 

3532 suppressed_matchers = to_suppress - suppression_exceptions 

3533 

3534 new_results = {} 

3535 for previous_matcher_id, previous_result in results.items(): 

3536 if previous_matcher_id not in suppressed_matchers: 

3537 new_results[previous_matcher_id] = previous_result 

3538 results = new_results 

3539 

3540 results[matcher_id] = result 

3541 

3542 _, matches = self._arrange_and_extract( 

3543 results, 

3544 # TODO: Jedi completions are not included in the legacy stateful API; was this deliberate or an omission? 

3545 # If it was an omission, we can remove the filtering step; otherwise remove this comment. 

3546 skip_matchers={jedi_matcher_id}, 

3547 abort_if_offset_changes=False, 

3548 ) 

3549 

3550 # populate legacy stateful API 

3551 self.matches = matches 

3552 

3553 return results 
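# --- Illustrative sketch (editorial, not part of completer.py) ---
# How the suppression logic in the loop above is typically driven from the
# usual IPython configuration object ``c``; the matcher identifier shown is
# hypothetical.
#
# # Let the highest-priority matcher that returns results suppress the rest:
# c.IPCompleter.suppress_competing_matchers = True
# # Or tune it per matcher: False means that matcher never suppresses others,
# # even if its own result recommends suppression.
# c.IPCompleter.suppress_competing_matchers = {"IPCompleter.some_matcher": False}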

3554 

3555 @staticmethod 

3556 def _deduplicate( 

3557 matches: Sequence[AnyCompletion], 

3558 ) -> Iterable[AnyCompletion]: 

3559 filtered_matches: dict[str, AnyCompletion] = {} 

3560 for match in matches: 

3561 text = match.text 

3562 if ( 

3563 text not in filtered_matches 

3564 or filtered_matches[text].type == _UNKNOWN_TYPE 

3565 ): 

3566 filtered_matches[text] = match 

3567 

3568 return filtered_matches.values() 

3569 

3570 @staticmethod 

3571 def _sort(matches: Sequence[AnyCompletion]): 

3572 return sorted(matches, key=lambda x: completions_sorting_key(x.text)) 

3573 

3574 @context_matcher() 

3575 def fwd_unicode_matcher(self, context: CompletionContext): 

3576 """Same as :any:`fwd_unicode_match`, but adopted to new Matcher API.""" 

3577 # TODO: use `context.limit` to terminate early once we matched the maximum 

3578 # number that will be used downstream; can be added as an optional to 

3579 # `fwd_unicode_match(text: str, limit: int = None)` or we could re-implement here. 

3580 fragment, matches = self.fwd_unicode_match(context.text_until_cursor) 

3581 return _convert_matcher_v1_result_to_v2( 

3582 matches, type="unicode", fragment=fragment, suppress_if_matches=True 

3583 ) 

3584 

3585 def fwd_unicode_match(self, text: str) -> tuple[str, Sequence[str]]: 

3586 """ 

3587 Forward-match a string starting with a backslash against the list of 

3588 potential Unicode completions. 

3589 

3590 Will compute the list of Unicode character names on first call and cache it. 

3591 

3592 .. deprecated:: 8.6 

3593 You can use :meth:`fwd_unicode_matcher` instead. 

3594 

3595 Returns 

3596 ------- 

3597 A tuple with: 

3598 - matched text (empty if no matches) 

3599 - list of potential completions (empty tuple if there are none) 

3600 """ 

3601 # TODO: self.unicode_names is a list of ~100k elements that we traverse on each call. 

3602 # We could do a faster match using a trie. 

3603 

3604 # Using pygtrie the following seems to work: 

3605 

3606 # from pygtrie import PrefixSet; s = PrefixSet() 

3607 

3608 # for c in range(0,0x10FFFF + 1): 

3609 # try: 

3610 # s.add(unicodedata.name(chr(c))) 

3611 # except ValueError: 

3612 # pass 

3613 # [''.join(k) for k in s.iter(prefix)] 

3614 

3615 # But this needs to be timed, and it adds an extra dependency. 

3616 

3617 slashpos = text.rfind('\\') 

3618 # if the text contains a backslash 

3619 if slashpos > -1: 

3620 # PERF: It's important that we don't access self._unicode_names 

3621 # until we're inside this if-block. _unicode_names is lazily 

3622 # initialized, and it takes a user-noticeable amount of time to 

3623 # initialize it, so we don't want to initialize it unless we're 

3624 # actually going to use it. 

3625 s = text[slashpos + 1 :] 

3626 sup = s.upper() 

3627 candidates = [x for x in self.unicode_names if x.startswith(sup)] 

3628 if candidates: 

3629 return s, candidates 

3630 candidates = [x for x in self.unicode_names if sup in x] 

3631 if candidates: 

3632 return s, candidates 

3633 splitsup = sup.split(" ") 

3634 candidates = [ 

3635 x for x in self.unicode_names if all(u in x for u in splitsup) 

3636 ] 

3637 if candidates: 

3638 return s, candidates 

3639 

3640 return "", () 

3641 

3642 # if the text does not contain a backslash 

3643 else: 

3644 return '', () 
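# --- Illustrative sketch (editorial, not part of completer.py) ---
# Example of the forward Unicode matching implemented above; ``ipc`` is an
# assumed IPCompleter instance.
#
# fragment, names = ipc.fwd_unicode_match("x = \\GREEK SMALL LETTER AL")
# # fragment == "GREEK SMALL LETTER AL" and ``names`` contains candidates such
# # as "GREEK SMALL LETTER ALPHA" (the fragment is upper-cased before matching).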

3645 

3646 @property 

3647 def unicode_names(self) -> list[str]: 

3648 """List of names of unicode code points that can be completed. 

3649 

3650 The list is lazily initialized on first access. 

3651 """ 

3652 if self._unicode_names is None: 

3659 self._unicode_names = _unicode_name_compute(_UNICODE_RANGES) 

3660 

3661 return self._unicode_names 

3662 

3663 

3664def _unicode_name_compute(ranges: list[tuple[int, int]]) -> list[str]: 

3665 names = [] 

3666 for start, stop in ranges: 

3667 for c in range(start, stop): 

3668 try: 

3669 names.append(unicodedata.name(chr(c))) 

3670 except ValueError: 

3671 pass 

3672 return names
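# --- Illustrative sketch (editorial, not part of completer.py) ---
# Standalone illustration of the range-based name computation above, restricted
# to the Greek and Coptic block (U+0370..U+03FF) so it runs instantly.
#
# import unicodedata
# names = []
# for c in range(0x0370, 0x0400):
#     try:
#         names.append(unicodedata.name(chr(c)))
#     except ValueError:
#         pass  # unassigned code points have no name
# assert "GREEK SMALL LETTER ALPHA" in names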