Coverage for /pythoncovmergedfiles/medio/medio/usr/local/lib/python3.11/site-packages/IPython/core/completer.py: 19%

Shortcuts on this page

r m x   toggle line displays

j k   next/prev highlighted chunk

0   (zero) top of page

1   (one) first highlighted chunk

1435 statements  

1"""Completion for IPython. 

2 

3This module started as fork of the rlcompleter module in the Python standard 

4library. The original enhancements made to rlcompleter have been sent 

5upstream and were accepted as of Python 2.3, 

6 

7This module now support a wide variety of completion mechanism both available 

8for normal classic Python code, as well as completer for IPython specific 

9Syntax like magics. 

10 

11Latex and Unicode completion 

12============================ 

13 

14IPython and compatible frontends not only can complete your code, but can help 

15you to input a wide range of characters. In particular we allow you to insert 

16a unicode character using the tab completion mechanism. 

17 

18Forward latex/unicode completion 

19-------------------------------- 

20 

21Forward completion allows you to easily type a unicode character using its latex 

22name, or unicode long description. To do so type a backslash follow by the 

23relevant name and press tab: 

24 

25 

26Using latex completion: 

27 

28.. code:: 

29 

30 \\alpha<tab> 

31 α 

32 

33or using unicode completion: 

34 

35 

36.. code:: 

37 

38 \\GREEK SMALL LETTER ALPHA<tab> 

39 α 

40 

41 

42Only valid Python identifiers will complete. Combining characters (like arrow or 

43dots) are also available, unlike latex they need to be put after the their 

44counterpart that is to say, ``F\\\\vec<tab>`` is correct, not ``\\\\vec<tab>F``. 

45 

46Some browsers are known to display combining characters incorrectly. 

47 

48Backward latex completion 

49------------------------- 

50 

51It is sometime challenging to know how to type a character, if you are using 

52IPython, or any compatible frontend you can prepend backslash to the character 

53and press :kbd:`Tab` to expand it to its latex form. 

54 

55.. code:: 

56 

57 \\α<tab> 

58 \\alpha 

59 

60 

61Both forward and backward completions can be deactivated by setting the 

62:std:configtrait:`Completer.backslash_combining_completions` option to 

63``False``. 

64 

65 

66Experimental 

67============ 

68 

69Starting with IPython 6.0, this module can make use of the Jedi library to 

70generate completions both using static analysis of the code, and dynamically 

71inspecting multiple namespaces. Jedi is an autocompletion and static analysis 

72for Python. The APIs attached to this new mechanism is unstable and will 

73raise unless use in an :any:`provisionalcompleter` context manager. 

74 

75You will find that the following are experimental: 

76 

77 - :any:`provisionalcompleter` 

78 - :any:`IPCompleter.completions` 

79 - :any:`Completion` 

80 - :any:`rectify_completions` 

81 

82.. note:: 

83 

84 better name for :any:`rectify_completions` ? 

85 

86We welcome any feedback on these new API, and we also encourage you to try this 

87module in debug mode (start IPython with ``--Completer.debug=True``) in order 

88to have extra logging information if :any:`jedi` is crashing, or if current 

89IPython completer pending deprecations are returning results not yet handled 

90by :any:`jedi` 

91 

92Using Jedi for tab completion allow snippets like the following to work without 

93having to execute any code: 

94 

95 >>> myvar = ['hello', 42] 

96 ... myvar[1].bi<tab> 

97 

98Tab completion will be able to infer that ``myvar[1]`` is a real number without 

99executing almost any code unlike the deprecated :any:`IPCompleter.greedy` 

100option. 

101 

102Be sure to update :any:`jedi` to the latest stable version or to try the 

103current development version to get better completions. 

104 

105Matchers 

106======== 

107 

108All completions routines are implemented using unified *Matchers* API. 

109The matchers API is provisional and subject to change without notice. 

110 

111The built-in matchers include: 

112 

113- :any:`IPCompleter.dict_key_matcher`: dictionary key completions, 

114- :any:`IPCompleter.magic_matcher`: completions for magics, 

115- :any:`IPCompleter.unicode_name_matcher`, 

116 :any:`IPCompleter.fwd_unicode_matcher` 

117 and :any:`IPCompleter.latex_name_matcher`: see `Forward latex/unicode completion`_, 

118- :any:`back_unicode_name_matcher` and :any:`back_latex_name_matcher`: see `Backward latex completion`_, 

119- :any:`IPCompleter.file_matcher`: paths to files and directories, 

120- :any:`IPCompleter.python_func_kw_matcher` - function keywords, 

121- :any:`IPCompleter.python_matches` - globals and attributes (v1 API), 

122- ``IPCompleter.jedi_matcher`` - static analysis with Jedi, 

123- :any:`IPCompleter.custom_completer_matcher` - pluggable completer with a default 

124 implementation in :any:`InteractiveShell` which uses IPython hooks system 

125 (`complete_command`) with string dispatch (including regular expressions). 

126 Differently to other matchers, ``custom_completer_matcher`` will not suppress 

127 Jedi results to match behaviour in earlier IPython versions. 

128 

129Custom matchers can be added by appending to ``IPCompleter.custom_matchers`` list. 

130 

131Matcher API 

132----------- 

133 

134Simplifying some details, the ``Matcher`` interface can described as 

135 

136.. code-block:: 

137 

138 MatcherAPIv1 = Callable[[str], list[str]] 

139 MatcherAPIv2 = Callable[[CompletionContext], SimpleMatcherResult] 

140 

141 Matcher = MatcherAPIv1 | MatcherAPIv2 

142 

143The ``MatcherAPIv1`` reflects the matcher API as available prior to IPython 8.6.0 

144and remains supported as a simplest way for generating completions. This is also 

145currently the only API supported by the IPython hooks system `complete_command`. 

146 

147To distinguish between matcher versions ``matcher_api_version`` attribute is used. 

148More precisely, the API allows to omit ``matcher_api_version`` for v1 Matchers, 

149and requires a literal ``2`` for v2 Matchers. 

150 

151Once the API stabilises future versions may relax the requirement for specifying 

152``matcher_api_version`` by switching to :any:`functools.singledispatch`, therefore 

153please do not rely on the presence of ``matcher_api_version`` for any purposes. 

154 

155Suppression of competing matchers 

156--------------------------------- 

157 

158By default results from all matchers are combined, in the order determined by 

159their priority. Matchers can request to suppress results from subsequent 

160matchers by setting ``suppress`` to ``True`` in the ``MatcherResult``. 

161 

162When multiple matchers simultaneously request suppression, the results from of 

163the matcher with higher priority will be returned. 

164 

165Sometimes it is desirable to suppress most but not all other matchers; 

166this can be achieved by adding a set of identifiers of matchers which 

167should not be suppressed to ``MatcherResult`` under ``do_not_suppress`` key. 

168 

169The suppression behaviour can is user-configurable via 

170:std:configtrait:`IPCompleter.suppress_competing_matchers`. 

171""" 

172 

173 

174# Copyright (c) IPython Development Team. 

175# Distributed under the terms of the Modified BSD License. 

176# 

177# Some of this code originated from rlcompleter in the Python standard library 

178# Copyright (C) 2001 Python Software Foundation, www.python.org 

179 

180from __future__ import annotations 

181import builtins as builtin_mod 

182import enum 

183import glob 

184import inspect 

185import itertools 

186import keyword 

187import ast 

188import os 

189import re 

190import string 

191import sys 

192import tokenize 

193import time 

194import unicodedata 

195import uuid 

196import warnings 

197from ast import literal_eval 

198from collections import defaultdict 

199from contextlib import contextmanager 

200from dataclasses import dataclass 

201from functools import cached_property, partial 

202from types import SimpleNamespace 

203from typing import ( 

204 Iterable, 

205 Iterator, 

206 Union, 

207 Any, 

208 Sequence, 

209 Optional, 

210 TYPE_CHECKING, 

211 Sized, 

212 TypeVar, 

213 Literal, 

214) 

215 

216from IPython.core.guarded_eval import ( 

217 guarded_eval, 

218 EvaluationContext, 

219 _validate_policy_overrides, 

220) 

221from IPython.core.error import TryNext, UsageError 

222from IPython.core.inputtransformer2 import ESC_MAGIC 

223from IPython.core.latex_symbols import latex_symbols, reverse_latex_symbol 

224from IPython.testing.skipdoctest import skip_doctest 

225from IPython.utils import generics 

226from IPython.utils.PyColorize import theme_table 

227from IPython.utils.decorators import sphinx_options 

228from IPython.utils.dir2 import dir2, get_real_method 

229from IPython.utils.path import ensure_dir_exists 

230from IPython.utils.process import arg_split 

231from traitlets import ( 

232 Bool, 

233 Enum, 

234 Int, 

235 List as ListTrait, 

236 Unicode, 

237 Dict as DictTrait, 

238 DottedObjectName, 

239 Union as UnionTrait, 

240 observe, 

241) 

242from traitlets.config.configurable import Configurable 

243from traitlets.utils.importstring import import_item 

244 

245import __main__ 

246 

247from typing import cast 

248 

249if sys.version_info < (3, 12): 

250 from typing_extensions import TypedDict, NotRequired, Protocol, TypeAlias, TypeGuard 

251else: 

252 from typing import TypedDict, NotRequired, Protocol, TypeAlias, TypeGuard 

253 

254 

255# skip module docstests 

256__skip_doctest__ = True 

257 

258 

259try: 

260 import jedi 

261 jedi.settings.case_insensitive_completion = False 

262 import jedi.api.helpers 

263 import jedi.api.classes 

264 JEDI_INSTALLED = True 

265except ImportError: 

266 JEDI_INSTALLED = False 

267 

268 

269# ----------------------------------------------------------------------------- 

270# Globals 

271#----------------------------------------------------------------------------- 

272 

273# ranges where we have most of the valid unicode names. We could be more finer 

274# grained but is it worth it for performance While unicode have character in the 

275# range 0, 0x110000, we seem to have name for about 10% of those. (131808 as I 

276# write this). With below range we cover them all, with a density of ~67% 

277# biggest next gap we consider only adds up about 1% density and there are 600 

278# gaps that would need hard coding. 

279_UNICODE_RANGES = [(32, 0x323B0), (0xE0001, 0xE01F0)] 

280 

281# Public API 

282__all__ = ["Completer", "IPCompleter"] 

283 

284if sys.platform == 'win32': 

285 PROTECTABLES = ' ' 

286else: 

287 PROTECTABLES = ' ()[]{}?=\\|;:\'#*"^&' 

288 

289# Protect against returning an enormous number of completions which the frontend 

290# may have trouble processing. 

291MATCHES_LIMIT = 500 

292 

293# Completion type reported when no type can be inferred. 

294_UNKNOWN_TYPE = "<unknown>" 

295 

296# sentinel value to signal lack of a match 

297not_found = object() 

298 

299class ProvisionalCompleterWarning(FutureWarning): 

300 """ 

301 Exception raise by an experimental feature in this module. 

302 

303 Wrap code in :any:`provisionalcompleter` context manager if you 

304 are certain you want to use an unstable feature. 

305 """ 

306 pass 

307 

308warnings.filterwarnings('error', category=ProvisionalCompleterWarning) 

309 

310 

311@skip_doctest 

312@contextmanager 

313def provisionalcompleter(action='ignore'): 

314 """ 

315 This context manager has to be used in any place where unstable completer 

316 behavior and API may be called. 

317 

318 >>> with provisionalcompleter(): 

319 ... completer.do_experimental_things() # works 

320 

321 >>> completer.do_experimental_things() # raises. 

322 

323 .. note:: 

324 

325 Unstable 

326 

327 By using this context manager you agree that the API in use may change 

328 without warning, and that you won't complain if they do so. 

329 

330 You also understand that, if the API is not to your liking, you should report 

331 a bug to explain your use case upstream. 

332 

333 We'll be happy to get your feedback, feature requests, and improvements on 

334 any of the unstable APIs! 

335 """ 

336 with warnings.catch_warnings(): 

337 warnings.filterwarnings(action, category=ProvisionalCompleterWarning) 

338 yield 

339 

340 

341def has_open_quotes(s: str) -> Union[str, bool]: 

342 """Return whether a string has open quotes. 

343 

344 This simply counts whether the number of quote characters of either type in 

345 the string is odd. 

346 

347 Returns 

348 ------- 

349 If there is an open quote, the quote character is returned. Else, return 

350 False. 

351 """ 

352 # We check " first, then ', so complex cases with nested quotes will get 

353 # the " to take precedence. 

354 if s.count('"') % 2: 

355 return '"' 

356 elif s.count("'") % 2: 

357 return "'" 

358 else: 

359 return False 

360 

361 

362def protect_filename(s: str, protectables: str = PROTECTABLES) -> str: 

363 """Escape a string to protect certain characters.""" 

364 if set(s) & set(protectables): 

365 if sys.platform == "win32": 

366 return '"' + s + '"' 

367 else: 

368 return "".join(("\\" + c if c in protectables else c) for c in s) 

369 else: 

370 return s 

371 

372 

373def expand_user(path: str) -> tuple[str, bool, str]: 

374 """Expand ``~``-style usernames in strings. 

375 

376 This is similar to :func:`os.path.expanduser`, but it computes and returns 

377 extra information that will be useful if the input was being used in 

378 computing completions, and you wish to return the completions with the 

379 original '~' instead of its expanded value. 

380 

381 Parameters 

382 ---------- 

383 path : str 

384 String to be expanded. If no ~ is present, the output is the same as the 

385 input. 

386 

387 Returns 

388 ------- 

389 newpath : str 

390 Result of ~ expansion in the input path. 

391 tilde_expand : bool 

392 Whether any expansion was performed or not. 

393 tilde_val : str 

394 The value that ~ was replaced with. 

395 """ 

396 # Default values 

397 tilde_expand = False 

398 tilde_val = '' 

399 newpath = path 

400 

401 if path.startswith('~'): 

402 tilde_expand = True 

403 rest = len(path)-1 

404 newpath = os.path.expanduser(path) 

405 if rest: 

406 tilde_val = newpath[:-rest] 

407 else: 

408 tilde_val = newpath 

409 

410 return newpath, tilde_expand, tilde_val 

411 

412 

413def compress_user(path:str, tilde_expand:bool, tilde_val:str) -> str: 

414 """Does the opposite of expand_user, with its outputs. 

415 """ 

416 if tilde_expand: 

417 return path.replace(tilde_val, '~') 

418 else: 

419 return path 

420 

421 

422def completions_sorting_key(word): 

423 """key for sorting completions 

424 

425 This does several things: 

426 

427 - Demote any completions starting with underscores to the end 

428 - Insert any %magic and %%cellmagic completions in the alphabetical order 

429 by their name 

430 """ 

431 prio1, prio2 = 0, 0 

432 

433 if word.startswith('__'): 

434 prio1 = 2 

435 elif word.startswith('_'): 

436 prio1 = 1 

437 

438 if word.endswith('='): 

439 prio1 = -1 

440 

441 if word.startswith('%%'): 

442 # If there's another % in there, this is something else, so leave it alone 

443 if "%" not in word[2:]: 

444 word = word[2:] 

445 prio2 = 2 

446 elif word.startswith('%'): 

447 if "%" not in word[1:]: 

448 word = word[1:] 

449 prio2 = 1 

450 

451 return prio1, word, prio2 

452 

453 

454class _FakeJediCompletion: 

455 """ 

456 This is a workaround to communicate to the UI that Jedi has crashed and to 

457 report a bug. Will be used only id :any:`IPCompleter.debug` is set to true. 

458 

459 Added in IPython 6.0 so should likely be removed for 7.0 

460 

461 """ 

462 

463 def __init__(self, name): 

464 

465 self.name = name 

466 self.complete = name 

467 self.type = 'crashed' 

468 self.name_with_symbols = name 

469 self.signature = "" 

470 self._origin = "fake" 

471 self.text = "crashed" 

472 

473 def __repr__(self): 

474 return '<Fake completion object jedi has crashed>' 

475 

476 

477_JediCompletionLike = Union["jedi.api.Completion", _FakeJediCompletion] 

478 

479 

480class Completion: 

481 """ 

482 Completion object used and returned by IPython completers. 

483 

484 .. warning:: 

485 

486 Unstable 

487 

488 This function is unstable, API may change without warning. 

489 It will also raise unless use in proper context manager. 

490 

491 This act as a middle ground :any:`Completion` object between the 

492 :any:`jedi.api.classes.Completion` object and the Prompt Toolkit completion 

493 object. While Jedi need a lot of information about evaluator and how the 

494 code should be ran/inspected, PromptToolkit (and other frontend) mostly 

495 need user facing information. 

496 

497 - Which range should be replaced replaced by what. 

498 - Some metadata (like completion type), or meta information to displayed to 

499 the use user. 

500 

501 For debugging purpose we can also store the origin of the completion (``jedi``, 

502 ``IPython.python_matches``, ``IPython.magics_matches``...). 

503 """ 

504 

505 __slots__ = ['start', 'end', 'text', 'type', 'signature', '_origin'] 

506 

507 def __init__( 

508 self, 

509 start: int, 

510 end: int, 

511 text: str, 

512 *, 

513 type: Optional[str] = None, 

514 _origin="", 

515 signature="", 

516 ) -> None: 

517 warnings.warn( 

518 "``Completion`` is a provisional API (as of IPython 6.0). " 

519 "It may change without warnings. " 

520 "Use in corresponding context manager.", 

521 category=ProvisionalCompleterWarning, 

522 stacklevel=2, 

523 ) 

524 

525 self.start = start 

526 self.end = end 

527 self.text = text 

528 self.type = type 

529 self.signature = signature 

530 self._origin = _origin 

531 

532 def __repr__(self): 

533 return '<Completion start=%s end=%s text=%r type=%r, signature=%r,>' % \ 

534 (self.start, self.end, self.text, self.type or '?', self.signature or '?') 

535 

536 def __eq__(self, other) -> bool: 

537 """ 

538 Equality and hash do not hash the type (as some completer may not be 

539 able to infer the type), but are use to (partially) de-duplicate 

540 completion. 

541 

542 Completely de-duplicating completion is a bit tricker that just 

543 comparing as it depends on surrounding text, which Completions are not 

544 aware of. 

545 """ 

546 return self.start == other.start and \ 

547 self.end == other.end and \ 

548 self.text == other.text 

549 

550 def __hash__(self): 

551 return hash((self.start, self.end, self.text)) 

552 

553 

554class SimpleCompletion: 

555 """Completion item to be included in the dictionary returned by new-style Matcher (API v2). 

556 

557 .. warning:: 

558 

559 Provisional 

560 

561 This class is used to describe the currently supported attributes of 

562 simple completion items, and any additional implementation details 

563 should not be relied on. Additional attributes may be included in 

564 future versions, and meaning of text disambiguated from the current 

565 dual meaning of "text to insert" and "text to used as a label". 

566 """ 

567 

568 __slots__ = ["text", "type"] 

569 

570 def __init__(self, text: str, *, type: Optional[str] = None): 

571 self.text = text 

572 self.type = type 

573 

574 def __repr__(self): 

575 return f"<SimpleCompletion text={self.text!r} type={self.type!r}>" 

576 

577 

578class _MatcherResultBase(TypedDict): 

579 """Definition of dictionary to be returned by new-style Matcher (API v2).""" 

580 

581 #: Suffix of the provided ``CompletionContext.token``, if not given defaults to full token. 

582 matched_fragment: NotRequired[str] 

583 

584 #: Whether to suppress results from all other matchers (True), some 

585 #: matchers (set of identifiers) or none (False); default is False. 

586 suppress: NotRequired[Union[bool, set[str]]] 

587 

588 #: Identifiers of matchers which should NOT be suppressed when this matcher 

589 #: requests to suppress all other matchers; defaults to an empty set. 

590 do_not_suppress: NotRequired[set[str]] 

591 

592 #: Are completions already ordered and should be left as-is? default is False. 

593 ordered: NotRequired[bool] 

594 

595 

596@sphinx_options(show_inherited_members=True, exclude_inherited_from=["dict"]) 

597class SimpleMatcherResult(_MatcherResultBase, TypedDict): 

598 """Result of new-style completion matcher.""" 

599 

600 # note: TypedDict is added again to the inheritance chain 

601 # in order to get __orig_bases__ for documentation 

602 

603 #: List of candidate completions 

604 completions: Sequence[SimpleCompletion] | Iterator[SimpleCompletion] 

605 

606 

607class _JediMatcherResult(_MatcherResultBase): 

608 """Matching result returned by Jedi (will be processed differently)""" 

609 

610 #: list of candidate completions 

611 completions: Iterator[_JediCompletionLike] 

612 

613 

614AnyMatcherCompletion = Union[_JediCompletionLike, SimpleCompletion] 

615AnyCompletion = TypeVar("AnyCompletion", AnyMatcherCompletion, Completion) 

616 

617 

618@dataclass 

619class CompletionContext: 

620 """Completion context provided as an argument to matchers in the Matcher API v2.""" 

621 

622 # rationale: many legacy matchers relied on completer state (`self.text_until_cursor`) 

623 # which was not explicitly visible as an argument of the matcher, making any refactor 

624 # prone to errors; by explicitly passing `cursor_position` we can decouple the matchers 

625 # from the completer, and make substituting them in sub-classes easier. 

626 

627 #: Relevant fragment of code directly preceding the cursor. 

628 #: The extraction of token is implemented via splitter heuristic 

629 #: (following readline behaviour for legacy reasons), which is user configurable 

630 #: (by switching the greedy mode). 

631 token: str 

632 

633 #: The full available content of the editor or buffer 

634 full_text: str 

635 

636 #: Cursor position in the line (the same for ``full_text`` and ``text``). 

637 cursor_position: int 

638 

639 #: Cursor line in ``full_text``. 

640 cursor_line: int 

641 

642 #: The maximum number of completions that will be used downstream. 

643 #: Matchers can use this information to abort early. 

644 #: The built-in Jedi matcher is currently excepted from this limit. 

645 # If not given, return all possible completions. 

646 limit: Optional[int] 

647 

648 @cached_property 

649 def text_until_cursor(self) -> str: 

650 return self.line_with_cursor[: self.cursor_position] 

651 

652 @cached_property 

653 def line_with_cursor(self) -> str: 

654 return self.full_text.split("\n")[self.cursor_line] 

655 

656 

657#: Matcher results for API v2. 

658MatcherResult = Union[SimpleMatcherResult, _JediMatcherResult] 

659 

660 

661class _MatcherAPIv1Base(Protocol): 

662 def __call__(self, text: str) -> list[str]: 

663 """Call signature.""" 

664 ... 

665 

666 #: Used to construct the default matcher identifier 

667 __qualname__: str 

668 

669 

670class _MatcherAPIv1Total(_MatcherAPIv1Base, Protocol): 

671 #: API version 

672 matcher_api_version: Optional[Literal[1]] 

673 

674 def __call__(self, text: str) -> list[str]: 

675 """Call signature.""" 

676 ... 

677 

678 

679#: Protocol describing Matcher API v1. 

680MatcherAPIv1: TypeAlias = Union[_MatcherAPIv1Base, _MatcherAPIv1Total] 

681 

682 

683class MatcherAPIv2(Protocol): 

684 """Protocol describing Matcher API v2.""" 

685 

686 #: API version 

687 matcher_api_version: Literal[2] = 2 

688 

689 def __call__(self, context: CompletionContext) -> MatcherResult: 

690 """Call signature.""" 

691 ... 

692 

693 #: Used to construct the default matcher identifier 

694 __qualname__: str 

695 

696 

697Matcher: TypeAlias = Union[MatcherAPIv1, MatcherAPIv2] 

698 

699 

700def _is_matcher_v1(matcher: Matcher) -> TypeGuard[MatcherAPIv1]: 

701 api_version = _get_matcher_api_version(matcher) 

702 return api_version == 1 

703 

704 

705def _is_matcher_v2(matcher: Matcher) -> TypeGuard[MatcherAPIv2]: 

706 api_version = _get_matcher_api_version(matcher) 

707 return api_version == 2 

708 

709 

710def _is_sizable(value: Any) -> TypeGuard[Sized]: 

711 """Determines whether objects is sizable""" 

712 return hasattr(value, "__len__") 

713 

714 

715def _is_iterator(value: Any) -> TypeGuard[Iterator]: 

716 """Determines whether objects is sizable""" 

717 return hasattr(value, "__next__") 

718 

719 

720def has_any_completions(result: MatcherResult) -> bool: 

721 """Check if any result includes any completions.""" 

722 completions = result["completions"] 

723 if _is_sizable(completions): 

724 return len(completions) != 0 

725 if _is_iterator(completions): 

726 try: 

727 old_iterator = completions 

728 first = next(old_iterator) 

729 result["completions"] = cast( 

730 Iterator[SimpleCompletion], 

731 itertools.chain([first], old_iterator), 

732 ) 

733 return True 

734 except StopIteration: 

735 return False 

736 raise ValueError( 

737 "Completions returned by matcher need to be an Iterator or a Sizable" 

738 ) 

739 

740 

741def completion_matcher( 

742 *, 

743 priority: Optional[float] = None, 

744 identifier: Optional[str] = None, 

745 api_version: int = 1, 

746) -> Callable[[Matcher], Matcher]: 

747 """Adds attributes describing the matcher. 

748 

749 Parameters 

750 ---------- 

751 priority : Optional[float] 

752 The priority of the matcher, determines the order of execution of matchers. 

753 Higher priority means that the matcher will be executed first. Defaults to 0. 

754 identifier : Optional[str] 

755 identifier of the matcher allowing users to modify the behaviour via traitlets, 

756 and also used to for debugging (will be passed as ``origin`` with the completions). 

757 

758 Defaults to matcher function's ``__qualname__`` (for example, 

759 ``IPCompleter.file_matcher`` for the built-in matched defined 

760 as a ``file_matcher`` method of the ``IPCompleter`` class). 

761 api_version: Optional[int] 

762 version of the Matcher API used by this matcher. 

763 Currently supported values are 1 and 2. 

764 Defaults to 1. 

765 """ 

766 

767 def wrapper(func: Matcher): 

768 func.matcher_priority = priority or 0 # type: ignore 

769 func.matcher_identifier = identifier or func.__qualname__ # type: ignore 

770 func.matcher_api_version = api_version # type: ignore 

771 if TYPE_CHECKING: 

772 if api_version == 1: 

773 func = cast(MatcherAPIv1, func) 

774 elif api_version == 2: 

775 func = cast(MatcherAPIv2, func) 

776 return func 

777 

778 return wrapper 

779 

780 

781def _get_matcher_priority(matcher: Matcher): 

782 return getattr(matcher, "matcher_priority", 0) 

783 

784 

785def _get_matcher_id(matcher: Matcher): 

786 return getattr(matcher, "matcher_identifier", matcher.__qualname__) 

787 

788 

789def _get_matcher_api_version(matcher): 

790 return getattr(matcher, "matcher_api_version", 1) 

791 

792 

793context_matcher = partial(completion_matcher, api_version=2) 

794 

795 

796_IC = Iterable[Completion] 

797 

798 

799def _deduplicate_completions(text: str, completions: _IC)-> _IC: 

800 """ 

801 Deduplicate a set of completions. 

802 

803 .. warning:: 

804 

805 Unstable 

806 

807 This function is unstable, API may change without warning. 

808 

809 Parameters 

810 ---------- 

811 text : str 

812 text that should be completed. 

813 completions : Iterator[Completion] 

814 iterator over the completions to deduplicate 

815 

816 Yields 

817 ------ 

818 `Completions` objects 

819 Completions coming from multiple sources, may be different but end up having 

820 the same effect when applied to ``text``. If this is the case, this will 

821 consider completions as equal and only emit the first encountered. 

822 Not folded in `completions()` yet for debugging purpose, and to detect when 

823 the IPython completer does return things that Jedi does not, but should be 

824 at some point. 

825 """ 

826 completions = list(completions) 

827 if not completions: 

828 return 

829 

830 new_start = min(c.start for c in completions) 

831 new_end = max(c.end for c in completions) 

832 

833 seen = set() 

834 for c in completions: 

835 new_text = text[new_start:c.start] + c.text + text[c.end:new_end] 

836 if new_text not in seen: 

837 yield c 

838 seen.add(new_text) 

839 

840 

841def rectify_completions(text: str, completions: _IC, *, _debug: bool = False) -> _IC: 

842 """ 

843 Rectify a set of completions to all have the same ``start`` and ``end`` 

844 

845 .. warning:: 

846 

847 Unstable 

848 

849 This function is unstable, API may change without warning. 

850 It will also raise unless use in proper context manager. 

851 

852 Parameters 

853 ---------- 

854 text : str 

855 text that should be completed. 

856 completions : Iterator[Completion] 

857 iterator over the completions to rectify 

858 _debug : bool 

859 Log failed completion 

860 

861 Notes 

862 ----- 

863 :any:`jedi.api.classes.Completion` s returned by Jedi may not have the same start and end, though 

864 the Jupyter Protocol requires them to behave like so. This will readjust 

865 the completion to have the same ``start`` and ``end`` by padding both 

866 extremities with surrounding text. 

867 

868 During stabilisation should support a ``_debug`` option to log which 

869 completion are return by the IPython completer and not found in Jedi in 

870 order to make upstream bug report. 

871 """ 

872 warnings.warn("`rectify_completions` is a provisional API (as of IPython 6.0). " 

873 "It may change without warnings. " 

874 "Use in corresponding context manager.", 

875 category=ProvisionalCompleterWarning, stacklevel=2) 

876 

877 completions = list(completions) 

878 if not completions: 

879 return 

880 starts = (c.start for c in completions) 

881 ends = (c.end for c in completions) 

882 

883 new_start = min(starts) 

884 new_end = max(ends) 

885 

886 seen_jedi = set() 

887 seen_python_matches = set() 

888 for c in completions: 

889 new_text = text[new_start:c.start] + c.text + text[c.end:new_end] 

890 if c._origin == 'jedi': 

891 seen_jedi.add(new_text) 

892 elif c._origin == "IPCompleter.python_matcher": 

893 seen_python_matches.add(new_text) 

894 yield Completion(new_start, new_end, new_text, type=c.type, _origin=c._origin, signature=c.signature) 

895 diff = seen_python_matches.difference(seen_jedi) 

896 if diff and _debug: 

897 print('IPython.python matches have extras:', diff) 

898 

899 

900if sys.platform == 'win32': 

901 DELIMS = ' \t\n`!@#$^&*()=+[{]}|;\'",<>?' 

902else: 

903 DELIMS = ' \t\n`!@#$^&*()=+[{]}\\|;:\'",<>?' 

904 

905GREEDY_DELIMS = ' =\r\n' 

906 

907 

908class CompletionSplitter: 

909 """An object to split an input line in a manner similar to readline. 

910 

911 By having our own implementation, we can expose readline-like completion in 

912 a uniform manner to all frontends. This object only needs to be given the 

913 line of text to be split and the cursor position on said line, and it 

914 returns the 'word' to be completed on at the cursor after splitting the 

915 entire line. 

916 

917 What characters are used as splitting delimiters can be controlled by 

918 setting the ``delims`` attribute (this is a property that internally 

919 automatically builds the necessary regular expression)""" 

920 

921 # Private interface 

922 

923 # A string of delimiter characters. The default value makes sense for 

924 # IPython's most typical usage patterns. 

925 _delims = DELIMS 

926 

927 # The expression (a normal string) to be compiled into a regular expression 

928 # for actual splitting. We store it as an attribute mostly for ease of 

929 # debugging, since this type of code can be so tricky to debug. 

930 _delim_expr = None 

931 

932 # The regular expression that does the actual splitting 

933 _delim_re = None 

934 

935 def __init__(self, delims=None): 

936 delims = CompletionSplitter._delims if delims is None else delims 

937 self.delims = delims 

938 

939 @property 

940 def delims(self): 

941 """Return the string of delimiter characters.""" 

942 return self._delims 

943 

944 @delims.setter 

945 def delims(self, delims): 

946 """Set the delimiters for line splitting.""" 

947 expr = '[' + ''.join('\\'+ c for c in delims) + ']' 

948 self._delim_re = re.compile(expr) 

949 self._delims = delims 

950 self._delim_expr = expr 

951 

952 def split_line(self, line, cursor_pos=None): 

953 """Split a line of text with a cursor at the given position. 

954 """ 

955 cut_line = line if cursor_pos is None else line[:cursor_pos] 

956 return self._delim_re.split(cut_line)[-1] 

957 

958 

959class Completer(Configurable): 

960 

961 greedy = Bool( 

962 False, 

963 help="""Activate greedy completion. 

964 

965 .. deprecated:: 8.8 

966 Use :std:configtrait:`Completer.evaluation` and :std:configtrait:`Completer.auto_close_dict_keys` instead. 

967 

968 When enabled in IPython 8.8 or newer, changes configuration as follows: 

969 

970 - ``Completer.evaluation = 'unsafe'`` 

971 - ``Completer.auto_close_dict_keys = True`` 

972 """, 

973 ).tag(config=True) 

974 

975 evaluation = Enum( 

976 ("forbidden", "minimal", "limited", "unsafe", "dangerous"), 

977 default_value="limited", 

978 help="""Policy for code evaluation under completion. 

979 

980 Successive options allow to enable more eager evaluation for better 

981 completion suggestions, including for nested dictionaries, nested lists, 

982 or even results of function calls. 

983 Setting ``unsafe`` or higher can lead to evaluation of arbitrary user 

984 code on :kbd:`Tab` with potentially unwanted or dangerous side effects. 

985 

986 Allowed values are: 

987 

988 - ``forbidden``: no evaluation of code is permitted, 

989 - ``minimal``: evaluation of literals and access to built-in namespace; 

990 no item/attribute evaluation, no access to locals/globals, 

991 no evaluation of any operations or comparisons. 

992 - ``limited``: access to all namespaces, evaluation of hard-coded methods 

993 (for example: :any:`dict.keys`, :any:`object.__getattr__`, 

994 :any:`object.__getitem__`) on allow-listed objects (for example: 

995 :any:`dict`, :any:`list`, :any:`tuple`, ``pandas.Series``), 

996 - ``unsafe``: evaluation of all methods and function calls but not of 

997 syntax with side-effects like `del x`, 

998 - ``dangerous``: completely arbitrary evaluation; does not support auto-import. 

999 

1000 To override specific elements of the policy, you can use ``policy_overrides`` trait. 

1001 """, 

1002 ).tag(config=True) 

1003 

1004 use_jedi = Bool(default_value=JEDI_INSTALLED, 

1005 help="Experimental: Use Jedi to generate autocompletions. " 

1006 "Default to True if jedi is installed.").tag(config=True) 

1007 

1008 jedi_compute_type_timeout = Int(default_value=400, 

1009 help="""Experimental: restrict time (in milliseconds) during which Jedi can compute types. 

1010 Set to 0 to stop computing types. Non-zero value lower than 100ms may hurt 

1011 performance by preventing jedi to build its cache. 

1012 """).tag(config=True) 

1013 

1014 debug = Bool(default_value=False, 

1015 help='Enable debug for the Completer. Mostly print extra ' 

1016 'information for experimental jedi integration.')\ 

1017 .tag(config=True) 

1018 

1019 backslash_combining_completions = Bool(True, 

1020 help="Enable unicode completions, e.g. \\alpha<tab> . " 

1021 "Includes completion of latex commands, unicode names, and expanding " 

1022 "unicode characters back to latex commands.").tag(config=True) 

1023 

1024 auto_close_dict_keys = Bool( 

1025 False, 

1026 help=""" 

1027 Enable auto-closing dictionary keys. 

1028 

1029 When enabled string keys will be suffixed with a final quote 

1030 (matching the opening quote), tuple keys will also receive a 

1031 separating comma if needed, and keys which are final will 

1032 receive a closing bracket (``]``). 

1033 """, 

1034 ).tag(config=True) 

1035 

1036 policy_overrides = DictTrait( 

1037 default_value={}, 

1038 key_trait=Unicode(), 

1039 help="""Overrides for policy evaluation. 

1040 

1041 For example, to enable auto-import on completion specify: 

1042 

1043 .. code-block:: 

1044 

1045 ipython --Completer.policy_overrides='{"allow_auto_import": True}' --Completer.use_jedi=False 

1046 

1047 """, 

1048 ).tag(config=True) 

1049 

1050 @observe("evaluation") 

1051 def _evaluation_changed(self, _change): 

1052 _validate_policy_overrides( 

1053 policy_name=self.evaluation, policy_overrides=self.policy_overrides 

1054 ) 

1055 

1056 @observe("policy_overrides") 

1057 def _policy_overrides_changed(self, _change): 

1058 _validate_policy_overrides( 

1059 policy_name=self.evaluation, policy_overrides=self.policy_overrides 

1060 ) 

1061 

1062 auto_import_method = DottedObjectName( 

1063 default_value="importlib.import_module", 

1064 allow_none=True, 

1065 help="""\ 

1066 Provisional: 

1067 This is a provisional API in IPython 9.3, it may change without warnings. 

1068 

1069 A fully qualified path to an auto-import method for use by completer. 

1070 The function should take a single string and return `ModuleType` and 

1071 can raise `ImportError` exception if module is not found. 

1072 

1073 The default auto-import implementation does not populate the user namespace with the imported module. 

1074 """, 

1075 ).tag(config=True) 

1076 

1077 def __init__(self, namespace=None, global_namespace=None, **kwargs): 

1078 """Create a new completer for the command line. 

1079 

1080 Completer(namespace=ns, global_namespace=ns2) -> completer instance. 

1081 

1082 If unspecified, the default namespace where completions are performed 

1083 is __main__ (technically, __main__.__dict__). Namespaces should be 

1084 given as dictionaries. 

1085 

1086 An optional second namespace can be given. This allows the completer 

1087 to handle cases where both the local and global scopes need to be 

1088 distinguished. 

1089 """ 

1090 

1091 # Don't bind to namespace quite yet, but flag whether the user wants a 

1092 # specific namespace or to use __main__.__dict__. This will allow us 

1093 # to bind to __main__.__dict__ at completion time, not now. 

1094 if namespace is None: 

1095 self.use_main_ns = True 

1096 else: 

1097 self.use_main_ns = False 

1098 self.namespace = namespace 

1099 

1100 # The global namespace, if given, can be bound directly 

1101 if global_namespace is None: 

1102 self.global_namespace = {} 

1103 else: 

1104 self.global_namespace = global_namespace 

1105 

1106 self.custom_matchers = [] 

1107 

1108 super(Completer, self).__init__(**kwargs) 

1109 

1110 def complete(self, text, state): 

1111 """Return the next possible completion for 'text'. 

1112 

1113 This is called successively with state == 0, 1, 2, ... until it 

1114 returns None. The completion should begin with 'text'. 

1115 

1116 """ 

1117 if self.use_main_ns: 

1118 self.namespace = __main__.__dict__ 

1119 

1120 if state == 0: 

1121 if "." in text: 

1122 self.matches = self.attr_matches(text) 

1123 else: 

1124 self.matches = self.global_matches(text) 

1125 try: 

1126 return self.matches[state] 

1127 except IndexError: 

1128 return None 

1129 

1130 def global_matches(self, text: str, context: Optional[CompletionContext] = None): 

1131 """Compute matches when text is a simple name. 

1132 

1133 Return a list of all keywords, built-in functions and names currently 

1134 defined in self.namespace or self.global_namespace that match. 

1135 

1136 """ 

1137 matches = [] 

1138 match_append = matches.append 

1139 n = len(text) 

1140 

1141 search_lists = [ 

1142 keyword.kwlist, 

1143 builtin_mod.__dict__.keys(), 

1144 list(self.namespace.keys()), 

1145 list(self.global_namespace.keys()), 

1146 ] 

1147 if context and context.full_text.count("\n") > 1: 

1148 # try to evaluate on full buffer 

1149 previous_lines = "\n".join( 

1150 context.full_text.split("\n")[: context.cursor_line] 

1151 ) 

1152 if previous_lines: 

1153 all_code_lines_before_cursor = ( 

1154 self._extract_code(previous_lines) + "\n" + text 

1155 ) 

1156 context = EvaluationContext( 

1157 globals=self.global_namespace, 

1158 locals=self.namespace, 

1159 evaluation=self.evaluation, 

1160 auto_import=self._auto_import, 

1161 policy_overrides=self.policy_overrides, 

1162 ) 

1163 try: 

1164 obj = guarded_eval( 

1165 all_code_lines_before_cursor, 

1166 context, 

1167 ) 

1168 except Exception as e: 

1169 if self.debug: 

1170 warnings.warn(f"Evaluation exception {e}") 

1171 

1172 search_lists.append(list(context.transient_locals.keys())) 

1173 

1174 for lst in search_lists: 

1175 for word in lst: 

1176 if word[:n] == text and word != "__builtins__": 

1177 match_append(word) 

1178 

1179 snake_case_re = re.compile(r"[^_]+(_[^_]+)+?\Z") 

1180 for lst in [list(self.namespace.keys()), list(self.global_namespace.keys())]: 

1181 shortened = { 

1182 "_".join([sub[0] for sub in word.split("_")]): word 

1183 for word in lst 

1184 if snake_case_re.match(word) 

1185 } 

1186 for word in shortened.keys(): 

1187 if word[:n] == text and word != "__builtins__": 

1188 match_append(shortened[word]) 

1189 

1190 return matches 

1191 

1192 def attr_matches(self, text): 

1193 """Compute matches when text contains a dot. 

1194 

1195 Assuming the text is of the form NAME.NAME....[NAME], and is 

1196 evaluatable in self.namespace or self.global_namespace, it will be 

1197 evaluated and its attributes (as revealed by dir()) are used as 

1198 possible completions. (For class instances, class members are 

1199 also considered.) 

1200 

1201 WARNING: this can still invoke arbitrary C code, if an object 

1202 with a __getattr__ hook is evaluated. 

1203 

1204 """ 

1205 return self._attr_matches(text)[0] 

1206 

1207 # we simple attribute matching with normal identifiers. 

1208 _ATTR_MATCH_RE = re.compile(r"(.+)\.(\w*)$") 

1209 

1210 def _strip_code_before_operator(self, code: str) -> str: 

1211 o_parens = {"(", "[", "{"} 

1212 c_parens = {")", "]", "}"} 

1213 

1214 # Dry-run tokenize to catch errors 

1215 try: 

1216 _ = list(tokenize.generate_tokens(iter(code.splitlines()).__next__)) 

1217 except tokenize.TokenError: 

1218 # Try trimming the expression and retrying 

1219 trimmed_code = self._trim_expr(code) 

1220 try: 

1221 _ = list( 

1222 tokenize.generate_tokens(iter(trimmed_code.splitlines()).__next__) 

1223 ) 

1224 code = trimmed_code 

1225 except tokenize.TokenError: 

1226 return code 

1227 

1228 tokens = _parse_tokens(code) 

1229 encountered_operator = False 

1230 after_operator = [] 

1231 nesting_level = 0 

1232 

1233 for t in tokens: 

1234 if t.type == tokenize.OP: 

1235 if t.string in o_parens: 

1236 nesting_level += 1 

1237 elif t.string in c_parens: 

1238 nesting_level -= 1 

1239 elif t.string != "." and nesting_level == 0: 

1240 encountered_operator = True 

1241 after_operator = [] 

1242 continue 

1243 

1244 if encountered_operator: 

1245 after_operator.append(t.string) 

1246 

1247 if encountered_operator: 

1248 return "".join(after_operator) 

1249 else: 

1250 return code 

1251 

1252 def _extract_code(self, line: str): 

1253 """No-op in Completer, but can be used in subclasses to customise behaviour""" 

1254 return line 

1255 

1256 def _attr_matches( 

1257 self, 

1258 text: str, 

1259 include_prefix: bool = True, 

1260 context: Optional[CompletionContext] = None, 

1261 ) -> tuple[Sequence[str], str]: 

1262 m2 = self._ATTR_MATCH_RE.match(text) 

1263 if not m2: 

1264 return [], "" 

1265 expr, attr = m2.group(1, 2) 

1266 try: 

1267 expr = self._strip_code_before_operator(expr) 

1268 except tokenize.TokenError: 

1269 pass 

1270 

1271 obj = self._evaluate_expr(expr) 

1272 if obj is not_found: 

1273 if context: 

1274 # try to evaluate on full buffer 

1275 previous_lines = "\n".join( 

1276 context.full_text.split("\n")[: context.cursor_line] 

1277 ) 

1278 if previous_lines: 

1279 all_code_lines_before_cursor = ( 

1280 self._extract_code(previous_lines) + "\n" + expr 

1281 ) 

1282 obj = self._evaluate_expr(all_code_lines_before_cursor) 

1283 

1284 if obj is not_found: 

1285 return [], "" 

1286 

1287 if self.limit_to__all__ and hasattr(obj, '__all__'): 

1288 words = get__all__entries(obj) 

1289 else: 

1290 words = dir2(obj) 

1291 

1292 try: 

1293 words = generics.complete_object(obj, words) 

1294 except TryNext: 

1295 pass 

1296 except AssertionError: 

1297 raise 

1298 except Exception: 

1299 # Silence errors from completion function 

1300 pass 

1301 # Build match list to return 

1302 n = len(attr) 

1303 

1304 # Note: ideally we would just return words here and the prefix 

1305 # reconciliator would know that we intend to append to rather than 

1306 # replace the input text; this requires refactoring to return range 

1307 # which ought to be replaced (as does jedi). 

1308 if include_prefix: 

1309 tokens = _parse_tokens(expr) 

1310 rev_tokens = reversed(tokens) 

1311 skip_over = {tokenize.ENDMARKER, tokenize.NEWLINE} 

1312 name_turn = True 

1313 

1314 parts = [] 

1315 for token in rev_tokens: 

1316 if token.type in skip_over: 

1317 continue 

1318 if token.type == tokenize.NAME and name_turn: 

1319 parts.append(token.string) 

1320 name_turn = False 

1321 elif ( 

1322 token.type == tokenize.OP and token.string == "." and not name_turn 

1323 ): 

1324 parts.append(token.string) 

1325 name_turn = True 

1326 else: 

1327 # short-circuit if not empty nor name token 

1328 break 

1329 

1330 prefix_after_space = "".join(reversed(parts)) 

1331 else: 

1332 prefix_after_space = "" 

1333 

1334 return ( 

1335 ["%s.%s" % (prefix_after_space, w) for w in words if w[:n] == attr], 

1336 "." + attr, 

1337 ) 

1338 

1339 def _trim_expr(self, code: str) -> str: 

1340 """ 

1341 Trim the code until it is a valid expression and not a tuple; 

1342 

1343 return the trimmed expression for guarded_eval. 

1344 """ 

1345 while code: 

1346 code = code[1:] 

1347 try: 

1348 res = ast.parse(code) 

1349 except SyntaxError: 

1350 continue 

1351 

1352 assert res is not None 

1353 if len(res.body) != 1: 

1354 continue 

1355 if not isinstance(res.body[0], ast.Expr): 

1356 continue 

1357 expr = res.body[0].value 

1358 if isinstance(expr, ast.Tuple) and not code[-1] == ")": 

1359 # we skip implicit tuple, like when trimming `fun(a,b`<completion> 

1360 # as `a,b` would be a tuple, and we actually expect to get only `b` 

1361 continue 

1362 return code 

1363 return "" 

1364 

1365 def _evaluate_expr(self, expr): 

1366 obj = not_found 

1367 done = False 

1368 while not done and expr: 

1369 try: 

1370 obj = guarded_eval( 

1371 expr, 

1372 EvaluationContext( 

1373 globals=self.global_namespace, 

1374 locals=self.namespace, 

1375 evaluation=self.evaluation, 

1376 auto_import=self._auto_import, 

1377 policy_overrides=self.policy_overrides, 

1378 ), 

1379 ) 

1380 done = True 

1381 except (SyntaxError, TypeError) as e: 

1382 if self.debug: 

1383 warnings.warn(f"Trimming because of {e}") 

1384 # TypeError can show up with something like `+ d` 

1385 # where `d` is a dictionary. 

1386 

1387 # trim the expression to remove any invalid prefix 

1388 # e.g. user starts `(d[`, so we get `expr = '(d'`, 

1389 # where parenthesis is not closed. 

1390 # TODO: make this faster by reusing parts of the computation? 

1391 expr = self._trim_expr(expr) 

1392 except Exception as e: 

1393 if self.debug: 

1394 warnings.warn(f"Evaluation exception {e}") 

1395 done = True 

1396 if self.debug: 

1397 warnings.warn(f"Resolved to {obj}") 

1398 return obj 

1399 

1400 @property 

1401 def _auto_import(self): 

1402 if self.auto_import_method is None: 

1403 return None 

1404 if not hasattr(self, "_auto_import_func"): 

1405 self._auto_import_func = import_item(self.auto_import_method) 

1406 return self._auto_import_func 

1407 

1408 

1409def get__all__entries(obj): 

1410 """returns the strings in the __all__ attribute""" 

1411 try: 

1412 words = getattr(obj, '__all__') 

1413 except Exception: 

1414 return [] 

1415 

1416 return [w for w in words if isinstance(w, str)] 

1417 

1418 

1419class _DictKeyState(enum.Flag): 

1420 """Represent state of the key match in context of other possible matches. 

1421 

1422 - given `d1 = {'a': 1}` completion on `d1['<tab>` will yield `{'a': END_OF_ITEM}` as there is no tuple. 

1423 - given `d2 = {('a', 'b'): 1}`: `d2['a', '<tab>` will yield `{'b': END_OF_TUPLE}` as there is no tuple members to add beyond `'b'`. 

1424 - given `d3 = {('a', 'b'): 1}`: `d3['<tab>` will yield `{'a': IN_TUPLE}` as `'a'` can be added. 

1425 - given `d4 = {'a': 1, ('a', 'b'): 2}`: `d4['<tab>` will yield `{'a': END_OF_ITEM & END_OF_TUPLE}` 

1426 """ 

1427 

1428 BASELINE = 0 

1429 END_OF_ITEM = enum.auto() 

1430 END_OF_TUPLE = enum.auto() 

1431 IN_TUPLE = enum.auto() 

1432 

1433 

1434def _parse_tokens(c): 

1435 """Parse tokens even if there is an error.""" 

1436 tokens = [] 

1437 token_generator = tokenize.generate_tokens(iter(c.splitlines()).__next__) 

1438 while True: 

1439 try: 

1440 tokens.append(next(token_generator)) 

1441 except tokenize.TokenError: 

1442 return tokens 

1443 except StopIteration: 

1444 return tokens 

1445 

1446 

1447def _match_number_in_dict_key_prefix(prefix: str) -> Union[str, None]: 

1448 """Match any valid Python numeric literal in a prefix of dictionary keys. 

1449 

1450 References: 

1451 - https://docs.python.org/3/reference/lexical_analysis.html#numeric-literals 

1452 - https://docs.python.org/3/library/tokenize.html 

1453 """ 

1454 if prefix[-1].isspace(): 

1455 # if user typed a space we do not have anything to complete 

1456 # even if there was a valid number token before 

1457 return None 

1458 tokens = _parse_tokens(prefix) 

1459 rev_tokens = reversed(tokens) 

1460 skip_over = {tokenize.ENDMARKER, tokenize.NEWLINE} 

1461 number = None 

1462 for token in rev_tokens: 

1463 if token.type in skip_over: 

1464 continue 

1465 if number is None: 

1466 if token.type == tokenize.NUMBER: 

1467 number = token.string 

1468 continue 

1469 else: 

1470 # we did not match a number 

1471 return None 

1472 if token.type == tokenize.OP: 

1473 if token.string == ",": 

1474 break 

1475 if token.string in {"+", "-"}: 

1476 number = token.string + number 

1477 else: 

1478 return None 

1479 return number 

1480 

1481 

1482_INT_FORMATS = { 

1483 "0b": bin, 

1484 "0o": oct, 

1485 "0x": hex, 

1486} 

1487 

1488 

1489def match_dict_keys( 

1490 keys: list[Union[str, bytes, tuple[Union[str, bytes], ...]]], 

1491 prefix: str, 

1492 delims: str, 

1493 extra_prefix: Optional[tuple[Union[str, bytes], ...]] = None, 

1494) -> tuple[str, int, dict[str, _DictKeyState]]: 

1495 """Used by dict_key_matches, matching the prefix to a list of keys 

1496 

1497 Parameters 

1498 ---------- 

1499 keys 

1500 list of keys in dictionary currently being completed. 

1501 prefix 

1502 Part of the text already typed by the user. E.g. `mydict[b'fo` 

1503 delims 

1504 String of delimiters to consider when finding the current key. 

1505 extra_prefix : optional 

1506 Part of the text already typed in multi-key index cases. E.g. for 

1507 `mydict['foo', "bar", 'b`, this would be `('foo', 'bar')`. 

1508 

1509 Returns 

1510 ------- 

1511 A tuple of three elements: ``quote``, ``token_start``, ``matched``, with 

1512 ``quote`` being the quote that need to be used to close current string. 

1513 ``token_start`` the position where the replacement should start occurring, 

1514 ``matches`` a dictionary of replacement/completion keys on keys and values 

1515 indicating whether the state. 

1516 """ 

1517 prefix_tuple = extra_prefix if extra_prefix else () 

1518 

1519 prefix_tuple_size = sum( 

1520 [ 

1521 # for pandas, do not count slices as taking space 

1522 not isinstance(k, slice) 

1523 for k in prefix_tuple 

1524 ] 

1525 ) 

1526 text_serializable_types = (str, bytes, int, float, slice) 

1527 

1528 def filter_prefix_tuple(key): 

1529 # Reject too short keys 

1530 if len(key) <= prefix_tuple_size: 

1531 return False 

1532 # Reject keys which cannot be serialised to text 

1533 for k in key: 

1534 if not isinstance(k, text_serializable_types): 

1535 return False 

1536 # Reject keys that do not match the prefix 

1537 for k, pt in zip(key, prefix_tuple): 

1538 if k != pt and not isinstance(pt, slice): 

1539 return False 

1540 # All checks passed! 

1541 return True 

1542 

1543 filtered_key_is_final: dict[ 

1544 Union[str, bytes, int, float], _DictKeyState 

1545 ] = defaultdict(lambda: _DictKeyState.BASELINE) 

1546 

1547 for k in keys: 

1548 # If at least one of the matches is not final, mark as undetermined. 

1549 # This can happen with `d = {111: 'b', (111, 222): 'a'}` where 

1550 # `111` appears final on first match but is not final on the second. 

1551 

1552 if isinstance(k, tuple): 

1553 if filter_prefix_tuple(k): 

1554 key_fragment = k[prefix_tuple_size] 

1555 filtered_key_is_final[key_fragment] |= ( 

1556 _DictKeyState.END_OF_TUPLE 

1557 if len(k) == prefix_tuple_size + 1 

1558 else _DictKeyState.IN_TUPLE 

1559 ) 

1560 elif prefix_tuple_size > 0: 

1561 # we are completing a tuple but this key is not a tuple, 

1562 # so we should ignore it 

1563 pass 

1564 else: 

1565 if isinstance(k, text_serializable_types): 

1566 filtered_key_is_final[k] |= _DictKeyState.END_OF_ITEM 

1567 

1568 filtered_keys = filtered_key_is_final.keys() 

1569 

1570 if not prefix: 

1571 return "", 0, {repr(k): v for k, v in filtered_key_is_final.items()} 

1572 

1573 quote_match = re.search("(?:\"|')", prefix) 

1574 is_user_prefix_numeric = False 

1575 

1576 if quote_match: 

1577 quote = quote_match.group() 

1578 valid_prefix = prefix + quote 

1579 try: 

1580 prefix_str = literal_eval(valid_prefix) 

1581 except Exception: 

1582 return "", 0, {} 

1583 else: 

1584 # If it does not look like a string, let's assume 

1585 # we are dealing with a number or variable. 

1586 number_match = _match_number_in_dict_key_prefix(prefix) 

1587 

1588 # We do not want the key matcher to suggest variable names so we yield: 

1589 if number_match is None: 

1590 # The alternative would be to assume that user forgort the quote 

1591 # and if the substring matches, suggest adding it at the start. 

1592 return "", 0, {} 

1593 

1594 prefix_str = number_match 

1595 is_user_prefix_numeric = True 

1596 quote = "" 

1597 

1598 pattern = '[^' + ''.join('\\' + c for c in delims) + ']*$' 

1599 token_match = re.search(pattern, prefix, re.UNICODE) 

1600 assert token_match is not None # silence mypy 

1601 token_start = token_match.start() 

1602 token_prefix = token_match.group() 

1603 

1604 matched: dict[str, _DictKeyState] = {} 

1605 

1606 str_key: Union[str, bytes] 

1607 

1608 for key in filtered_keys: 

1609 if isinstance(key, (int, float)): 

1610 # User typed a number but this key is not a number. 

1611 if not is_user_prefix_numeric: 

1612 continue 

1613 str_key = str(key) 

1614 if isinstance(key, int): 

1615 int_base = prefix_str[:2].lower() 

1616 # if user typed integer using binary/oct/hex notation: 

1617 if int_base in _INT_FORMATS: 

1618 int_format = _INT_FORMATS[int_base] 

1619 str_key = int_format(key) 

1620 else: 

1621 # User typed a string but this key is a number. 

1622 if is_user_prefix_numeric: 

1623 continue 

1624 str_key = key 

1625 try: 

1626 if not str_key.startswith(prefix_str): 

1627 continue 

1628 except (AttributeError, TypeError, UnicodeError): 

1629 # Python 3+ TypeError on b'a'.startswith('a') or vice-versa 

1630 continue 

1631 

1632 # reformat remainder of key to begin with prefix 

1633 rem = str_key[len(prefix_str) :] 

1634 # force repr wrapped in ' 

1635 rem_repr = repr(rem + '"') if isinstance(rem, str) else repr(rem + b'"') 

1636 rem_repr = rem_repr[1 + rem_repr.index("'"):-2] 

1637 if quote == '"': 

1638 # The entered prefix is quoted with ", 

1639 # but the match is quoted with '. 

1640 # A contained " hence needs escaping for comparison: 

1641 rem_repr = rem_repr.replace('"', '\\"') 

1642 

1643 # then reinsert prefix from start of token 

1644 match = "%s%s" % (token_prefix, rem_repr) 

1645 

1646 matched[match] = filtered_key_is_final[key] 

1647 return quote, token_start, matched 

1648 

1649 

1650def cursor_to_position(text:str, line:int, column:int)->int: 

1651 """ 

1652 Convert the (line,column) position of the cursor in text to an offset in a 

1653 string. 

1654 

1655 Parameters 

1656 ---------- 

1657 text : str 

1658 The text in which to calculate the cursor offset 

1659 line : int 

1660 Line of the cursor; 0-indexed 

1661 column : int 

1662 Column of the cursor 0-indexed 

1663 

1664 Returns 

1665 ------- 

1666 Position of the cursor in ``text``, 0-indexed. 

1667 

1668 See Also 

1669 -------- 

1670 position_to_cursor : reciprocal of this function 

1671 

1672 """ 

1673 lines = text.split('\n') 

1674 assert line <= len(lines), '{} <= {}'.format(str(line), str(len(lines))) 

1675 

1676 return sum(len(line) + 1 for line in lines[:line]) + column 

1677 

1678 

1679def position_to_cursor(text: str, offset: int) -> tuple[int, int]: 

1680 """ 

1681 Convert the position of the cursor in text (0 indexed) to a line 

1682 number(0-indexed) and a column number (0-indexed) pair 

1683 

1684 Position should be a valid position in ``text``. 

1685 

1686 Parameters 

1687 ---------- 

1688 text : str 

1689 The text in which to calculate the cursor offset 

1690 offset : int 

1691 Position of the cursor in ``text``, 0-indexed. 

1692 

1693 Returns 

1694 ------- 

1695 (line, column) : (int, int) 

1696 Line of the cursor; 0-indexed, column of the cursor 0-indexed 

1697 

1698 See Also 

1699 -------- 

1700 cursor_to_position : reciprocal of this function 

1701 

1702 """ 

1703 

1704 assert 0 <= offset <= len(text) , "0 <= %s <= %s" % (offset , len(text)) 

1705 

1706 before = text[:offset] 

1707 blines = before.split('\n') # ! splitnes trim trailing \n 

1708 line = before.count('\n') 

1709 col = len(blines[-1]) 

1710 return line, col 

1711 

1712 

1713def _safe_isinstance(obj, module, class_name, *attrs): 

1714 """Checks if obj is an instance of module.class_name if loaded 

1715 """ 

1716 if module in sys.modules: 

1717 m = sys.modules[module] 

1718 for attr in [class_name, *attrs]: 

1719 m = getattr(m, attr) 

1720 return isinstance(obj, m) 

1721 

1722 

1723@context_matcher() 

1724def back_unicode_name_matcher(context: CompletionContext): 

1725 """Match Unicode characters back to Unicode name 

1726 

1727 Same as :any:`back_unicode_name_matches`, but adopted to new Matcher API. 

1728 """ 

1729 fragment, matches = back_unicode_name_matches(context.text_until_cursor) 

1730 return _convert_matcher_v1_result_to_v2( 

1731 matches, type="unicode", fragment=fragment, suppress_if_matches=True 

1732 ) 

1733 

1734 

1735def back_unicode_name_matches(text: str) -> tuple[str, Sequence[str]]: 

1736 """Match Unicode characters back to Unicode name 

1737 

1738 This does ``☃`` -> ``\\snowman`` 

1739 

1740 Note that snowman is not a valid python3 combining character but will be expanded. 

1741 Though it will not recombine back to the snowman character by the completion machinery. 

1742 

1743 This will not either back-complete standard sequences like \\n, \\b ... 

1744 

1745 .. deprecated:: 8.6 

1746 You can use :meth:`back_unicode_name_matcher` instead. 

1747 

1748 Returns 

1749 ======= 

1750 

1751 Return a tuple with two elements: 

1752 

1753 - The Unicode character that was matched (preceded with a backslash), or 

1754 empty string, 

1755 - a sequence (of 1), name for the match Unicode character, preceded by 

1756 backslash, or empty if no match. 

1757 """ 

1758 if len(text)<2: 

1759 return '', () 

1760 maybe_slash = text[-2] 

1761 if maybe_slash != '\\': 

1762 return '', () 

1763 

1764 char = text[-1] 

1765 # no expand on quote for completion in strings. 

1766 # nor backcomplete standard ascii keys 

1767 if char in string.ascii_letters or char in ('"',"'"): 

1768 return '', () 

1769 try : 

1770 unic = unicodedata.name(char) 

1771 return '\\'+char,('\\'+unic,) 

1772 except KeyError: 

1773 pass 

1774 return '', () 

1775 

1776 

1777@context_matcher() 

1778def back_latex_name_matcher(context: CompletionContext) -> SimpleMatcherResult: 

1779 """Match latex characters back to unicode name 

1780 

1781 This does ``\\ℵ`` -> ``\\aleph`` 

1782 """ 

1783 

1784 text = context.text_until_cursor 

1785 no_match = { 

1786 "completions": [], 

1787 "suppress": False, 

1788 } 

1789 

1790 if len(text)<2: 

1791 return no_match 

1792 maybe_slash = text[-2] 

1793 if maybe_slash != '\\': 

1794 return no_match 

1795 

1796 char = text[-1] 

1797 # no expand on quote for completion in strings. 

1798 # nor backcomplete standard ascii keys 

1799 if char in string.ascii_letters or char in ('"',"'"): 

1800 return no_match 

1801 try : 

1802 latex = reverse_latex_symbol[char] 

1803 # '\\' replace the \ as well 

1804 return { 

1805 "completions": [SimpleCompletion(text=latex, type="latex")], 

1806 "suppress": True, 

1807 "matched_fragment": "\\" + char, 

1808 } 

1809 except KeyError: 

1810 pass 

1811 

1812 return no_match 

1813 

1814def _formatparamchildren(parameter) -> str: 

1815 """ 

1816 Get parameter name and value from Jedi Private API 

1817 

1818 Jedi does not expose a simple way to get `param=value` from its API. 

1819 

1820 Parameters 

1821 ---------- 

1822 parameter 

1823 Jedi's function `Param` 

1824 

1825 Returns 

1826 ------- 

1827 A string like 'a', 'b=1', '*args', '**kwargs' 

1828 

1829 """ 

1830 description = parameter.description 

1831 if not description.startswith('param '): 

1832 raise ValueError('Jedi function parameter description have change format.' 

1833 'Expected "param ...", found %r".' % description) 

1834 return description[6:] 

1835 

1836def _make_signature(completion)-> str: 

1837 """ 

1838 Make the signature from a jedi completion 

1839 

1840 Parameters 

1841 ---------- 

1842 completion : jedi.Completion 

1843 object does not complete a function type 

1844 

1845 Returns 

1846 ------- 

1847 a string consisting of the function signature, with the parenthesis but 

1848 without the function name. example: 

1849 `(a, *args, b=1, **kwargs)` 

1850 

1851 """ 

1852 

1853 # it looks like this might work on jedi 0.17 

1854 if hasattr(completion, 'get_signatures'): 

1855 signatures = completion.get_signatures() 

1856 if not signatures: 

1857 return '(?)' 

1858 

1859 c0 = completion.get_signatures()[0] 

1860 return '('+c0.to_string().split('(', maxsplit=1)[1] 

1861 

1862 return '(%s)'% ', '.join([f for f in (_formatparamchildren(p) for signature in completion.get_signatures() 

1863 for p in signature.defined_names()) if f]) 

1864 

1865 

1866_CompleteResult = dict[str, MatcherResult] 

1867 

1868 

1869DICT_MATCHER_REGEX = re.compile( 

1870 r"""(?x) 

1871( # match dict-referring - or any get item object - expression 

1872 .+ 

1873) 

1874\[ # open bracket 

1875\s* # and optional whitespace 

1876# Capture any number of serializable objects (e.g. "a", "b", 'c') 

1877# and slices 

1878((?:(?: 

1879 (?: # closed string 

1880 [uUbB]? # string prefix (r not handled) 

1881 (?: 

1882 '(?:[^']|(?<!\\)\\')*' 

1883 | 

1884 "(?:[^"]|(?<!\\)\\")*" 

1885 ) 

1886 ) 

1887 | 

1888 # capture integers and slices 

1889 (?:[-+]?\d+)?(?::(?:[-+]?\d+)?){0,2} 

1890 | 

1891 # integer in bin/hex/oct notation 

1892 0[bBxXoO]_?(?:\w|\d)+ 

1893 ) 

1894 \s*,\s* 

1895)*) 

1896((?: 

1897 (?: # unclosed string 

1898 [uUbB]? # string prefix (r not handled) 

1899 (?: 

1900 '(?:[^']|(?<!\\)\\')* 

1901 | 

1902 "(?:[^"]|(?<!\\)\\")* 

1903 ) 

1904 ) 

1905 | 

1906 # unfinished integer 

1907 (?:[-+]?\d+) 

1908 | 

1909 # integer in bin/hex/oct notation 

1910 0[bBxXoO]_?(?:\w|\d)+ 

1911 ) 

1912)? 

1913$ 

1914""" 

1915) 

1916 

1917 

1918def _convert_matcher_v1_result_to_v2_no_no( 

1919 matches: Sequence[str], 

1920 type: str, 

1921) -> SimpleMatcherResult: 

1922 """same as _convert_matcher_v1_result_to_v2 but fragment=None, and suppress_if_matches is False by construction""" 

1923 return SimpleMatcherResult( 

1924 completions=[SimpleCompletion(text=match, type=type) for match in matches], 

1925 suppress=False, 

1926 ) 

1927 

1928 

1929def _convert_matcher_v1_result_to_v2( 

1930 matches: Sequence[str], 

1931 type: str, 

1932 fragment: Optional[str] = None, 

1933 suppress_if_matches: bool = False, 

1934) -> SimpleMatcherResult: 

1935 """Utility to help with transition""" 

1936 result = { 

1937 "completions": [SimpleCompletion(text=match, type=type) for match in matches], 

1938 "suppress": (True if matches else False) if suppress_if_matches else False, 

1939 } 

1940 if fragment is not None: 

1941 result["matched_fragment"] = fragment 

1942 return cast(SimpleMatcherResult, result) 

1943 

1944 

1945class IPCompleter(Completer): 

1946 """Extension of the completer class with IPython-specific features""" 

1947 

1948 @observe('greedy') 

1949 def _greedy_changed(self, change): 

1950 """update the splitter and readline delims when greedy is changed""" 

1951 if change["new"]: 

1952 self.evaluation = "unsafe" 

1953 self.auto_close_dict_keys = True 

1954 self.splitter.delims = GREEDY_DELIMS 

1955 else: 

1956 self.evaluation = "limited" 

1957 self.auto_close_dict_keys = False 

1958 self.splitter.delims = DELIMS 

1959 

1960 dict_keys_only = Bool( 

1961 False, 

1962 help=""" 

1963 Whether to show dict key matches only. 

1964 

1965 (disables all matchers except for `IPCompleter.dict_key_matcher`). 

1966 """, 

1967 ) 

1968 

1969 suppress_competing_matchers = UnionTrait( 

1970 [Bool(allow_none=True), DictTrait(Bool(None, allow_none=True))], 

1971 default_value=None, 

1972 help=""" 

1973 Whether to suppress completions from other *Matchers*. 

1974 

1975 When set to ``None`` (default) the matchers will attempt to auto-detect 

1976 whether suppression of other matchers is desirable. For example, at 

1977 the beginning of a line followed by `%` we expect a magic completion 

1978 to be the only applicable option, and after ``my_dict['`` we usually 

1979 expect a completion with an existing dictionary key. 

1980 

1981 If you want to disable this heuristic and see completions from all matchers, 

1982 set ``IPCompleter.suppress_competing_matchers = False``. 

1983 To disable the heuristic for specific matchers provide a dictionary mapping: 

1984 ``IPCompleter.suppress_competing_matchers = {'IPCompleter.dict_key_matcher': False}``. 

1985 

1986 Set ``IPCompleter.suppress_competing_matchers = True`` to limit 

1987 completions to the set of matchers with the highest priority; 

1988 this is equivalent to ``IPCompleter.merge_completions`` and 

1989 can be beneficial for performance, but will sometimes omit relevant 

1990 candidates from matchers further down the priority list. 

1991 """, 

1992 ).tag(config=True) 

1993 

1994 merge_completions = Bool( 

1995 True, 

1996 help="""Whether to merge completion results into a single list 

1997 

1998 If False, only the completion results from the first non-empty 

1999 completer will be returned. 

2000 

2001 As of version 8.6.0, setting the value to ``False`` is an alias for: 

2002 ``IPCompleter.suppress_competing_matchers = True.``. 

2003 """, 

2004 ).tag(config=True) 

2005 

2006 disable_matchers = ListTrait( 

2007 Unicode(), 

2008 help="""List of matchers to disable. 

2009 

2010 The list should contain matcher identifiers (see :any:`completion_matcher`). 

2011 """, 

2012 ).tag(config=True) 

2013 

2014 omit__names = Enum( 

2015 (0, 1, 2), 

2016 default_value=2, 

2017 help="""Instruct the completer to omit private method names 

2018 

2019 Specifically, when completing on ``object.<tab>``. 

2020 

2021 When 2 [default]: all names that start with '_' will be excluded. 

2022 

2023 When 1: all 'magic' names (``__foo__``) will be excluded. 

2024 

2025 When 0: nothing will be excluded. 

2026 """ 

2027 ).tag(config=True) 

2028 limit_to__all__ = Bool(False, 

2029 help=""" 

2030 DEPRECATED as of version 5.0. 

2031 

2032 Instruct the completer to use __all__ for the completion 

2033 

2034 Specifically, when completing on ``object.<tab>``. 

2035 

2036 When True: only those names in obj.__all__ will be included. 

2037 

2038 When False [default]: the __all__ attribute is ignored 

2039 """, 

2040 ).tag(config=True) 

2041 

2042 profile_completions = Bool( 

2043 default_value=False, 

2044 help="If True, emit profiling data for completion subsystem using cProfile." 

2045 ).tag(config=True) 

2046 

2047 profiler_output_dir = Unicode( 

2048 default_value=".completion_profiles", 

2049 help="Template for path at which to output profile data for completions." 

2050 ).tag(config=True) 

2051 

2052 @observe('limit_to__all__') 

2053 def _limit_to_all_changed(self, change): 

2054 warnings.warn('`IPython.core.IPCompleter.limit_to__all__` configuration ' 

2055 'value has been deprecated since IPython 5.0, will be made to have ' 

2056 'no effects and then removed in future version of IPython.', 

2057 UserWarning) 

2058 

2059 def __init__( 

2060 self, shell=None, namespace=None, global_namespace=None, config=None, **kwargs 

2061 ): 

2062 """IPCompleter() -> completer 

2063 

2064 Return a completer object. 

2065 

2066 Parameters 

2067 ---------- 

2068 shell 

2069 a pointer to the ipython shell itself. This is needed 

2070 because this completer knows about magic functions, and those can 

2071 only be accessed via the ipython instance. 

2072 namespace : dict, optional 

2073 an optional dict where completions are performed. 

2074 global_namespace : dict, optional 

2075 secondary optional dict for completions, to 

2076 handle cases (such as IPython embedded inside functions) where 

2077 both Python scopes are visible. 

2078 config : Config 

2079 traitlet's config object 

2080 **kwargs 

2081 passed to super class unmodified. 

2082 """ 

2083 

2084 self.magic_escape = ESC_MAGIC 

2085 self.splitter = CompletionSplitter() 

2086 

2087 # _greedy_changed() depends on splitter and readline being defined: 

2088 super().__init__( 

2089 namespace=namespace, 

2090 global_namespace=global_namespace, 

2091 config=config, 

2092 **kwargs, 

2093 ) 

2094 

2095 # List where completion matches will be stored 

2096 self.matches = [] 

2097 self.shell = shell 

2098 # Regexp to split filenames with spaces in them 

2099 self.space_name_re = re.compile(r'([^\\] )') 

2100 # Hold a local ref. to glob.glob for speed 

2101 self.glob = glob.glob 

2102 

2103 # Determine if we are running on 'dumb' terminals, like (X)Emacs 

2104 # buffers, to avoid completion problems. 

2105 term = os.environ.get('TERM','xterm') 

2106 self.dumb_terminal = term in ['dumb','emacs'] 

2107 

2108 # Special handling of backslashes needed in win32 platforms 

2109 if sys.platform == "win32": 

2110 self.clean_glob = self._clean_glob_win32 

2111 else: 

2112 self.clean_glob = self._clean_glob 

2113 

2114 #regexp to parse docstring for function signature 

2115 self.docstring_sig_re = re.compile(r'^[\w|\s.]+\(([^)]*)\).*') 

2116 self.docstring_kwd_re = re.compile(r'[\s|\[]*(\w+)(?:\s*=\s*.*)') 

2117 #use this if positional argument name is also needed 

2118 #= re.compile(r'[\s|\[]*(\w+)(?:\s*=?\s*.*)') 

2119 

2120 self.magic_arg_matchers = [ 

2121 self.magic_config_matcher, 

2122 self.magic_color_matcher, 

2123 ] 

2124 

2125 # This is set externally by InteractiveShell 

2126 self.custom_completers = None 

2127 

2128 # This is a list of names of unicode characters that can be completed 

2129 # into their corresponding unicode value. The list is large, so we 

2130 # lazily initialize it on first use. Consuming code should access this 

2131 # attribute through the `@unicode_names` property. 

2132 self._unicode_names = None 

2133 

2134 self._backslash_combining_matchers = [ 

2135 self.latex_name_matcher, 

2136 self.unicode_name_matcher, 

2137 back_latex_name_matcher, 

2138 back_unicode_name_matcher, 

2139 self.fwd_unicode_matcher, 

2140 ] 

2141 

2142 if not self.backslash_combining_completions: 

2143 for matcher in self._backslash_combining_matchers: 

2144 self.disable_matchers.append(_get_matcher_id(matcher)) 

2145 

2146 if not self.merge_completions: 

2147 self.suppress_competing_matchers = True 

2148 

2149 @property 

2150 def matchers(self) -> list[Matcher]: 

2151 """All active matcher routines for completion""" 

2152 if self.dict_keys_only: 

2153 return [self.dict_key_matcher] 

2154 

2155 if self.use_jedi: 

2156 return [ 

2157 *self.custom_matchers, 

2158 *self._backslash_combining_matchers, 

2159 *self.magic_arg_matchers, 

2160 self.custom_completer_matcher, 

2161 self.magic_matcher, 

2162 self._jedi_matcher, 

2163 self.dict_key_matcher, 

2164 self.file_matcher, 

2165 ] 

2166 else: 

2167 return [ 

2168 *self.custom_matchers, 

2169 *self._backslash_combining_matchers, 

2170 *self.magic_arg_matchers, 

2171 self.custom_completer_matcher, 

2172 self.dict_key_matcher, 

2173 self.magic_matcher, 

2174 self.python_matcher, 

2175 self.file_matcher, 

2176 self.python_func_kw_matcher, 

2177 ] 

2178 

2179 def all_completions(self, text: str) -> list[str]: 

2180 """ 

2181 Wrapper around the completion methods for the benefit of emacs. 

2182 """ 

2183 prefix = text.rpartition('.')[0] 

2184 with provisionalcompleter(): 

2185 return ['.'.join([prefix, c.text]) if prefix and self.use_jedi else c.text 

2186 for c in self.completions(text, len(text))] 

2187 

2188 return self.complete(text)[1] 

2189 

2190 def _clean_glob(self, text:str): 

2191 return self.glob("%s*" % text) 

2192 

2193 def _clean_glob_win32(self, text:str): 

2194 return [f.replace("\\","/") 

2195 for f in self.glob("%s*" % text)] 

2196 

2197 @context_matcher() 

2198 def file_matcher(self, context: CompletionContext) -> SimpleMatcherResult: 

2199 """Match filenames, expanding ~USER type strings. 

2200 

2201 Most of the seemingly convoluted logic in this completer is an 

2202 attempt to handle filenames with spaces in them. And yet it's not 

2203 quite perfect, because Python's readline doesn't expose all of the 

2204 GNU readline details needed for this to be done correctly. 

2205 

2206 For a filename with a space in it, the printed completions will be 

2207 only the parts after what's already been typed (instead of the 

2208 full completions, as is normally done). I don't think with the 

2209 current (as of Python 2.3) Python readline it's possible to do 

2210 better. 

2211 """ 

2212 # TODO: add a heuristic for suppressing (e.g. if it has OS-specific delimiter, 

2213 # starts with `/home/`, `C:\`, etc) 

2214 

2215 text = context.token 

2216 

2217 # chars that require escaping with backslash - i.e. chars 

2218 # that readline treats incorrectly as delimiters, but we 

2219 # don't want to treat as delimiters in filename matching 

2220 # when escaped with backslash 

2221 if text.startswith('!'): 

2222 text = text[1:] 

2223 text_prefix = u'!' 

2224 else: 

2225 text_prefix = u'' 

2226 

2227 text_until_cursor = self.text_until_cursor 

2228 # track strings with open quotes 

2229 open_quotes = has_open_quotes(text_until_cursor) 

2230 

2231 if '(' in text_until_cursor or '[' in text_until_cursor: 

2232 lsplit = text 

2233 else: 

2234 try: 

2235 # arg_split ~ shlex.split, but with unicode bugs fixed by us 

2236 lsplit = arg_split(text_until_cursor)[-1] 

2237 except ValueError: 

2238 # typically an unmatched ", or backslash without escaped char. 

2239 if open_quotes: 

2240 lsplit = text_until_cursor.split(open_quotes)[-1] 

2241 else: 

2242 return { 

2243 "completions": [], 

2244 "suppress": False, 

2245 } 

2246 except IndexError: 

2247 # tab pressed on empty line 

2248 lsplit = "" 

2249 

2250 if not open_quotes and lsplit != protect_filename(lsplit): 

2251 # if protectables are found, do matching on the whole escaped name 

2252 has_protectables = True 

2253 text0,text = text,lsplit 

2254 else: 

2255 has_protectables = False 

2256 text = os.path.expanduser(text) 

2257 

2258 if text == "": 

2259 return { 

2260 "completions": [ 

2261 SimpleCompletion( 

2262 text=text_prefix + protect_filename(f), type="path" 

2263 ) 

2264 for f in self.glob("*") 

2265 ], 

2266 "suppress": False, 

2267 } 

2268 

2269 # Compute the matches from the filesystem 

2270 if sys.platform == 'win32': 

2271 m0 = self.clean_glob(text) 

2272 else: 

2273 m0 = self.clean_glob(text.replace('\\', '')) 

2274 

2275 if has_protectables: 

2276 # If we had protectables, we need to revert our changes to the 

2277 # beginning of filename so that we don't double-write the part 

2278 # of the filename we have so far 

2279 len_lsplit = len(lsplit) 

2280 matches = [text_prefix + text0 + 

2281 protect_filename(f[len_lsplit:]) for f in m0] 

2282 else: 

2283 if open_quotes: 

2284 # if we have a string with an open quote, we don't need to 

2285 # protect the names beyond the quote (and we _shouldn't_, as 

2286 # it would cause bugs when the filesystem call is made). 

2287 matches = m0 if sys.platform == "win32" else\ 

2288 [protect_filename(f, open_quotes) for f in m0] 

2289 else: 

2290 matches = [text_prefix + 

2291 protect_filename(f) for f in m0] 

2292 

2293 # Mark directories in input list by appending '/' to their names. 

2294 return { 

2295 "completions": [ 

2296 SimpleCompletion(text=x + "/" if os.path.isdir(x) else x, type="path") 

2297 for x in matches 

2298 ], 

2299 "suppress": False, 

2300 } 

2301 

2302 def _extract_code(self, line: str) -> str: 

2303 """Extract code from magics if any.""" 

2304 

2305 if not line: 

2306 return line 

2307 maybe_magic, *rest = line.split(maxsplit=1) 

2308 if not rest: 

2309 return line 

2310 args = rest[0] 

2311 known_magics = self.shell.magics_manager.lsmagic() 

2312 line_magics = known_magics["line"] 

2313 magic_name = maybe_magic.lstrip(self.magic_escape) 

2314 if magic_name not in line_magics: 

2315 return line 

2316 

2317 if not maybe_magic.startswith(self.magic_escape): 

2318 all_variables = [*self.namespace.keys(), *self.global_namespace.keys()] 

2319 if magic_name in all_variables: 

2320 # short circuit if we see a line starting with say `time` 

2321 # but time is defined as a variable (in addition to being 

2322 # a magic). In these cases users need to use explicit `%time`. 

2323 return line 

2324 

2325 magic_method = line_magics[magic_name] 

2326 

2327 try: 

2328 if magic_name == "timeit": 

2329 opts, stmt = magic_method.__self__.parse_options( 

2330 args, 

2331 "n:r:tcp:qov:", 

2332 posix=False, 

2333 strict=False, 

2334 preserve_non_opts=True, 

2335 ) 

2336 return stmt 

2337 elif magic_name == "prun": 

2338 opts, stmt = magic_method.__self__.parse_options( 

2339 args, "D:l:rs:T:q", list_all=True, posix=False 

2340 ) 

2341 return stmt 

2342 elif hasattr(magic_method, "parser") and getattr( 

2343 magic_method, "has_arguments", False 

2344 ): 

2345 # e.g. %debug, %time 

2346 args, extra = magic_method.parser.parse_argstring(args, partial=True) 

2347 return " ".join(extra) 

2348 except UsageError: 

2349 return line 

2350 

2351 return line 

2352 

2353 @context_matcher() 

2354 def magic_matcher(self, context: CompletionContext) -> SimpleMatcherResult: 

2355 """Match magics.""" 

2356 

2357 # Get all shell magics now rather than statically, so magics loaded at 

2358 # runtime show up too. 

2359 text = context.token 

2360 lsm = self.shell.magics_manager.lsmagic() 

2361 line_magics = lsm['line'] 

2362 cell_magics = lsm['cell'] 

2363 pre = self.magic_escape 

2364 pre2 = pre + pre 

2365 

2366 explicit_magic = text.startswith(pre) 

2367 

2368 # Completion logic: 

2369 # - user gives %%: only do cell magics 

2370 # - user gives %: do both line and cell magics 

2371 # - no prefix: do both 

2372 # In other words, line magics are skipped if the user gives %% explicitly 

2373 # 

2374 # We also exclude magics that match any currently visible names: 

2375 # https://github.com/ipython/ipython/issues/4877, unless the user has 

2376 # typed a %: 

2377 # https://github.com/ipython/ipython/issues/10754 

2378 bare_text = text.lstrip(pre) 

2379 global_matches = self.global_matches(bare_text) 

2380 if not explicit_magic: 

2381 def matches(magic): 

2382 """ 

2383 Filter magics, in particular remove magics that match 

2384 a name present in global namespace. 

2385 """ 

2386 return ( magic.startswith(bare_text) and 

2387 magic not in global_matches ) 

2388 else: 

2389 def matches(magic): 

2390 return magic.startswith(bare_text) 

2391 

2392 completions = [pre2 + m for m in cell_magics if matches(m)] 

2393 if not text.startswith(pre2): 

2394 completions += [pre + m for m in line_magics if matches(m)] 

2395 

2396 is_magic_prefix = len(text) > 0 and text[0] == "%" 

2397 

2398 return { 

2399 "completions": [ 

2400 SimpleCompletion(text=comp, type="magic") for comp in completions 

2401 ], 

2402 "suppress": is_magic_prefix and len(completions) > 0, 

2403 } 

2404 

2405 @context_matcher() 

2406 def magic_config_matcher(self, context: CompletionContext) -> SimpleMatcherResult: 

2407 """Match class names and attributes for %config magic.""" 

2408 # NOTE: uses `line_buffer` equivalent for compatibility 

2409 matches = self.magic_config_matches(context.line_with_cursor) 

2410 return _convert_matcher_v1_result_to_v2_no_no(matches, type="param") 

2411 

2412 def magic_config_matches(self, text: str) -> list[str]: 

2413 """Match class names and attributes for %config magic. 

2414 

2415 .. deprecated:: 8.6 

2416 You can use :meth:`magic_config_matcher` instead. 

2417 """ 

2418 texts = text.strip().split() 

2419 

2420 if len(texts) > 0 and (texts[0] == 'config' or texts[0] == '%config'): 

2421 # get all configuration classes 

2422 classes = sorted(set([ c for c in self.shell.configurables 

2423 if c.__class__.class_traits(config=True) 

2424 ]), key=lambda x: x.__class__.__name__) 

2425 classnames = [ c.__class__.__name__ for c in classes ] 

2426 

2427 # return all classnames if config or %config is given 

2428 if len(texts) == 1: 

2429 return classnames 

2430 

2431 # match classname 

2432 classname_texts = texts[1].split('.') 

2433 classname = classname_texts[0] 

2434 classname_matches = [ c for c in classnames 

2435 if c.startswith(classname) ] 

2436 

2437 # return matched classes or the matched class with attributes 

2438 if texts[1].find('.') < 0: 

2439 return classname_matches 

2440 elif len(classname_matches) == 1 and \ 

2441 classname_matches[0] == classname: 

2442 cls = classes[classnames.index(classname)].__class__ 

2443 help = cls.class_get_help() 

2444 # strip leading '--' from cl-args: 

2445 help = re.sub(re.compile(r'^--', re.MULTILINE), '', help) 

2446 return [ attr.split('=')[0] 

2447 for attr in help.strip().splitlines() 

2448 if attr.startswith(texts[1]) ] 

2449 return [] 

2450 

2451 @context_matcher() 

2452 def magic_color_matcher(self, context: CompletionContext) -> SimpleMatcherResult: 

2453 """Match color schemes for %colors magic.""" 

2454 text = context.line_with_cursor 

2455 texts = text.split() 

2456 if text.endswith(' '): 

2457 # .split() strips off the trailing whitespace. Add '' back 

2458 # so that: '%colors ' -> ['%colors', ''] 

2459 texts.append('') 

2460 

2461 if len(texts) == 2 and (texts[0] == 'colors' or texts[0] == '%colors'): 

2462 prefix = texts[1] 

2463 return SimpleMatcherResult( 

2464 completions=[ 

2465 SimpleCompletion(color, type="param") 

2466 for color in theme_table.keys() 

2467 if color.startswith(prefix) 

2468 ], 

2469 suppress=False, 

2470 ) 

2471 return SimpleMatcherResult( 

2472 completions=[], 

2473 suppress=False, 

2474 ) 

2475 

2476 @context_matcher(identifier="IPCompleter.jedi_matcher") 

2477 def _jedi_matcher(self, context: CompletionContext) -> _JediMatcherResult: 

2478 matches = self._jedi_matches( 

2479 cursor_column=context.cursor_position, 

2480 cursor_line=context.cursor_line, 

2481 text=context.full_text, 

2482 ) 

2483 return { 

2484 "completions": matches, 

2485 # static analysis should not suppress other matchers 

2486 "suppress": {_get_matcher_id(self.file_matcher)} if matches else False, 

2487 } 

2488 

2489 def _jedi_matches( 

2490 self, cursor_column: int, cursor_line: int, text: str 

2491 ) -> Iterator[_JediCompletionLike]: 

2492 """ 

2493 Return a list of :any:`jedi.api.Completion`\\s object from a ``text`` and 

2494 cursor position. 

2495 

2496 Parameters 

2497 ---------- 

2498 cursor_column : int 

2499 column position of the cursor in ``text``, 0-indexed. 

2500 cursor_line : int 

2501 line position of the cursor in ``text``, 0-indexed 

2502 text : str 

2503 text to complete 

2504 

2505 Notes 

2506 ----- 

2507 If ``IPCompleter.debug`` is ``True`` may return a :any:`_FakeJediCompletion` 

2508 object containing a string with the Jedi debug information attached. 

2509 

2510 .. deprecated:: 8.6 

2511 You can use :meth:`_jedi_matcher` instead. 

2512 """ 

2513 namespaces = [self.namespace] 

2514 if self.global_namespace is not None: 

2515 namespaces.append(self.global_namespace) 

2516 

2517 completion_filter = lambda x:x 

2518 offset = cursor_to_position(text, cursor_line, cursor_column) 

2519 # filter output if we are completing for object members 

2520 if offset: 

2521 pre = text[offset-1] 

2522 if pre == '.': 

2523 if self.omit__names == 2: 

2524 completion_filter = lambda c:not c.name.startswith('_') 

2525 elif self.omit__names == 1: 

2526 completion_filter = lambda c:not (c.name.startswith('__') and c.name.endswith('__')) 

2527 elif self.omit__names == 0: 

2528 completion_filter = lambda x:x 

2529 else: 

2530 raise ValueError("Don't understand self.omit__names == {}".format(self.omit__names)) 

2531 

2532 interpreter = jedi.Interpreter(text[:offset], namespaces) 

2533 try_jedi = True 

2534 

2535 try: 

2536 # find the first token in the current tree -- if it is a ' or " then we are in a string 

2537 completing_string = False 

2538 try: 

2539 first_child = next(c for c in interpreter._get_module().tree_node.children if hasattr(c, 'value')) 

2540 except StopIteration: 

2541 pass 

2542 else: 

2543 # note the value may be ', ", or it may also be ''' or """, or 

2544 # in some cases, """what/you/typed..., but all of these are 

2545 # strings. 

2546 completing_string = len(first_child.value) > 0 and first_child.value[0] in {"'", '"'} 

2547 

2548 # if we are in a string jedi is likely not the right candidate for 

2549 # now. Skip it. 

2550 try_jedi = not completing_string 

2551 except Exception as e: 

2552 # many of things can go wrong, we are using private API just don't crash. 

2553 if self.debug: 

2554 print("Error detecting if completing a non-finished string :", e, '|') 

2555 

2556 if not try_jedi: 

2557 return iter([]) 

2558 try: 

2559 return filter(completion_filter, interpreter.complete(column=cursor_column, line=cursor_line + 1)) 

2560 except Exception as e: 

2561 if self.debug: 

2562 return iter( 

2563 [ 

2564 _FakeJediCompletion( 

2565 'Oops Jedi has crashed, please report a bug with the following:\n"""\n%s\ns"""' 

2566 % (e) 

2567 ) 

2568 ] 

2569 ) 

2570 else: 

2571 return iter([]) 

2572 

2573 class _CompletionContextType(enum.Enum): 

2574 ATTRIBUTE = "attribute" # For attribute completion 

2575 GLOBAL = "global" # For global completion 

2576 

2577 def _determine_completion_context(self, line): 

2578 """ 

2579 Determine whether the cursor is in an attribute or global completion context. 

2580 """ 

2581 # Cursor in string/comment → GLOBAL. 

2582 is_string, is_in_expression = self._is_in_string_or_comment(line) 

2583 if is_string and not is_in_expression: 

2584 return self._CompletionContextType.GLOBAL 

2585 

2586 # If we're in a template string expression, handle specially 

2587 if is_string and is_in_expression: 

2588 # Extract the expression part - look for the last { that isn't closed 

2589 expr_start = line.rfind("{") 

2590 if expr_start >= 0: 

2591 # We're looking at the expression inside a template string 

2592 expr = line[expr_start + 1 :] 

2593 # Recursively determine the context of the expression 

2594 return self._determine_completion_context(expr) 

2595 

2596 # Handle plain number literals - should be global context 

2597 # Ex: 3. -42.14 but not 3.1. 

2598 if re.search(r"(?<!\w)(?<!\d\.)([-+]?\d+\.(\d+)?)(?!\w)$", line): 

2599 return self._CompletionContextType.GLOBAL 

2600 

2601 # Handle all other attribute matches np.ran, d[0].k, (a,b).count 

2602 chain_match = re.search(r".*(.+\.(?:[a-zA-Z]\w*)?)$", line) 

2603 if chain_match: 

2604 return self._CompletionContextType.ATTRIBUTE 

2605 

2606 return self._CompletionContextType.GLOBAL 

2607 

2608 def _is_in_string_or_comment(self, text): 

2609 """ 

2610 Determine if the cursor is inside a string or comment. 

2611 Returns (is_string, is_in_expression) tuple: 

2612 - is_string: True if in any kind of string 

2613 - is_in_expression: True if inside an f-string/t-string expression 

2614 """ 

2615 in_single_quote = False 

2616 in_double_quote = False 

2617 in_triple_single = False 

2618 in_triple_double = False 

2619 in_template_string = False # Covers both f-strings and t-strings 

2620 in_expression = False # For expressions in f/t-strings 

2621 expression_depth = 0 # Track nested braces in expressions 

2622 i = 0 

2623 

2624 while i < len(text): 

2625 # Check for f-string or t-string start 

2626 if ( 

2627 i + 1 < len(text) 

2628 and text[i] in ("f", "t") 

2629 and (text[i + 1] == '"' or text[i + 1] == "'") 

2630 and not ( 

2631 in_single_quote 

2632 or in_double_quote 

2633 or in_triple_single 

2634 or in_triple_double 

2635 ) 

2636 ): 

2637 in_template_string = True 

2638 i += 1 # Skip the 'f' or 't' 

2639 

2640 # Handle triple quotes 

2641 if i + 2 < len(text): 

2642 if ( 

2643 text[i : i + 3] == '"""' 

2644 and not in_single_quote 

2645 and not in_triple_single 

2646 ): 

2647 in_triple_double = not in_triple_double 

2648 if not in_triple_double: 

2649 in_template_string = False 

2650 i += 3 

2651 continue 

2652 if ( 

2653 text[i : i + 3] == "'''" 

2654 and not in_double_quote 

2655 and not in_triple_double 

2656 ): 

2657 in_triple_single = not in_triple_single 

2658 if not in_triple_single: 

2659 in_template_string = False 

2660 i += 3 

2661 continue 

2662 

2663 # Handle escapes 

2664 if text[i] == "\\" and i + 1 < len(text): 

2665 i += 2 

2666 continue 

2667 

2668 # Handle nested braces within f-strings 

2669 if in_template_string: 

2670 # Special handling for consecutive opening braces 

2671 if i + 1 < len(text) and text[i : i + 2] == "{{": 

2672 i += 2 

2673 continue 

2674 

2675 # Detect start of an expression 

2676 if text[i] == "{": 

2677 # Only increment depth and mark as expression if not already in an expression 

2678 # or if we're at a top-level nested brace 

2679 if not in_expression or (in_expression and expression_depth == 0): 

2680 in_expression = True 

2681 expression_depth += 1 

2682 i += 1 

2683 continue 

2684 

2685 # Detect end of an expression 

2686 if text[i] == "}": 

2687 expression_depth -= 1 

2688 if expression_depth <= 0: 

2689 in_expression = False 

2690 expression_depth = 0 

2691 i += 1 

2692 continue 

2693 

2694 in_triple_quote = in_triple_single or in_triple_double 

2695 

2696 # Handle quotes - also reset template string when closing quotes are encountered 

2697 if text[i] == '"' and not in_single_quote and not in_triple_quote: 

2698 in_double_quote = not in_double_quote 

2699 if not in_double_quote and not in_triple_quote: 

2700 in_template_string = False 

2701 elif text[i] == "'" and not in_double_quote and not in_triple_quote: 

2702 in_single_quote = not in_single_quote 

2703 if not in_single_quote and not in_triple_quote: 

2704 in_template_string = False 

2705 

2706 # Check for comment 

2707 if text[i] == "#" and not ( 

2708 in_single_quote or in_double_quote or in_triple_quote 

2709 ): 

2710 return True, False 

2711 

2712 i += 1 

2713 

2714 is_string = ( 

2715 in_single_quote or in_double_quote or in_triple_single or in_triple_double 

2716 ) 

2717 

2718 # Return tuple (is_string, is_in_expression) 

2719 return ( 

2720 is_string or (in_template_string and not in_expression), 

2721 in_expression and expression_depth > 0, 

2722 ) 

2723 

2724 @context_matcher() 

2725 def python_matcher(self, context: CompletionContext) -> SimpleMatcherResult: 

2726 """Match attributes or global python names""" 

2727 text = context.text_until_cursor 

2728 text = self._extract_code(text) 

2729 completion_type = self._determine_completion_context(text) 

2730 if completion_type == self._CompletionContextType.ATTRIBUTE: 

2731 try: 

2732 matches, fragment = self._attr_matches( 

2733 text, include_prefix=False, context=context 

2734 ) 

2735 if text.endswith(".") and self.omit__names: 

2736 if self.omit__names == 1: 

2737 # true if txt is _not_ a __ name, false otherwise: 

2738 no__name = lambda txt: re.match(r".*\.__.*?__", txt) is None 

2739 else: 

2740 # true if txt is _not_ a _ name, false otherwise: 

2741 no__name = ( 

2742 lambda txt: re.match(r"\._.*?", txt[txt.rindex(".") :]) 

2743 is None 

2744 ) 

2745 matches = filter(no__name, matches) 

2746 matches = _convert_matcher_v1_result_to_v2( 

2747 matches, type="attribute", fragment=fragment 

2748 ) 

2749 if matches["completions"]: 

2750 matches["suppress"] = {_get_matcher_id(self.file_matcher)} 

2751 return matches 

2752 except NameError: 

2753 # catches <undefined attributes>.<tab> 

2754 return SimpleMatcherResult(completions=[], suppress=False) 

2755 else: 

2756 try: 

2757 matches = self.global_matches(context.token, context=context) 

2758 except TypeError: 

2759 matches = self.global_matches(context.token) 

2760 # TODO: maybe distinguish between functions, modules and just "variables" 

2761 return SimpleMatcherResult( 

2762 completions=[ 

2763 SimpleCompletion(text=match, type="variable") for match in matches 

2764 ], 

2765 suppress=False, 

2766 ) 

2767 

2768 @completion_matcher(api_version=1) 

2769 def python_matches(self, text: str) -> Iterable[str]: 

2770 """Match attributes or global python names. 

2771 

2772 .. deprecated:: 8.27 

2773 You can use :meth:`python_matcher` instead.""" 

2774 if "." in text: 

2775 try: 

2776 matches = self.attr_matches(text) 

2777 if text.endswith('.') and self.omit__names: 

2778 if self.omit__names == 1: 

2779 # true if txt is _not_ a __ name, false otherwise: 

2780 no__name = (lambda txt: 

2781 re.match(r'.*\.__.*?__',txt) is None) 

2782 else: 

2783 # true if txt is _not_ a _ name, false otherwise: 

2784 no__name = (lambda txt: 

2785 re.match(r'\._.*?',txt[txt.rindex('.'):]) is None) 

2786 matches = filter(no__name, matches) 

2787 except NameError: 

2788 # catches <undefined attributes>.<tab> 

2789 matches = [] 

2790 else: 

2791 matches = self.global_matches(text) 

2792 return matches 

2793 

2794 def _default_arguments_from_docstring(self, doc): 

2795 """Parse the first line of docstring for call signature. 

2796 

2797 Docstring should be of the form 'min(iterable[, key=func])\n'. 

2798 It can also parse cython docstring of the form 

2799 'Minuit.migrad(self, int ncall=10000, resume=True, int nsplit=1)'. 

2800 """ 

2801 if doc is None: 

2802 return [] 

2803 

2804 #care only the firstline 

2805 line = doc.lstrip().splitlines()[0] 

2806 

2807 #p = re.compile(r'^[\w|\s.]+\(([^)]*)\).*') 

2808 #'min(iterable[, key=func])\n' -> 'iterable[, key=func]' 

2809 sig = self.docstring_sig_re.search(line) 

2810 if sig is None: 

2811 return [] 

2812 # iterable[, key=func]' -> ['iterable[' ,' key=func]'] 

2813 sig = sig.groups()[0].split(',') 

2814 ret = [] 

2815 for s in sig: 

2816 #re.compile(r'[\s|\[]*(\w+)(?:\s*=\s*.*)') 

2817 ret += self.docstring_kwd_re.findall(s) 

2818 return ret 

2819 

2820 def _default_arguments(self, obj): 

2821 """Return the list of default arguments of obj if it is callable, 

2822 or empty list otherwise.""" 

2823 call_obj = obj 

2824 ret = [] 

2825 if inspect.isbuiltin(obj): 

2826 pass 

2827 elif not (inspect.isfunction(obj) or inspect.ismethod(obj)): 

2828 if inspect.isclass(obj): 

2829 #for cython embedsignature=True the constructor docstring 

2830 #belongs to the object itself not __init__ 

2831 ret += self._default_arguments_from_docstring( 

2832 getattr(obj, '__doc__', '')) 

2833 # for classes, check for __init__,__new__ 

2834 call_obj = (getattr(obj, '__init__', None) or 

2835 getattr(obj, '__new__', None)) 

2836 # for all others, check if they are __call__able 

2837 elif hasattr(obj, '__call__'): 

2838 call_obj = obj.__call__ 

2839 ret += self._default_arguments_from_docstring( 

2840 getattr(call_obj, '__doc__', '')) 

2841 

2842 _keeps = (inspect.Parameter.KEYWORD_ONLY, 

2843 inspect.Parameter.POSITIONAL_OR_KEYWORD) 

2844 

2845 try: 

2846 sig = inspect.signature(obj) 

2847 ret.extend(k for k, v in sig.parameters.items() if 

2848 v.kind in _keeps) 

2849 except ValueError: 

2850 pass 

2851 

2852 return list(set(ret)) 

2853 

2854 @context_matcher() 

2855 def python_func_kw_matcher(self, context: CompletionContext) -> SimpleMatcherResult: 

2856 """Match named parameters (kwargs) of the last open function.""" 

2857 matches = self.python_func_kw_matches(context.token) 

2858 return _convert_matcher_v1_result_to_v2_no_no(matches, type="param") 

2859 

2860 def python_func_kw_matches(self, text): 

2861 """Match named parameters (kwargs) of the last open function. 

2862 

2863 .. deprecated:: 8.6 

2864 You can use :meth:`python_func_kw_matcher` instead. 

2865 """ 

2866 

2867 if "." in text: # a parameter cannot be dotted 

2868 return [] 

2869 try: regexp = self.__funcParamsRegex 

2870 except AttributeError: 

2871 regexp = self.__funcParamsRegex = re.compile(r''' 

2872 '.*?(?<!\\)' | # single quoted strings or 

2873 ".*?(?<!\\)" | # double quoted strings or 

2874 \w+ | # identifier 

2875 \S # other characters 

2876 ''', re.VERBOSE | re.DOTALL) 

2877 # 1. find the nearest identifier that comes before an unclosed 

2878 # parenthesis before the cursor 

2879 # e.g. for "foo (1+bar(x), pa<cursor>,a=1)", the candidate is "foo" 

2880 tokens = regexp.findall(self.text_until_cursor) 

2881 iterTokens = reversed(tokens) 

2882 openPar = 0 

2883 

2884 for token in iterTokens: 

2885 if token == ')': 

2886 openPar -= 1 

2887 elif token == '(': 

2888 openPar += 1 

2889 if openPar > 0: 

2890 # found the last unclosed parenthesis 

2891 break 

2892 else: 

2893 return [] 

2894 # 2. Concatenate dotted names ("foo.bar" for "foo.bar(x, pa" ) 

2895 ids = [] 

2896 isId = re.compile(r'\w+$').match 

2897 

2898 while True: 

2899 try: 

2900 ids.append(next(iterTokens)) 

2901 if not isId(ids[-1]): 

2902 ids.pop() 

2903 break 

2904 if not next(iterTokens) == '.': 

2905 break 

2906 except StopIteration: 

2907 break 

2908 

2909 # Find all named arguments already assigned to, as to avoid suggesting 

2910 # them again 

2911 usedNamedArgs = set() 

2912 par_level = -1 

2913 for token, next_token in zip(tokens, tokens[1:]): 

2914 if token == '(': 

2915 par_level += 1 

2916 elif token == ')': 

2917 par_level -= 1 

2918 

2919 if par_level != 0: 

2920 continue 

2921 

2922 if next_token != '=': 

2923 continue 

2924 

2925 usedNamedArgs.add(token) 

2926 

2927 argMatches = [] 

2928 try: 

2929 callableObj = '.'.join(ids[::-1]) 

2930 namedArgs = self._default_arguments(eval(callableObj, 

2931 self.namespace)) 

2932 

2933 # Remove used named arguments from the list, no need to show twice 

2934 for namedArg in set(namedArgs) - usedNamedArgs: 

2935 if namedArg.startswith(text): 

2936 argMatches.append("%s=" %namedArg) 

2937 except: 

2938 pass 

2939 

2940 return argMatches 

2941 

2942 @staticmethod 

2943 def _get_keys(obj: Any) -> list[Any]: 

2944 # Objects can define their own completions by defining an 

2945 # _ipy_key_completions_() method. 

2946 method = get_real_method(obj, '_ipython_key_completions_') 

2947 if method is not None: 

2948 return method() 

2949 

2950 # Special case some common in-memory dict-like types 

2951 if isinstance(obj, dict) or _safe_isinstance(obj, "pandas", "DataFrame"): 

2952 try: 

2953 return list(obj.keys()) 

2954 except Exception: 

2955 return [] 

2956 elif _safe_isinstance(obj, "pandas", "core", "indexing", "_LocIndexer"): 

2957 try: 

2958 return list(obj.obj.keys()) 

2959 except Exception: 

2960 return [] 

2961 elif _safe_isinstance(obj, 'numpy', 'ndarray') or\ 

2962 _safe_isinstance(obj, 'numpy', 'void'): 

2963 return obj.dtype.names or [] 

2964 return [] 

2965 

2966 @context_matcher() 

2967 def dict_key_matcher(self, context: CompletionContext) -> SimpleMatcherResult: 

2968 """Match string keys in a dictionary, after e.g. ``foo[``.""" 

2969 matches = self.dict_key_matches(context.token) 

2970 return _convert_matcher_v1_result_to_v2( 

2971 matches, type="dict key", suppress_if_matches=True 

2972 ) 

2973 

2974 def dict_key_matches(self, text: str) -> list[str]: 

2975 """Match string keys in a dictionary, after e.g. ``foo[``. 

2976 

2977 .. deprecated:: 8.6 

2978 You can use :meth:`dict_key_matcher` instead. 

2979 """ 

2980 

2981 # Short-circuit on closed dictionary (regular expression would 

2982 # not match anyway, but would take quite a while). 

2983 if self.text_until_cursor.strip().endswith("]"): 

2984 return [] 

2985 

2986 match = DICT_MATCHER_REGEX.search(self.text_until_cursor) 

2987 

2988 if match is None: 

2989 return [] 

2990 

2991 expr, prior_tuple_keys, key_prefix = match.groups() 

2992 

2993 obj = self._evaluate_expr(expr) 

2994 

2995 if obj is not_found: 

2996 return [] 

2997 

2998 keys = self._get_keys(obj) 

2999 if not keys: 

3000 return keys 

3001 

3002 tuple_prefix = guarded_eval( 

3003 prior_tuple_keys, 

3004 EvaluationContext( 

3005 globals=self.global_namespace, 

3006 locals=self.namespace, 

3007 evaluation=self.evaluation, # type: ignore 

3008 in_subscript=True, 

3009 auto_import=self._auto_import, 

3010 policy_overrides=self.policy_overrides, 

3011 ), 

3012 ) 

3013 

3014 closing_quote, token_offset, matches = match_dict_keys( 

3015 keys, key_prefix, self.splitter.delims, extra_prefix=tuple_prefix 

3016 ) 

3017 if not matches: 

3018 return [] 

3019 

3020 # get the cursor position of 

3021 # - the text being completed 

3022 # - the start of the key text 

3023 # - the start of the completion 

3024 text_start = len(self.text_until_cursor) - len(text) 

3025 if key_prefix: 

3026 key_start = match.start(3) 

3027 completion_start = key_start + token_offset 

3028 else: 

3029 key_start = completion_start = match.end() 

3030 

3031 # grab the leading prefix, to make sure all completions start with `text` 

3032 if text_start > key_start: 

3033 leading = '' 

3034 else: 

3035 leading = text[text_start:completion_start] 

3036 

3037 # append closing quote and bracket as appropriate 

3038 # this is *not* appropriate if the opening quote or bracket is outside 

3039 # the text given to this method, e.g. `d["""a\nt 

3040 can_close_quote = False 

3041 can_close_bracket = False 

3042 

3043 continuation = self.line_buffer[len(self.text_until_cursor) :].strip() 

3044 

3045 if continuation.startswith(closing_quote): 

3046 # do not close if already closed, e.g. `d['a<tab>'` 

3047 continuation = continuation[len(closing_quote) :] 

3048 else: 

3049 can_close_quote = True 

3050 

3051 continuation = continuation.strip() 

3052 

3053 # e.g. `pandas.DataFrame` has different tuple indexer behaviour, 

3054 # handling it is out of scope, so let's avoid appending suffixes. 

3055 has_known_tuple_handling = isinstance(obj, dict) 

3056 

3057 can_close_bracket = ( 

3058 not continuation.startswith("]") and self.auto_close_dict_keys 

3059 ) 

3060 can_close_tuple_item = ( 

3061 not continuation.startswith(",") 

3062 and has_known_tuple_handling 

3063 and self.auto_close_dict_keys 

3064 ) 

3065 can_close_quote = can_close_quote and self.auto_close_dict_keys 

3066 

3067 # fast path if closing quote should be appended but not suffix is allowed 

3068 if not can_close_quote and not can_close_bracket and closing_quote: 

3069 return [leading + k for k in matches] 

3070 

3071 results = [] 

3072 

3073 end_of_tuple_or_item = _DictKeyState.END_OF_TUPLE | _DictKeyState.END_OF_ITEM 

3074 

3075 for k, state_flag in matches.items(): 

3076 result = leading + k 

3077 if can_close_quote and closing_quote: 

3078 result += closing_quote 

3079 

3080 if state_flag == end_of_tuple_or_item: 

3081 # We do not know which suffix to add, 

3082 # e.g. both tuple item and string 

3083 # match this item. 

3084 pass 

3085 

3086 if state_flag in end_of_tuple_or_item and can_close_bracket: 

3087 result += "]" 

3088 if state_flag == _DictKeyState.IN_TUPLE and can_close_tuple_item: 

3089 result += ", " 

3090 results.append(result) 

3091 return results 

3092 

3093 @context_matcher() 

3094 def unicode_name_matcher(self, context: CompletionContext) -> SimpleMatcherResult: 

3095 """Match Latex-like syntax for unicode characters base 

3096 on the name of the character. 

3097 

3098 This does ``\\GREEK SMALL LETTER ETA`` -> ``η`` 

3099 

3100 Works only on valid python 3 identifier, or on combining characters that 

3101 will combine to form a valid identifier. 

3102 """ 

3103 

3104 text = context.text_until_cursor 

3105 

3106 slashpos = text.rfind('\\') 

3107 if slashpos > -1: 

3108 s = text[slashpos+1:] 

3109 try : 

3110 unic = unicodedata.lookup(s) 

3111 # allow combining chars 

3112 if ('a'+unic).isidentifier(): 

3113 return { 

3114 "completions": [SimpleCompletion(text=unic, type="unicode")], 

3115 "suppress": True, 

3116 "matched_fragment": "\\" + s, 

3117 } 

3118 except KeyError: 

3119 pass 

3120 return { 

3121 "completions": [], 

3122 "suppress": False, 

3123 } 

3124 

3125 @context_matcher() 

3126 def latex_name_matcher(self, context: CompletionContext): 

3127 """Match Latex syntax for unicode characters. 

3128 

3129 This does both ``\\alp`` -> ``\\alpha`` and ``\\alpha`` -> ``α`` 

3130 """ 

3131 fragment, matches = self.latex_matches(context.text_until_cursor) 

3132 return _convert_matcher_v1_result_to_v2( 

3133 matches, type="latex", fragment=fragment, suppress_if_matches=True 

3134 ) 

3135 

3136 def latex_matches(self, text: str) -> tuple[str, Sequence[str]]: 

3137 """Match Latex syntax for unicode characters. 

3138 

3139 This does both ``\\alp`` -> ``\\alpha`` and ``\\alpha`` -> ``α`` 

3140 

3141 .. deprecated:: 8.6 

3142 You can use :meth:`latex_name_matcher` instead. 

3143 """ 

3144 slashpos = text.rfind('\\') 

3145 if slashpos > -1: 

3146 s = text[slashpos:] 

3147 if s in latex_symbols: 

3148 # Try to complete a full latex symbol to unicode 

3149 # \\alpha -> α 

3150 return s, [latex_symbols[s]] 

3151 else: 

3152 # If a user has partially typed a latex symbol, give them 

3153 # a full list of options \al -> [\aleph, \alpha] 

3154 matches = [k for k in latex_symbols if k.startswith(s)] 

3155 if matches: 

3156 return s, matches 

3157 return '', () 

3158 

3159 @context_matcher() 

3160 def custom_completer_matcher(self, context): 

3161 """Dispatch custom completer. 

3162 

3163 If a match is found, suppresses all other matchers except for Jedi. 

3164 """ 

3165 matches = self.dispatch_custom_completer(context.token) or [] 

3166 result = _convert_matcher_v1_result_to_v2( 

3167 matches, type=_UNKNOWN_TYPE, suppress_if_matches=True 

3168 ) 

3169 result["ordered"] = True 

3170 result["do_not_suppress"] = {_get_matcher_id(self._jedi_matcher)} 

3171 return result 

3172 

3173 def dispatch_custom_completer(self, text): 

3174 """ 

3175 .. deprecated:: 8.6 

3176 You can use :meth:`custom_completer_matcher` instead. 

3177 """ 

3178 if not self.custom_completers: 

3179 return 

3180 

3181 line = self.line_buffer 

3182 if not line.strip(): 

3183 return None 

3184 

3185 # Create a little structure to pass all the relevant information about 

3186 # the current completion to any custom completer. 

3187 event = SimpleNamespace() 

3188 event.line = line 

3189 event.symbol = text 

3190 cmd = line.split(None,1)[0] 

3191 event.command = cmd 

3192 event.text_until_cursor = self.text_until_cursor 

3193 

3194 # for foo etc, try also to find completer for %foo 

3195 if not cmd.startswith(self.magic_escape): 

3196 try_magic = self.custom_completers.s_matches( 

3197 self.magic_escape + cmd) 

3198 else: 

3199 try_magic = [] 

3200 

3201 for c in itertools.chain(self.custom_completers.s_matches(cmd), 

3202 try_magic, 

3203 self.custom_completers.flat_matches(self.text_until_cursor)): 

3204 try: 

3205 res = c(event) 

3206 if res: 

3207 # first, try case sensitive match 

3208 withcase = [r for r in res if r.startswith(text)] 

3209 if withcase: 

3210 return withcase 

3211 # if none, then case insensitive ones are ok too 

3212 text_low = text.lower() 

3213 return [r for r in res if r.lower().startswith(text_low)] 

3214 except TryNext: 

3215 pass 

3216 except KeyboardInterrupt: 

3217 """ 

3218 If custom completer take too long, 

3219 let keyboard interrupt abort and return nothing. 

3220 """ 

3221 break 

3222 

3223 return None 

3224 

3225 def completions(self, text: str, offset: int)->Iterator[Completion]: 

3226 """ 

3227 Returns an iterator over the possible completions 

3228 

3229 .. warning:: 

3230 

3231 Unstable 

3232 

3233 This function is unstable, API may change without warning. 

3234 It will also raise unless use in proper context manager. 

3235 

3236 Parameters 

3237 ---------- 

3238 text : str 

3239 Full text of the current input, multi line string. 

3240 offset : int 

3241 Integer representing the position of the cursor in ``text``. Offset 

3242 is 0-based indexed. 

3243 

3244 Yields 

3245 ------ 

3246 Completion 

3247 

3248 Notes 

3249 ----- 

3250 The cursor on a text can either be seen as being "in between" 

3251 characters or "On" a character depending on the interface visible to 

3252 the user. For consistency the cursor being on "in between" characters X 

3253 and Y is equivalent to the cursor being "on" character Y, that is to say 

3254 the character the cursor is on is considered as being after the cursor. 

3255 

3256 Combining characters may span more that one position in the 

3257 text. 

3258 

3259 .. note:: 

3260 

3261 If ``IPCompleter.debug`` is :any:`True` will yield a ``--jedi/ipython--`` 

3262 fake Completion token to distinguish completion returned by Jedi 

3263 and usual IPython completion. 

3264 

3265 .. note:: 

3266 

3267 Completions are not completely deduplicated yet. If identical 

3268 completions are coming from different sources this function does not 

3269 ensure that each completion object will only be present once. 

3270 """ 

3271 warnings.warn("_complete is a provisional API (as of IPython 6.0). " 

3272 "It may change without warnings. " 

3273 "Use in corresponding context manager.", 

3274 category=ProvisionalCompleterWarning, stacklevel=2) 

3275 

3276 seen = set() 

3277 profiler:Optional[cProfile.Profile] 

3278 try: 

3279 if self.profile_completions: 

3280 import cProfile 

3281 profiler = cProfile.Profile() 

3282 profiler.enable() 

3283 else: 

3284 profiler = None 

3285 

3286 for c in self._completions(text, offset, _timeout=self.jedi_compute_type_timeout/1000): 

3287 if c and (c in seen): 

3288 continue 

3289 yield c 

3290 seen.add(c) 

3291 except KeyboardInterrupt: 

3292 """if completions take too long and users send keyboard interrupt, 

3293 do not crash and return ASAP. """ 

3294 pass 

3295 finally: 

3296 if profiler is not None: 

3297 profiler.disable() 

3298 ensure_dir_exists(self.profiler_output_dir) 

3299 output_path = os.path.join(self.profiler_output_dir, str(uuid.uuid4())) 

3300 print("Writing profiler output to", output_path) 

3301 profiler.dump_stats(output_path) 

3302 

3303 def _completions(self, full_text: str, offset: int, *, _timeout) -> Iterator[Completion]: 

3304 """ 

3305 Core completion module.Same signature as :any:`completions`, with the 

3306 extra `timeout` parameter (in seconds). 

3307 

3308 Computing jedi's completion ``.type`` can be quite expensive (it is a 

3309 lazy property) and can require some warm-up, more warm up than just 

3310 computing the ``name`` of a completion. The warm-up can be : 

3311 

3312 - Long warm-up the first time a module is encountered after 

3313 install/update: actually build parse/inference tree. 

3314 

3315 - first time the module is encountered in a session: load tree from 

3316 disk. 

3317 

3318 We don't want to block completions for tens of seconds so we give the 

3319 completer a "budget" of ``_timeout`` seconds per invocation to compute 

3320 completions types, the completions that have not yet been computed will 

3321 be marked as "unknown" an will have a chance to be computed next round 

3322 are things get cached. 

3323 

3324 Keep in mind that Jedi is not the only thing treating the completion so 

3325 keep the timeout short-ish as if we take more than 0.3 second we still 

3326 have lots of processing to do. 

3327 

3328 """ 

3329 deadline = time.monotonic() + _timeout 

3330 

3331 before = full_text[:offset] 

3332 cursor_line, cursor_column = position_to_cursor(full_text, offset) 

3333 

3334 jedi_matcher_id = _get_matcher_id(self._jedi_matcher) 

3335 

3336 def is_non_jedi_result( 

3337 result: MatcherResult, identifier: str 

3338 ) -> TypeGuard[SimpleMatcherResult]: 

3339 return identifier != jedi_matcher_id 

3340 

3341 results = self._complete( 

3342 full_text=full_text, cursor_line=cursor_line, cursor_pos=cursor_column 

3343 ) 

3344 

3345 non_jedi_results: dict[str, SimpleMatcherResult] = { 

3346 identifier: result 

3347 for identifier, result in results.items() 

3348 if is_non_jedi_result(result, identifier) 

3349 } 

3350 

3351 jedi_matches = ( 

3352 cast(_JediMatcherResult, results[jedi_matcher_id])["completions"] 

3353 if jedi_matcher_id in results 

3354 else () 

3355 ) 

3356 

3357 iter_jm = iter(jedi_matches) 

3358 if _timeout: 

3359 for jm in iter_jm: 

3360 try: 

3361 type_ = jm.type 

3362 except Exception: 

3363 if self.debug: 

3364 print("Error in Jedi getting type of ", jm) 

3365 type_ = None 

3366 delta = len(jm.name_with_symbols) - len(jm.complete) 

3367 if type_ == 'function': 

3368 signature = _make_signature(jm) 

3369 else: 

3370 signature = '' 

3371 yield Completion(start=offset - delta, 

3372 end=offset, 

3373 text=jm.name_with_symbols, 

3374 type=type_, 

3375 signature=signature, 

3376 _origin='jedi') 

3377 

3378 if time.monotonic() > deadline: 

3379 break 

3380 

3381 for jm in iter_jm: 

3382 delta = len(jm.name_with_symbols) - len(jm.complete) 

3383 yield Completion( 

3384 start=offset - delta, 

3385 end=offset, 

3386 text=jm.name_with_symbols, 

3387 type=_UNKNOWN_TYPE, # don't compute type for speed 

3388 _origin="jedi", 

3389 signature="", 

3390 ) 

3391 

3392 # TODO: 

3393 # Suppress this, right now just for debug. 

3394 if jedi_matches and non_jedi_results and self.debug: 

3395 some_start_offset = before.rfind( 

3396 next(iter(non_jedi_results.values()))["matched_fragment"] 

3397 ) 

3398 yield Completion( 

3399 start=some_start_offset, 

3400 end=offset, 

3401 text="--jedi/ipython--", 

3402 _origin="debug", 

3403 type="none", 

3404 signature="", 

3405 ) 

3406 

3407 ordered: list[Completion] = [] 

3408 sortable: list[Completion] = [] 

3409 

3410 for origin, result in non_jedi_results.items(): 

3411 matched_text = result["matched_fragment"] 

3412 start_offset = before.rfind(matched_text) 

3413 is_ordered = result.get("ordered", False) 

3414 container = ordered if is_ordered else sortable 

3415 

3416 # I'm unsure if this is always true, so let's assert and see if it 

3417 # crash 

3418 assert before.endswith(matched_text) 

3419 

3420 for simple_completion in result["completions"]: 

3421 completion = Completion( 

3422 start=start_offset, 

3423 end=offset, 

3424 text=simple_completion.text, 

3425 _origin=origin, 

3426 signature="", 

3427 type=simple_completion.type or _UNKNOWN_TYPE, 

3428 ) 

3429 container.append(completion) 

3430 

3431 yield from list(self._deduplicate(ordered + self._sort(sortable)))[ 

3432 :MATCHES_LIMIT 

3433 ] 

3434 

3435 def complete( 

3436 self, text=None, line_buffer=None, cursor_pos=None 

3437 ) -> tuple[str, Sequence[str]]: 

3438 """Find completions for the given text and line context. 

3439 

3440 Note that both the text and the line_buffer are optional, but at least 

3441 one of them must be given. 

3442 

3443 Parameters 

3444 ---------- 

3445 text : string, optional 

3446 Text to perform the completion on. If not given, the line buffer 

3447 is split using the instance's CompletionSplitter object. 

3448 line_buffer : string, optional 

3449 If not given, the completer attempts to obtain the current line 

3450 buffer via readline. This keyword allows clients which are 

3451 requesting for text completions in non-readline contexts to inform 

3452 the completer of the entire text. 

3453 cursor_pos : int, optional 

3454 Index of the cursor in the full line buffer. Should be provided by 

3455 remote frontends where kernel has no access to frontend state. 

3456 

3457 Returns 

3458 ------- 

3459 Tuple of two items: 

3460 text : str 

3461 Text that was actually used in the completion. 

3462 matches : list 

3463 A list of completion matches. 

3464 

3465 Notes 

3466 ----- 

3467 This API is likely to be deprecated and replaced by 

3468 :any:`IPCompleter.completions` in the future. 

3469 

3470 """ 

3471 warnings.warn('`Completer.complete` is pending deprecation since ' 

3472 'IPython 6.0 and will be replaced by `Completer.completions`.', 

3473 PendingDeprecationWarning) 

3474 # potential todo, FOLD the 3rd throw away argument of _complete 

3475 # into the first 2 one. 

3476 # TODO: Q: does the above refer to jedi completions (i.e. 0-indexed?) 

3477 # TODO: should we deprecate now, or does it stay? 

3478 

3479 results = self._complete( 

3480 line_buffer=line_buffer, cursor_pos=cursor_pos, text=text, cursor_line=0 

3481 ) 

3482 

3483 jedi_matcher_id = _get_matcher_id(self._jedi_matcher) 

3484 

3485 return self._arrange_and_extract( 

3486 results, 

3487 # TODO: can we confirm that excluding Jedi here was a deliberate choice in previous version? 

3488 skip_matchers={jedi_matcher_id}, 

3489 # this API does not support different start/end positions (fragments of token). 

3490 abort_if_offset_changes=True, 

3491 ) 

3492 

3493 def _arrange_and_extract( 

3494 self, 

3495 results: dict[str, MatcherResult], 

3496 skip_matchers: set[str], 

3497 abort_if_offset_changes: bool, 

3498 ): 

3499 sortable: list[AnyMatcherCompletion] = [] 

3500 ordered: list[AnyMatcherCompletion] = [] 

3501 most_recent_fragment = None 

3502 for identifier, result in results.items(): 

3503 if identifier in skip_matchers: 

3504 continue 

3505 if not result["completions"]: 

3506 continue 

3507 if not most_recent_fragment: 

3508 most_recent_fragment = result["matched_fragment"] 

3509 if ( 

3510 abort_if_offset_changes 

3511 and result["matched_fragment"] != most_recent_fragment 

3512 ): 

3513 break 

3514 if result.get("ordered", False): 

3515 ordered.extend(result["completions"]) 

3516 else: 

3517 sortable.extend(result["completions"]) 

3518 

3519 if not most_recent_fragment: 

3520 most_recent_fragment = "" # to satisfy typechecker (and just in case) 

3521 

3522 return most_recent_fragment, [ 

3523 m.text for m in self._deduplicate(ordered + self._sort(sortable)) 

3524 ] 

3525 

3526 def _complete(self, *, cursor_line, cursor_pos, line_buffer=None, text=None, 

3527 full_text=None) -> _CompleteResult: 

3528 """ 

3529 Like complete but can also returns raw jedi completions as well as the 

3530 origin of the completion text. This could (and should) be made much 

3531 cleaner but that will be simpler once we drop the old (and stateful) 

3532 :any:`complete` API. 

3533 

3534 With current provisional API, cursor_pos act both (depending on the 

3535 caller) as the offset in the ``text`` or ``line_buffer``, or as the 

3536 ``column`` when passing multiline strings this could/should be renamed 

3537 but would add extra noise. 

3538 

3539 Parameters 

3540 ---------- 

3541 cursor_line 

3542 Index of the line the cursor is on. 0 indexed. 

3543 cursor_pos 

3544 Position of the cursor in the current line/line_buffer/text. 0 

3545 indexed. 

3546 line_buffer : optional, str 

3547 The current line the cursor is in, this is mostly due to legacy 

3548 reason that readline could only give a us the single current line. 

3549 Prefer `full_text`. 

3550 text : str 

3551 The current "token" the cursor is in, mostly also for historical 

3552 reasons. as the completer would trigger only after the current line 

3553 was parsed. 

3554 full_text : str 

3555 Full text of the current cell. 

3556 

3557 Returns 

3558 ------- 

3559 An ordered dictionary where keys are identifiers of completion 

3560 matchers and values are ``MatcherResult``s. 

3561 """ 

3562 

3563 # if the cursor position isn't given, the only sane assumption we can 

3564 # make is that it's at the end of the line (the common case) 

3565 if cursor_pos is None: 

3566 cursor_pos = len(line_buffer) if text is None else len(text) 

3567 

3568 if self.use_main_ns: 

3569 self.namespace = __main__.__dict__ 

3570 

3571 # if text is either None or an empty string, rely on the line buffer 

3572 if (not line_buffer) and full_text: 

3573 line_buffer = full_text.split('\n')[cursor_line] 

3574 if not text: # issue #11508: check line_buffer before calling split_line 

3575 text = ( 

3576 self.splitter.split_line(line_buffer, cursor_pos) if line_buffer else "" 

3577 ) 

3578 

3579 # If no line buffer is given, assume the input text is all there was 

3580 if line_buffer is None: 

3581 line_buffer = text 

3582 

3583 # deprecated - do not use `line_buffer` in new code. 

3584 self.line_buffer = line_buffer 

3585 self.text_until_cursor = self.line_buffer[:cursor_pos] 

3586 

3587 if not full_text: 

3588 full_text = line_buffer 

3589 

3590 context = CompletionContext( 

3591 full_text=full_text, 

3592 cursor_position=cursor_pos, 

3593 cursor_line=cursor_line, 

3594 token=self._extract_code(text), 

3595 limit=MATCHES_LIMIT, 

3596 ) 

3597 

3598 # Start with a clean slate of completions 

3599 results: dict[str, MatcherResult] = {} 

3600 

3601 jedi_matcher_id = _get_matcher_id(self._jedi_matcher) 

3602 

3603 suppressed_matchers: set[str] = set() 

3604 

3605 matchers = { 

3606 _get_matcher_id(matcher): matcher 

3607 for matcher in sorted( 

3608 self.matchers, key=_get_matcher_priority, reverse=True 

3609 ) 

3610 } 

3611 

3612 for matcher_id, matcher in matchers.items(): 

3613 matcher_id = _get_matcher_id(matcher) 

3614 

3615 if matcher_id in self.disable_matchers: 

3616 continue 

3617 

3618 if matcher_id in results: 

3619 warnings.warn(f"Duplicate matcher ID: {matcher_id}.") 

3620 

3621 if matcher_id in suppressed_matchers: 

3622 continue 

3623 

3624 result: MatcherResult 

3625 try: 

3626 if _is_matcher_v1(matcher): 

3627 result = _convert_matcher_v1_result_to_v2_no_no( 

3628 matcher(text), type=_UNKNOWN_TYPE 

3629 ) 

3630 elif _is_matcher_v2(matcher): 

3631 result = matcher(context) 

3632 else: 

3633 api_version = _get_matcher_api_version(matcher) 

3634 raise ValueError(f"Unsupported API version {api_version}") 

3635 except BaseException: 

3636 # Show the ugly traceback if the matcher causes an 

3637 # exception, but do NOT crash the kernel! 

3638 sys.excepthook(*sys.exc_info()) 

3639 continue 

3640 

3641 # set default value for matched fragment if suffix was not selected. 

3642 result["matched_fragment"] = result.get("matched_fragment", context.token) 

3643 

3644 if not suppressed_matchers: 

3645 suppression_recommended: Union[bool, set[str]] = result.get( 

3646 "suppress", False 

3647 ) 

3648 

3649 suppression_config = ( 

3650 self.suppress_competing_matchers.get(matcher_id, None) 

3651 if isinstance(self.suppress_competing_matchers, dict) 

3652 else self.suppress_competing_matchers 

3653 ) 

3654 should_suppress = ( 

3655 (suppression_config is True) 

3656 or (suppression_recommended and (suppression_config is not False)) 

3657 ) and has_any_completions(result) 

3658 

3659 if should_suppress: 

3660 suppression_exceptions: set[str] = result.get( 

3661 "do_not_suppress", set() 

3662 ) 

3663 if isinstance(suppression_recommended, Iterable): 

3664 to_suppress = set(suppression_recommended) 

3665 else: 

3666 to_suppress = set(matchers) 

3667 suppressed_matchers = to_suppress - suppression_exceptions 

3668 

3669 new_results = {} 

3670 for previous_matcher_id, previous_result in results.items(): 

3671 if previous_matcher_id not in suppressed_matchers: 

3672 new_results[previous_matcher_id] = previous_result 

3673 results = new_results 

3674 

3675 results[matcher_id] = result 

3676 

3677 _, matches = self._arrange_and_extract( 

3678 results, 

3679 # TODO Jedi completions non included in legacy stateful API; was this deliberate or omission? 

3680 # if it was omission, we can remove the filtering step, otherwise remove this comment. 

3681 skip_matchers={jedi_matcher_id}, 

3682 abort_if_offset_changes=False, 

3683 ) 

3684 

3685 # populate legacy stateful API 

3686 self.matches = matches 

3687 

3688 return results 

3689 

3690 @staticmethod 

3691 def _deduplicate( 

3692 matches: Sequence[AnyCompletion], 

3693 ) -> Iterable[AnyCompletion]: 

3694 filtered_matches: dict[str, AnyCompletion] = {} 

3695 for match in matches: 

3696 text = match.text 

3697 if ( 

3698 text not in filtered_matches 

3699 or filtered_matches[text].type == _UNKNOWN_TYPE 

3700 ): 

3701 filtered_matches[text] = match 

3702 

3703 return filtered_matches.values() 

3704 

3705 @staticmethod 

3706 def _sort(matches: Sequence[AnyCompletion]): 

3707 return sorted(matches, key=lambda x: completions_sorting_key(x.text)) 

3708 

3709 @context_matcher() 

3710 def fwd_unicode_matcher(self, context: CompletionContext): 

3711 """Same as :any:`fwd_unicode_match`, but adopted to new Matcher API.""" 

3712 # TODO: use `context.limit` to terminate early once we matched the maximum 

3713 # number that will be used downstream; can be added as an optional to 

3714 # `fwd_unicode_match(text: str, limit: int = None)` or we could re-implement here. 

3715 fragment, matches = self.fwd_unicode_match(context.text_until_cursor) 

3716 return _convert_matcher_v1_result_to_v2( 

3717 matches, type="unicode", fragment=fragment, suppress_if_matches=True 

3718 ) 

3719 

3720 def fwd_unicode_match(self, text: str) -> tuple[str, Sequence[str]]: 

3721 """ 

3722 Forward match a string starting with a backslash with a list of 

3723 potential Unicode completions. 

3724 

3725 Will compute list of Unicode character names on first call and cache it. 

3726 

3727 .. deprecated:: 8.6 

3728 You can use :meth:`fwd_unicode_matcher` instead. 

3729 

3730 Returns 

3731 ------- 

3732 At tuple with: 

3733 - matched text (empty if no matches) 

3734 - list of potential completions, empty tuple otherwise) 

3735 """ 

3736 # TODO: self.unicode_names is here a list we traverse each time with ~100k elements. 

3737 # We could do a faster match using a Trie. 

3738 

3739 # Using pygtrie the following seem to work: 

3740 

3741 # s = PrefixSet() 

3742 

3743 # for c in range(0,0x10FFFF + 1): 

3744 # try: 

3745 # s.add(unicodedata.name(chr(c))) 

3746 # except ValueError: 

3747 # pass 

3748 # [''.join(k) for k in s.iter(prefix)] 

3749 

3750 # But need to be timed and adds an extra dependency. 

3751 

3752 slashpos = text.rfind('\\') 

3753 # if text starts with slash 

3754 if slashpos > -1: 

3755 # PERF: It's important that we don't access self._unicode_names 

3756 # until we're inside this if-block. _unicode_names is lazily 

3757 # initialized, and it takes a user-noticeable amount of time to 

3758 # initialize it, so we don't want to initialize it unless we're 

3759 # actually going to use it. 

3760 s = text[slashpos + 1 :] 

3761 sup = s.upper() 

3762 candidates = [x for x in self.unicode_names if x.startswith(sup)] 

3763 if candidates: 

3764 return s, candidates 

3765 candidates = [x for x in self.unicode_names if sup in x] 

3766 if candidates: 

3767 return s, candidates 

3768 splitsup = sup.split(" ") 

3769 candidates = [ 

3770 x for x in self.unicode_names if all(u in x for u in splitsup) 

3771 ] 

3772 if candidates: 

3773 return s, candidates 

3774 

3775 return "", () 

3776 

3777 # if text does not start with slash 

3778 else: 

3779 return '', () 

3780 

3781 @property 

3782 def unicode_names(self) -> list[str]: 

3783 """List of names of unicode code points that can be completed. 

3784 

3785 The list is lazily initialized on first access. 

3786 """ 

3787 if self._unicode_names is None: 

3788 names = [] 

3789 for c in range(0,0x10FFFF + 1): 

3790 try: 

3791 names.append(unicodedata.name(chr(c))) 

3792 except ValueError: 

3793 pass 

3794 self._unicode_names = _unicode_name_compute(_UNICODE_RANGES) 

3795 

3796 return self._unicode_names 

3797 

3798 

3799def _unicode_name_compute(ranges: list[tuple[int, int]]) -> list[str]: 

3800 names = [] 

3801 for start,stop in ranges: 

3802 for c in range(start, stop) : 

3803 try: 

3804 names.append(unicodedata.name(chr(c))) 

3805 except ValueError: 

3806 pass 

3807 return names