Coverage for /pythoncovmergedfiles/medio/medio/usr/local/lib/python3.11/site-packages/IPython/core/completer.py: 19%


1"""Completion for IPython. 

2 

3This module started as a fork of the rlcompleter module in the Python standard

4library. The original enhancements made to rlcompleter have been sent

5upstream and were accepted as of Python 2.3.

6

7This module now supports a wide variety of completion mechanisms, both for

8normal classic Python code and for IPython-specific syntax such as

9magics.

10 

11Latex and Unicode completion 

12============================ 

13 

14IPython and compatible frontends can not only complete your code, but also help

15you input a wide range of characters. In particular, we allow you to insert

16a unicode character using the tab completion mechanism.

17 

18Forward latex/unicode completion 

19-------------------------------- 

20 

21Forward completion allows you to easily type a unicode character using its latex

22name or its long unicode description. To do so, type a backslash followed by the

23relevant name and press tab:

24 

25 

26Using latex completion: 

27 

28.. code:: 

29 

30 \\alpha<tab> 

31 α 

32 

33or using unicode completion: 

34 

35 

36.. code:: 

37 

38 \\GREEK SMALL LETTER ALPHA<tab> 

39 α 

40 

41 

42Only valid Python identifiers will complete. Combining characters (like arrows or

43dots) are also available; unlike in latex, they need to be put after their

44counterpart, that is to say, ``F\\\\vec<tab>`` is correct, not ``\\\\vec<tab>F``.

45 

46Some browsers are known to display combining characters incorrectly. 

47 

48Backward latex completion 

49------------------------- 

50 

51It is sometimes challenging to know how to type a character. If you are using

52IPython or any compatible frontend, you can prepend a backslash to the character

53and press :kbd:`Tab` to expand it to its latex form.

54 

55.. code:: 

56 

57 \\α<tab> 

58 \\alpha 

59 

60 

61Both forward and backward completions can be deactivated by setting the 

62:std:configtrait:`Completer.backslash_combining_completions` option to 

63``False``. 

64 

65 

66Experimental 

67============ 

68 

69Starting with IPython 6.0, this module can make use of the Jedi library to

70generate completions, both using static analysis of the code and by dynamically

71inspecting multiple namespaces. Jedi is an autocompletion and static analysis

72library for Python. The APIs attached to this new mechanism are unstable and will

73raise unless used in a :any:`provisionalcompleter` context manager.

74 

75You will find that the following are experimental: 

76 

77 - :any:`provisionalcompleter` 

78 - :any:`IPCompleter.completions` 

79 - :any:`Completion` 

80 - :any:`rectify_completions` 

81 

82.. note:: 

83 

84 better name for :any:`rectify_completions` ? 

85 

86We welcome any feedback on these new APIs, and we also encourage you to try this

87module in debug mode (start IPython with ``--Completer.debug=True``) in order

88to have extra logging information if :mod:`jedi` is crashing, or if the

89IPython completer's pending-deprecation code paths are returning results not yet handled

90by :mod:`jedi`.

91

92Using Jedi for tab completion allows snippets like the following to work without

93having to execute any code:

94 

95 >>> myvar = ['hello', 42] 

96 ... myvar[1].bi<tab> 

97 

98Tab completion will be able to infer that ``myvar[1]`` is a real number without

99executing almost any code, unlike the deprecated :any:`IPCompleter.greedy`

100option.

101 

102Be sure to update :mod:`jedi` to the latest stable version or to try the 

103current development version to get better completions. 

104 

105Matchers 

106======== 

107 

108All completion routines are implemented using the unified *Matchers* API.

109The matchers API is provisional and subject to change without notice. 

110 

111The built-in matchers include: 

112 

113- :any:`IPCompleter.dict_key_matcher`: dictionary key completions, 

114- :any:`IPCompleter.magic_matcher`: completions for magics, 

115- :any:`IPCompleter.unicode_name_matcher`, 

116 :any:`IPCompleter.fwd_unicode_matcher` 

117 and :any:`IPCompleter.latex_name_matcher`: see `Forward latex/unicode completion`_, 

118- :any:`back_unicode_name_matcher` and :any:`back_latex_name_matcher`: see `Backward latex completion`_, 

119- :any:`IPCompleter.file_matcher`: paths to files and directories, 

120- :any:`IPCompleter.python_func_kw_matcher` - function keywords, 

121- :any:`IPCompleter.python_matches` - globals and attributes (v1 API), 

122- ``IPCompleter.jedi_matcher`` - static analysis with Jedi, 

123- :any:`IPCompleter.custom_completer_matcher` - pluggable completer with a default 

124 implementation in :any:`InteractiveShell` which uses IPython hooks system 

125 (`complete_command`) with string dispatch (including regular expressions). 

126 Unlike other matchers, ``custom_completer_matcher`` will not suppress
127 Jedi results, to match behaviour in earlier IPython versions.

128 

129Custom matchers can be added by appending to the ``IPCompleter.custom_matchers`` list.

130 

131Matcher API 

132----------- 

133 

134Simplifying some details, the ``Matcher`` interface can be described as

135 

136.. code-block:: 

137 

138 MatcherAPIv1 = Callable[[str], list[str]] 

139 MatcherAPIv2 = Callable[[CompletionContext], SimpleMatcherResult] 

140 

141 Matcher = MatcherAPIv1 | MatcherAPIv2 

142 

143The ``MatcherAPIv1`` reflects the matcher API as available prior to IPython 8.6.0

144and remains supported as the simplest way of generating completions. This is also

145currently the only API supported by the IPython hooks system `complete_command`. 

146 

147To distinguish between matcher versions, the ``matcher_api_version`` attribute is used.

148More precisely, the API allows omitting ``matcher_api_version`` for v1 Matchers,

149and requires a literal ``2`` for v2 Matchers.

150 

151Once the API stabilises, future versions may relax the requirement for specifying

152``matcher_api_version`` by switching to :func:`functools.singledispatch`; therefore,

153please do not rely on the presence of ``matcher_api_version`` for any purpose.

154 
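As an illustration (a hypothetical example, not one of the built-in matchers),
an API v2 matcher completing a fixed list of colour names could be written as:

.. code-block:: python

    @context_matcher()
    def color_matcher(context: CompletionContext) -> SimpleMatcherResult:
        # hypothetical matcher: complete a few colour names
        colors = ["red", "green", "blue"]
        token = context.token
        return {
            "completions": [
                SimpleCompletion(c, type="param")
                for c in colors
                if c.startswith(token)
            ],
        }
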

155Suppression of competing matchers 

156--------------------------------- 

157 

158By default results from all matchers are combined, in the order determined by 

159their priority. Matchers can request to suppress results from subsequent 

160matchers by setting ``suppress`` to ``True`` in the ``MatcherResult``. 

161 

162When multiple matchers simultaneously request suppression, the results from

163the matcher with the highest priority will be returned.

164 

165Sometimes it is desirable to suppress most but not all other matchers; 

166this can be achieved by adding a set of identifiers of matchers which 

167should not be suppressed to ``MatcherResult`` under ``do_not_suppress`` key. 

168 
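For example (an illustrative sketch, not a built-in behaviour), a matcher that
wants to hide results from everything except the file matcher could return:

.. code-block:: python

    {
        "completions": [SimpleCompletion("%%timeit", type="magic")],
        "suppress": True,
        "do_not_suppress": {"IPCompleter.file_matcher"},
    }
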

169The suppression behaviour is user-configurable via

170:std:configtrait:`IPCompleter.suppress_competing_matchers`. 

171""" 

172 

173 

174# Copyright (c) IPython Development Team. 

175# Distributed under the terms of the Modified BSD License. 

176# 

177# Some of this code originated from rlcompleter in the Python standard library 

178# Copyright (C) 2001 Python Software Foundation, www.python.org 

179 

180from __future__ import annotations 

181import builtins as builtin_mod 

182import enum 

183import glob 

184import inspect 

185import itertools 

186import keyword 

187import ast 

188import os 

189import re 

190import string 

191import sys 

192import tokenize 

193import time 

194import unicodedata 

195import uuid 

196import warnings 

197from ast import literal_eval 

198from collections import defaultdict 

199from contextlib import contextmanager 

200from dataclasses import dataclass 

201from functools import cached_property, partial 

202from types import SimpleNamespace 

203from typing import ( 

204 Union, 

205 Any, 

206 Optional, 

207 TYPE_CHECKING, 

208 TypeVar, 

209 Literal, 

210) 

211from collections.abc import Iterable, Iterator, Sequence, Sized 

212 

213from IPython.core.guarded_eval import ( 

214 guarded_eval, 

215 EvaluationContext, 

216 _validate_policy_overrides, 

217) 

218from IPython.core.error import TryNext, UsageError 

219from IPython.core.inputtransformer2 import ESC_MAGIC 

220from IPython.core.latex_symbols import latex_symbols, reverse_latex_symbol 

221from IPython.testing.skipdoctest import skip_doctest 

222from IPython.utils import generics 

223from IPython.utils.PyColorize import theme_table 

224from IPython.utils.decorators import sphinx_options 

225from IPython.utils.dir2 import dir2, get_real_method 

226from IPython.utils.path import ensure_dir_exists 

227from IPython.utils.process import arg_split 

228from traitlets import ( 

229 Bool, 

230 Enum, 

231 Int, 

232 List as ListTrait, 

233 Unicode, 

234 Dict as DictTrait, 

235 DottedObjectName, 

236 Union as UnionTrait, 

237 observe, 

238) 

239from traitlets.config.configurable import Configurable 

240from traitlets.utils.importstring import import_item 

241 

242import __main__ 

243 

244from typing import cast 

245 

246if sys.version_info < (3, 12): 

247 from typing_extensions import TypedDict, Protocol 

248 from typing import NotRequired, TypeAlias, TypeGuard 

249else: 

250 from typing import TypedDict, NotRequired, Protocol, TypeAlias, TypeGuard 

251 

252 

253# skip module doctests

254__skip_doctest__ = True 

255 

256 

257try: 

258 import jedi 

259 jedi.settings.case_insensitive_completion = False 

260 import jedi.api.helpers 

261 import jedi.api.classes 

262 JEDI_INSTALLED = True 

263except ImportError: 

264 JEDI_INSTALLED = False 

265 

266 

267# ----------------------------------------------------------------------------- 

268# Globals 

269#----------------------------------------------------------------------------- 

270 

271# Ranges where we have most of the valid unicode names. We could be more finely

272# grained, but is it worth it for performance? While unicode has characters in the

273# range 0..0x110000, we seem to have names for only about 10% of those (131808 as I

274# write this). With the ranges below we cover them all, with a density of ~67%; the

275# biggest next gap we could consider only adds about 1% density and there are 600

276# gaps that would need hard coding.

277_UNICODE_RANGES = [(32, 0x323B0), (0xE0001, 0xE01F0)] 

278 

279# Public API 

280__all__ = ["Completer", "IPCompleter"] 

281 

282if sys.platform == 'win32': 

283 PROTECTABLES = ' ' 

284else: 

285 PROTECTABLES = ' ()[]{}?=\\|;:\'#*"^&' 

286 

287# Protect against returning an enormous number of completions which the frontend 

288# may have trouble processing. 

289MATCHES_LIMIT = 500 

290 

291# Completion type reported when no type can be inferred. 

292_UNKNOWN_TYPE = "<unknown>" 

293 

294# sentinel value to signal lack of a match 

295not_found = object() 

296 

297class ProvisionalCompleterWarning(FutureWarning): 

298 """ 

299 Exception raised by an experimental feature in this module.

300 

301 Wrap code in :any:`provisionalcompleter` context manager if you 

302 are certain you want to use an unstable feature. 

303 """ 

304 pass 

305 

306warnings.filterwarnings('error', category=ProvisionalCompleterWarning) 

307 

308 

309@skip_doctest 

310@contextmanager 

311def provisionalcompleter(action='ignore'): 

312 """ 

313 This context manager has to be used in any place where unstable completer 

314 behavior and API may be called. 

315 

316 >>> with provisionalcompleter(): 

317 ... completer.do_experimental_things() # works 

318 

319 >>> completer.do_experimental_things() # raises. 

320 

321 .. note:: 

322 

323 Unstable 

324 

325 By using this context manager you agree that the APIs in use may change
326 without warning, and that you won't complain if they do so.

327 

328 You also understand that, if the API is not to your liking, you should report 

329 a bug to explain your use case upstream. 

330 

331 We'll be happy to get your feedback, feature requests, and improvements on 

332 any of the unstable APIs! 

333 """ 

334 with warnings.catch_warnings(): 

335 warnings.filterwarnings(action, category=ProvisionalCompleterWarning) 

336 yield 

337 

338 

339def has_open_quotes(s: str) -> Union[str, bool]: 

340 """Return whether a string has open quotes. 

341 

342 This simply counts whether the number of quote characters of either type in 

343 the string is odd. 

344 

345 Returns 

346 ------- 

347 If there is an open quote, the quote character is returned. Else, return 

348 False. 

349 """ 

350 # We check " first, then ', so complex cases with nested quotes will get 

351 # the " to take precedence. 

352 if s.count('"') % 2: 

353 return '"' 

354 elif s.count("'") % 2: 

355 return "'" 

356 else: 

357 return False 

358 
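# Illustrative behaviour of has_open_quotes (a sketch of expected results):
#   has_open_quotes('hello "world')  -> '"'
#   has_open_quotes("it's")          -> "'"
#   has_open_quotes('done')          -> False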

359 

360def protect_filename(s: str, protectables: str = PROTECTABLES) -> str: 

361 """Escape a string to protect certain characters.""" 

362 if set(s) & set(protectables): 

363 if sys.platform == "win32": 

364 return '"' + s + '"' 

365 else: 

366 return "".join(("\\" + c if c in protectables else c) for c in s) 

367 else: 

368 return s 

369 
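# Illustrative escaping (a sketch): on POSIX each protectable character is
# backslash-escaped, while on Windows the whole string is wrapped in quotes:
#   protect_filename('my file.txt')  -> 'my\\ file.txt'   (POSIX)
#   protect_filename('my file.txt')  -> '"my file.txt"'   (Windows)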

370 

371def expand_user(path: str) -> tuple[str, bool, str]: 

372 """Expand ``~``-style usernames in strings. 

373 

374 This is similar to :func:`os.path.expanduser`, but it computes and returns 

375 extra information that will be useful if the input was being used in 

376 computing completions, and you wish to return the completions with the 

377 original '~' instead of its expanded value. 

378 

379 Parameters 

380 ---------- 

381 path : str 

382 String to be expanded. If no ~ is present, the output is the same as the 

383 input. 

384 

385 Returns 

386 ------- 

387 newpath : str 

388 Result of ~ expansion in the input path. 

389 tilde_expand : bool 

390 Whether any expansion was performed or not. 

391 tilde_val : str 

392 The value that ~ was replaced with. 

393 """ 

394 # Default values 

395 tilde_expand = False 

396 tilde_val = '' 

397 newpath = path 

398 

399 if path.startswith('~'): 

400 tilde_expand = True 

401 rest = len(path)-1 

402 newpath = os.path.expanduser(path) 

403 if rest: 

404 tilde_val = newpath[:-rest] 

405 else: 

406 tilde_val = newpath 

407 

408 return newpath, tilde_expand, tilde_val 

409 

410 

411def compress_user(path:str, tilde_expand:bool, tilde_val:str) -> str: 

412 """Does the opposite of expand_user, with its outputs. 

413 """ 

414 if tilde_expand: 

415 return path.replace(tilde_val, '~') 

416 else: 

417 return path 

418 
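# Illustrative round trip (a sketch; assumes the current user's home is /home/me):
#   expand_user('~/code')                             -> ('/home/me/code', True, '/home/me')
#   compress_user('/home/me/code', True, '/home/me')  -> '~/code'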

419 

420def completions_sorting_key(word): 

421 """key for sorting completions 

422 

423 This does several things: 

424 

425 - Demote any completions starting with underscores to the end 

426 - Insert any %magic and %%cellmagic completions in the alphabetical order 

427 by their name 

428 """ 

429 prio1, prio2 = 0, 0 

430 

431 if word.startswith('__'): 

432 prio1 = 2 

433 elif word.startswith('_'): 

434 prio1 = 1 

435 

436 if word.endswith('='): 

437 prio1 = -1 

438 

439 if word.startswith('%%'): 

440 # If there's another % in there, this is something else, so leave it alone 

441 if "%" not in word[2:]: 

442 word = word[2:] 

443 prio2 = 2 

444 elif word.startswith('%'): 

445 if "%" not in word[1:]: 

446 word = word[1:] 

447 prio2 = 1 

448 

449 return prio1, word, prio2 

450 
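# Illustrative ordering produced by this key (a sketch):
#   sorted(['_private', 'alpha', '%%time', '%cd'], key=completions_sorting_key)
#   -> ['alpha', '%cd', '%%time', '_private']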

451 

452class _FakeJediCompletion: 

453 """ 

454 This is a workaround to communicate to the UI that Jedi has crashed and to 

455 report a bug. Will be used only if :any:`IPCompleter.debug` is set to true.

456 

457 Added in IPython 6.0 so should likely be removed for 7.0 

458 

459 """ 

460 

461 def __init__(self, name): 

462 

463 self.name = name 

464 self.complete = name 

465 self.type = 'crashed' 

466 self.name_with_symbols = name 

467 self.signature = "" 

468 self._origin = "fake" 

469 self.text = "crashed" 

470 

471 def __repr__(self): 

472 return '<Fake completion object jedi has crashed>' 

473 

474 

475_JediCompletionLike = Union["jedi.api.Completion", _FakeJediCompletion] 

476 

477 

478class Completion: 

479 """ 

480 Completion object used and returned by IPython completers. 

481 

482 .. warning:: 

483 

484 Unstable 

485 

486 This class is unstable; the API may change without warning.
487 It will also raise unless used in the proper context manager.
488
489 This acts as a middle ground :any:`Completion` object between the
490 :class:`jedi.api.classes.Completion` object and the Prompt Toolkit completion
491 object. While Jedi needs a lot of information about the evaluator and how the
492 code should be run/inspected, PromptToolkit (and other frontends) mostly
493 need user-facing information:
494
495 - Which range should be replaced by what.
496 - Some metadata (like the completion type), or meta information to be displayed to
497 the user.
498
499 For debugging purposes we can also store the origin of the completion (``jedi``,
500 ``IPython.python_matches``, ``IPython.magics_matches``...).

501 """ 

502 

503 __slots__ = ['start', 'end', 'text', 'type', 'signature', '_origin'] 

504 

505 def __init__( 

506 self, 

507 start: int, 

508 end: int, 

509 text: str, 

510 *, 

511 type: Optional[str] = None, 

512 _origin="", 

513 signature="", 

514 ) -> None: 

515 warnings.warn( 

516 "``Completion`` is a provisional API (as of IPython 6.0). " 

517 "It may change without warnings. " 

518 "Use in corresponding context manager.", 

519 category=ProvisionalCompleterWarning, 

520 stacklevel=2, 

521 ) 

522 

523 self.start = start 

524 self.end = end 

525 self.text = text 

526 self.type = type 

527 self.signature = signature 

528 self._origin = _origin 

529 

530 def __repr__(self): 

531 return '<Completion start=%s end=%s text=%r type=%r, signature=%r,>' % \ 

532 (self.start, self.end, self.text, self.type or '?', self.signature or '?') 

533 

534 def __eq__(self, other) -> bool: 

535 """ 

536 Equality and hash do not include the type (as some completers may not be
537 able to infer the type), but are used to (partially) de-duplicate
538 completions.
539
540 Completely de-duplicating completions is a bit trickier than just
541 comparing, as it depends on surrounding text, which Completions are not
542 aware of.

543 """ 

544 return self.start == other.start and \ 

545 self.end == other.end and \ 

546 self.text == other.text 

547 

548 def __hash__(self): 

549 return hash((self.start, self.end, self.text)) 

550 

551 

552class SimpleCompletion: 

553 """Completion item to be included in the dictionary returned by new-style Matcher (API v2). 

554 

555 .. warning:: 

556 

557 Provisional 

558 

559 This class is used to describe the currently supported attributes of 

560 simple completion items, and any additional implementation details 

561 should not be relied on. Additional attributes may be included in 

562 future versions, and the meaning of ``text`` disambiguated from the current
563 dual meaning of "text to insert" and "text to be used as a label".

564 """ 

565 

566 __slots__ = ["text", "type"] 

567 

568 def __init__(self, text: str, *, type: Optional[str] = None): 

569 self.text = text 

570 self.type = type 

571 

572 def __repr__(self): 

573 return f"<SimpleCompletion text={self.text!r} type={self.type!r}>" 

574 

575 

576class _MatcherResultBase(TypedDict): 

577 """Definition of dictionary to be returned by new-style Matcher (API v2).""" 

578 

579 #: Suffix of the provided ``CompletionContext.token``, if not given defaults to full token. 

580 matched_fragment: NotRequired[str] 

581 

582 #: Whether to suppress results from all other matchers (True), some 

583 #: matchers (set of identifiers) or none (False); default is False. 

584 suppress: NotRequired[Union[bool, set[str]]] 

585 

586 #: Identifiers of matchers which should NOT be suppressed when this matcher 

587 #: requests to suppress all other matchers; defaults to an empty set. 

588 do_not_suppress: NotRequired[set[str]] 

589 

590 #: Are completions already ordered and should be left as-is? default is False. 

591 ordered: NotRequired[bool] 

592 

593 

594@sphinx_options(show_inherited_members=True, exclude_inherited_from=["dict"]) 

595class SimpleMatcherResult(_MatcherResultBase, TypedDict): 

596 """Result of new-style completion matcher.""" 

597 

598 # note: TypedDict is added again to the inheritance chain 

599 # in order to get __orig_bases__ for documentation 

600 

601 #: List of candidate completions 

602 completions: Sequence[SimpleCompletion] | Iterator[SimpleCompletion] 

603 

604 

605class _JediMatcherResult(_MatcherResultBase): 

606 """Matching result returned by Jedi (will be processed differently)""" 

607 

608 #: list of candidate completions 

609 completions: Iterator[_JediCompletionLike] 

610 

611 

612AnyMatcherCompletion = Union[_JediCompletionLike, SimpleCompletion] 

613AnyCompletion = TypeVar("AnyCompletion", AnyMatcherCompletion, Completion) 

614 

615 

616@dataclass 

617class CompletionContext: 

618 """Completion context provided as an argument to matchers in the Matcher API v2.""" 

619 

620 # rationale: many legacy matchers relied on completer state (`self.text_until_cursor`) 

621 # which was not explicitly visible as an argument of the matcher, making any refactor 

622 # prone to errors; by explicitly passing `cursor_position` we can decouple the matchers 

623 # from the completer, and make substituting them in sub-classes easier. 

624 

625 #: Relevant fragment of code directly preceding the cursor. 

626 #: The extraction of token is implemented via splitter heuristic 

627 #: (following readline behaviour for legacy reasons), which is user configurable 

628 #: (by switching the greedy mode). 

629 token: str 

630 

631 #: The full available content of the editor or buffer 

632 full_text: str 

633 

634 #: Cursor position in the line (the same for ``full_text`` and ``text``). 

635 cursor_position: int 

636 

637 #: Cursor line in ``full_text``. 

638 cursor_line: int 

639 

640 #: The maximum number of completions that will be used downstream. 

641 #: Matchers can use this information to abort early. 

642 #: The built-in Jedi matcher is currently excepted from this limit. 

643 # If not given, return all possible completions. 

644 limit: Optional[int] 

645 

646 @cached_property 

647 def text_until_cursor(self) -> str: 

648 return self.line_with_cursor[: self.cursor_position] 

649 

650 @cached_property 

651 def line_with_cursor(self) -> str: 

652 return self.full_text.split("\n")[self.cursor_line] 

653 

654 

655#: Matcher results for API v2. 

656MatcherResult = Union[SimpleMatcherResult, _JediMatcherResult] 

657 

658 

659class _MatcherAPIv1Base(Protocol): 

660 def __call__(self, text: str) -> list[str]: 

661 """Call signature.""" 

662 ... 

663 

664 #: Used to construct the default matcher identifier 

665 __qualname__: str 

666 

667 

668class _MatcherAPIv1Total(_MatcherAPIv1Base, Protocol): 

669 #: API version 

670 matcher_api_version: Optional[Literal[1]] 

671 

672 def __call__(self, text: str) -> list[str]: 

673 """Call signature.""" 

674 ... 

675 

676 

677#: Protocol describing Matcher API v1. 

678MatcherAPIv1: TypeAlias = Union[_MatcherAPIv1Base, _MatcherAPIv1Total] 

679 

680 

681class MatcherAPIv2(Protocol): 

682 """Protocol describing Matcher API v2.""" 

683 

684 #: API version 

685 matcher_api_version: Literal[2] = 2 

686 

687 def __call__(self, context: CompletionContext) -> MatcherResult: 

688 """Call signature.""" 

689 ... 

690 

691 #: Used to construct the default matcher identifier 

692 __qualname__: str 

693 

694 

695Matcher: TypeAlias = Union[MatcherAPIv1, MatcherAPIv2] 

696 

697 

698def _is_matcher_v1(matcher: Matcher) -> TypeGuard[MatcherAPIv1]: 

699 api_version = _get_matcher_api_version(matcher) 

700 return api_version == 1 

701 

702 

703def _is_matcher_v2(matcher: Matcher) -> TypeGuard[MatcherAPIv2]: 

704 api_version = _get_matcher_api_version(matcher) 

705 return api_version == 2 

706 

707 

708def _is_sizable(value: Any) -> TypeGuard[Sized]: 

709 """Determines whether objects is sizable""" 

710 return hasattr(value, "__len__") 

711 

712 

713def _is_iterator(value: Any) -> TypeGuard[Iterator]: 

714 """Determines whether objects is sizable""" 

715 return hasattr(value, "__next__") 

716 

717 

718def has_any_completions(result: MatcherResult) -> bool: 

719 """Check if any result includes any completions.""" 

720 completions = result["completions"] 

721 if _is_sizable(completions): 

722 return len(completions) != 0 

723 if _is_iterator(completions): 

724 try: 

725 old_iterator = completions 

726 first = next(old_iterator) 

727 result["completions"] = cast( 

728 Iterator[SimpleCompletion], 

729 itertools.chain([first], old_iterator), 

730 ) 

731 return True 

732 except StopIteration: 

733 return False 

734 raise ValueError( 

735 "Completions returned by matcher need to be an Iterator or a Sizable" 

736 ) 

737 

738 

739def completion_matcher( 

740 *, 

741 priority: Optional[float] = None, 

742 identifier: Optional[str] = None, 

743 api_version: int = 1, 

744) -> Callable[[Matcher], Matcher]: 

745 """Adds attributes describing the matcher. 

746 

747 Parameters 

748 ---------- 

749 priority : Optional[float] 

750 The priority of the matcher, determines the order of execution of matchers. 

751 Higher priority means that the matcher will be executed first. Defaults to 0. 

752 identifier : Optional[str] 

753 identifier of the matcher allowing users to modify the behaviour via traitlets, 

754 and also used for debugging (will be passed as ``origin`` with the completions).

755 

756 Defaults to matcher function's ``__qualname__`` (for example, 

757 ``IPCompleter.file_matcher`` for the built-in matcher defined

758 as a ``file_matcher`` method of the ``IPCompleter`` class). 

759 api_version: Optional[int] 

760 version of the Matcher API used by this matcher. 

761 Currently supported values are 1 and 2. 

762 Defaults to 1. 

763 """ 

764 

765 def wrapper(func: Matcher): 

766 func.matcher_priority = priority or 0 # type: ignore 

767 func.matcher_identifier = identifier or func.__qualname__ # type: ignore 

768 func.matcher_api_version = api_version # type: ignore 

769 if TYPE_CHECKING: 

770 if api_version == 1: 

771 func = cast(MatcherAPIv1, func) 

772 elif api_version == 2: 

773 func = cast(MatcherAPIv2, func) 

774 return func 

775 

776 return wrapper 

777 

778 
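# Illustrative use of the decorator above (a hypothetical v1 matcher, not a built-in):
#
#   @completion_matcher(identifier="my.color_matcher", priority=10)
#   def color_matcher(text):
#       return [c for c in ("red", "green", "blue") if c.startswith(text)]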

779def _get_matcher_priority(matcher: Matcher): 

780 return getattr(matcher, "matcher_priority", 0) 

781 

782 

783def _get_matcher_id(matcher: Matcher): 

784 return getattr(matcher, "matcher_identifier", matcher.__qualname__) 

785 

786 

787def _get_matcher_api_version(matcher): 

788 return getattr(matcher, "matcher_api_version", 1) 

789 

790 

791context_matcher = partial(completion_matcher, api_version=2) 

792 

793 

794_IC = Iterable[Completion] 

795 

796 

797def _deduplicate_completions(text: str, completions: _IC)-> _IC: 

798 """ 

799 Deduplicate a set of completions. 

800 

801 .. warning:: 

802 

803 Unstable 

804 

805 This function is unstable, API may change without warning. 

806 

807 Parameters 

808 ---------- 

809 text : str 

810 text that should be completed. 

811 completions : Iterator[Completion] 

812 iterator over the completions to deduplicate 

813 

814 Yields 

815 ------ 

816 `Completions` objects 

817 Completions coming from multiple sources may be different but end up having
818 the same effect when applied to ``text``. If this is the case, this will
819 consider the completions as equal and only emit the first one encountered.
820 Not folded into `completions()` yet for debugging purposes, and to detect when
821 the IPython completer returns things that Jedi does not, but it should be
822 at some point.

823 """ 

824 completions = list(completions) 

825 if not completions: 

826 return 

827 

828 new_start = min(c.start for c in completions) 

829 new_end = max(c.end for c in completions) 

830 

831 seen = set() 

832 for c in completions: 

833 new_text = text[new_start:c.start] + c.text + text[c.end:new_end] 

834 if new_text not in seen: 

835 yield c 

836 seen.add(new_text) 

837 

838 

839def rectify_completions(text: str, completions: _IC, *, _debug: bool = False) -> _IC: 

840 """ 

841 Rectify a set of completions to all have the same ``start`` and ``end`` 

842 

843 .. warning:: 

844 

845 Unstable 

846 

847 This function is unstable, API may change without warning. 

848 It will also raise unless used in the proper context manager.

849 

850 Parameters 

851 ---------- 

852 text : str 

853 text that should be completed. 

854 completions : Iterator[Completion] 

855 iterator over the completions to rectify 

856 _debug : bool 

857 Log failed completion 

858 

859 Notes 

860 ----- 

861 :class:`jedi.api.classes.Completion` objects returned by Jedi may not have the same start and end, though
862 the Jupyter Protocol requires them to. This will readjust
863 the completions to have the same ``start`` and ``end`` by padding both
864 extremities with surrounding text.
865
866 During stabilisation this should support a ``_debug`` option to log which
867 completions are returned by the IPython completer but not found in Jedi, in
868 order to make upstream bug reports.

869 """ 

870 warnings.warn("`rectify_completions` is a provisional API (as of IPython 6.0). " 

871 "It may change without warnings. " 

872 "Use in corresponding context manager.", 

873 category=ProvisionalCompleterWarning, stacklevel=2) 

874 

875 completions = list(completions) 

876 if not completions: 

877 return 

878 starts = (c.start for c in completions) 

879 ends = (c.end for c in completions) 

880 

881 new_start = min(starts) 

882 new_end = max(ends) 

883 

884 seen_jedi = set() 

885 seen_python_matches = set() 

886 for c in completions: 

887 new_text = text[new_start:c.start] + c.text + text[c.end:new_end] 

888 if c._origin == 'jedi': 

889 seen_jedi.add(new_text) 

890 elif c._origin == "IPCompleter.python_matcher": 

891 seen_python_matches.add(new_text) 

892 yield Completion(new_start, new_end, new_text, type=c.type, _origin=c._origin, signature=c.signature) 

893 diff = seen_python_matches.difference(seen_jedi) 

894 if diff and _debug: 

895 print('IPython.python matches have extras:', diff) 

896 
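# Illustrative effect (a sketch; Completion is provisional, so this must run
# inside a `provisionalcompleter()` context):
#   text = 'a.isa'
#   comps = [Completion(2, 5, 'isalpha'), Completion(0, 5, 'a.isatty')]
#   list(rectify_completions(text, comps))
#   -> both completions now span 0..5, with texts 'a.isalpha' and 'a.isatty'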

897 

898if sys.platform == 'win32': 

899 DELIMS = ' \t\n`!@#$^&*()=+[{]}|;\'",<>?' 

900else: 

901 DELIMS = ' \t\n`!@#$^&*()=+[{]}\\|;:\'",<>?' 

902 

903GREEDY_DELIMS = ' =\r\n' 

904 

905 

906class CompletionSplitter: 

907 """An object to split an input line in a manner similar to readline. 

908 

909 By having our own implementation, we can expose readline-like completion in 

910 a uniform manner to all frontends. This object only needs to be given the 

911 line of text to be split and the cursor position on said line, and it 

912 returns the 'word' to be completed on at the cursor after splitting the 

913 entire line. 

914 

915 What characters are used as splitting delimiters can be controlled by 

916 setting the ``delims`` attribute (this is a property that internally 

917 automatically builds the necessary regular expression)""" 

918 

919 # Private interface 

920 

921 # A string of delimiter characters. The default value makes sense for 

922 # IPython's most typical usage patterns. 

923 _delims = DELIMS 

924 

925 # The expression (a normal string) to be compiled into a regular expression 

926 # for actual splitting. We store it as an attribute mostly for ease of 

927 # debugging, since this type of code can be so tricky to debug. 

928 _delim_expr = None 

929 

930 # The regular expression that does the actual splitting 

931 _delim_re = None 

932 

933 def __init__(self, delims=None): 

934 delims = CompletionSplitter._delims if delims is None else delims 

935 self.delims = delims 

936 

937 @property 

938 def delims(self): 

939 """Return the string of delimiter characters.""" 

940 return self._delims 

941 

942 @delims.setter 

943 def delims(self, delims): 

944 """Set the delimiters for line splitting.""" 

945 expr = '[' + ''.join('\\'+ c for c in delims) + ']' 

946 self._delim_re = re.compile(expr) 

947 self._delims = delims 

948 self._delim_expr = expr 

949 

950 def split_line(self, line, cursor_pos=None): 

951 """Split a line of text with a cursor at the given position. 

952 """ 

953 cut_line = line if cursor_pos is None else line[:cursor_pos] 

954 return self._delim_re.split(cut_line)[-1] 

955 
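# Illustrative splitting with the default delimiters (a sketch):
#   CompletionSplitter().split_line('run foo(bar.ba')  -> 'bar.ba'
# (spaces and parentheses delimit words, while '.' is kept for attribute access)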

956 

957class Completer(Configurable): 

958 

959 greedy = Bool( 

960 False, 

961 help="""Activate greedy completion. 

962 

963 .. deprecated:: 8.8 

964 Use :std:configtrait:`Completer.evaluation` and :std:configtrait:`Completer.auto_close_dict_keys` instead. 

965 

966 When enabled in IPython 8.8 or newer, changes configuration as follows: 

967 

968 - ``Completer.evaluation = 'unsafe'`` 

969 - ``Completer.auto_close_dict_keys = True`` 

970 """, 

971 ).tag(config=True) 

972 

973 evaluation = Enum( 

974 ("forbidden", "minimal", "limited", "unsafe", "dangerous"), 

975 default_value="limited", 

976 help="""Policy for code evaluation under completion. 

977 

978 Successive options allow to enable more eager evaluation for better 

979 completion suggestions, including for nested dictionaries, nested lists, 

980 or even results of function calls. 

981 Setting ``unsafe`` or higher can lead to evaluation of arbitrary user 

982 code on :kbd:`Tab` with potentially unwanted or dangerous side effects. 

983 

984 Allowed values are: 

985 

986 - ``forbidden``: no evaluation of code is permitted, 

987 - ``minimal``: evaluation of literals and access to built-in namespace; 

988 no item/attribute evaluation, no access to locals/globals, 

989 no evaluation of any operations or comparisons. 

990 - ``limited``: access to all namespaces, evaluation of hard-coded methods 

991 (for example: :py:meth:`dict.keys`, :py:meth:`object.__getattr__`, 

992 :py:meth:`object.__getitem__`) on allow-listed objects (for example: 

993 :py:class:`dict`, :py:class:`list`, :py:class:`tuple`, ``pandas.Series``), 

994 - ``unsafe``: evaluation of all methods and function calls but not of 

995 syntax with side-effects like `del x`, 

996 - ``dangerous``: completely arbitrary evaluation; does not support auto-import. 

997 

998 To override specific elements of the policy, you can use ``policy_overrides`` trait. 

999 """, 

1000 ).tag(config=True) 

1001 

1002 use_jedi = Bool(default_value=JEDI_INSTALLED, 

1003 help="Experimental: Use Jedi to generate autocompletions. " 

1004 "Default to True if jedi is installed.").tag(config=True) 

1005 

1006 jedi_compute_type_timeout = Int(default_value=400, 

1007 help="""Experimental: restrict time (in milliseconds) during which Jedi can compute types. 

1008 Set to 0 to stop computing types. A non-zero value lower than 100ms may hurt
1009 performance by preventing jedi from building its cache.

1010 """).tag(config=True) 

1011 

1012 debug = Bool(default_value=False, 

1013 help='Enable debug for the Completer. Mostly print extra ' 

1014 'information for experimental jedi integration.')\ 

1015 .tag(config=True) 

1016 

1017 backslash_combining_completions = Bool(True, 

1018 help="Enable unicode completions, e.g. \\alpha<tab> . " 

1019 "Includes completion of latex commands, unicode names, and expanding " 

1020 "unicode characters back to latex commands.").tag(config=True) 

1021 

1022 auto_close_dict_keys = Bool( 

1023 False, 

1024 help=""" 

1025 Enable auto-closing dictionary keys. 

1026 

1027 When enabled string keys will be suffixed with a final quote 

1028 (matching the opening quote), tuple keys will also receive a 

1029 separating comma if needed, and keys which are final will 

1030 receive a closing bracket (``]``). 

1031 """, 

1032 ).tag(config=True) 

1033 

1034 policy_overrides = DictTrait( 

1035 default_value={}, 

1036 key_trait=Unicode(), 

1037 help="""Overrides for policy evaluation. 

1038 

1039 For example, to enable auto-import on completion specify: 

1040 

1041 .. code-block:: 

1042 

1043 ipython --Completer.policy_overrides='{"allow_auto_import": True}' --Completer.use_jedi=False 

1044 

1045 """, 

1046 ).tag(config=True) 

1047 

1048 @observe("evaluation") 

1049 def _evaluation_changed(self, _change): 

1050 _validate_policy_overrides( 

1051 policy_name=self.evaluation, policy_overrides=self.policy_overrides 

1052 ) 

1053 

1054 @observe("policy_overrides") 

1055 def _policy_overrides_changed(self, _change): 

1056 _validate_policy_overrides( 

1057 policy_name=self.evaluation, policy_overrides=self.policy_overrides 

1058 ) 

1059 

1060 auto_import_method = DottedObjectName( 

1061 default_value="importlib.import_module", 

1062 allow_none=True, 

1063 help="""\ 

1064 Provisional: 

1065 This is a provisional API in IPython 9.3, it may change without warnings. 

1066 

1067 A fully qualified path to an auto-import method for use by completer. 

1068 The function should take a single string, return a `ModuleType`, and
1069 may raise an `ImportError` exception if the module is not found.

1070 

1071 The default auto-import implementation does not populate the user namespace with the imported module. 

1072 """, 

1073 ).tag(config=True) 

1074 

1075 def __init__(self, namespace=None, global_namespace=None, **kwargs): 

1076 """Create a new completer for the command line. 

1077 

1078 Completer(namespace=ns, global_namespace=ns2) -> completer instance. 

1079 

1080 If unspecified, the default namespace where completions are performed 

1081 is __main__ (technically, __main__.__dict__). Namespaces should be 

1082 given as dictionaries. 

1083 

1084 An optional second namespace can be given. This allows the completer 

1085 to handle cases where both the local and global scopes need to be 

1086 distinguished. 

1087 """ 

1088 

1089 # Don't bind to namespace quite yet, but flag whether the user wants a 

1090 # specific namespace or to use __main__.__dict__. This will allow us 

1091 # to bind to __main__.__dict__ at completion time, not now. 

1092 if namespace is None: 

1093 self.use_main_ns = True 

1094 else: 

1095 self.use_main_ns = False 

1096 self.namespace = namespace 

1097 

1098 # The global namespace, if given, can be bound directly 

1099 if global_namespace is None: 

1100 self.global_namespace = {} 

1101 else: 

1102 self.global_namespace = global_namespace 

1103 

1104 self.custom_matchers = [] 

1105 

1106 super(Completer, self).__init__(**kwargs) 

1107 

1108 def complete(self, text, state): 

1109 """Return the next possible completion for 'text'. 

1110 

1111 This is called successively with state == 0, 1, 2, ... until it 

1112 returns None. The completion should begin with 'text'. 

1113 

1114 """ 

1115 if self.use_main_ns: 

1116 self.namespace = __main__.__dict__ 

1117 

1118 if state == 0: 

1119 if "." in text: 

1120 self.matches = self.attr_matches(text) 

1121 else: 

1122 self.matches = self.global_matches(text) 

1123 try: 

1124 return self.matches[state] 

1125 except IndexError: 

1126 return None 

1127 
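    # Illustrative readline-style protocol (a sketch; assumes `import math` was
    # executed in __main__ so that the default namespace contains `math`):
    #   c = Completer()
    #   state = 0
    #   while (m := c.complete('math.si', state)) is not None:
    #       print(m)   # e.g. 'math.sin', 'math.sinh', ...
    #       state += 1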

1128 def global_matches(self, text: str, context: Optional[CompletionContext] = None): 

1129 """Compute matches when text is a simple name. 

1130 

1131 Return a list of all keywords, built-in functions and names currently 

1132 defined in self.namespace or self.global_namespace that match. 

1133 

1134 """ 

1135 matches = [] 

1136 match_append = matches.append 

1137 n = len(text) 

1138 

1139 search_lists = [ 

1140 keyword.kwlist, 

1141 builtin_mod.__dict__.keys(), 

1142 list(self.namespace.keys()), 

1143 list(self.global_namespace.keys()), 

1144 ] 

1145 if context and context.full_text.count("\n") > 1: 

1146 # try to evaluate on full buffer 

1147 previous_lines = "\n".join( 

1148 context.full_text.split("\n")[: context.cursor_line] 

1149 ) 

1150 if previous_lines: 

1151 all_code_lines_before_cursor = ( 

1152 self._extract_code(previous_lines) + "\n" + text 

1153 ) 

1154 context = EvaluationContext( 

1155 globals=self.global_namespace, 

1156 locals=self.namespace, 

1157 evaluation=self.evaluation, 

1158 auto_import=self._auto_import, 

1159 policy_overrides=self.policy_overrides, 

1160 ) 

1161 try: 

1162 obj = guarded_eval( 

1163 all_code_lines_before_cursor, 

1164 context, 

1165 ) 

1166 except Exception as e: 

1167 if self.debug: 

1168 warnings.warn(f"Evaluation exception {e}") 

1169 

1170 search_lists.append(list(context.transient_locals.keys())) 

1171 

1172 for lst in search_lists: 

1173 for word in lst: 

1174 if word[:n] == text and word != "__builtins__": 

1175 match_append(word) 

1176 

1177 snake_case_re = re.compile(r"[^_]+(_[^_]+)+?\Z") 

1178 for lst in [list(self.namespace.keys()), list(self.global_namespace.keys())]: 

1179 shortened = { 

1180 "_".join([sub[0] for sub in word.split("_")]): word 

1181 for word in lst 

1182 if snake_case_re.match(word) 

1183 } 

1184 for word in shortened.keys(): 

1185 if word[:n] == text and word != "__builtins__": 

1186 match_append(shortened[word]) 

1187 

1188 return matches 

1189 
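    # Illustrative abbreviation matching performed above (a sketch): with
    # `my_long_name = 1` defined in the namespace, typing `m_l_n` matches the
    # snake_case initials and offers `my_long_name` as a completion.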

1190 def attr_matches(self, text): 

1191 """Compute matches when text contains a dot. 

1192 

1193 Assuming the text is of the form NAME.NAME....[NAME], and is 

1194 evaluatable in self.namespace or self.global_namespace, it will be 

1195 evaluated and its attributes (as revealed by dir()) are used as 

1196 possible completions. (For class instances, class members are 

1197 also considered.) 

1198 

1199 WARNING: this can still invoke arbitrary C code, if an object 

1200 with a __getattr__ hook is evaluated. 

1201 

1202 """ 

1203 return self._attr_matches(text)[0] 

1204 

1205 # we do simple attribute matching with normal identifiers.

1206 _ATTR_MATCH_RE = re.compile(r"(.+)\.(\w*)$") 

1207 

1208 def _strip_code_before_operator(self, code: str) -> str: 

1209 o_parens = {"(", "[", "{"} 

1210 c_parens = {")", "]", "}"} 

1211 

1212 # Dry-run tokenize to catch errors 

1213 try: 

1214 _ = list(tokenize.generate_tokens(iter(code.splitlines()).__next__)) 

1215 except tokenize.TokenError: 

1216 # Try trimming the expression and retrying 

1217 trimmed_code = self._trim_expr(code) 

1218 try: 

1219 _ = list( 

1220 tokenize.generate_tokens(iter(trimmed_code.splitlines()).__next__) 

1221 ) 

1222 code = trimmed_code 

1223 except tokenize.TokenError: 

1224 return code 

1225 

1226 tokens = _parse_tokens(code) 

1227 encountered_operator = False 

1228 after_operator = [] 

1229 nesting_level = 0 

1230 

1231 for t in tokens: 

1232 if t.type == tokenize.OP: 

1233 if t.string in o_parens: 

1234 nesting_level += 1 

1235 elif t.string in c_parens: 

1236 nesting_level -= 1 

1237 elif t.string != "." and nesting_level == 0: 

1238 encountered_operator = True 

1239 after_operator = [] 

1240 continue 

1241 

1242 if encountered_operator: 

1243 after_operator.append(t.string) 

1244 

1245 if encountered_operator: 

1246 return "".join(after_operator) 

1247 else: 

1248 return code 

1249 
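    # Illustrative effect of the helper above (a sketch): for "a + obj.attr" only
    # the part after the top-level "+" is kept (roughly "obj.attr", modulo trailing
    # whitespace tokens), so attribute completion applies to `obj` rather than to
    # the whole binary expression.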

1250 def _extract_code(self, line: str): 

1251 """No-op in Completer, but can be used in subclasses to customise behaviour""" 

1252 return line 

1253 

1254 def _attr_matches( 

1255 self, 

1256 text: str, 

1257 include_prefix: bool = True, 

1258 context: Optional[CompletionContext] = None, 

1259 ) -> tuple[Sequence[str], str]: 

1260 m2 = self._ATTR_MATCH_RE.match(text) 

1261 if not m2: 

1262 return [], "" 

1263 expr, attr = m2.group(1, 2) 

1264 try: 

1265 expr = self._strip_code_before_operator(expr) 

1266 except tokenize.TokenError: 

1267 pass 

1268 

1269 obj = self._evaluate_expr(expr) 

1270 if obj is not_found: 

1271 if context: 

1272 # try to evaluate on full buffer 

1273 previous_lines = "\n".join( 

1274 context.full_text.split("\n")[: context.cursor_line] 

1275 ) 

1276 if previous_lines: 

1277 all_code_lines_before_cursor = ( 

1278 self._extract_code(previous_lines) + "\n" + expr 

1279 ) 

1280 obj = self._evaluate_expr(all_code_lines_before_cursor) 

1281 

1282 if obj is not_found: 

1283 return [], "" 

1284 

1285 if self.limit_to__all__ and hasattr(obj, '__all__'): 

1286 words = get__all__entries(obj) 

1287 else: 

1288 words = dir2(obj) 

1289 

1290 try: 

1291 words = generics.complete_object(obj, words) 

1292 except TryNext: 

1293 pass 

1294 except AssertionError: 

1295 raise 

1296 except Exception: 

1297 # Silence errors from completion function 

1298 pass 

1299 # Build match list to return 

1300 n = len(attr) 

1301 

1302 # Note: ideally we would just return words here and the prefix 

1303 # reconciliator would know that we intend to append to rather than 

1304 # replace the input text; this requires refactoring to return range 

1305 # which ought to be replaced (as does jedi). 

1306 if include_prefix: 

1307 tokens = _parse_tokens(expr) 

1308 rev_tokens = reversed(tokens) 

1309 skip_over = {tokenize.ENDMARKER, tokenize.NEWLINE} 

1310 name_turn = True 

1311 

1312 parts = [] 

1313 for token in rev_tokens: 

1314 if token.type in skip_over: 

1315 continue 

1316 if token.type == tokenize.NAME and name_turn: 

1317 parts.append(token.string) 

1318 name_turn = False 

1319 elif ( 

1320 token.type == tokenize.OP and token.string == "." and not name_turn 

1321 ): 

1322 parts.append(token.string) 

1323 name_turn = True 

1324 else: 

1325 # short-circuit if not empty nor name token 

1326 break 

1327 

1328 prefix_after_space = "".join(reversed(parts)) 

1329 else: 

1330 prefix_after_space = "" 

1331 

1332 return ( 

1333 ["%s.%s" % (prefix_after_space, w) for w in words if w[:n] == attr], 

1334 "." + attr, 

1335 ) 

1336 

1337 def _trim_expr(self, code: str) -> str: 

1338 """ 

1339 Trim the code until it is a valid expression and not a tuple; 

1340 

1341 return the trimmed expression for guarded_eval. 

1342 """ 

1343 while code: 

1344 code = code[1:] 

1345 try: 

1346 res = ast.parse(code) 

1347 except SyntaxError: 

1348 continue 

1349 

1350 assert res is not None 

1351 if len(res.body) != 1: 

1352 continue 

1353 if not isinstance(res.body[0], ast.Expr): 

1354 continue 

1355 expr = res.body[0].value 

1356 if isinstance(expr, ast.Tuple) and not code[-1] == ")": 

1357 # we skip implicit tuple, like when trimming `fun(a,b`<completion> 

1358 # as `a,b` would be a tuple, and we actually expect to get only `b` 

1359 continue 

1360 return code 

1361 return "" 

1362 
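    # Illustrative trimming (a sketch; matches the implicit-tuple note above):
    #   self._trim_expr('fun(a,b')  -> 'b'
    # i.e. the invalid prefix `fun(a,` is dropped and only the last valid
    # expression is kept for guarded evaluation.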

1363 def _evaluate_expr(self, expr): 

1364 obj = not_found 

1365 done = False 

1366 while not done and expr: 

1367 try: 

1368 obj = guarded_eval( 

1369 expr, 

1370 EvaluationContext( 

1371 globals=self.global_namespace, 

1372 locals=self.namespace, 

1373 evaluation=self.evaluation, 

1374 auto_import=self._auto_import, 

1375 policy_overrides=self.policy_overrides, 

1376 ), 

1377 ) 

1378 done = True 

1379 except (SyntaxError, TypeError) as e: 

1380 if self.debug: 

1381 warnings.warn(f"Trimming because of {e}") 

1382 # TypeError can show up with something like `+ d` 

1383 # where `d` is a dictionary. 

1384 

1385 # trim the expression to remove any invalid prefix 

1386 # e.g. user starts `(d[`, so we get `expr = '(d'`, 

1387 # where parenthesis is not closed. 

1388 # TODO: make this faster by reusing parts of the computation? 

1389 expr = self._trim_expr(expr) 

1390 except Exception as e: 

1391 if self.debug: 

1392 warnings.warn(f"Evaluation exception {e}") 

1393 done = True 

1394 if self.debug: 

1395 warnings.warn(f"Resolved to {obj}") 

1396 return obj 

1397 

1398 @property 

1399 def _auto_import(self): 

1400 if self.auto_import_method is None: 

1401 return None 

1402 if not hasattr(self, "_auto_import_func"): 

1403 self._auto_import_func = import_item(self.auto_import_method) 

1404 return self._auto_import_func 

1405 

1406 

1407def get__all__entries(obj): 

1408 """returns the strings in the __all__ attribute""" 

1409 try: 

1410 words = getattr(obj, '__all__') 

1411 except Exception: 

1412 return [] 

1413 

1414 return [w for w in words if isinstance(w, str)] 

1415 

1416 

1417class _DictKeyState(enum.Flag): 

1418 """Represent state of the key match in context of other possible matches. 

1419 

1420 - given `d1 = {'a': 1}` completion on `d1['<tab>` will yield `{'a': END_OF_ITEM}` as there is no tuple. 

1421 - given `d2 = {('a', 'b'): 1}`: `d2['a', '<tab>` will yield `{'b': END_OF_TUPLE}` as there are no tuple members to add beyond `'b'`.

1422 - given `d3 = {('a', 'b'): 1}`: `d3['<tab>` will yield `{'a': IN_TUPLE}` as `'a'` can be added. 

1423 - given `d4 = {'a': 1, ('a', 'b'): 2}`: `d4['<tab>` will yield `{'a': END_OF_ITEM & END_OF_TUPLE}` 

1424 """ 

1425 

1426 BASELINE = 0 

1427 END_OF_ITEM = enum.auto() 

1428 END_OF_TUPLE = enum.auto() 

1429 IN_TUPLE = enum.auto() 

1430 

1431 

1432def _parse_tokens(c): 

1433 """Parse tokens even if there is an error.""" 

1434 tokens = [] 

1435 token_generator = tokenize.generate_tokens(iter(c.splitlines()).__next__) 

1436 while True: 

1437 try: 

1438 tokens.append(next(token_generator)) 

1439 except tokenize.TokenError: 

1440 return tokens 

1441 except StopIteration: 

1442 return tokens 

1443 

1444 

1445def _match_number_in_dict_key_prefix(prefix: str) -> Union[str, None]: 

1446 """Match any valid Python numeric literal in a prefix of dictionary keys. 

1447 

1448 References: 

1449 - https://docs.python.org/3/reference/lexical_analysis.html#numeric-literals 

1450 - https://docs.python.org/3/library/tokenize.html 

1451 """ 

1452 if prefix[-1].isspace(): 

1453 # if user typed a space we do not have anything to complete 

1454 # even if there was a valid number token before 

1455 return None 

1456 tokens = _parse_tokens(prefix) 

1457 rev_tokens = reversed(tokens) 

1458 skip_over = {tokenize.ENDMARKER, tokenize.NEWLINE} 

1459 number = None 

1460 for token in rev_tokens: 

1461 if token.type in skip_over: 

1462 continue 

1463 if number is None: 

1464 if token.type == tokenize.NUMBER: 

1465 number = token.string 

1466 continue 

1467 else: 

1468 # we did not match a number 

1469 return None 

1470 if token.type == tokenize.OP: 

1471 if token.string == ",": 

1472 break 

1473 if token.string in {"+", "-"}: 

1474 number = token.string + number 

1475 else: 

1476 return None 

1477 return number 

1478 
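# Illustrative matches (a sketch; None means "no numeric literal to complete"):
#   _match_number_in_dict_key_prefix('-12')  -> '-12'
#   _match_number_in_dict_key_prefix('abc')  -> None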

1479 

1480_INT_FORMATS = { 

1481 "0b": bin, 

1482 "0o": oct, 

1483 "0x": hex, 

1484} 

1485 

1486 

1487def match_dict_keys( 

1488 keys: list[Union[str, bytes, tuple[Union[str, bytes], ...]]], 

1489 prefix: str, 

1490 delims: str, 

1491 extra_prefix: Optional[tuple[Union[str, bytes], ...]] = None, 

1492) -> tuple[str, int, dict[str, _DictKeyState]]: 

1493 """Used by dict_key_matches, matching the prefix to a list of keys 

1494 

1495 Parameters 

1496 ---------- 

1497 keys 

1498 list of keys in dictionary currently being completed. 

1499 prefix 

1500 Part of the text already typed by the user. E.g. `mydict[b'fo` 

1501 delims 

1502 String of delimiters to consider when finding the current key. 

1503 extra_prefix : optional 

1504 Part of the text already typed in multi-key index cases. E.g. for 

1505 `mydict['foo', "bar", 'b`, this would be `('foo', 'bar')`. 

1506 

1507 Returns 

1508 ------- 

1509 A tuple of three elements: ``quote``, ``token_start``, ``matched``, with
1510 ``quote`` being the quote that needs to be used to close the current string,
1511 ``token_start`` the position where the replacement should start occurring, and
1512 ``matched`` a dictionary whose keys are the replacement/completion strings and
1513 whose values indicate the state of each key (see `_DictKeyState`).

1514 """ 

1515 prefix_tuple = extra_prefix if extra_prefix else () 

1516 

1517 prefix_tuple_size = sum( 

1518 [ 

1519 # for pandas, do not count slices as taking space 

1520 not isinstance(k, slice) 

1521 for k in prefix_tuple 

1522 ] 

1523 ) 

1524 text_serializable_types = (str, bytes, int, float, slice) 

1525 

1526 def filter_prefix_tuple(key): 

1527 # Reject too short keys 

1528 if len(key) <= prefix_tuple_size: 

1529 return False 

1530 # Reject keys which cannot be serialised to text 

1531 for k in key: 

1532 if not isinstance(k, text_serializable_types): 

1533 return False 

1534 # Reject keys that do not match the prefix 

1535 for k, pt in zip(key, prefix_tuple): 

1536 if k != pt and not isinstance(pt, slice): 

1537 return False 

1538 # All checks passed! 

1539 return True 

1540 

1541 filtered_key_is_final: dict[ 

1542 Union[str, bytes, int, float], _DictKeyState 

1543 ] = defaultdict(lambda: _DictKeyState.BASELINE) 

1544 

1545 for k in keys: 

1546 # If at least one of the matches is not final, mark as undetermined. 

1547 # This can happen with `d = {111: 'b', (111, 222): 'a'}` where 

1548 # `111` appears final on first match but is not final on the second. 

1549 

1550 if isinstance(k, tuple): 

1551 if filter_prefix_tuple(k): 

1552 key_fragment = k[prefix_tuple_size] 

1553 filtered_key_is_final[key_fragment] |= ( 

1554 _DictKeyState.END_OF_TUPLE 

1555 if len(k) == prefix_tuple_size + 1 

1556 else _DictKeyState.IN_TUPLE 

1557 ) 

1558 elif prefix_tuple_size > 0: 

1559 # we are completing a tuple but this key is not a tuple, 

1560 # so we should ignore it 

1561 pass 

1562 else: 

1563 if isinstance(k, text_serializable_types): 

1564 filtered_key_is_final[k] |= _DictKeyState.END_OF_ITEM 

1565 

1566 filtered_keys = filtered_key_is_final.keys() 

1567 

1568 if not prefix: 

1569 return "", 0, {repr(k): v for k, v in filtered_key_is_final.items()} 

1570 

1571 quote_match = re.search("(?:\"|')", prefix) 

1572 is_user_prefix_numeric = False 

1573 

1574 if quote_match: 

1575 quote = quote_match.group() 

1576 valid_prefix = prefix + quote 

1577 try: 

1578 prefix_str = literal_eval(valid_prefix) 

1579 except Exception: 

1580 return "", 0, {} 

1581 else: 

1582 # If it does not look like a string, let's assume 

1583 # we are dealing with a number or variable. 

1584 number_match = _match_number_in_dict_key_prefix(prefix) 

1585 

1586 # We do not want the key matcher to suggest variable names so we yield: 

1587 if number_match is None: 

1588 # The alternative would be to assume that the user forgot the quote

1589 # and if the substring matches, suggest adding it at the start. 

1590 return "", 0, {} 

1591 

1592 prefix_str = number_match 

1593 is_user_prefix_numeric = True 

1594 quote = "" 

1595 

1596 pattern = '[^' + ''.join('\\' + c for c in delims) + ']*$' 

1597 token_match = re.search(pattern, prefix, re.UNICODE) 

1598 assert token_match is not None # silence mypy 

1599 token_start = token_match.start() 

1600 token_prefix = token_match.group() 

1601 

1602 matched: dict[str, _DictKeyState] = {} 

1603 

1604 str_key: Union[str, bytes] 

1605 

1606 for key in filtered_keys: 

1607 if isinstance(key, (int, float)): 

1608 # User typed a number but this key is not a number. 

1609 if not is_user_prefix_numeric: 

1610 continue 

1611 str_key = str(key) 

1612 if isinstance(key, int): 

1613 int_base = prefix_str[:2].lower() 

1614 # if user typed integer using binary/oct/hex notation: 

1615 if int_base in _INT_FORMATS: 

1616 int_format = _INT_FORMATS[int_base] 

1617 str_key = int_format(key) 

1618 else: 

1619 # User typed a string but this key is a number. 

1620 if is_user_prefix_numeric: 

1621 continue 

1622 str_key = key 

1623 try: 

1624 if not str_key.startswith(prefix_str): 

1625 continue 

1626 except (AttributeError, TypeError, UnicodeError): 

1627 # Python 3+ TypeError on b'a'.startswith('a') or vice-versa 

1628 continue 

1629 

1630 # reformat remainder of key to begin with prefix 

1631 rem = str_key[len(prefix_str) :] 

1632 # force repr wrapped in ' 

1633 rem_repr = repr(rem + '"') if isinstance(rem, str) else repr(rem + b'"') 

1634 rem_repr = rem_repr[1 + rem_repr.index("'"):-2] 

1635 if quote == '"': 

1636 # The entered prefix is quoted with ", 

1637 # but the match is quoted with '. 

1638 # A contained " hence needs escaping for comparison: 

1639 rem_repr = rem_repr.replace('"', '\\"') 

1640 

1641 # then reinsert prefix from start of token 

1642 match = "%s%s" % (token_prefix, rem_repr) 

1643 

1644 matched[match] = filtered_key_is_final[key] 

1645 return quote, token_start, matched 
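# Illustrative sketch (not part of the original source): how the helper above is
# typically driven from dict_key_matches further below. DELIMS is the module-level
# delimiter string; the exact token splitting depends on it including the quote
# characters, which is an assumption here.
#
#   keys = ["foo", "foobar", (111, 222)]
#   quote, token_start, matched = match_dict_keys(keys, "'f", DELIMS)
#   # quote == "'"; ``matched`` maps completion text such as "foo" and "foobar" to a
#   # _DictKeyState flag saying whether that key ends the item or sits inside a tuple.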

1646 

1647 

1648def cursor_to_position(text:str, line:int, column:int)->int: 

1649 """ 

1650 Convert the (line,column) position of the cursor in text to an offset in a 

1651 string. 

1652 

1653 Parameters 

1654 ---------- 

1655 text : str 

1656 The text in which to calculate the cursor offset 

1657 line : int 

1658 Line of the cursor; 0-indexed 

1659 column : int 

1660 Column of the cursor 0-indexed 

1661 

1662 Returns 

1663 ------- 

1664 Position of the cursor in ``text``, 0-indexed. 

1665 

1666 See Also 

1667 -------- 

1668 position_to_cursor : reciprocal of this function 

1669 

1670 """ 

1671 lines = text.split('\n') 

1672 assert line <= len(lines), '{} <= {}'.format(str(line), str(len(lines))) 

1673 

1674 return sum(len(line) + 1 for line in lines[:line]) + column 
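# Illustrative sketch (not part of the original source):
#   >>> cursor_to_position("ab\ncd", 1, 1)
#   4
# i.e. line 1, column 1 of "ab\ncd" is the offset of the final "d".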

1675 

1676 

1677def position_to_cursor(text: str, offset: int) -> tuple[int, int]: 

1678 """ 

1679 Convert the position of the cursor in text (0 indexed) to a line 

1680 number(0-indexed) and a column number (0-indexed) pair 

1681 

1682 Position should be a valid position in ``text``. 

1683 

1684 Parameters 

1685 ---------- 

1686 text : str 

1687 The text in which to calculate the cursor offset 

1688 offset : int 

1689 Position of the cursor in ``text``, 0-indexed. 

1690 

1691 Returns 

1692 ------- 

1693 (line, column) : (int, int) 

1694 Line of the cursor; 0-indexed, column of the cursor 0-indexed 

1695 

1696 See Also 

1697 -------- 

1698 cursor_to_position : reciprocal of this function 

1699 

1700 """ 

1701 

1702 assert 0 <= offset <= len(text) , "0 <= %s <= %s" % (offset , len(text)) 

1703 

1704 before = text[:offset] 

1705 blines = before.split('\n') # note: splitlines would trim a trailing \n, hence split('\n') 

1706 line = before.count('\n') 

1707 col = len(blines[-1]) 

1708 return line, col 
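# Illustrative sketch (not part of the original source): position_to_cursor is the
# inverse of cursor_to_position above, e.g.
#   >>> position_to_cursor("ab\ncd", 4)
#   (1, 1)
#   >>> cursor_to_position("ab\ncd", *position_to_cursor("ab\ncd", 4))
#   4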

1709 

1710 

1711def _safe_isinstance(obj, module, class_name, *attrs): 

1712 """Checks if obj is an instance of module.class_name if loaded 

1713 """ 

1714 if module in sys.modules: 

1715 m = sys.modules[module] 

1716 for attr in [class_name, *attrs]: 

1717 m = getattr(m, attr) 

1718 return isinstance(obj, m) 
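# Illustrative sketch (not part of the original source): this lets the completer
# type-check against optional heavy dependencies without importing them, e.g.
#   _safe_isinstance(obj, "pandas", "DataFrame")
# evaluates isinstance() only when pandas is already in sys.modules and otherwise
# falls through, implicitly returning None (which is falsy for the callers below).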

1719 

1720 

1721@context_matcher() 

1722def back_unicode_name_matcher(context: CompletionContext): 

1723 """Match Unicode characters back to Unicode name 

1724 

1725 Same as :any:`back_unicode_name_matches`, but adopted to new Matcher API. 

1726 """ 

1727 fragment, matches = back_unicode_name_matches(context.text_until_cursor) 

1728 return _convert_matcher_v1_result_to_v2( 

1729 matches, type="unicode", fragment=fragment, suppress_if_matches=True 

1730 ) 

1731 

1732 

1733def back_unicode_name_matches(text: str) -> tuple[str, Sequence[str]]: 

1734 """Match Unicode characters back to Unicode name 

1735 

1736 This does ``☃`` -> ``\\snowman`` 

1737 

1738 Note that snowman is not a valid Python 3 combining character but will be expanded. 

1739 It will not, however, be recombined back into the snowman character by the completion machinery. 

1740 

1741 Nor will this back-complete standard escape sequences like \\n, \\b ... 

1742 

1743 .. deprecated:: 8.6 

1744 You can use :meth:`back_unicode_name_matcher` instead. 

1745 

1746 Returns 

1747 ======= 

1748 

1749 Return a tuple with two elements: 

1750 

1751 - The Unicode character that was matched (preceded with a backslash), or 

1752 empty string, 

1753 - a sequence (of length 1) with the name of the matched Unicode character, preceded by 

1754 a backslash, or empty if no match. 

1755 """ 

1756 if len(text)<2: 

1757 return '', () 

1758 maybe_slash = text[-2] 

1759 if maybe_slash != '\\': 

1760 return '', () 

1761 

1762 char = text[-1] 

1763 # no expand on quote for completion in strings. 

1764 # nor backcomplete standard ascii keys 

1765 if char in string.ascii_letters or char in ('"',"'"): 

1766 return '', () 

1767 try : 

1768 unic = unicodedata.name(char) 

1769 return '\\'+char,('\\'+unic,) 

1770 except KeyError: 

1771 pass 

1772 return '', () 
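# Illustrative sketch (not part of the original source):
#   >>> back_unicode_name_matches("x \\☃")
#   ('\\☃', ('\\SNOWMAN',))
# whereas ASCII letters or quote characters after the backslash yield ('', ()).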

1773 

1774 

1775@context_matcher() 

1776def back_latex_name_matcher(context: CompletionContext) -> SimpleMatcherResult: 

1777 """Match latex characters back to unicode name 

1778 

1779 This does ``\\ℵ`` -> ``\\aleph`` 

1780 """ 

1781 

1782 text = context.text_until_cursor 

1783 no_match = { 

1784 "completions": [], 

1785 "suppress": False, 

1786 } 

1787 

1788 if len(text)<2: 

1789 return no_match 

1790 maybe_slash = text[-2] 

1791 if maybe_slash != '\\': 

1792 return no_match 

1793 

1794 char = text[-1] 

1795 # no expand on quote for completion in strings. 

1796 # nor backcomplete standard ascii keys 

1797 if char in string.ascii_letters or char in ('"',"'"): 

1798 return no_match 

1799 try : 

1800 latex = reverse_latex_symbol[char] 

1801 # '\\' replaces the \ as well 

1802 return { 

1803 "completions": [SimpleCompletion(text=latex, type="latex")], 

1804 "suppress": True, 

1805 "matched_fragment": "\\" + char, 

1806 } 

1807 except KeyError: 

1808 pass 

1809 

1810 return no_match 
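# Illustrative sketch (not part of the original source): for text ending in "\ℵ"
# this matcher returns a single SimpleCompletion(text="\\aleph", type="latex") with
# matched_fragment "\ℵ" and suppress=True, assuming "ℵ" has an entry in
# reverse_latex_symbol; otherwise the empty no_match result is returned.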

1811 

1812def _formatparamchildren(parameter) -> str: 

1813 """ 

1814 Get parameter name and value from Jedi Private API 

1815 

1816 Jedi does not expose a simple way to get `param=value` from its API. 

1817 

1818 Parameters 

1819 ---------- 

1820 parameter 

1821 Jedi's function `Param` 

1822 

1823 Returns 

1824 ------- 

1825 A string like 'a', 'b=1', '*args', '**kwargs' 

1826 

1827 """ 

1828 description = parameter.description 

1829 if not description.startswith('param '): 

1830 raise ValueError('Jedi function parameter description has changed format. ' 

1831 'Expected "param ...", found %r.' % description) 

1832 return description[6:] 

1833 

1834def _make_signature(completion)-> str: 

1835 """ 

1836 Make the signature from a jedi completion 

1837 

1838 Parameters 

1839 ---------- 

1840 completion : jedi.Completion 

1841 object does not complete a function type 

1842 

1843 Returns 

1844 ------- 

1845 a string consisting of the function signature, with the parentheses but 

1846 without the function name. Example: 

1847 `(a, *args, b=1, **kwargs)` 

1848 

1849 """ 

1850 

1851 # it looks like this might work on jedi 0.17 

1852 if hasattr(completion, 'get_signatures'): 

1853 signatures = completion.get_signatures() 

1854 if not signatures: 

1855 return '(?)' 

1856 

1857 c0 = completion.get_signatures()[0] 

1858 return '('+c0.to_string().split('(', maxsplit=1)[1] 

1859 

1860 return '(%s)'% ', '.join([f for f in (_formatparamchildren(p) for signature in completion.get_signatures() 

1861 for p in signature.defined_names()) if f]) 

1862 

1863 

1864_CompleteResult = dict[str, MatcherResult] 

1865 

1866 

1867DICT_MATCHER_REGEX = re.compile( 

1868 r"""(?x) 

1869( # match dict-referring - or any get item object - expression 

1870 .+ 

1871) 

1872\[ # open bracket 

1873\s* # and optional whitespace 

1874# Capture any number of serializable objects (e.g. "a", "b", 'c') 

1875# and slices 

1876((?:(?: 

1877 (?: # closed string 

1878 [uUbB]? # string prefix (r not handled) 

1879 (?: 

1880 '(?:[^']|(?<!\\)\\')*' 

1881 | 

1882 "(?:[^"]|(?<!\\)\\")*" 

1883 ) 

1884 ) 

1885 | 

1886 # capture integers and slices 

1887 (?:[-+]?\d+)?(?::(?:[-+]?\d+)?){0,2} 

1888 | 

1889 # integer in bin/hex/oct notation 

1890 0[bBxXoO]_?(?:\w|\d)+ 

1891 ) 

1892 \s*,\s* 

1893)*) 

1894((?: 

1895 (?: # unclosed string 

1896 [uUbB]? # string prefix (r not handled) 

1897 (?: 

1898 '(?:[^']|(?<!\\)\\')* 

1899 | 

1900 "(?:[^"]|(?<!\\)\\")* 

1901 ) 

1902 ) 

1903 | 

1904 # unfinished integer 

1905 (?:[-+]?\d+) 

1906 | 

1907 # integer in bin/hex/oct notation 

1908 0[bBxXoO]_?(?:\w|\d)+ 

1909 ) 

1910)? 

1911$ 

1912""" 

1913) 

1914 

1915 

1916def _convert_matcher_v1_result_to_v2_no_no( 

1917 matches: Sequence[str], 

1918 type: str, 

1919) -> SimpleMatcherResult: 

1920 """same as _convert_matcher_v1_result_to_v2 but fragment=None, and suppress_if_matches is False by construction""" 

1921 return SimpleMatcherResult( 

1922 completions=[SimpleCompletion(text=match, type=type) for match in matches], 

1923 suppress=False, 

1924 ) 

1925 

1926 

1927def _convert_matcher_v1_result_to_v2( 

1928 matches: Sequence[str], 

1929 type: str, 

1930 fragment: Optional[str] = None, 

1931 suppress_if_matches: bool = False, 

1932) -> SimpleMatcherResult: 

1933 """Utility to help with transition""" 

1934 result = { 

1935 "completions": [SimpleCompletion(text=match, type=type) for match in matches], 

1936 "suppress": (True if matches else False) if suppress_if_matches else False, 

1937 } 

1938 if fragment is not None: 

1939 result["matched_fragment"] = fragment 

1940 return cast(SimpleMatcherResult, result) 
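# Illustrative sketch (not part of the original source):
#   _convert_matcher_v1_result_to_v2(
#       ["\\alpha"], type="latex", fragment="\\alp", suppress_if_matches=True
#   )
#   # -> {"completions": [SimpleCompletion(text="\\alpha", type="latex")],
#   #     "suppress": True, "matched_fragment": "\\alp"}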

1941 

1942 

1943class IPCompleter(Completer): 

1944 """Extension of the completer class with IPython-specific features""" 

1945 

1946 @observe('greedy') 

1947 def _greedy_changed(self, change): 

1948 """update the splitter and readline delims when greedy is changed""" 

1949 if change["new"]: 

1950 self.evaluation = "unsafe" 

1951 self.auto_close_dict_keys = True 

1952 self.splitter.delims = GREEDY_DELIMS 

1953 else: 

1954 self.evaluation = "limited" 

1955 self.auto_close_dict_keys = False 

1956 self.splitter.delims = DELIMS 

1957 

1958 dict_keys_only = Bool( 

1959 False, 

1960 help=""" 

1961 Whether to show dict key matches only. 

1962 

1963 (disables all matchers except for `IPCompleter.dict_key_matcher`). 

1964 """, 

1965 ) 

1966 

1967 suppress_competing_matchers = UnionTrait( 

1968 [Bool(allow_none=True), DictTrait(Bool(None, allow_none=True))], 

1969 default_value=None, 

1970 help=""" 

1971 Whether to suppress completions from other *Matchers*. 

1972 

1973 When set to ``None`` (default) the matchers will attempt to auto-detect 

1974 whether suppression of other matchers is desirable. For example, at 

1975 the beginning of a line followed by `%` we expect a magic completion 

1976 to be the only applicable option, and after ``my_dict['`` we usually 

1977 expect a completion with an existing dictionary key. 

1978 

1979 If you want to disable this heuristic and see completions from all matchers, 

1980 set ``IPCompleter.suppress_competing_matchers = False``. 

1981 To disable the heuristic for specific matchers provide a dictionary mapping: 

1982 ``IPCompleter.suppress_competing_matchers = {'IPCompleter.dict_key_matcher': False}``. 

1983 

1984 Set ``IPCompleter.suppress_competing_matchers = True`` to limit 

1985 completions to the set of matchers with the highest priority; 

1986 this is equivalent to ``IPCompleter.merge_completions`` and 

1987 can be beneficial for performance, but will sometimes omit relevant 

1988 candidates from matchers further down the priority list. 

1989 """, 

1990 ).tag(config=True) 

1991 

1992 merge_completions = Bool( 

1993 True, 

1994 help="""Whether to merge completion results into a single list 

1995 

1996 If False, only the completion results from the first non-empty 

1997 completer will be returned. 

1998 

1999 As of version 8.6.0, setting the value to ``False`` is an alias for: 

2000 ``IPCompleter.suppress_competing_matchers = True.``. 

2001 """, 

2002 ).tag(config=True) 

2003 

2004 disable_matchers = ListTrait( 

2005 Unicode(), 

2006 help="""List of matchers to disable. 

2007 

2008 The list should contain matcher identifiers (see :any:`completion_matcher`). 

2009 """, 

2010 ).tag(config=True) 

2011 

2012 omit__names = Enum( 

2013 (0, 1, 2), 

2014 default_value=2, 

2015 help="""Instruct the completer to omit private method names 

2016 

2017 Specifically, when completing on ``object.<tab>``. 

2018 

2019 When 2 [default]: all names that start with '_' will be excluded. 

2020 

2021 When 1: all 'magic' names (``__foo__``) will be excluded. 

2022 

2023 When 0: nothing will be excluded. 

2024 """ 

2025 ).tag(config=True) 

2026 limit_to__all__ = Bool(False, 

2027 help=""" 

2028 DEPRECATED as of version 5.0. 

2029 

2030 Instruct the completer to use __all__ for the completion 

2031 

2032 Specifically, when completing on ``object.<tab>``. 

2033 

2034 When True: only those names in obj.__all__ will be included. 

2035 

2036 When False [default]: the __all__ attribute is ignored 

2037 """, 

2038 ).tag(config=True) 

2039 

2040 profile_completions = Bool( 

2041 default_value=False, 

2042 help="If True, emit profiling data for completion subsystem using cProfile." 

2043 ).tag(config=True) 

2044 

2045 profiler_output_dir = Unicode( 

2046 default_value=".completion_profiles", 

2047 help="Template for path at which to output profile data for completions." 

2048 ).tag(config=True) 

2049 

2050 @observe('limit_to__all__') 

2051 def _limit_to_all_changed(self, change): 

2052 warnings.warn('`IPython.core.IPCompleter.limit_to__all__` configuration ' 

2053 'value has been deprecated since IPython 5.0, will be made to have ' 

2054 'no effect and then removed in a future version of IPython.', 

2055 UserWarning) 

2056 

2057 def __init__( 

2058 self, shell=None, namespace=None, global_namespace=None, config=None, **kwargs 

2059 ): 

2060 """IPCompleter() -> completer 

2061 

2062 Return a completer object. 

2063 

2064 Parameters 

2065 ---------- 

2066 shell 

2067 a pointer to the ipython shell itself. This is needed 

2068 because this completer knows about magic functions, and those can 

2069 only be accessed via the ipython instance. 

2070 namespace : dict, optional 

2071 an optional dict where completions are performed. 

2072 global_namespace : dict, optional 

2073 secondary optional dict for completions, to 

2074 handle cases (such as IPython embedded inside functions) where 

2075 both Python scopes are visible. 

2076 config : Config 

2077 traitlet's config object 

2078 **kwargs 

2079 passed to super class unmodified. 

2080 """ 

2081 

2082 self.magic_escape = ESC_MAGIC 

2083 self.splitter = CompletionSplitter() 

2084 

2085 # _greedy_changed() depends on splitter and readline being defined: 

2086 super().__init__( 

2087 namespace=namespace, 

2088 global_namespace=global_namespace, 

2089 config=config, 

2090 **kwargs, 

2091 ) 

2092 

2093 # List where completion matches will be stored 

2094 self.matches = [] 

2095 self.shell = shell 

2096 # Regexp to split filenames with spaces in them 

2097 self.space_name_re = re.compile(r'([^\\] )') 

2098 # Hold a local ref. to glob.glob for speed 

2099 self.glob = glob.glob 

2100 

2101 # Determine if we are running on 'dumb' terminals, like (X)Emacs 

2102 # buffers, to avoid completion problems. 

2103 term = os.environ.get('TERM','xterm') 

2104 self.dumb_terminal = term in ['dumb','emacs'] 

2105 

2106 # Special handling of backslashes needed in win32 platforms 

2107 if sys.platform == "win32": 

2108 self.clean_glob = self._clean_glob_win32 

2109 else: 

2110 self.clean_glob = self._clean_glob 

2111 

2112 #regexp to parse docstring for function signature 

2113 self.docstring_sig_re = re.compile(r'^[\w|\s.]+\(([^)]*)\).*') 

2114 self.docstring_kwd_re = re.compile(r'[\s|\[]*(\w+)(?:\s*=\s*.*)') 

2115 #use this if positional argument name is also needed 

2116 #= re.compile(r'[\s|\[]*(\w+)(?:\s*=?\s*.*)') 

2117 

2118 self.magic_arg_matchers = [ 

2119 self.magic_config_matcher, 

2120 self.magic_color_matcher, 

2121 ] 

2122 

2123 # This is set externally by InteractiveShell 

2124 self.custom_completers = None 

2125 

2126 # This is a list of names of unicode characters that can be completed 

2127 # into their corresponding unicode value. The list is large, so we 

2128 # lazily initialize it on first use. Consuming code should access this 

2129 # attribute through the `@unicode_names` property. 

2130 self._unicode_names = None 

2131 

2132 self._backslash_combining_matchers = [ 

2133 self.latex_name_matcher, 

2134 self.unicode_name_matcher, 

2135 back_latex_name_matcher, 

2136 back_unicode_name_matcher, 

2137 self.fwd_unicode_matcher, 

2138 ] 

2139 

2140 if not self.backslash_combining_completions: 

2141 for matcher in self._backslash_combining_matchers: 

2142 self.disable_matchers.append(_get_matcher_id(matcher)) 

2143 

2144 if not self.merge_completions: 

2145 self.suppress_competing_matchers = True 

2146 

2147 @property 

2148 def matchers(self) -> list[Matcher]: 

2149 """All active matcher routines for completion""" 

2150 if self.dict_keys_only: 

2151 return [self.dict_key_matcher] 

2152 

2153 if self.use_jedi: 

2154 return [ 

2155 *self.custom_matchers, 

2156 *self._backslash_combining_matchers, 

2157 *self.magic_arg_matchers, 

2158 self.custom_completer_matcher, 

2159 self.magic_matcher, 

2160 self._jedi_matcher, 

2161 self.dict_key_matcher, 

2162 self.file_matcher, 

2163 ] 

2164 else: 

2165 return [ 

2166 *self.custom_matchers, 

2167 *self._backslash_combining_matchers, 

2168 *self.magic_arg_matchers, 

2169 self.custom_completer_matcher, 

2170 self.dict_key_matcher, 

2171 self.magic_matcher, 

2172 self.python_matcher, 

2173 self.file_matcher, 

2174 self.python_func_kw_matcher, 

2175 ] 

2176 

2177 def all_completions(self, text: str) -> list[str]: 

2178 """ 

2179 Wrapper around the completion methods for the benefit of emacs. 

2180 """ 

2181 prefix = text.rpartition('.')[0] 

2182 with provisionalcompleter(): 

2183 return ['.'.join([prefix, c.text]) if prefix and self.use_jedi else c.text 

2184 for c in self.completions(text, len(text))] 

2185 

2186 return self.complete(text)[1] 

2187 

2188 def _clean_glob(self, text:str): 

2189 return self.glob("%s*" % text) 

2190 

2191 def _clean_glob_win32(self, text:str): 

2192 return [f.replace("\\","/") 

2193 for f in self.glob("%s*" % text)] 

2194 

2195 @context_matcher() 

2196 def file_matcher(self, context: CompletionContext) -> SimpleMatcherResult: 

2197 """Match filenames, expanding ~USER type strings. 

2198 

2199 Most of the seemingly convoluted logic in this completer is an 

2200 attempt to handle filenames with spaces in them. And yet it's not 

2201 quite perfect, because Python's readline doesn't expose all of the 

2202 GNU readline details needed for this to be done correctly. 

2203 

2204 For a filename with a space in it, the printed completions will be 

2205 only the parts after what's already been typed (instead of the 

2206 full completions, as is normally done). I don't think with the 

2207 current (as of Python 2.3) Python readline it's possible to do 

2208 better. 

2209 """ 

2210 # TODO: add a heuristic for suppressing (e.g. if it has OS-specific delimiter, 

2211 # starts with `/home/`, `C:\`, etc) 

2212 

2213 text = context.token 

2214 code_until_cursor = self._extract_code(context.text_until_cursor) 

2215 completion_type = self._determine_completion_context(code_until_cursor) 

2216 in_cli_context = self._is_completing_in_cli_context(code_until_cursor) 

2217 if ( 

2218 completion_type == self._CompletionContextType.ATTRIBUTE 

2219 and not in_cli_context 

2220 ): 

2221 return { 

2222 "completions": [], 

2223 "suppress": False, 

2224 } 

2225 

2226 # chars that require escaping with backslash - i.e. chars 

2227 # that readline treats incorrectly as delimiters, but we 

2228 # don't want to treat as delimiters in filename matching 

2229 # when escaped with backslash 

2230 if text.startswith('!'): 

2231 text = text[1:] 

2232 text_prefix = u'!' 

2233 else: 

2234 text_prefix = u'' 

2235 

2236 text_until_cursor = self.text_until_cursor 

2237 # track strings with open quotes 

2238 open_quotes = has_open_quotes(text_until_cursor) 

2239 

2240 if '(' in text_until_cursor or '[' in text_until_cursor: 

2241 lsplit = text 

2242 else: 

2243 try: 

2244 # arg_split ~ shlex.split, but with unicode bugs fixed by us 

2245 lsplit = arg_split(text_until_cursor)[-1] 

2246 except ValueError: 

2247 # typically an unmatched ", or backslash without escaped char. 

2248 if open_quotes: 

2249 lsplit = text_until_cursor.split(open_quotes)[-1] 

2250 else: 

2251 return { 

2252 "completions": [], 

2253 "suppress": False, 

2254 } 

2255 except IndexError: 

2256 # tab pressed on empty line 

2257 lsplit = "" 

2258 

2259 if not open_quotes and lsplit != protect_filename(lsplit): 

2260 # if protectables are found, do matching on the whole escaped name 

2261 has_protectables = True 

2262 text0,text = text,lsplit 

2263 else: 

2264 has_protectables = False 

2265 text = os.path.expanduser(text) 

2266 

2267 if text == "": 

2268 return { 

2269 "completions": [ 

2270 SimpleCompletion( 

2271 text=text_prefix + protect_filename(f), type="path" 

2272 ) 

2273 for f in self.glob("*") 

2274 ], 

2275 "suppress": False, 

2276 } 

2277 

2278 # Compute the matches from the filesystem 

2279 if sys.platform == 'win32': 

2280 m0 = self.clean_glob(text) 

2281 else: 

2282 m0 = self.clean_glob(text.replace('\\', '')) 

2283 

2284 if has_protectables: 

2285 # If we had protectables, we need to revert our changes to the 

2286 # beginning of filename so that we don't double-write the part 

2287 # of the filename we have so far 

2288 len_lsplit = len(lsplit) 

2289 matches = [text_prefix + text0 + 

2290 protect_filename(f[len_lsplit:]) for f in m0] 

2291 else: 

2292 if open_quotes: 

2293 # if we have a string with an open quote, we don't need to 

2294 # protect the names beyond the quote (and we _shouldn't_, as 

2295 # it would cause bugs when the filesystem call is made). 

2296 matches = m0 if sys.platform == "win32" else\ 

2297 [protect_filename(f, open_quotes) for f in m0] 

2298 else: 

2299 matches = [text_prefix + 

2300 protect_filename(f) for f in m0] 

2301 

2302 # Mark directories in input list by appending '/' to their names. 

2303 return { 

2304 "completions": [ 

2305 SimpleCompletion(text=x + "/" if os.path.isdir(x) else x, type="path") 

2306 for x in matches 

2307 ], 

2308 "suppress": False, 

2309 } 

2310 

2311 def _extract_code(self, line: str) -> str: 

2312 """Extract code from magics if any.""" 

2313 

2314 if not line: 

2315 return line 

2316 maybe_magic, *rest = line.split(maxsplit=1) 

2317 if not rest: 

2318 return line 

2319 args = rest[0] 

2320 known_magics = self.shell.magics_manager.lsmagic() 

2321 line_magics = known_magics["line"] 

2322 magic_name = maybe_magic.lstrip(self.magic_escape) 

2323 if magic_name not in line_magics: 

2324 return line 

2325 

2326 if not maybe_magic.startswith(self.magic_escape): 

2327 all_variables = [*self.namespace.keys(), *self.global_namespace.keys()] 

2328 if magic_name in all_variables: 

2329 # short circuit if we see a line starting with say `time` 

2330 # but time is defined as a variable (in addition to being 

2331 # a magic). In these cases users need to use explicit `%time`. 

2332 return line 

2333 

2334 magic_method = line_magics[magic_name] 

2335 

2336 try: 

2337 if magic_name == "timeit": 

2338 opts, stmt = magic_method.__self__.parse_options( 

2339 args, 

2340 "n:r:tcp:qov:", 

2341 posix=False, 

2342 strict=False, 

2343 preserve_non_opts=True, 

2344 ) 

2345 return stmt 

2346 elif magic_name == "prun": 

2347 opts, stmt = magic_method.__self__.parse_options( 

2348 args, "D:l:rs:T:q", list_all=True, posix=False 

2349 ) 

2350 return stmt 

2351 elif hasattr(magic_method, "parser") and getattr( 

2352 magic_method, "has_arguments", False 

2353 ): 

2354 # e.g. %debug, %time 

2355 args, extra = magic_method.parser.parse_argstring(args, partial=True) 

2356 return " ".join(extra) 

2357 except UsageError: 

2358 return line 

2359 

2360 return line 
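# Illustrative sketch (not part of the original source; assumes ``timeit`` is
# registered as a line magic on the attached shell): _extract_code strips the magic
# name and its options so later matchers only see the Python payload, e.g.
#   completer._extract_code("%timeit -n 10 np.ran")   # -> "np.ran"
#   completer._extract_code("print(x)")               # -> "print(x)" (unchanged)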

2361 

2362 @context_matcher() 

2363 def magic_matcher(self, context: CompletionContext) -> SimpleMatcherResult: 

2364 """Match magics.""" 

2365 

2366 # Get all shell magics now rather than statically, so magics loaded at 

2367 # runtime show up too. 

2368 text = context.token 

2369 lsm = self.shell.magics_manager.lsmagic() 

2370 line_magics = lsm['line'] 

2371 cell_magics = lsm['cell'] 

2372 pre = self.magic_escape 

2373 pre2 = pre + pre 

2374 

2375 explicit_magic = text.startswith(pre) 

2376 

2377 # Completion logic: 

2378 # - user gives %%: only do cell magics 

2379 # - user gives %: do both line and cell magics 

2380 # - no prefix: do both 

2381 # In other words, line magics are skipped if the user gives %% explicitly 

2382 # 

2383 # We also exclude magics that match any currently visible names: 

2384 # https://github.com/ipython/ipython/issues/4877, unless the user has 

2385 # typed a %: 

2386 # https://github.com/ipython/ipython/issues/10754 

2387 bare_text = text.lstrip(pre) 

2388 global_matches = self.global_matches(bare_text) 

2389 if not explicit_magic: 

2390 def matches(magic): 

2391 """ 

2392 Filter magics, in particular remove magics that match 

2393 a name present in global namespace. 

2394 """ 

2395 return ( magic.startswith(bare_text) and 

2396 magic not in global_matches ) 

2397 else: 

2398 def matches(magic): 

2399 return magic.startswith(bare_text) 

2400 

2401 completions = [pre2 + m for m in cell_magics if matches(m)] 

2402 if not text.startswith(pre2): 

2403 completions += [pre + m for m in line_magics if matches(m)] 

2404 

2405 is_magic_prefix = len(text) > 0 and text[0] == "%" 

2406 

2407 return { 

2408 "completions": [ 

2409 SimpleCompletion(text=comp, type="magic") for comp in completions 

2410 ], 

2411 "suppress": is_magic_prefix and len(completions) > 0, 

2412 } 

2413 

2414 @context_matcher() 

2415 def magic_config_matcher(self, context: CompletionContext) -> SimpleMatcherResult: 

2416 """Match class names and attributes for %config magic.""" 

2417 # NOTE: uses `line_buffer` equivalent for compatibility 

2418 matches = self.magic_config_matches(context.line_with_cursor) 

2419 return _convert_matcher_v1_result_to_v2_no_no(matches, type="param") 

2420 

2421 def magic_config_matches(self, text: str) -> list[str]: 

2422 """Match class names and attributes for %config magic. 

2423 

2424 .. deprecated:: 8.6 

2425 You can use :meth:`magic_config_matcher` instead. 

2426 """ 

2427 texts = text.strip().split() 

2428 

2429 if len(texts) > 0 and (texts[0] == 'config' or texts[0] == '%config'): 

2430 # get all configuration classes 

2431 classes = sorted(set([ c for c in self.shell.configurables 

2432 if c.__class__.class_traits(config=True) 

2433 ]), key=lambda x: x.__class__.__name__) 

2434 classnames = [ c.__class__.__name__ for c in classes ] 

2435 

2436 # return all classnames if config or %config is given 

2437 if len(texts) == 1: 

2438 return classnames 

2439 

2440 # match classname 

2441 classname_texts = texts[1].split('.') 

2442 classname = classname_texts[0] 

2443 classname_matches = [ c for c in classnames 

2444 if c.startswith(classname) ] 

2445 

2446 # return matched classes or the matched class with attributes 

2447 if texts[1].find('.') < 0: 

2448 return classname_matches 

2449 elif len(classname_matches) == 1 and \ 

2450 classname_matches[0] == classname: 

2451 cls = classes[classnames.index(classname)].__class__ 

2452 help = cls.class_get_help() 

2453 # strip leading '--' from cl-args: 

2454 help = re.sub(re.compile(r'^--', re.MULTILINE), '', help) 

2455 return [ attr.split('=')[0] 

2456 for attr in help.strip().splitlines() 

2457 if attr.startswith(texts[1]) ] 

2458 return [] 

2459 

2460 @context_matcher() 

2461 def magic_color_matcher(self, context: CompletionContext) -> SimpleMatcherResult: 

2462 """Match color schemes for %colors magic.""" 

2463 text = context.line_with_cursor 

2464 texts = text.split() 

2465 if text.endswith(' '): 

2466 # .split() strips off the trailing whitespace. Add '' back 

2467 # so that: '%colors ' -> ['%colors', ''] 

2468 texts.append('') 

2469 

2470 if len(texts) == 2 and (texts[0] == 'colors' or texts[0] == '%colors'): 

2471 prefix = texts[1] 

2472 return SimpleMatcherResult( 

2473 completions=[ 

2474 SimpleCompletion(color, type="param") 

2475 for color in theme_table.keys() 

2476 if color.startswith(prefix) 

2477 ], 

2478 suppress=False, 

2479 ) 

2480 return SimpleMatcherResult( 

2481 completions=[], 

2482 suppress=False, 

2483 ) 

2484 

2485 @context_matcher(identifier="IPCompleter.jedi_matcher") 

2486 def _jedi_matcher(self, context: CompletionContext) -> _JediMatcherResult: 

2487 matches = self._jedi_matches( 

2488 cursor_column=context.cursor_position, 

2489 cursor_line=context.cursor_line, 

2490 text=context.full_text, 

2491 ) 

2492 return { 

2493 "completions": matches, 

2494 # static analysis should not suppress other matcher 

2495 # NOTE: file_matcher is automatically suppressed on attribute completions 

2496 "suppress": False, 

2497 } 

2498 

2499 def _jedi_matches( 

2500 self, cursor_column: int, cursor_line: int, text: str 

2501 ) -> Iterator[_JediCompletionLike]: 

2502 """ 

2503 Return a list of :any:`jedi.api.Completion`\\s object from a ``text`` and 

2504 cursor position. 

2505 

2506 Parameters 

2507 ---------- 

2508 cursor_column : int 

2509 column position of the cursor in ``text``, 0-indexed. 

2510 cursor_line : int 

2511 line position of the cursor in ``text``, 0-indexed 

2512 text : str 

2513 text to complete 

2514 

2515 Notes 

2516 ----- 

2517 If ``IPCompleter.debug`` is ``True`` may return a :any:`_FakeJediCompletion` 

2518 object containing a string with the Jedi debug information attached. 

2519 

2520 .. deprecated:: 8.6 

2521 You can use :meth:`_jedi_matcher` instead. 

2522 """ 

2523 namespaces = [self.namespace] 

2524 if self.global_namespace is not None: 

2525 namespaces.append(self.global_namespace) 

2526 

2527 completion_filter = lambda x:x 

2528 offset = cursor_to_position(text, cursor_line, cursor_column) 

2529 # filter output if we are completing for object members 

2530 if offset: 

2531 pre = text[offset-1] 

2532 if pre == '.': 

2533 if self.omit__names == 2: 

2534 completion_filter = lambda c:not c.name.startswith('_') 

2535 elif self.omit__names == 1: 

2536 completion_filter = lambda c:not (c.name.startswith('__') and c.name.endswith('__')) 

2537 elif self.omit__names == 0: 

2538 completion_filter = lambda x:x 

2539 else: 

2540 raise ValueError("Don't understand self.omit__names == {}".format(self.omit__names)) 

2541 

2542 interpreter = jedi.Interpreter(text[:offset], namespaces) 

2543 try_jedi = True 

2544 

2545 try: 

2546 # find the first token in the current tree -- if it is a ' or " then we are in a string 

2547 completing_string = False 

2548 try: 

2549 first_child = next(c for c in interpreter._get_module().tree_node.children if hasattr(c, 'value')) 

2550 except StopIteration: 

2551 pass 

2552 else: 

2553 # note the value may be ', ", or it may also be ''' or """, or 

2554 # in some cases, """what/you/typed..., but all of these are 

2555 # strings. 

2556 completing_string = len(first_child.value) > 0 and first_child.value[0] in {"'", '"'} 

2557 

2558 # if we are in a string jedi is likely not the right candidate for 

2559 # now. Skip it. 

2560 try_jedi = not completing_string 

2561 except Exception as e: 

2562 # many things can go wrong; we are using a private API, so just don't crash. 

2563 if self.debug: 

2564 print("Error detecting if completing a non-finished string :", e, '|') 

2565 

2566 if not try_jedi: 

2567 return iter([]) 

2568 try: 

2569 return filter(completion_filter, interpreter.complete(column=cursor_column, line=cursor_line + 1)) 

2570 except Exception as e: 

2571 if self.debug: 

2572 return iter( 

2573 [ 

2574 _FakeJediCompletion( 

2575 'Oops Jedi has crashed, please report a bug with the following:\n"""\n%s\ns"""' 

2576 % (e) 

2577 ) 

2578 ] 

2579 ) 

2580 else: 

2581 return iter([]) 

2582 

2583 class _CompletionContextType(enum.Enum): 

2584 ATTRIBUTE = "attribute" # For attribute completion 

2585 GLOBAL = "global" # For global completion 

2586 

2587 def _determine_completion_context(self, line): 

2588 """ 

2589 Determine whether the cursor is in an attribute or global completion context. 

2590 """ 

2591 # Cursor in string/comment → GLOBAL. 

2592 is_string, is_in_expression = self._is_in_string_or_comment(line) 

2593 if is_string and not is_in_expression: 

2594 return self._CompletionContextType.GLOBAL 

2595 

2596 # If we're in a template string expression, handle specially 

2597 if is_string and is_in_expression: 

2598 # Extract the expression part - look for the last { that isn't closed 

2599 expr_start = line.rfind("{") 

2600 if expr_start >= 0: 

2601 # We're looking at the expression inside a template string 

2602 expr = line[expr_start + 1 :] 

2603 # Recursively determine the context of the expression 

2604 return self._determine_completion_context(expr) 

2605 

2606 # Handle plain number literals - should be global context 

2607 # Ex: 3. -42.14 but not 3.1. 

2608 if re.search(r"(?<!\w)(?<!\d\.)([-+]?\d+\.(\d+)?)(?!\w)$", line): 

2609 return self._CompletionContextType.GLOBAL 

2610 

2611 # Handle all other attribute matches np.ran, d[0].k, (a,b).count 

2612 chain_match = re.search(r".*(.+(?<!\s)\.(?:[a-zA-Z]\w*)?)$", line) 

2613 if chain_match: 

2614 return self._CompletionContextType.ATTRIBUTE 

2615 

2616 return self._CompletionContextType.GLOBAL 
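# Illustrative sketch (not part of the original source): rough behaviour of the
# heuristic above,
#   completer._determine_completion_context("np.ran")    # -> ATTRIBUTE
#   completer._determine_completion_context("3.")        # -> GLOBAL (number literal)
#   completer._determine_completion_context('f"{df.sh')  # -> ATTRIBUTE (f-string expression)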

2617 

2618 def _is_completing_in_cli_context(self, text: str) -> bool: 

2619 """ 

2620 Determine if we are completing in a CLI alias, line magic, or bang expression context. 

2621 """ 

2622 stripped = text.lstrip() 

2623 if stripped.startswith("!") or stripped.startswith("%"): 

2624 return True 

2625 # Check for CLI aliases 

2626 try: 

2627 tokens = stripped.split(None, 1) 

2628 if not tokens: 

2629 return False 

2630 first_token = tokens[0] 

2631 

2632 # Must have arguments after the command for this to apply 

2633 if len(tokens) < 2: 

2634 return False 

2635 

2636 # Check if first token is a known alias 

2637 if not any( 

2638 alias[0] == first_token for alias in self.shell.alias_manager.aliases 

2639 ): 

2640 return False 

2641 

2642 try: 

2643 if first_token in self.shell.user_ns: 

2644 # There's a variable defined, so the alias is overshadowed 

2645 return False 

2646 except (AttributeError, KeyError): 

2647 pass 

2648 

2649 return True 

2650 except Exception: 

2651 return False 

2652 

2653 def _is_in_string_or_comment(self, text): 

2654 """ 

2655 Determine if the cursor is inside a string or comment. 

2656 Returns (is_string, is_in_expression) tuple: 

2657 - is_string: True if in any kind of string 

2658 - is_in_expression: True if inside an f-string/t-string expression 

2659 """ 

2660 in_single_quote = False 

2661 in_double_quote = False 

2662 in_triple_single = False 

2663 in_triple_double = False 

2664 in_template_string = False # Covers both f-strings and t-strings 

2665 in_expression = False # For expressions in f/t-strings 

2666 expression_depth = 0 # Track nested braces in expressions 

2667 i = 0 

2668 

2669 while i < len(text): 

2670 # Check for f-string or t-string start 

2671 if ( 

2672 i + 1 < len(text) 

2673 and text[i] in ("f", "t") 

2674 and (text[i + 1] == '"' or text[i + 1] == "'") 

2675 and not ( 

2676 in_single_quote 

2677 or in_double_quote 

2678 or in_triple_single 

2679 or in_triple_double 

2680 ) 

2681 ): 

2682 in_template_string = True 

2683 i += 1 # Skip the 'f' or 't' 

2684 

2685 # Handle triple quotes 

2686 if i + 2 < len(text): 

2687 if ( 

2688 text[i : i + 3] == '"""' 

2689 and not in_single_quote 

2690 and not in_triple_single 

2691 ): 

2692 in_triple_double = not in_triple_double 

2693 if not in_triple_double: 

2694 in_template_string = False 

2695 i += 3 

2696 continue 

2697 if ( 

2698 text[i : i + 3] == "'''" 

2699 and not in_double_quote 

2700 and not in_triple_double 

2701 ): 

2702 in_triple_single = not in_triple_single 

2703 if not in_triple_single: 

2704 in_template_string = False 

2705 i += 3 

2706 continue 

2707 

2708 # Handle escapes 

2709 if text[i] == "\\" and i + 1 < len(text): 

2710 i += 2 

2711 continue 

2712 

2713 # Handle nested braces within f-strings 

2714 if in_template_string: 

2715 # Special handling for consecutive opening braces 

2716 if i + 1 < len(text) and text[i : i + 2] == "{{": 

2717 i += 2 

2718 continue 

2719 

2720 # Detect start of an expression 

2721 if text[i] == "{": 

2722 # Only increment depth and mark as expression if not already in an expression 

2723 # or if we're at a top-level nested brace 

2724 if not in_expression or (in_expression and expression_depth == 0): 

2725 in_expression = True 

2726 expression_depth += 1 

2727 i += 1 

2728 continue 

2729 

2730 # Detect end of an expression 

2731 if text[i] == "}": 

2732 expression_depth -= 1 

2733 if expression_depth <= 0: 

2734 in_expression = False 

2735 expression_depth = 0 

2736 i += 1 

2737 continue 

2738 

2739 in_triple_quote = in_triple_single or in_triple_double 

2740 

2741 # Handle quotes - also reset template string when closing quotes are encountered 

2742 if text[i] == '"' and not in_single_quote and not in_triple_quote: 

2743 in_double_quote = not in_double_quote 

2744 if not in_double_quote and not in_triple_quote: 

2745 in_template_string = False 

2746 elif text[i] == "'" and not in_double_quote and not in_triple_quote: 

2747 in_single_quote = not in_single_quote 

2748 if not in_single_quote and not in_triple_quote: 

2749 in_template_string = False 

2750 

2751 # Check for comment 

2752 if text[i] == "#" and not ( 

2753 in_single_quote or in_double_quote or in_triple_quote 

2754 ): 

2755 return True, False 

2756 

2757 i += 1 

2758 

2759 is_string = ( 

2760 in_single_quote or in_double_quote or in_triple_single or in_triple_double 

2761 ) 

2762 

2763 # Return tuple (is_string, is_in_expression) 

2764 return ( 

2765 is_string or (in_template_string and not in_expression), 

2766 in_expression and expression_depth > 0, 

2767 ) 
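# Illustrative sketch (not part of the original source): the two flags returned by
# the scanner above,
#   completer._is_in_string_or_comment('x = "abc')     # -> (True, False)  open string
#   completer._is_in_string_or_comment('f"{value')     # -> (True, True)   f-string expression
#   completer._is_in_string_or_comment('x = 1  # no')  # -> (True, False)  comment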

2768 

2769 @context_matcher() 

2770 def python_matcher(self, context: CompletionContext) -> SimpleMatcherResult: 

2771 """Match attributes or global python names""" 

2772 text = context.text_until_cursor 

2773 text = self._extract_code(text) 

2774 in_cli_context = self._is_completing_in_cli_context(text) 

2775 if in_cli_context: 

2776 completion_type = self._CompletionContextType.GLOBAL 

2777 else: 

2778 completion_type = self._determine_completion_context(text) 

2779 if completion_type == self._CompletionContextType.ATTRIBUTE: 

2780 try: 

2781 matches, fragment = self._attr_matches( 

2782 text, include_prefix=False, context=context 

2783 ) 

2784 if text.endswith(".") and self.omit__names: 

2785 if self.omit__names == 1: 

2786 # true if txt is _not_ a __ name, false otherwise: 

2787 no__name = lambda txt: re.match(r".*\.__.*?__", txt) is None 

2788 else: 

2789 # true if txt is _not_ a _ name, false otherwise: 

2790 no__name = ( 

2791 lambda txt: re.match(r"\._.*?", txt[txt.rindex(".") :]) 

2792 is None 

2793 ) 

2794 matches = filter(no__name, matches) 

2795 matches = _convert_matcher_v1_result_to_v2( 

2796 matches, type="attribute", fragment=fragment 

2797 ) 

2798 return matches 

2799 except NameError: 

2800 # catches <undefined attributes>.<tab> 

2801 return SimpleMatcherResult(completions=[], suppress=False) 

2802 else: 

2803 try: 

2804 matches = self.global_matches(context.token, context=context) 

2805 except TypeError: 

2806 matches = self.global_matches(context.token) 

2807 # TODO: maybe distinguish between functions, modules and just "variables" 

2808 return SimpleMatcherResult( 

2809 completions=[ 

2810 SimpleCompletion(text=match, type="variable") for match in matches 

2811 ], 

2812 suppress=False, 

2813 ) 

2814 

2815 @completion_matcher(api_version=1) 

2816 def python_matches(self, text: str) -> Iterable[str]: 

2817 """Match attributes or global python names. 

2818 

2819 .. deprecated:: 8.27 

2820 You can use :meth:`python_matcher` instead.""" 

2821 if "." in text: 

2822 try: 

2823 matches = self.attr_matches(text) 

2824 if text.endswith('.') and self.omit__names: 

2825 if self.omit__names == 1: 

2826 # true if txt is _not_ a __ name, false otherwise: 

2827 no__name = (lambda txt: 

2828 re.match(r'.*\.__.*?__',txt) is None) 

2829 else: 

2830 # true if txt is _not_ a _ name, false otherwise: 

2831 no__name = (lambda txt: 

2832 re.match(r'\._.*?',txt[txt.rindex('.'):]) is None) 

2833 matches = filter(no__name, matches) 

2834 except NameError: 

2835 # catches <undefined attributes>.<tab> 

2836 matches = [] 

2837 else: 

2838 matches = self.global_matches(text) 

2839 return matches 

2840 

2841 def _default_arguments_from_docstring(self, doc): 

2842 """Parse the first line of docstring for call signature. 

2843 

2844 Docstring should be of the form 'min(iterable[, key=func])\n'. 

2845 It can also parse cython docstring of the form 

2846 'Minuit.migrad(self, int ncall=10000, resume=True, int nsplit=1)'. 

2847 """ 

2848 if doc is None: 

2849 return [] 

2850 

2851 # care only about the first line 

2852 line = doc.lstrip().splitlines()[0] 

2853 

2854 #p = re.compile(r'^[\w|\s.]+\(([^)]*)\).*') 

2855 #'min(iterable[, key=func])\n' -> 'iterable[, key=func]' 

2856 sig = self.docstring_sig_re.search(line) 

2857 if sig is None: 

2858 return [] 

2859 # iterable[, key=func]' -> ['iterable[' ,' key=func]'] 

2860 sig = sig.groups()[0].split(',') 

2861 ret = [] 

2862 for s in sig: 

2863 #re.compile(r'[\s|\[]*(\w+)(?:\s*=\s*.*)') 

2864 ret += self.docstring_kwd_re.findall(s) 

2865 return ret 
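# Illustrative sketch (not part of the original source): for a first docstring line
# such as 'min(iterable[, key=func])' the two regexes above extract the
# keyword-style names (here just 'key'); positional names without '=' are skipped,
# as noted by the commented-out alternative regex in __init__.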

2866 

2867 def _default_arguments(self, obj): 

2868 """Return the list of default arguments of obj if it is callable, 

2869 or empty list otherwise.""" 

2870 call_obj = obj 

2871 ret = [] 

2872 if inspect.isbuiltin(obj): 

2873 pass 

2874 elif not (inspect.isfunction(obj) or inspect.ismethod(obj)): 

2875 if inspect.isclass(obj): 

2876 #for cython embedsignature=True the constructor docstring 

2877 #belongs to the object itself not __init__ 

2878 ret += self._default_arguments_from_docstring( 

2879 getattr(obj, '__doc__', '')) 

2880 # for classes, check for __init__,__new__ 

2881 call_obj = (getattr(obj, '__init__', None) or 

2882 getattr(obj, '__new__', None)) 

2883 # for all others, check if they are __call__able 

2884 elif hasattr(obj, '__call__'): 

2885 call_obj = obj.__call__ 

2886 ret += self._default_arguments_from_docstring( 

2887 getattr(call_obj, '__doc__', '')) 

2888 

2889 _keeps = (inspect.Parameter.KEYWORD_ONLY, 

2890 inspect.Parameter.POSITIONAL_OR_KEYWORD) 

2891 

2892 try: 

2893 sig = inspect.signature(obj) 

2894 ret.extend(k for k, v in sig.parameters.items() if 

2895 v.kind in _keeps) 

2896 except ValueError: 

2897 pass 

2898 

2899 return list(set(ret)) 

2900 

2901 @context_matcher() 

2902 def python_func_kw_matcher(self, context: CompletionContext) -> SimpleMatcherResult: 

2903 """Match named parameters (kwargs) of the last open function.""" 

2904 matches = self.python_func_kw_matches(context.token) 

2905 return _convert_matcher_v1_result_to_v2_no_no(matches, type="param") 

2906 

2907 def python_func_kw_matches(self, text): 

2908 """Match named parameters (kwargs) of the last open function. 

2909 

2910 .. deprecated:: 8.6 

2911 You can use :meth:`python_func_kw_matcher` instead. 

2912 """ 

2913 

2914 if "." in text: # a parameter cannot be dotted 

2915 return [] 

2916 try: regexp = self.__funcParamsRegex 

2917 except AttributeError: 

2918 regexp = self.__funcParamsRegex = re.compile(r''' 

2919 '.*?(?<!\\)' | # single quoted strings or 

2920 ".*?(?<!\\)" | # double quoted strings or 

2921 \w+ | # identifier 

2922 \S # other characters 

2923 ''', re.VERBOSE | re.DOTALL) 

2924 # 1. find the nearest identifier that comes before an unclosed 

2925 # parenthesis before the cursor 

2926 # e.g. for "foo (1+bar(x), pa<cursor>,a=1)", the candidate is "foo" 

2927 tokens = regexp.findall(self.text_until_cursor) 

2928 iterTokens = reversed(tokens) 

2929 openPar = 0 

2930 

2931 for token in iterTokens: 

2932 if token == ')': 

2933 openPar -= 1 

2934 elif token == '(': 

2935 openPar += 1 

2936 if openPar > 0: 

2937 # found the last unclosed parenthesis 

2938 break 

2939 else: 

2940 return [] 

2941 # 2. Concatenate dotted names ("foo.bar" for "foo.bar(x, pa" ) 

2942 ids = [] 

2943 isId = re.compile(r'\w+$').match 

2944 

2945 while True: 

2946 try: 

2947 ids.append(next(iterTokens)) 

2948 if not isId(ids[-1]): 

2949 ids.pop() 

2950 break 

2951 if not next(iterTokens) == '.': 

2952 break 

2953 except StopIteration: 

2954 break 

2955 

2956 # Find all named arguments already assigned to, so as to avoid suggesting 

2957 # them again 

2958 usedNamedArgs = set() 

2959 par_level = -1 

2960 for token, next_token in zip(tokens, tokens[1:]): 

2961 if token == '(': 

2962 par_level += 1 

2963 elif token == ')': 

2964 par_level -= 1 

2965 

2966 if par_level != 0: 

2967 continue 

2968 

2969 if next_token != '=': 

2970 continue 

2971 

2972 usedNamedArgs.add(token) 

2973 

2974 argMatches = [] 

2975 try: 

2976 callableObj = '.'.join(ids[::-1]) 

2977 namedArgs = self._default_arguments(eval(callableObj, 

2978 self.namespace)) 

2979 

2980 # Remove used named arguments from the list, no need to show twice 

2981 for namedArg in set(namedArgs) - usedNamedArgs: 

2982 if namedArg.startswith(text): 

2983 argMatches.append("%s=" %namedArg) 

2984 except: 

2985 pass 

2986 

2987 return argMatches 

2988 

2989 @staticmethod 

2990 def _get_keys(obj: Any) -> list[Any]: 

2991 # Objects can define their own completions by defining an 

2992 # _ipython_key_completions_() method. 

2993 method = get_real_method(obj, '_ipython_key_completions_') 

2994 if method is not None: 

2995 return method() 

2996 

2997 # Special case some common in-memory dict-like types 

2998 if isinstance(obj, dict) or _safe_isinstance(obj, "pandas", "DataFrame"): 

2999 try: 

3000 return list(obj.keys()) 

3001 except Exception: 

3002 return [] 

3003 elif _safe_isinstance(obj, "pandas", "core", "indexing", "_LocIndexer"): 

3004 try: 

3005 return list(obj.obj.keys()) 

3006 except Exception: 

3007 return [] 

3008 elif _safe_isinstance(obj, 'numpy', 'ndarray') or\ 

3009 _safe_isinstance(obj, 'numpy', 'void'): 

3010 return obj.dtype.names or [] 

3011 return [] 
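# Illustrative sketch (not part of the original source): any object can opt into
# dict-key completion,
#   class Store:
#       def _ipython_key_completions_(self):
#           return ["alpha", "beta"]
#   IPCompleter._get_keys(Store())   # -> ["alpha", "beta"]
#   IPCompleter._get_keys({"a": 1})  # -> ["a"]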

3012 

3013 @context_matcher() 

3014 def dict_key_matcher(self, context: CompletionContext) -> SimpleMatcherResult: 

3015 """Match string keys in a dictionary, after e.g. ``foo[``.""" 

3016 matches = self.dict_key_matches(context.token) 

3017 return _convert_matcher_v1_result_to_v2( 

3018 matches, type="dict key", suppress_if_matches=True 

3019 ) 

3020 

3021 def dict_key_matches(self, text: str) -> list[str]: 

3022 """Match string keys in a dictionary, after e.g. ``foo[``. 

3023 

3024 .. deprecated:: 8.6 

3025 You can use :meth:`dict_key_matcher` instead. 

3026 """ 

3027 

3028 # Short-circuit on closed dictionary (regular expression would 

3029 # not match anyway, but would take quite a while). 

3030 if self.text_until_cursor.strip().endswith("]"): 

3031 return [] 

3032 

3033 match = DICT_MATCHER_REGEX.search(self.text_until_cursor) 

3034 

3035 if match is None: 

3036 return [] 

3037 

3038 expr, prior_tuple_keys, key_prefix = match.groups() 

3039 

3040 obj = self._evaluate_expr(expr) 

3041 

3042 if obj is not_found: 

3043 return [] 

3044 

3045 keys = self._get_keys(obj) 

3046 if not keys: 

3047 return keys 

3048 

3049 tuple_prefix = guarded_eval( 

3050 prior_tuple_keys, 

3051 EvaluationContext( 

3052 globals=self.global_namespace, 

3053 locals=self.namespace, 

3054 evaluation=self.evaluation, # type: ignore 

3055 in_subscript=True, 

3056 auto_import=self._auto_import, 

3057 policy_overrides=self.policy_overrides, 

3058 ), 

3059 ) 

3060 

3061 closing_quote, token_offset, matches = match_dict_keys( 

3062 keys, key_prefix, self.splitter.delims, extra_prefix=tuple_prefix 

3063 ) 

3064 if not matches: 

3065 return [] 

3066 

3067 # get the cursor position of 

3068 # - the text being completed 

3069 # - the start of the key text 

3070 # - the start of the completion 

3071 text_start = len(self.text_until_cursor) - len(text) 

3072 if key_prefix: 

3073 key_start = match.start(3) 

3074 completion_start = key_start + token_offset 

3075 else: 

3076 key_start = completion_start = match.end() 

3077 

3078 # grab the leading prefix, to make sure all completions start with `text` 

3079 if text_start > key_start: 

3080 leading = '' 

3081 else: 

3082 leading = text[text_start:completion_start] 

3083 

3084 # append closing quote and bracket as appropriate 

3085 # this is *not* appropriate if the opening quote or bracket is outside 

3086 # the text given to this method, e.g. `d["""a\nt 

3087 can_close_quote = False 

3088 can_close_bracket = False 

3089 

3090 continuation = self.line_buffer[len(self.text_until_cursor) :].strip() 

3091 

3092 if continuation.startswith(closing_quote): 

3093 # do not close if already closed, e.g. `d['a<tab>'` 

3094 continuation = continuation[len(closing_quote) :] 

3095 else: 

3096 can_close_quote = True 

3097 

3098 continuation = continuation.strip() 

3099 

3100 # e.g. `pandas.DataFrame` has different tuple indexer behaviour, 

3101 # handling it is out of scope, so let's avoid appending suffixes. 

3102 has_known_tuple_handling = isinstance(obj, dict) 

3103 

3104 can_close_bracket = ( 

3105 not continuation.startswith("]") and self.auto_close_dict_keys 

3106 ) 

3107 can_close_tuple_item = ( 

3108 not continuation.startswith(",") 

3109 and has_known_tuple_handling 

3110 and self.auto_close_dict_keys 

3111 ) 

3112 can_close_quote = can_close_quote and self.auto_close_dict_keys 

3113 

3114 # fast path if the closing quote should be appended but no suffix is allowed 

3115 if not can_close_quote and not can_close_bracket and closing_quote: 

3116 return [leading + k for k in matches] 

3117 

3118 results = [] 

3119 

3120 end_of_tuple_or_item = _DictKeyState.END_OF_TUPLE | _DictKeyState.END_OF_ITEM 

3121 

3122 for k, state_flag in matches.items(): 

3123 result = leading + k 

3124 if can_close_quote and closing_quote: 

3125 result += closing_quote 

3126 

3127 if state_flag == end_of_tuple_or_item: 

3128 # We do not know which suffix to add, 

3129 # e.g. both tuple item and string 

3130 # match this item. 

3131 pass 

3132 

3133 if state_flag in end_of_tuple_or_item and can_close_bracket: 

3134 result += "]" 

3135 if state_flag == _DictKeyState.IN_TUPLE and can_close_tuple_item: 

3136 result += ", " 

3137 results.append(result) 

3138 return results 

3139 

3140 @context_matcher() 

3141 def unicode_name_matcher(self, context: CompletionContext) -> SimpleMatcherResult: 

3142 """Match Latex-like syntax for unicode characters based 

3143 on the name of the character. 

3144 

3145 This does ``\\GREEK SMALL LETTER ETA`` -> ``η`` 

3146 

3147 Works only on valid Python 3 identifiers, or on combining characters that 

3148 will combine to form a valid identifier. 

3149 """ 

3150 

3151 text = context.text_until_cursor 

3152 

3153 slashpos = text.rfind('\\') 

3154 if slashpos > -1: 

3155 s = text[slashpos+1:] 

3156 try : 

3157 unic = unicodedata.lookup(s) 

3158 # allow combining chars 

3159 if ('a'+unic).isidentifier(): 

3160 return { 

3161 "completions": [SimpleCompletion(text=unic, type="unicode")], 

3162 "suppress": True, 

3163 "matched_fragment": "\\" + s, 

3164 } 

3165 except KeyError: 

3166 pass 

3167 return { 

3168 "completions": [], 

3169 "suppress": False, 

3170 } 

3171 

3172 @context_matcher() 

3173 def latex_name_matcher(self, context: CompletionContext): 

3174 """Match Latex syntax for unicode characters. 

3175 

3176 This does both ``\\alp`` -> ``\\alpha`` and ``\\alpha`` -> ``α`` 

3177 """ 

3178 fragment, matches = self.latex_matches(context.text_until_cursor) 

3179 return _convert_matcher_v1_result_to_v2( 

3180 matches, type="latex", fragment=fragment, suppress_if_matches=True 

3181 ) 

3182 

3183 def latex_matches(self, text: str) -> tuple[str, Sequence[str]]: 

3184 """Match Latex syntax for unicode characters. 

3185 

3186 This does both ``\\alp`` -> ``\\alpha`` and ``\\alpha`` -> ``α`` 

3187 

3188 .. deprecated:: 8.6 

3189 You can use :meth:`latex_name_matcher` instead. 

3190 """ 

3191 slashpos = text.rfind('\\') 

3192 if slashpos > -1: 

3193 s = text[slashpos:] 

3194 if s in latex_symbols: 

3195 # Try to complete a full latex symbol to unicode 

3196 # \\alpha -> α 

3197 return s, [latex_symbols[s]] 

3198 else: 

3199 # If a user has partially typed a latex symbol, give them 

3200 # a full list of options \al -> [\aleph, \alpha] 

3201 matches = [k for k in latex_symbols if k.startswith(s)] 

3202 if matches: 

3203 return s, matches 

3204 return '', () 
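# Illustrative sketch (not part of the original source): both expansion directions
# described in the docstring,
#   completer.latex_matches("\\alpha")  # -> ("\\alpha", ["α"])
#   completer.latex_matches("\\al")     # -> ("\\al", [..., "\\aleph", "\\alpha", ...])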

3205 

3206 @context_matcher() 

3207 def custom_completer_matcher(self, context): 

3208 """Dispatch custom completer. 

3209 

3210 If a match is found, suppresses all other matchers except for Jedi. 

3211 """ 

3212 matches = self.dispatch_custom_completer(context.token) or [] 

3213 result = _convert_matcher_v1_result_to_v2( 

3214 matches, type=_UNKNOWN_TYPE, suppress_if_matches=True 

3215 ) 

3216 result["ordered"] = True 

3217 result["do_not_suppress"] = {_get_matcher_id(self._jedi_matcher)} 

3218 return result 

3219 

3220 def dispatch_custom_completer(self, text): 

3221 """ 

3222 .. deprecated:: 8.6 

3223 You can use :meth:`custom_completer_matcher` instead. 

3224 """ 

3225 if not self.custom_completers: 

3226 return 

3227 

3228 line = self.line_buffer 

3229 if not line.strip(): 

3230 return None 

3231 

3232 # Create a little structure to pass all the relevant information about 

3233 # the current completion to any custom completer. 

3234 event = SimpleNamespace() 

3235 event.line = line 

3236 event.symbol = text 

3237 cmd = line.split(None,1)[0] 

3238 event.command = cmd 

3239 event.text_until_cursor = self.text_until_cursor 

3240 

3241 # for foo etc, try also to find completer for %foo 

3242 if not cmd.startswith(self.magic_escape): 

3243 try_magic = self.custom_completers.s_matches( 

3244 self.magic_escape + cmd) 

3245 else: 

3246 try_magic = [] 

3247 

3248 for c in itertools.chain(self.custom_completers.s_matches(cmd), 

3249 try_magic, 

3250 self.custom_completers.flat_matches(self.text_until_cursor)): 

3251 try: 

3252 res = c(event) 

3253 if res: 

3254 # first, try case sensitive match 

3255 withcase = [r for r in res if r.startswith(text)] 

3256 if withcase: 

3257 return withcase 

3258 # if none, then case insensitive ones are ok too 

3259 text_low = text.lower() 

3260 return [r for r in res if r.lower().startswith(text_low)] 

3261 except TryNext: 

3262 pass 

3263 except KeyboardInterrupt: 

3264 """ 

3265 If a custom completer takes too long, 

3266 let the keyboard interrupt abort it and return nothing. 

3267 """ 

3268 break 

3269 

3270 return None 

3271 

3272 def completions(self, text: str, offset: int)->Iterator[Completion]: 

3273 """ 

3274 Returns an iterator over the possible completions 

3275 

3276 .. warning:: 

3277 

3278 Unstable 

3279 

3280 This function is unstable, API may change without warning. 

3281 It will also raise unless used in a proper context manager. 

3282 

3283 Parameters 

3284 ---------- 

3285 text : str 

3286 Full text of the current input, multi line string. 

3287 offset : int 

3288 Integer representing the position of the cursor in ``text``. Offset 

3289 is 0-based indexed. 

3290 

3291 Yields 

3292 ------ 

3293 Completion 

3294 

3295 Notes 

3296 ----- 

3297 The cursor on a text can either be seen as being "in between" 

3298 characters or "On" a character depending on the interface visible to 

3299 the user. For consistency, the cursor being "in between" characters X 

3300 and Y is equivalent to the cursor being "on" character Y, that is to say 

3301 the character the cursor is on is considered as being after the cursor. 

3302 

3303 Combining characters may span more than one position in the 

3304 text. 

3305 

3306 .. note:: 

3307 

3308 If ``IPCompleter.debug`` is :py:data:`True` will yield a ``--jedi/ipython--`` 

3309 fake Completion token to distinguish completion returned by Jedi 

3310 and usual IPython completion. 

3311 

3312 .. note:: 

3313 

3314 Completions are not completely deduplicated yet. If identical 

3315 completions are coming from different sources this function does not 

3316 ensure that each completion object will only be present once. 

3317 """ 

3318 warnings.warn("completions() is a provisional API (as of IPython 6.0). " 

3319 "It may change without warnings. " 

3320 "Use it in the corresponding context manager.", 

3321 category=ProvisionalCompleterWarning, stacklevel=2) 

3322 

3323 seen = set() 

3324 profiler: Optional[cProfile.Profile] 

3325 try: 

3326 if self.profile_completions: 

3327 import cProfile 

3328 profiler = cProfile.Profile() 

3329 profiler.enable() 

3330 else: 

3331 profiler = None 

3332 

3333 for c in self._completions(text, offset, _timeout=self.jedi_compute_type_timeout/1000): 

3334 if c and (c in seen): 

3335 continue 

3336 yield c 

3337 seen.add(c) 

3338 except KeyboardInterrupt: 

3339 """if completions take too long and users send keyboard interrupt, 

3340 do not crash and return ASAP. """ 

3341 pass 

3342 finally: 

3343 if profiler is not None: 

3344 profiler.disable() 

3345 ensure_dir_exists(self.profiler_output_dir) 

3346 output_path = os.path.join(self.profiler_output_dir, str(uuid.uuid4())) 

3347 print("Writing profiler output to", output_path) 

3348 profiler.dump_stats(output_path) 

3349 
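    # Illustrative sketch (not part of the upstream module): driving the
    # provisional ``completions`` API from a running IPython session. The
    # sample ``code`` string is hypothetical; ``provisionalcompleter`` is the
    # context manager required by the warning above.
    #
    #     ip = get_ipython()
    #     code = "import os\nos.pa"
    #     with provisionalcompleter():
    #         for comp in ip.Completer.completions(code, len(code)):
    #             # ``comp.start``/``comp.end`` are 0-based offsets into ``code``
    #             # delimiting the fragment that ``comp.text`` would replace.
    #             print(comp.text, comp.type, comp.start, comp.end)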

3350 def _completions(self, full_text: str, offset: int, *, _timeout) -> Iterator[Completion]: 

3351 """ 

3352 Core completion method. Same signature as :any:`completions`, with the 

3353 extra ``_timeout`` parameter (in seconds). 

3354 

3355 Computing Jedi's completion ``.type`` can be quite expensive (it is a 

3356 lazy property) and can require some warm-up, more warm-up than just 

3357 computing the ``name`` of a completion. The warm-up can be: 

3358 

3359 - Long warm-up the first time a module is encountered after 

3360 install/update: actually building the parse/inference tree. 

3361 

3362 - First time the module is encountered in a session: loading the tree 

3363 from disk. 

3364 

3365 We don't want to block completions for tens of seconds, so we give the 

3366 completer a "budget" of ``_timeout`` seconds per invocation to compute 

3367 completion types; the completions whose type has not yet been computed 

3368 will be marked as "unknown" and will have a chance to be computed on the 

3369 next round as things get cached. 

3370 

3371 Keep in mind that Jedi is not the only thing processing the completions, 

3372 so keep the timeout short-ish: if we take more than 0.3 seconds we still 

3373 have lots of processing to do. 

3374 

3375 """ 

3376 deadline = time.monotonic() + _timeout 

3377 

3378 before = full_text[:offset] 

3379 cursor_line, cursor_column = position_to_cursor(full_text, offset) 

3380 

3381 jedi_matcher_id = _get_matcher_id(self._jedi_matcher) 

3382 

3383 def is_non_jedi_result( 

3384 result: MatcherResult, identifier: str 

3385 ) -> TypeGuard[SimpleMatcherResult]: 

3386 return identifier != jedi_matcher_id 

3387 

3388 results = self._complete( 

3389 full_text=full_text, cursor_line=cursor_line, cursor_pos=cursor_column 

3390 ) 

3391 

3392 non_jedi_results: dict[str, SimpleMatcherResult] = { 

3393 identifier: result 

3394 for identifier, result in results.items() 

3395 if is_non_jedi_result(result, identifier) 

3396 } 

3397 

3398 jedi_matches = ( 

3399 cast(_JediMatcherResult, results[jedi_matcher_id])["completions"] 

3400 if jedi_matcher_id in results 

3401 else () 

3402 ) 

3403 

3404 iter_jm = iter(jedi_matches) 

3405 if _timeout: 

3406 for jm in iter_jm: 

3407 try: 

3408 type_ = jm.type 

3409 except Exception: 

3410 if self.debug: 

3411 print("Error in Jedi getting type of ", jm) 

3412 type_ = None 

3413 delta = len(jm.name_with_symbols) - len(jm.complete) 

3414 if type_ == 'function': 

3415 signature = _make_signature(jm) 

3416 else: 

3417 signature = '' 

3418 yield Completion(start=offset - delta, 

3419 end=offset, 

3420 text=jm.name_with_symbols, 

3421 type=type_, 

3422 signature=signature, 

3423 _origin='jedi') 

3424 

3425 if time.monotonic() > deadline: 

3426 break 

3427 

3428 for jm in iter_jm: 

3429 delta = len(jm.name_with_symbols) - len(jm.complete) 

3430 yield Completion( 

3431 start=offset - delta, 

3432 end=offset, 

3433 text=jm.name_with_symbols, 

3434 type=_UNKNOWN_TYPE, # don't compute type for speed 

3435 _origin="jedi", 

3436 signature="", 

3437 ) 

3438 

3439 # TODO: 

3440 # Suppress this, right now just for debug. 

3441 if jedi_matches and non_jedi_results and self.debug: 

3442 some_start_offset = before.rfind( 

3443 next(iter(non_jedi_results.values()))["matched_fragment"] 

3444 ) 

3445 yield Completion( 

3446 start=some_start_offset, 

3447 end=offset, 

3448 text="--jedi/ipython--", 

3449 _origin="debug", 

3450 type="none", 

3451 signature="", 

3452 ) 

3453 

3454 ordered: list[Completion] = [] 

3455 sortable: list[Completion] = [] 

3456 

3457 for origin, result in non_jedi_results.items(): 

3458 matched_text = result["matched_fragment"] 

3459 start_offset = before.rfind(matched_text) 

3460 is_ordered = result.get("ordered", False) 

3461 container = ordered if is_ordered else sortable 

3462 

3463 # I'm unsure if this is always true, so let's assert and see if it 

3464 # crashes 

3465 assert before.endswith(matched_text) 

3466 

3467 for simple_completion in result["completions"]: 

3468 completion = Completion( 

3469 start=start_offset, 

3470 end=offset, 

3471 text=simple_completion.text, 

3472 _origin=origin, 

3473 signature="", 

3474 type=simple_completion.type or _UNKNOWN_TYPE, 

3475 ) 

3476 container.append(completion) 

3477 

3478 yield from list(self._deduplicate(ordered + self._sort(sortable)))[ 

3479 :MATCHES_LIMIT 

3480 ] 

3481 
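    # Illustrative sketch (not part of the upstream module): the time-budget
    # pattern used by ``_completions`` above, in isolation. All names here are
    # hypothetical.
    #
    #     import time
    #
    #     def annotate_with_budget(items, expensive, budget_s):
    #         deadline = time.monotonic() + budget_s
    #         it = iter(items)
    #         for item in it:
    #             yield item, expensive(item)   # pay for the costly annotation
    #             if time.monotonic() > deadline:
    #                 break                     # budget exhausted
    #         for item in it:
    #             yield item, None              # leave the rest as "unknown"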

3482 def complete( 

3483 self, text=None, line_buffer=None, cursor_pos=None 

3484 ) -> tuple[str, Sequence[str]]: 

3485 """Find completions for the given text and line context. 

3486 

3487 Note that both the text and the line_buffer are optional, but at least 

3488 one of them must be given. 

3489 

3490 Parameters 

3491 ---------- 

3492 text : string, optional 

3493 Text to perform the completion on. If not given, the line buffer 

3494 is split using the instance's CompletionSplitter object. 

3495 line_buffer : string, optional 

3496 If not given, the completer attempts to obtain the current line 

3497 buffer via readline. This keyword allows clients that are 

3498 requesting text completions in non-readline contexts to inform 

3499 the completer of the entire text. 

3500 cursor_pos : int, optional 

3501 Index of the cursor in the full line buffer. Should be provided by 

3502 remote frontends where the kernel has no access to frontend state. 

3503 

3504 Returns 

3505 ------- 

3506 Tuple of two items: 

3507 text : str 

3508 Text that was actually used in the completion. 

3509 matches : list 

3510 A list of completion matches. 

3511 

3512 Notes 

3513 ----- 

3514 This API is likely to be deprecated and replaced by 

3515 :any:`IPCompleter.completions` in the future. 

3516 

3517 """ 

3518 warnings.warn('`Completer.complete` is pending deprecation since ' 

3519 'IPython 6.0 and will be replaced by `Completer.completions`.', 

3520 PendingDeprecationWarning) 

3521 # Potential TODO: fold the 3rd throw-away argument of _complete 

3522 # into the first two. 

3523 # TODO: Q: does the above refer to jedi completions (i.e. 0-indexed?) 

3524 # TODO: should we deprecate now, or does it stay? 

3525 

3526 results = self._complete( 

3527 line_buffer=line_buffer, cursor_pos=cursor_pos, text=text, cursor_line=0 

3528 ) 

3529 

3530 jedi_matcher_id = _get_matcher_id(self._jedi_matcher) 

3531 

3532 return self._arrange_and_extract( 

3533 results, 

3534 # TODO: can we confirm that excluding Jedi here was a deliberate choice in previous version? 

3535 skip_matchers={jedi_matcher_id}, 

3536 # this API does not support different start/end positions (fragments of token). 

3537 abort_if_offset_changes=True, 

3538 ) 

3539 
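    # Illustrative sketch (not part of the upstream module): the legacy,
    # pending-deprecation ``complete`` API returns the matched text and a flat
    # list of match strings. The example input is hypothetical.
    #
    #     ip = get_ipython()
    #     matched_text, matches = ip.Completer.complete(line_buffer="pri", cursor_pos=3)
    #     # matched_text == "pri"; matches would typically include "print"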

3540 def _arrange_and_extract( 

3541 self, 

3542 results: dict[str, MatcherResult], 

3543 skip_matchers: set[str], 

3544 abort_if_offset_changes: bool, 

3545 ): 

3546 sortable: list[AnyMatcherCompletion] = [] 

3547 ordered: list[AnyMatcherCompletion] = [] 

3548 most_recent_fragment = None 

3549 for identifier, result in results.items(): 

3550 if identifier in skip_matchers: 

3551 continue 

3552 if not result["completions"]: 

3553 continue 

3554 if not most_recent_fragment: 

3555 most_recent_fragment = result["matched_fragment"] 

3556 if ( 

3557 abort_if_offset_changes 

3558 and result["matched_fragment"] != most_recent_fragment 

3559 ): 

3560 break 

3561 if result.get("ordered", False): 

3562 ordered.extend(result["completions"]) 

3563 else: 

3564 sortable.extend(result["completions"]) 

3565 

3566 if not most_recent_fragment: 

3567 most_recent_fragment = "" # to satisfy typechecker (and just in case) 

3568 

3569 return most_recent_fragment, [ 

3570 m.text for m in self._deduplicate(ordered + self._sort(sortable)) 

3571 ] 

3572 

3573 def _complete(self, *, cursor_line, cursor_pos, line_buffer=None, text=None, 

3574 full_text=None) -> _CompleteResult: 

3575 """ 

3576 Like complete but can also return raw Jedi completions, as well as the 

3577 origin of the completion text. This could (and should) be made much 

3578 cleaner but that will be simpler once we drop the old (and stateful) 

3579 :any:`complete` API. 

3580 

3581 With the current provisional API, cursor_pos acts (depending on the 

3582 caller) either as the offset in the ``text`` or ``line_buffer``, or as 

3583 the ``column`` when passing multiline strings. This could/should be 

3584 renamed, but that would add extra noise. 

3585 

3586 Parameters 

3587 ---------- 

3588 cursor_line 

3589 Index of the line the cursor is on. 0 indexed. 

3590 cursor_pos 

3591 Position of the cursor in the current line/line_buffer/text. 0 

3592 indexed. 

3593 line_buffer : str, optional 

3594 The current line the cursor is in; this exists mostly for the legacy 

3595 reason that readline could only give us the single current line. 

3596 Prefer `full_text`. 

3597 text : str 

3598 The current "token" the cursor is in, mostly also for historical 

3599 reasons, as the completer would trigger only after the current line 

3600 was parsed. 

3601 full_text : str 

3602 Full text of the current cell. 

3603 

3604 Returns 

3605 ------- 

3606 An ordered dictionary where keys are identifiers of completion 

3607 matchers and values are ``MatcherResult``s. 

3608 """ 

3609 

3610 # if the cursor position isn't given, the only sane assumption we can 

3611 # make is that it's at the end of the line (the common case) 

3612 if cursor_pos is None: 

3613 cursor_pos = len(line_buffer) if text is None else len(text) 

3614 

3615 if self.use_main_ns: 

3616 self.namespace = __main__.__dict__ 

3617 

3618 # if text is either None or an empty string, rely on the line buffer 

3619 if (not line_buffer) and full_text: 

3620 line_buffer = full_text.split('\n')[cursor_line] 

3621 if not text: # issue #11508: check line_buffer before calling split_line 

3622 text = ( 

3623 self.splitter.split_line(line_buffer, cursor_pos) if line_buffer else "" 

3624 ) 

3625 

3626 # If no line buffer is given, assume the input text is all there was 

3627 if line_buffer is None: 

3628 line_buffer = text 

3629 

3630 # deprecated - do not use `line_buffer` in new code. 

3631 self.line_buffer = line_buffer 

3632 self.text_until_cursor = self.line_buffer[:cursor_pos] 

3633 

3634 if not full_text: 

3635 full_text = line_buffer 

3636 

3637 context = CompletionContext( 

3638 full_text=full_text, 

3639 cursor_position=cursor_pos, 

3640 cursor_line=cursor_line, 

3641 token=self._extract_code(text), 

3642 limit=MATCHES_LIMIT, 

3643 ) 

3644 

3645 # Start with a clean slate of completions 

3646 results: dict[str, MatcherResult] = {} 

3647 

3648 jedi_matcher_id = _get_matcher_id(self._jedi_matcher) 

3649 

3650 suppressed_matchers: set[str] = set() 

3651 

3652 matchers = { 

3653 _get_matcher_id(matcher): matcher 

3654 for matcher in sorted( 

3655 self.matchers, key=_get_matcher_priority, reverse=True 

3656 ) 

3657 } 

3658 

3659 for matcher_id, matcher in matchers.items(): 

3660 matcher_id = _get_matcher_id(matcher) 

3661 

3662 if matcher_id in self.disable_matchers: 

3663 continue 

3664 

3665 if matcher_id in results: 

3666 warnings.warn(f"Duplicate matcher ID: {matcher_id}.") 

3667 

3668 if matcher_id in suppressed_matchers: 

3669 continue 

3670 

3671 result: MatcherResult 

3672 try: 

3673 if _is_matcher_v1(matcher): 

3674 result = _convert_matcher_v1_result_to_v2( 

3675 matcher(text), type=_UNKNOWN_TYPE 

3676 ) 

3677 elif _is_matcher_v2(matcher): 

3678 result = matcher(context) 

3679 else: 

3680 api_version = _get_matcher_api_version(matcher) 

3681 raise ValueError(f"Unsupported API version {api_version}") 

3682 except BaseException: 

3683 # Show the ugly traceback if the matcher causes an 

3684 # exception, but do NOT crash the kernel! 

3685 sys.excepthook(*sys.exc_info()) 

3686 continue 

3687 

3688 # set default value for matched fragment if suffix was not selected. 

3689 result["matched_fragment"] = result.get("matched_fragment", context.token) 

3690 

3691 if not suppressed_matchers: 

3692 suppression_recommended: Union[bool, set[str]] = result.get( 

3693 "suppress", False 

3694 ) 

3695 

3696 suppression_config = ( 

3697 self.suppress_competing_matchers.get(matcher_id, None) 

3698 if isinstance(self.suppress_competing_matchers, dict) 

3699 else self.suppress_competing_matchers 

3700 ) 

3701 should_suppress = ( 

3702 (suppression_config is True) 

3703 or (suppression_recommended and (suppression_config is not False)) 

3704 ) and has_any_completions(result) 

3705 

3706 if should_suppress: 

3707 suppression_exceptions: set[str] = result.get( 

3708 "do_not_suppress", set() 

3709 ) 

3710 if isinstance(suppression_recommended, Iterable): 

3711 to_suppress = set(suppression_recommended) 

3712 else: 

3713 to_suppress = set(matchers) 

3714 suppressed_matchers = to_suppress - suppression_exceptions 

3715 

3716 new_results = {} 

3717 for previous_matcher_id, previous_result in results.items(): 

3718 if previous_matcher_id not in suppressed_matchers: 

3719 new_results[previous_matcher_id] = previous_result 

3720 results = new_results 

3721 

3722 results[matcher_id] = result 

3723 

3724 _, matches = self._arrange_and_extract( 

3725 results, 

3726 # TODO: Jedi completions not included in legacy stateful API; was this deliberate or an omission? 

3727 # if it was omission, we can remove the filtering step, otherwise remove this comment. 

3728 skip_matchers={jedi_matcher_id}, 

3729 abort_if_offset_changes=False, 

3730 ) 

3731 

3732 # populate legacy stateful API 

3733 self.matches = matches 

3734 

3735 return results 

3736 
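    # Illustrative sketch (not part of the upstream module): a minimal v2
    # "context matcher" of the kind dispatched by ``_complete`` above. The
    # matcher name and completion texts are hypothetical; the result keys
    # mirror the fields read by the dispatch loop (``completions``,
    # ``matched_fragment``, ``suppress``), and registration through
    # ``Completer.custom_matchers`` follows the documented custom-completion
    # mechanism.
    #
    #     @context_matcher()
    #     def color_matcher(context: CompletionContext) -> SimpleMatcherResult:
    #         token = context.token
    #         colors = ["red", "green", "blue"]
    #         return {
    #             "completions": [
    #                 SimpleCompletion(text=c, type="param")
    #                 for c in colors
    #                 if c.startswith(token)
    #             ],
    #             # only claim the current token; do not silence other matchers
    #             "matched_fragment": token,
    #             "suppress": False,
    #         }
    #
    #     get_ipython().Completer.custom_matchers.append(color_matcher)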

3737 @staticmethod 

3738 def _deduplicate( 

3739 matches: Sequence[AnyCompletion], 

3740 ) -> Iterable[AnyCompletion]: 

3741 filtered_matches: dict[str, AnyCompletion] = {} 

3742 for match in matches: 

3743 text = match.text 

3744 if ( 

3745 text not in filtered_matches 

3746 or filtered_matches[text].type == _UNKNOWN_TYPE 

3747 ): 

3748 filtered_matches[text] = match 

3749 

3750 return filtered_matches.values() 

3751 

3752 @staticmethod 

3753 def _sort(matches: Sequence[AnyCompletion]): 

3754 return sorted(matches, key=lambda x: completions_sorting_key(x.text)) 

3755 

3756 @context_matcher() 

3757 def fwd_unicode_matcher(self, context: CompletionContext): 

3758 """Same as :any:`fwd_unicode_match`, but adopted to new Matcher API.""" 

3759 # TODO: use `context.limit` to terminate early once we matched the maximum 

3760 # number that will be used downstream; can be added as an optional to 

3761 # `fwd_unicode_match(text: str, limit: int = None)` or we could re-implement here. 

3762 fragment, matches = self.fwd_unicode_match(context.text_until_cursor) 

3763 return _convert_matcher_v1_result_to_v2( 

3764 matches, type="unicode", fragment=fragment, suppress_if_matches=True 

3765 ) 

3766 

3767 def fwd_unicode_match(self, text: str) -> tuple[str, Sequence[str]]: 

3768 """ 

3769 Forward match a string starting with a backslash with a list of 

3770 potential Unicode completions. 

3771 

3772 Will compute list of Unicode character names on first call and cache it. 

3773 

3774 .. deprecated:: 8.6 

3775 You can use :meth:`fwd_unicode_matcher` instead. 

3776 

3777 Returns 

3778 ------- 

3779 A tuple with: 

3780 - matched text (empty if no matches) 

3781 - list of potential completions (an empty tuple if there are none) 

3782 """ 

3783 # TODO: self.unicode_names is here a list we traverse each time with ~100k elements. 

3784 # We could do a faster match using a Trie. 

3785 

3786 # Using pygtrie, the following seems to work: 

3787 

3788 # s = PrefixSet() 

3789 

3790 # for c in range(0,0x10FFFF + 1): 

3791 # try: 

3792 # s.add(unicodedata.name(chr(c))) 

3793 # except ValueError: 

3794 # pass 

3795 # [''.join(k) for k in s.iter(prefix)] 

3796 

3797 # But this needs to be timed and adds an extra dependency. 

3798 

3799 slashpos = text.rfind('\\') 

3800 # if text contains a backslash 

3801 if slashpos > -1: 

3802 # PERF: It's important that we don't access self._unicode_names 

3803 # until we're inside this if-block. _unicode_names is lazily 

3804 # initialized, and it takes a user-noticeable amount of time to 

3805 # initialize it, so we don't want to initialize it unless we're 

3806 # actually going to use it. 

3807 s = text[slashpos + 1 :] 

3808 sup = s.upper() 

3809 candidates = [x for x in self.unicode_names if x.startswith(sup)] 

3810 if candidates: 

3811 return s, candidates 

3812 candidates = [x for x in self.unicode_names if sup in x] 

3813 if candidates: 

3814 return s, candidates 

3815 splitsup = sup.split(" ") 

3816 candidates = [ 

3817 x for x in self.unicode_names if all(u in x for u in splitsup) 

3818 ] 

3819 if candidates: 

3820 return s, candidates 

3821 

3822 return "", () 

3823 

3824 # if text does not contain a backslash 

3825 else: 

3826 return '', () 

3827 
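    # Illustrative sketch (not part of the upstream module): the TODO above
    # suggests replacing the linear scans with a trie. A dependency-free middle
    # ground for the prefix case is a binary search over a pre-sorted copy of
    # the names; the substring fallbacks would still need a scan. Names below
    # are hypothetical.
    #
    #     import bisect
    #
    #     def prefix_candidates(sorted_names, prefix):
    #         # every entry in sorted_names[lo:hi] starts with ``prefix``
    #         lo = bisect.bisect_left(sorted_names, prefix)
    #         hi = bisect.bisect_left(sorted_names, prefix + "\uffff")
    #         return sorted_names[lo:hi]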

3828 @property 

3829 def unicode_names(self) -> list[str]: 

3830 """List of names of unicode code points that can be completed. 

3831 

3832 The list is lazily initialized on first access. 

3833 """ 

3834 if self._unicode_names is None: 

3841 self._unicode_names = _unicode_name_compute(_UNICODE_RANGES) 

3842 

3843 return self._unicode_names 

3844 

3845 

3846def _unicode_name_compute(ranges: list[tuple[int, int]]) -> list[str]: 

3847 names = [] 

3848 for start, stop in ranges: 

3849 for c in range(start, stop): 

3850 try: 

3851 names.append(unicodedata.name(chr(c))) 

3852 except ValueError: 

3853 pass 

3854 return names
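# Illustrative sketch (not part of the upstream module): calling
# ``_unicode_name_compute`` on a single small range, e.g. printable ASCII,
# instead of the precomputed ``_UNICODE_RANGES`` used by ``unicode_names``.
#
#     latin_names = _unicode_name_compute([(0x20, 0x7F)])
#     # "LATIN SMALL LETTER A" will, for instance, be in the resulting list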