1"""Completion for IPython.
2
3This module started as fork of the rlcompleter module in the Python standard
4library. The original enhancements made to rlcompleter have been sent
5upstream and were accepted as of Python 2.3,
6
7This module now support a wide variety of completion mechanism both available
8for normal classic Python code, as well as completer for IPython specific
9Syntax like magics.
10
11Latex and Unicode completion
12============================
13
14IPython and compatible frontends not only can complete your code, but can help
15you to input a wide range of characters. In particular we allow you to insert
16a unicode character using the tab completion mechanism.
17
18Forward latex/unicode completion
19--------------------------------
20
21Forward completion allows you to easily type a unicode character using its latex
22name, or unicode long description. To do so type a backslash follow by the
23relevant name and press tab:
24
25
26Using latex completion:
27
28.. code::
29
30 \\alpha<tab>
31 α
32
33or using unicode completion:
34
35
36.. code::
37
38 \\GREEK SMALL LETTER ALPHA<tab>
39 α
40
41
42Only valid Python identifiers will complete. Combining characters (like arrow or
43dots) are also available, unlike latex they need to be put after the their
44counterpart that is to say, ``F\\\\vec<tab>`` is correct, not ``\\\\vec<tab>F``.
45
46Some browsers are known to display combining characters incorrectly.
47
48Backward latex completion
49-------------------------
50
51It is sometime challenging to know how to type a character, if you are using
52IPython, or any compatible frontend you can prepend backslash to the character
53and press :kbd:`Tab` to expand it to its latex form.
54
55.. code::
56
57 \\α<tab>
58 \\alpha
59
60
61Both forward and backward completions can be deactivated by setting the
62:std:configtrait:`Completer.backslash_combining_completions` option to
63``False``.
64
65
66Experimental
67============
68
69Starting with IPython 6.0, this module can make use of the Jedi library to
70generate completions both using static analysis of the code, and dynamically
71inspecting multiple namespaces. Jedi is an autocompletion and static analysis
72for Python. The APIs attached to this new mechanism is unstable and will
73raise unless use in an :any:`provisionalcompleter` context manager.
74
75You will find that the following are experimental:
76
77 - :any:`provisionalcompleter`
78 - :any:`IPCompleter.completions`
79 - :any:`Completion`
80 - :any:`rectify_completions`
81
82.. note::
83
84 better name for :any:`rectify_completions` ?
85
86We welcome any feedback on these new API, and we also encourage you to try this
87module in debug mode (start IPython with ``--Completer.debug=True``) in order
88to have extra logging information if :any:`jedi` is crashing, or if current
89IPython completer pending deprecations are returning results not yet handled
90by :any:`jedi`
91
92Using Jedi for tab completion allow snippets like the following to work without
93having to execute any code:
94
95 >>> myvar = ['hello', 42]
96 ... myvar[1].bi<tab>
97
98Tab completion will be able to infer that ``myvar[1]`` is a real number without
99executing almost any code unlike the deprecated :any:`IPCompleter.greedy`
100option.
101
102Be sure to update :any:`jedi` to the latest stable version or to try the
103current development version to get better completions.
104
105Matchers
106========
107
108All completions routines are implemented using unified *Matchers* API.
109The matchers API is provisional and subject to change without notice.
110
111The built-in matchers include:
112
113- :any:`IPCompleter.dict_key_matcher`: dictionary key completions,
114- :any:`IPCompleter.magic_matcher`: completions for magics,
115- :any:`IPCompleter.unicode_name_matcher`,
116 :any:`IPCompleter.fwd_unicode_matcher`
117 and :any:`IPCompleter.latex_name_matcher`: see `Forward latex/unicode completion`_,
118- :any:`back_unicode_name_matcher` and :any:`back_latex_name_matcher`: see `Backward latex completion`_,
119- :any:`IPCompleter.file_matcher`: paths to files and directories,
120- :any:`IPCompleter.python_func_kw_matcher` - function keywords,
121- :any:`IPCompleter.python_matches` - globals and attributes (v1 API),
122- ``IPCompleter.jedi_matcher`` - static analysis with Jedi,
123- :any:`IPCompleter.custom_completer_matcher` - pluggable completer with a default
124 implementation in :any:`InteractiveShell` which uses IPython hooks system
125 (`complete_command`) with string dispatch (including regular expressions).
126 Differently to other matchers, ``custom_completer_matcher`` will not suppress
127 Jedi results to match behaviour in earlier IPython versions.
128
129Custom matchers can be added by appending to ``IPCompleter.custom_matchers`` list.
130
131Matcher API
132-----------
133
134Simplifying some details, the ``Matcher`` interface can described as
135
136.. code-block::
137
138 MatcherAPIv1 = Callable[[str], list[str]]
139 MatcherAPIv2 = Callable[[CompletionContext], SimpleMatcherResult]
140
141 Matcher = MatcherAPIv1 | MatcherAPIv2
142
143The ``MatcherAPIv1`` reflects the matcher API as available prior to IPython 8.6.0
144and remains supported as a simplest way for generating completions. This is also
145currently the only API supported by the IPython hooks system `complete_command`.
146
147To distinguish between matcher versions ``matcher_api_version`` attribute is used.
148More precisely, the API allows to omit ``matcher_api_version`` for v1 Matchers,
149and requires a literal ``2`` for v2 Matchers.
150
151Once the API stabilises future versions may relax the requirement for specifying
152``matcher_api_version`` by switching to :any:`functools.singledispatch`, therefore
153please do not rely on the presence of ``matcher_api_version`` for any purposes.
154
155Suppression of competing matchers
156---------------------------------
157
158By default results from all matchers are combined, in the order determined by
159their priority. Matchers can request to suppress results from subsequent
160matchers by setting ``suppress`` to ``True`` in the ``MatcherResult``.
161
162When multiple matchers simultaneously request suppression, the results from of
163the matcher with higher priority will be returned.
164
165Sometimes it is desirable to suppress most but not all other matchers;
166this can be achieved by adding a set of identifiers of matchers which
167should not be suppressed to ``MatcherResult`` under ``do_not_suppress`` key.
168
169The suppression behaviour can is user-configurable via
170:std:configtrait:`IPCompleter.suppress_competing_matchers`.
171"""
172
173
174# Copyright (c) IPython Development Team.
175# Distributed under the terms of the Modified BSD License.
176#
177# Some of this code originated from rlcompleter in the Python standard library
178# Copyright (C) 2001 Python Software Foundation, www.python.org
179
180from __future__ import annotations
181import builtins as builtin_mod
182import enum
183import glob
184import inspect
185import itertools
186import keyword
187import ast
188import os
189import re
190import string
191import sys
192import tokenize
193import time
194import unicodedata
195import uuid
196import warnings
197from ast import literal_eval
198from collections import defaultdict
199from contextlib import contextmanager
200from dataclasses import dataclass
201from functools import cached_property, partial
202from types import SimpleNamespace
203from typing import (
204 Iterable,
205 Iterator,
206 Union,
207 Any,
208 Sequence,
209 Optional,
210 TYPE_CHECKING,
211 Sized,
212 TypeVar,
213 Literal,
214)
215
216from IPython.core.guarded_eval import (
217 guarded_eval,
218 EvaluationContext,
219 _validate_policy_overrides,
220)
221from IPython.core.error import TryNext, UsageError
222from IPython.core.inputtransformer2 import ESC_MAGIC
223from IPython.core.latex_symbols import latex_symbols, reverse_latex_symbol
224from IPython.testing.skipdoctest import skip_doctest
225from IPython.utils import generics
226from IPython.utils.PyColorize import theme_table
227from IPython.utils.decorators import sphinx_options
228from IPython.utils.dir2 import dir2, get_real_method
229from IPython.utils.path import ensure_dir_exists
230from IPython.utils.process import arg_split
231from traitlets import (
232 Bool,
233 Enum,
234 Int,
235 List as ListTrait,
236 Unicode,
237 Dict as DictTrait,
238 DottedObjectName,
239 Union as UnionTrait,
240 observe,
241)
242from traitlets.config.configurable import Configurable
243from traitlets.utils.importstring import import_item
244
245import __main__
246
247from typing import cast
248
249if sys.version_info < (3, 12):
250 from typing_extensions import TypedDict, NotRequired, Protocol, TypeAlias, TypeGuard
251else:
252 from typing import TypedDict, NotRequired, Protocol, TypeAlias, TypeGuard
253
254
255# skip module docstests
256__skip_doctest__ = True
257
258
259try:
260 import jedi
261 jedi.settings.case_insensitive_completion = False
262 import jedi.api.helpers
263 import jedi.api.classes
264 JEDI_INSTALLED = True
265except ImportError:
266 JEDI_INSTALLED = False
267
268
269# -----------------------------------------------------------------------------
270# Globals
271#-----------------------------------------------------------------------------
272
273# ranges where we have most of the valid unicode names. We could be more finer
274# grained but is it worth it for performance While unicode have character in the
275# range 0, 0x110000, we seem to have name for about 10% of those. (131808 as I
276# write this). With below range we cover them all, with a density of ~67%
277# biggest next gap we consider only adds up about 1% density and there are 600
278# gaps that would need hard coding.
279_UNICODE_RANGES = [(32, 0x323B0), (0xE0001, 0xE01F0)]
280
281# Public API
282__all__ = ["Completer", "IPCompleter"]
283
284if sys.platform == 'win32':
285 PROTECTABLES = ' '
286else:
287 PROTECTABLES = ' ()[]{}?=\\|;:\'#*"^&'
288
289# Protect against returning an enormous number of completions which the frontend
290# may have trouble processing.
291MATCHES_LIMIT = 500
292
293# Completion type reported when no type can be inferred.
294_UNKNOWN_TYPE = "<unknown>"
295
296# sentinel value to signal lack of a match
297not_found = object()
298
299class ProvisionalCompleterWarning(FutureWarning):
300 """
301 Exception raise by an experimental feature in this module.
302
303 Wrap code in :any:`provisionalcompleter` context manager if you
304 are certain you want to use an unstable feature.
305 """
306 pass
307
308warnings.filterwarnings('error', category=ProvisionalCompleterWarning)
309
310
311@skip_doctest
312@contextmanager
313def provisionalcompleter(action='ignore'):
314 """
315 This context manager has to be used in any place where unstable completer
316 behavior and API may be called.
317
318 >>> with provisionalcompleter():
319 ... completer.do_experimental_things() # works
320
321 >>> completer.do_experimental_things() # raises.
322
323 .. note::
324
325 Unstable
326
327 By using this context manager you agree that the API in use may change
328 without warning, and that you won't complain if they do so.
329
330 You also understand that, if the API is not to your liking, you should report
331 a bug to explain your use case upstream.
332
333 We'll be happy to get your feedback, feature requests, and improvements on
334 any of the unstable APIs!
335 """
336 with warnings.catch_warnings():
337 warnings.filterwarnings(action, category=ProvisionalCompleterWarning)
338 yield
339
340
341def has_open_quotes(s: str) -> Union[str, bool]:
342 """Return whether a string has open quotes.
343
344 This simply counts whether the number of quote characters of either type in
345 the string is odd.
346
347 Returns
348 -------
349 If there is an open quote, the quote character is returned. Else, return
350 False.
351 """
352 # We check " first, then ', so complex cases with nested quotes will get
353 # the " to take precedence.
354 if s.count('"') % 2:
355 return '"'
356 elif s.count("'") % 2:
357 return "'"
358 else:
359 return False
360
361
362def protect_filename(s: str, protectables: str = PROTECTABLES) -> str:
363 """Escape a string to protect certain characters."""
364 if set(s) & set(protectables):
365 if sys.platform == "win32":
366 return '"' + s + '"'
367 else:
368 return "".join(("\\" + c if c in protectables else c) for c in s)
369 else:
370 return s
371
372
373def expand_user(path: str) -> tuple[str, bool, str]:
374 """Expand ``~``-style usernames in strings.
375
376 This is similar to :func:`os.path.expanduser`, but it computes and returns
377 extra information that will be useful if the input was being used in
378 computing completions, and you wish to return the completions with the
379 original '~' instead of its expanded value.
380
381 Parameters
382 ----------
383 path : str
384 String to be expanded. If no ~ is present, the output is the same as the
385 input.
386
387 Returns
388 -------
389 newpath : str
390 Result of ~ expansion in the input path.
391 tilde_expand : bool
392 Whether any expansion was performed or not.
393 tilde_val : str
394 The value that ~ was replaced with.
395 """
396 # Default values
397 tilde_expand = False
398 tilde_val = ''
399 newpath = path
400
401 if path.startswith('~'):
402 tilde_expand = True
403 rest = len(path)-1
404 newpath = os.path.expanduser(path)
405 if rest:
406 tilde_val = newpath[:-rest]
407 else:
408 tilde_val = newpath
409
410 return newpath, tilde_expand, tilde_val
411
412
413def compress_user(path:str, tilde_expand:bool, tilde_val:str) -> str:
414 """Does the opposite of expand_user, with its outputs.
415 """
416 if tilde_expand:
417 return path.replace(tilde_val, '~')
418 else:
419 return path
420
421
422def completions_sorting_key(word):
423 """key for sorting completions
424
425 This does several things:
426
427 - Demote any completions starting with underscores to the end
428 - Insert any %magic and %%cellmagic completions in the alphabetical order
429 by their name
430 """
431 prio1, prio2 = 0, 0
432
433 if word.startswith('__'):
434 prio1 = 2
435 elif word.startswith('_'):
436 prio1 = 1
437
438 if word.endswith('='):
439 prio1 = -1
440
441 if word.startswith('%%'):
442 # If there's another % in there, this is something else, so leave it alone
443 if "%" not in word[2:]:
444 word = word[2:]
445 prio2 = 2
446 elif word.startswith('%'):
447 if "%" not in word[1:]:
448 word = word[1:]
449 prio2 = 1
450
451 return prio1, word, prio2
452
453
454class _FakeJediCompletion:
455 """
456 This is a workaround to communicate to the UI that Jedi has crashed and to
457 report a bug. Will be used only id :any:`IPCompleter.debug` is set to true.
458
459 Added in IPython 6.0 so should likely be removed for 7.0
460
461 """
462
463 def __init__(self, name):
464
465 self.name = name
466 self.complete = name
467 self.type = 'crashed'
468 self.name_with_symbols = name
469 self.signature = ""
470 self._origin = "fake"
471 self.text = "crashed"
472
473 def __repr__(self):
474 return '<Fake completion object jedi has crashed>'
475
476
477_JediCompletionLike = Union["jedi.api.Completion", _FakeJediCompletion]
478
479
480class Completion:
481 """
482 Completion object used and returned by IPython completers.
483
484 .. warning::
485
486 Unstable
487
488 This function is unstable, API may change without warning.
489 It will also raise unless use in proper context manager.
490
491 This act as a middle ground :any:`Completion` object between the
492 :any:`jedi.api.classes.Completion` object and the Prompt Toolkit completion
493 object. While Jedi need a lot of information about evaluator and how the
494 code should be ran/inspected, PromptToolkit (and other frontend) mostly
495 need user facing information.
496
497 - Which range should be replaced replaced by what.
498 - Some metadata (like completion type), or meta information to displayed to
499 the use user.
500
501 For debugging purpose we can also store the origin of the completion (``jedi``,
502 ``IPython.python_matches``, ``IPython.magics_matches``...).
503 """
504
505 __slots__ = ['start', 'end', 'text', 'type', 'signature', '_origin']
506
507 def __init__(
508 self,
509 start: int,
510 end: int,
511 text: str,
512 *,
513 type: Optional[str] = None,
514 _origin="",
515 signature="",
516 ) -> None:
517 warnings.warn(
518 "``Completion`` is a provisional API (as of IPython 6.0). "
519 "It may change without warnings. "
520 "Use in corresponding context manager.",
521 category=ProvisionalCompleterWarning,
522 stacklevel=2,
523 )
524
525 self.start = start
526 self.end = end
527 self.text = text
528 self.type = type
529 self.signature = signature
530 self._origin = _origin
531
532 def __repr__(self):
533 return '<Completion start=%s end=%s text=%r type=%r, signature=%r,>' % \
534 (self.start, self.end, self.text, self.type or '?', self.signature or '?')
535
536 def __eq__(self, other) -> bool:
537 """
538 Equality and hash do not hash the type (as some completer may not be
539 able to infer the type), but are use to (partially) de-duplicate
540 completion.
541
542 Completely de-duplicating completion is a bit tricker that just
543 comparing as it depends on surrounding text, which Completions are not
544 aware of.
545 """
546 return self.start == other.start and \
547 self.end == other.end and \
548 self.text == other.text
549
550 def __hash__(self):
551 return hash((self.start, self.end, self.text))
552
553
554class SimpleCompletion:
555 """Completion item to be included in the dictionary returned by new-style Matcher (API v2).
556
557 .. warning::
558
559 Provisional
560
561 This class is used to describe the currently supported attributes of
562 simple completion items, and any additional implementation details
563 should not be relied on. Additional attributes may be included in
564 future versions, and meaning of text disambiguated from the current
565 dual meaning of "text to insert" and "text to used as a label".
566 """
567
568 __slots__ = ["text", "type"]
569
570 def __init__(self, text: str, *, type: Optional[str] = None):
571 self.text = text
572 self.type = type
573
574 def __repr__(self):
575 return f"<SimpleCompletion text={self.text!r} type={self.type!r}>"
576
577
578class _MatcherResultBase(TypedDict):
579 """Definition of dictionary to be returned by new-style Matcher (API v2)."""
580
581 #: Suffix of the provided ``CompletionContext.token``, if not given defaults to full token.
582 matched_fragment: NotRequired[str]
583
584 #: Whether to suppress results from all other matchers (True), some
585 #: matchers (set of identifiers) or none (False); default is False.
586 suppress: NotRequired[Union[bool, set[str]]]
587
588 #: Identifiers of matchers which should NOT be suppressed when this matcher
589 #: requests to suppress all other matchers; defaults to an empty set.
590 do_not_suppress: NotRequired[set[str]]
591
592 #: Are completions already ordered and should be left as-is? default is False.
593 ordered: NotRequired[bool]
594
595
596@sphinx_options(show_inherited_members=True, exclude_inherited_from=["dict"])
597class SimpleMatcherResult(_MatcherResultBase, TypedDict):
598 """Result of new-style completion matcher."""
599
600 # note: TypedDict is added again to the inheritance chain
601 # in order to get __orig_bases__ for documentation
602
603 #: List of candidate completions
604 completions: Sequence[SimpleCompletion] | Iterator[SimpleCompletion]
605
606
607class _JediMatcherResult(_MatcherResultBase):
608 """Matching result returned by Jedi (will be processed differently)"""
609
610 #: list of candidate completions
611 completions: Iterator[_JediCompletionLike]
612
613
614AnyMatcherCompletion = Union[_JediCompletionLike, SimpleCompletion]
615AnyCompletion = TypeVar("AnyCompletion", AnyMatcherCompletion, Completion)
616
617
618@dataclass
619class CompletionContext:
620 """Completion context provided as an argument to matchers in the Matcher API v2."""
621
622 # rationale: many legacy matchers relied on completer state (`self.text_until_cursor`)
623 # which was not explicitly visible as an argument of the matcher, making any refactor
624 # prone to errors; by explicitly passing `cursor_position` we can decouple the matchers
625 # from the completer, and make substituting them in sub-classes easier.
626
627 #: Relevant fragment of code directly preceding the cursor.
628 #: The extraction of token is implemented via splitter heuristic
629 #: (following readline behaviour for legacy reasons), which is user configurable
630 #: (by switching the greedy mode).
631 token: str
632
633 #: The full available content of the editor or buffer
634 full_text: str
635
636 #: Cursor position in the line (the same for ``full_text`` and ``text``).
637 cursor_position: int
638
639 #: Cursor line in ``full_text``.
640 cursor_line: int
641
642 #: The maximum number of completions that will be used downstream.
643 #: Matchers can use this information to abort early.
644 #: The built-in Jedi matcher is currently excepted from this limit.
645 # If not given, return all possible completions.
646 limit: Optional[int]
647
648 @cached_property
649 def text_until_cursor(self) -> str:
650 return self.line_with_cursor[: self.cursor_position]
651
652 @cached_property
653 def line_with_cursor(self) -> str:
654 return self.full_text.split("\n")[self.cursor_line]
655
656
657#: Matcher results for API v2.
658MatcherResult = Union[SimpleMatcherResult, _JediMatcherResult]
659
660
661class _MatcherAPIv1Base(Protocol):
662 def __call__(self, text: str) -> list[str]:
663 """Call signature."""
664 ...
665
666 #: Used to construct the default matcher identifier
667 __qualname__: str
668
669
670class _MatcherAPIv1Total(_MatcherAPIv1Base, Protocol):
671 #: API version
672 matcher_api_version: Optional[Literal[1]]
673
674 def __call__(self, text: str) -> list[str]:
675 """Call signature."""
676 ...
677
678
679#: Protocol describing Matcher API v1.
680MatcherAPIv1: TypeAlias = Union[_MatcherAPIv1Base, _MatcherAPIv1Total]
681
682
683class MatcherAPIv2(Protocol):
684 """Protocol describing Matcher API v2."""
685
686 #: API version
687 matcher_api_version: Literal[2] = 2
688
689 def __call__(self, context: CompletionContext) -> MatcherResult:
690 """Call signature."""
691 ...
692
693 #: Used to construct the default matcher identifier
694 __qualname__: str
695
696
697Matcher: TypeAlias = Union[MatcherAPIv1, MatcherAPIv2]
698
699
700def _is_matcher_v1(matcher: Matcher) -> TypeGuard[MatcherAPIv1]:
701 api_version = _get_matcher_api_version(matcher)
702 return api_version == 1
703
704
705def _is_matcher_v2(matcher: Matcher) -> TypeGuard[MatcherAPIv2]:
706 api_version = _get_matcher_api_version(matcher)
707 return api_version == 2
708
709
710def _is_sizable(value: Any) -> TypeGuard[Sized]:
711 """Determines whether objects is sizable"""
712 return hasattr(value, "__len__")
713
714
715def _is_iterator(value: Any) -> TypeGuard[Iterator]:
716 """Determines whether objects is sizable"""
717 return hasattr(value, "__next__")
718
719
720def has_any_completions(result: MatcherResult) -> bool:
721 """Check if any result includes any completions."""
722 completions = result["completions"]
723 if _is_sizable(completions):
724 return len(completions) != 0
725 if _is_iterator(completions):
726 try:
727 old_iterator = completions
728 first = next(old_iterator)
729 result["completions"] = cast(
730 Iterator[SimpleCompletion],
731 itertools.chain([first], old_iterator),
732 )
733 return True
734 except StopIteration:
735 return False
736 raise ValueError(
737 "Completions returned by matcher need to be an Iterator or a Sizable"
738 )
739
740
741def completion_matcher(
742 *,
743 priority: Optional[float] = None,
744 identifier: Optional[str] = None,
745 api_version: int = 1,
746) -> Callable[[Matcher], Matcher]:
747 """Adds attributes describing the matcher.
748
749 Parameters
750 ----------
751 priority : Optional[float]
752 The priority of the matcher, determines the order of execution of matchers.
753 Higher priority means that the matcher will be executed first. Defaults to 0.
754 identifier : Optional[str]
755 identifier of the matcher allowing users to modify the behaviour via traitlets,
756 and also used to for debugging (will be passed as ``origin`` with the completions).
757
758 Defaults to matcher function's ``__qualname__`` (for example,
759 ``IPCompleter.file_matcher`` for the built-in matched defined
760 as a ``file_matcher`` method of the ``IPCompleter`` class).
761 api_version: Optional[int]
762 version of the Matcher API used by this matcher.
763 Currently supported values are 1 and 2.
764 Defaults to 1.
765 """
766
767 def wrapper(func: Matcher):
768 func.matcher_priority = priority or 0 # type: ignore
769 func.matcher_identifier = identifier or func.__qualname__ # type: ignore
770 func.matcher_api_version = api_version # type: ignore
771 if TYPE_CHECKING:
772 if api_version == 1:
773 func = cast(MatcherAPIv1, func)
774 elif api_version == 2:
775 func = cast(MatcherAPIv2, func)
776 return func
777
778 return wrapper
779
780
781def _get_matcher_priority(matcher: Matcher):
782 return getattr(matcher, "matcher_priority", 0)
783
784
785def _get_matcher_id(matcher: Matcher):
786 return getattr(matcher, "matcher_identifier", matcher.__qualname__)
787
788
789def _get_matcher_api_version(matcher):
790 return getattr(matcher, "matcher_api_version", 1)
791
792
793context_matcher = partial(completion_matcher, api_version=2)
794
795
796_IC = Iterable[Completion]
797
798
799def _deduplicate_completions(text: str, completions: _IC)-> _IC:
800 """
801 Deduplicate a set of completions.
802
803 .. warning::
804
805 Unstable
806
807 This function is unstable, API may change without warning.
808
809 Parameters
810 ----------
811 text : str
812 text that should be completed.
813 completions : Iterator[Completion]
814 iterator over the completions to deduplicate
815
816 Yields
817 ------
818 `Completions` objects
819 Completions coming from multiple sources, may be different but end up having
820 the same effect when applied to ``text``. If this is the case, this will
821 consider completions as equal and only emit the first encountered.
822 Not folded in `completions()` yet for debugging purpose, and to detect when
823 the IPython completer does return things that Jedi does not, but should be
824 at some point.
825 """
826 completions = list(completions)
827 if not completions:
828 return
829
830 new_start = min(c.start for c in completions)
831 new_end = max(c.end for c in completions)
832
833 seen = set()
834 for c in completions:
835 new_text = text[new_start:c.start] + c.text + text[c.end:new_end]
836 if new_text not in seen:
837 yield c
838 seen.add(new_text)
839
840
841def rectify_completions(text: str, completions: _IC, *, _debug: bool = False) -> _IC:
842 """
843 Rectify a set of completions to all have the same ``start`` and ``end``
844
845 .. warning::
846
847 Unstable
848
849 This function is unstable, API may change without warning.
850 It will also raise unless use in proper context manager.
851
852 Parameters
853 ----------
854 text : str
855 text that should be completed.
856 completions : Iterator[Completion]
857 iterator over the completions to rectify
858 _debug : bool
859 Log failed completion
860
861 Notes
862 -----
863 :any:`jedi.api.classes.Completion` s returned by Jedi may not have the same start and end, though
864 the Jupyter Protocol requires them to behave like so. This will readjust
865 the completion to have the same ``start`` and ``end`` by padding both
866 extremities with surrounding text.
867
868 During stabilisation should support a ``_debug`` option to log which
869 completion are return by the IPython completer and not found in Jedi in
870 order to make upstream bug report.
871 """
872 warnings.warn("`rectify_completions` is a provisional API (as of IPython 6.0). "
873 "It may change without warnings. "
874 "Use in corresponding context manager.",
875 category=ProvisionalCompleterWarning, stacklevel=2)
876
877 completions = list(completions)
878 if not completions:
879 return
880 starts = (c.start for c in completions)
881 ends = (c.end for c in completions)
882
883 new_start = min(starts)
884 new_end = max(ends)
885
886 seen_jedi = set()
887 seen_python_matches = set()
888 for c in completions:
889 new_text = text[new_start:c.start] + c.text + text[c.end:new_end]
890 if c._origin == 'jedi':
891 seen_jedi.add(new_text)
892 elif c._origin == "IPCompleter.python_matcher":
893 seen_python_matches.add(new_text)
894 yield Completion(new_start, new_end, new_text, type=c.type, _origin=c._origin, signature=c.signature)
895 diff = seen_python_matches.difference(seen_jedi)
896 if diff and _debug:
897 print('IPython.python matches have extras:', diff)
898
899
900if sys.platform == 'win32':
901 DELIMS = ' \t\n`!@#$^&*()=+[{]}|;\'",<>?'
902else:
903 DELIMS = ' \t\n`!@#$^&*()=+[{]}\\|;:\'",<>?'
904
905GREEDY_DELIMS = ' =\r\n'
906
907
908class CompletionSplitter:
909 """An object to split an input line in a manner similar to readline.
910
911 By having our own implementation, we can expose readline-like completion in
912 a uniform manner to all frontends. This object only needs to be given the
913 line of text to be split and the cursor position on said line, and it
914 returns the 'word' to be completed on at the cursor after splitting the
915 entire line.
916
917 What characters are used as splitting delimiters can be controlled by
918 setting the ``delims`` attribute (this is a property that internally
919 automatically builds the necessary regular expression)"""
920
921 # Private interface
922
923 # A string of delimiter characters. The default value makes sense for
924 # IPython's most typical usage patterns.
925 _delims = DELIMS
926
927 # The expression (a normal string) to be compiled into a regular expression
928 # for actual splitting. We store it as an attribute mostly for ease of
929 # debugging, since this type of code can be so tricky to debug.
930 _delim_expr = None
931
932 # The regular expression that does the actual splitting
933 _delim_re = None
934
935 def __init__(self, delims=None):
936 delims = CompletionSplitter._delims if delims is None else delims
937 self.delims = delims
938
939 @property
940 def delims(self):
941 """Return the string of delimiter characters."""
942 return self._delims
943
944 @delims.setter
945 def delims(self, delims):
946 """Set the delimiters for line splitting."""
947 expr = '[' + ''.join('\\'+ c for c in delims) + ']'
948 self._delim_re = re.compile(expr)
949 self._delims = delims
950 self._delim_expr = expr
951
952 def split_line(self, line, cursor_pos=None):
953 """Split a line of text with a cursor at the given position.
954 """
955 cut_line = line if cursor_pos is None else line[:cursor_pos]
956 return self._delim_re.split(cut_line)[-1]
957
958
959class Completer(Configurable):
960
961 greedy = Bool(
962 False,
963 help="""Activate greedy completion.
964
965 .. deprecated:: 8.8
966 Use :std:configtrait:`Completer.evaluation` and :std:configtrait:`Completer.auto_close_dict_keys` instead.
967
968 When enabled in IPython 8.8 or newer, changes configuration as follows:
969
970 - ``Completer.evaluation = 'unsafe'``
971 - ``Completer.auto_close_dict_keys = True``
972 """,
973 ).tag(config=True)
974
975 evaluation = Enum(
976 ("forbidden", "minimal", "limited", "unsafe", "dangerous"),
977 default_value="limited",
978 help="""Policy for code evaluation under completion.
979
980 Successive options allow to enable more eager evaluation for better
981 completion suggestions, including for nested dictionaries, nested lists,
982 or even results of function calls.
983 Setting ``unsafe`` or higher can lead to evaluation of arbitrary user
984 code on :kbd:`Tab` with potentially unwanted or dangerous side effects.
985
986 Allowed values are:
987
988 - ``forbidden``: no evaluation of code is permitted,
989 - ``minimal``: evaluation of literals and access to built-in namespace;
990 no item/attribute evaluation, no access to locals/globals,
991 no evaluation of any operations or comparisons.
992 - ``limited``: access to all namespaces, evaluation of hard-coded methods
993 (for example: :any:`dict.keys`, :any:`object.__getattr__`,
994 :any:`object.__getitem__`) on allow-listed objects (for example:
995 :any:`dict`, :any:`list`, :any:`tuple`, ``pandas.Series``),
996 - ``unsafe``: evaluation of all methods and function calls but not of
997 syntax with side-effects like `del x`,
998 - ``dangerous``: completely arbitrary evaluation; does not support auto-import.
999
1000 To override specific elements of the policy, you can use ``policy_overrides`` trait.
1001 """,
1002 ).tag(config=True)
1003
1004 use_jedi = Bool(default_value=JEDI_INSTALLED,
1005 help="Experimental: Use Jedi to generate autocompletions. "
1006 "Default to True if jedi is installed.").tag(config=True)
1007
1008 jedi_compute_type_timeout = Int(default_value=400,
1009 help="""Experimental: restrict time (in milliseconds) during which Jedi can compute types.
1010 Set to 0 to stop computing types. Non-zero value lower than 100ms may hurt
1011 performance by preventing jedi to build its cache.
1012 """).tag(config=True)
1013
1014 debug = Bool(default_value=False,
1015 help='Enable debug for the Completer. Mostly print extra '
1016 'information for experimental jedi integration.')\
1017 .tag(config=True)
1018
1019 backslash_combining_completions = Bool(True,
1020 help="Enable unicode completions, e.g. \\alpha<tab> . "
1021 "Includes completion of latex commands, unicode names, and expanding "
1022 "unicode characters back to latex commands.").tag(config=True)
1023
1024 auto_close_dict_keys = Bool(
1025 False,
1026 help="""
1027 Enable auto-closing dictionary keys.
1028
1029 When enabled string keys will be suffixed with a final quote
1030 (matching the opening quote), tuple keys will also receive a
1031 separating comma if needed, and keys which are final will
1032 receive a closing bracket (``]``).
1033 """,
1034 ).tag(config=True)
1035
1036 policy_overrides = DictTrait(
1037 default_value={},
1038 key_trait=Unicode(),
1039 help="""Overrides for policy evaluation.
1040
1041 For example, to enable auto-import on completion specify:
1042
1043 .. code-block::
1044
1045 ipython --Completer.policy_overrides='{"allow_auto_import": True}' --Completer.use_jedi=False
1046
1047 """,
1048 ).tag(config=True)
1049
1050 @observe("evaluation")
1051 def _evaluation_changed(self, _change):
1052 _validate_policy_overrides(
1053 policy_name=self.evaluation, policy_overrides=self.policy_overrides
1054 )
1055
1056 @observe("policy_overrides")
1057 def _policy_overrides_changed(self, _change):
1058 _validate_policy_overrides(
1059 policy_name=self.evaluation, policy_overrides=self.policy_overrides
1060 )
1061
1062 auto_import_method = DottedObjectName(
1063 default_value="importlib.import_module",
1064 allow_none=True,
1065 help="""\
1066 Provisional:
1067 This is a provisional API in IPython 9.3, it may change without warnings.
1068
1069 A fully qualified path to an auto-import method for use by completer.
1070 The function should take a single string and return `ModuleType` and
1071 can raise `ImportError` exception if module is not found.
1072
1073 The default auto-import implementation does not populate the user namespace with the imported module.
1074 """,
1075 ).tag(config=True)
1076
1077 def __init__(self, namespace=None, global_namespace=None, **kwargs):
1078 """Create a new completer for the command line.
1079
1080 Completer(namespace=ns, global_namespace=ns2) -> completer instance.
1081
1082 If unspecified, the default namespace where completions are performed
1083 is __main__ (technically, __main__.__dict__). Namespaces should be
1084 given as dictionaries.
1085
1086 An optional second namespace can be given. This allows the completer
1087 to handle cases where both the local and global scopes need to be
1088 distinguished.
1089 """
1090
1091 # Don't bind to namespace quite yet, but flag whether the user wants a
1092 # specific namespace or to use __main__.__dict__. This will allow us
1093 # to bind to __main__.__dict__ at completion time, not now.
1094 if namespace is None:
1095 self.use_main_ns = True
1096 else:
1097 self.use_main_ns = False
1098 self.namespace = namespace
1099
1100 # The global namespace, if given, can be bound directly
1101 if global_namespace is None:
1102 self.global_namespace = {}
1103 else:
1104 self.global_namespace = global_namespace
1105
1106 self.custom_matchers = []
1107
1108 super(Completer, self).__init__(**kwargs)
1109
1110 def complete(self, text, state):
1111 """Return the next possible completion for 'text'.
1112
1113 This is called successively with state == 0, 1, 2, ... until it
1114 returns None. The completion should begin with 'text'.
1115
1116 """
1117 if self.use_main_ns:
1118 self.namespace = __main__.__dict__
1119
1120 if state == 0:
1121 if "." in text:
1122 self.matches = self.attr_matches(text)
1123 else:
1124 self.matches = self.global_matches(text)
1125 try:
1126 return self.matches[state]
1127 except IndexError:
1128 return None
1129
1130 def global_matches(self, text: str, context: Optional[CompletionContext] = None):
1131 """Compute matches when text is a simple name.
1132
1133 Return a list of all keywords, built-in functions and names currently
1134 defined in self.namespace or self.global_namespace that match.
1135
1136 """
1137 matches = []
1138 match_append = matches.append
1139 n = len(text)
1140
1141 search_lists = [
1142 keyword.kwlist,
1143 builtin_mod.__dict__.keys(),
1144 list(self.namespace.keys()),
1145 list(self.global_namespace.keys()),
1146 ]
1147 if context and context.full_text.count("\n") > 1:
1148 # try to evaluate on full buffer
1149 previous_lines = "\n".join(
1150 context.full_text.split("\n")[: context.cursor_line]
1151 )
1152 if previous_lines:
1153 all_code_lines_before_cursor = (
1154 self._extract_code(previous_lines) + "\n" + text
1155 )
1156 context = EvaluationContext(
1157 globals=self.global_namespace,
1158 locals=self.namespace,
1159 evaluation=self.evaluation,
1160 auto_import=self._auto_import,
1161 policy_overrides=self.policy_overrides,
1162 )
1163 try:
1164 obj = guarded_eval(
1165 all_code_lines_before_cursor,
1166 context,
1167 )
1168 except Exception as e:
1169 if self.debug:
1170 warnings.warn(f"Evaluation exception {e}")
1171
1172 search_lists.append(list(context.transient_locals.keys()))
1173
1174 for lst in search_lists:
1175 for word in lst:
1176 if word[:n] == text and word != "__builtins__":
1177 match_append(word)
1178
1179 snake_case_re = re.compile(r"[^_]+(_[^_]+)+?\Z")
1180 for lst in [list(self.namespace.keys()), list(self.global_namespace.keys())]:
1181 shortened = {
1182 "_".join([sub[0] for sub in word.split("_")]): word
1183 for word in lst
1184 if snake_case_re.match(word)
1185 }
1186 for word in shortened.keys():
1187 if word[:n] == text and word != "__builtins__":
1188 match_append(shortened[word])
1189
1190 return matches
1191
1192 def attr_matches(self, text):
1193 """Compute matches when text contains a dot.
1194
1195 Assuming the text is of the form NAME.NAME....[NAME], and is
1196 evaluatable in self.namespace or self.global_namespace, it will be
1197 evaluated and its attributes (as revealed by dir()) are used as
1198 possible completions. (For class instances, class members are
1199 also considered.)
1200
1201 WARNING: this can still invoke arbitrary C code, if an object
1202 with a __getattr__ hook is evaluated.
1203
1204 """
1205 return self._attr_matches(text)[0]
1206
1207 # we simple attribute matching with normal identifiers.
1208 _ATTR_MATCH_RE = re.compile(r"(.+)\.(\w*)$")
1209
1210 def _strip_code_before_operator(self, code: str) -> str:
1211 o_parens = {"(", "[", "{"}
1212 c_parens = {")", "]", "}"}
1213
1214 # Dry-run tokenize to catch errors
1215 try:
1216 _ = list(tokenize.generate_tokens(iter(code.splitlines()).__next__))
1217 except tokenize.TokenError:
1218 # Try trimming the expression and retrying
1219 trimmed_code = self._trim_expr(code)
1220 try:
1221 _ = list(
1222 tokenize.generate_tokens(iter(trimmed_code.splitlines()).__next__)
1223 )
1224 code = trimmed_code
1225 except tokenize.TokenError:
1226 return code
1227
1228 tokens = _parse_tokens(code)
1229 encountered_operator = False
1230 after_operator = []
1231 nesting_level = 0
1232
1233 for t in tokens:
1234 if t.type == tokenize.OP:
1235 if t.string in o_parens:
1236 nesting_level += 1
1237 elif t.string in c_parens:
1238 nesting_level -= 1
1239 elif t.string != "." and nesting_level == 0:
1240 encountered_operator = True
1241 after_operator = []
1242 continue
1243
1244 if encountered_operator:
1245 after_operator.append(t.string)
1246
1247 if encountered_operator:
1248 return "".join(after_operator)
1249 else:
1250 return code
1251
1252 def _extract_code(self, line: str):
1253 """No-op in Completer, but can be used in subclasses to customise behaviour"""
1254 return line
1255
1256 def _attr_matches(
1257 self,
1258 text: str,
1259 include_prefix: bool = True,
1260 context: Optional[CompletionContext] = None,
1261 ) -> tuple[Sequence[str], str]:
1262 m2 = self._ATTR_MATCH_RE.match(text)
1263 if not m2:
1264 return [], ""
1265 expr, attr = m2.group(1, 2)
1266 try:
1267 expr = self._strip_code_before_operator(expr)
1268 except tokenize.TokenError:
1269 pass
1270
1271 obj = self._evaluate_expr(expr)
1272 if obj is not_found:
1273 if context:
1274 # try to evaluate on full buffer
1275 previous_lines = "\n".join(
1276 context.full_text.split("\n")[: context.cursor_line]
1277 )
1278 if previous_lines:
1279 all_code_lines_before_cursor = (
1280 self._extract_code(previous_lines) + "\n" + expr
1281 )
1282 obj = self._evaluate_expr(all_code_lines_before_cursor)
1283
1284 if obj is not_found:
1285 return [], ""
1286
1287 if self.limit_to__all__ and hasattr(obj, '__all__'):
1288 words = get__all__entries(obj)
1289 else:
1290 words = dir2(obj)
1291
1292 try:
1293 words = generics.complete_object(obj, words)
1294 except TryNext:
1295 pass
1296 except AssertionError:
1297 raise
1298 except Exception:
1299 # Silence errors from completion function
1300 pass
1301 # Build match list to return
1302 n = len(attr)
1303
1304 # Note: ideally we would just return words here and the prefix
1305 # reconciliator would know that we intend to append to rather than
1306 # replace the input text; this requires refactoring to return range
1307 # which ought to be replaced (as does jedi).
1308 if include_prefix:
1309 tokens = _parse_tokens(expr)
1310 rev_tokens = reversed(tokens)
1311 skip_over = {tokenize.ENDMARKER, tokenize.NEWLINE}
1312 name_turn = True
1313
1314 parts = []
1315 for token in rev_tokens:
1316 if token.type in skip_over:
1317 continue
1318 if token.type == tokenize.NAME and name_turn:
1319 parts.append(token.string)
1320 name_turn = False
1321 elif (
1322 token.type == tokenize.OP and token.string == "." and not name_turn
1323 ):
1324 parts.append(token.string)
1325 name_turn = True
1326 else:
1327 # short-circuit if not empty nor name token
1328 break
1329
1330 prefix_after_space = "".join(reversed(parts))
1331 else:
1332 prefix_after_space = ""
1333
1334 return (
1335 ["%s.%s" % (prefix_after_space, w) for w in words if w[:n] == attr],
1336 "." + attr,
1337 )
1338
1339 def _trim_expr(self, code: str) -> str:
1340 """
1341 Trim the code until it is a valid expression and not a tuple;
1342
1343 return the trimmed expression for guarded_eval.
1344 """
1345 while code:
1346 code = code[1:]
1347 try:
1348 res = ast.parse(code)
1349 except SyntaxError:
1350 continue
1351
1352 assert res is not None
1353 if len(res.body) != 1:
1354 continue
1355 if not isinstance(res.body[0], ast.Expr):
1356 continue
1357 expr = res.body[0].value
1358 if isinstance(expr, ast.Tuple) and not code[-1] == ")":
1359 # we skip implicit tuple, like when trimming `fun(a,b`<completion>
1360 # as `a,b` would be a tuple, and we actually expect to get only `b`
1361 continue
1362 return code
1363 return ""
1364
1365 def _evaluate_expr(self, expr):
1366 obj = not_found
1367 done = False
1368 while not done and expr:
1369 try:
1370 obj = guarded_eval(
1371 expr,
1372 EvaluationContext(
1373 globals=self.global_namespace,
1374 locals=self.namespace,
1375 evaluation=self.evaluation,
1376 auto_import=self._auto_import,
1377 policy_overrides=self.policy_overrides,
1378 ),
1379 )
1380 done = True
1381 except (SyntaxError, TypeError) as e:
1382 if self.debug:
1383 warnings.warn(f"Trimming because of {e}")
1384 # TypeError can show up with something like `+ d`
1385 # where `d` is a dictionary.
1386
1387 # trim the expression to remove any invalid prefix
1388 # e.g. user starts `(d[`, so we get `expr = '(d'`,
1389 # where parenthesis is not closed.
1390 # TODO: make this faster by reusing parts of the computation?
1391 expr = self._trim_expr(expr)
1392 except Exception as e:
1393 if self.debug:
1394 warnings.warn(f"Evaluation exception {e}")
1395 done = True
1396 if self.debug:
1397 warnings.warn(f"Resolved to {obj}")
1398 return obj
1399
1400 @property
1401 def _auto_import(self):
1402 if self.auto_import_method is None:
1403 return None
1404 if not hasattr(self, "_auto_import_func"):
1405 self._auto_import_func = import_item(self.auto_import_method)
1406 return self._auto_import_func
1407
1408
1409def get__all__entries(obj):
1410 """returns the strings in the __all__ attribute"""
1411 try:
1412 words = getattr(obj, '__all__')
1413 except Exception:
1414 return []
1415
1416 return [w for w in words if isinstance(w, str)]
1417
1418
1419class _DictKeyState(enum.Flag):
1420 """Represent state of the key match in context of other possible matches.
1421
1422 - given `d1 = {'a': 1}` completion on `d1['<tab>` will yield `{'a': END_OF_ITEM}` as there is no tuple.
1423 - given `d2 = {('a', 'b'): 1}`: `d2['a', '<tab>` will yield `{'b': END_OF_TUPLE}` as there is no tuple members to add beyond `'b'`.
1424 - given `d3 = {('a', 'b'): 1}`: `d3['<tab>` will yield `{'a': IN_TUPLE}` as `'a'` can be added.
1425 - given `d4 = {'a': 1, ('a', 'b'): 2}`: `d4['<tab>` will yield `{'a': END_OF_ITEM & END_OF_TUPLE}`
1426 """
1427
1428 BASELINE = 0
1429 END_OF_ITEM = enum.auto()
1430 END_OF_TUPLE = enum.auto()
1431 IN_TUPLE = enum.auto()
1432
1433
1434def _parse_tokens(c):
1435 """Parse tokens even if there is an error."""
1436 tokens = []
1437 token_generator = tokenize.generate_tokens(iter(c.splitlines()).__next__)
1438 while True:
1439 try:
1440 tokens.append(next(token_generator))
1441 except tokenize.TokenError:
1442 return tokens
1443 except StopIteration:
1444 return tokens
1445
1446
1447def _match_number_in_dict_key_prefix(prefix: str) -> Union[str, None]:
1448 """Match any valid Python numeric literal in a prefix of dictionary keys.
1449
1450 References:
1451 - https://docs.python.org/3/reference/lexical_analysis.html#numeric-literals
1452 - https://docs.python.org/3/library/tokenize.html
1453 """
1454 if prefix[-1].isspace():
1455 # if user typed a space we do not have anything to complete
1456 # even if there was a valid number token before
1457 return None
1458 tokens = _parse_tokens(prefix)
1459 rev_tokens = reversed(tokens)
1460 skip_over = {tokenize.ENDMARKER, tokenize.NEWLINE}
1461 number = None
1462 for token in rev_tokens:
1463 if token.type in skip_over:
1464 continue
1465 if number is None:
1466 if token.type == tokenize.NUMBER:
1467 number = token.string
1468 continue
1469 else:
1470 # we did not match a number
1471 return None
1472 if token.type == tokenize.OP:
1473 if token.string == ",":
1474 break
1475 if token.string in {"+", "-"}:
1476 number = token.string + number
1477 else:
1478 return None
1479 return number
1480
1481
1482_INT_FORMATS = {
1483 "0b": bin,
1484 "0o": oct,
1485 "0x": hex,
1486}
1487
1488
1489def match_dict_keys(
1490 keys: list[Union[str, bytes, tuple[Union[str, bytes], ...]]],
1491 prefix: str,
1492 delims: str,
1493 extra_prefix: Optional[tuple[Union[str, bytes], ...]] = None,
1494) -> tuple[str, int, dict[str, _DictKeyState]]:
1495 """Used by dict_key_matches, matching the prefix to a list of keys
1496
1497 Parameters
1498 ----------
1499 keys
1500 list of keys in dictionary currently being completed.
1501 prefix
1502 Part of the text already typed by the user. E.g. `mydict[b'fo`
1503 delims
1504 String of delimiters to consider when finding the current key.
1505 extra_prefix : optional
1506 Part of the text already typed in multi-key index cases. E.g. for
1507 `mydict['foo', "bar", 'b`, this would be `('foo', 'bar')`.
1508
1509 Returns
1510 -------
1511 A tuple of three elements: ``quote``, ``token_start``, ``matched``, with
1512 ``quote`` being the quote that need to be used to close current string.
1513 ``token_start`` the position where the replacement should start occurring,
1514 ``matches`` a dictionary of replacement/completion keys on keys and values
1515 indicating whether the state.
1516 """
1517 prefix_tuple = extra_prefix if extra_prefix else ()
1518
1519 prefix_tuple_size = sum(
1520 [
1521 # for pandas, do not count slices as taking space
1522 not isinstance(k, slice)
1523 for k in prefix_tuple
1524 ]
1525 )
1526 text_serializable_types = (str, bytes, int, float, slice)
1527
1528 def filter_prefix_tuple(key):
1529 # Reject too short keys
1530 if len(key) <= prefix_tuple_size:
1531 return False
1532 # Reject keys which cannot be serialised to text
1533 for k in key:
1534 if not isinstance(k, text_serializable_types):
1535 return False
1536 # Reject keys that do not match the prefix
1537 for k, pt in zip(key, prefix_tuple):
1538 if k != pt and not isinstance(pt, slice):
1539 return False
1540 # All checks passed!
1541 return True
1542
1543 filtered_key_is_final: dict[
1544 Union[str, bytes, int, float], _DictKeyState
1545 ] = defaultdict(lambda: _DictKeyState.BASELINE)
1546
1547 for k in keys:
1548 # If at least one of the matches is not final, mark as undetermined.
1549 # This can happen with `d = {111: 'b', (111, 222): 'a'}` where
1550 # `111` appears final on first match but is not final on the second.
1551
1552 if isinstance(k, tuple):
1553 if filter_prefix_tuple(k):
1554 key_fragment = k[prefix_tuple_size]
1555 filtered_key_is_final[key_fragment] |= (
1556 _DictKeyState.END_OF_TUPLE
1557 if len(k) == prefix_tuple_size + 1
1558 else _DictKeyState.IN_TUPLE
1559 )
1560 elif prefix_tuple_size > 0:
1561 # we are completing a tuple but this key is not a tuple,
1562 # so we should ignore it
1563 pass
1564 else:
1565 if isinstance(k, text_serializable_types):
1566 filtered_key_is_final[k] |= _DictKeyState.END_OF_ITEM
1567
1568 filtered_keys = filtered_key_is_final.keys()
1569
1570 if not prefix:
1571 return "", 0, {repr(k): v for k, v in filtered_key_is_final.items()}
1572
1573 quote_match = re.search("(?:\"|')", prefix)
1574 is_user_prefix_numeric = False
1575
1576 if quote_match:
1577 quote = quote_match.group()
1578 valid_prefix = prefix + quote
1579 try:
1580 prefix_str = literal_eval(valid_prefix)
1581 except Exception:
1582 return "", 0, {}
1583 else:
1584 # If it does not look like a string, let's assume
1585 # we are dealing with a number or variable.
1586 number_match = _match_number_in_dict_key_prefix(prefix)
1587
1588 # We do not want the key matcher to suggest variable names so we yield:
1589 if number_match is None:
1590 # The alternative would be to assume that user forgort the quote
1591 # and if the substring matches, suggest adding it at the start.
1592 return "", 0, {}
1593
1594 prefix_str = number_match
1595 is_user_prefix_numeric = True
1596 quote = ""
1597
1598 pattern = '[^' + ''.join('\\' + c for c in delims) + ']*$'
1599 token_match = re.search(pattern, prefix, re.UNICODE)
1600 assert token_match is not None # silence mypy
1601 token_start = token_match.start()
1602 token_prefix = token_match.group()
1603
1604 matched: dict[str, _DictKeyState] = {}
1605
1606 str_key: Union[str, bytes]
1607
1608 for key in filtered_keys:
1609 if isinstance(key, (int, float)):
1610 # User typed a number but this key is not a number.
1611 if not is_user_prefix_numeric:
1612 continue
1613 str_key = str(key)
1614 if isinstance(key, int):
1615 int_base = prefix_str[:2].lower()
1616 # if user typed integer using binary/oct/hex notation:
1617 if int_base in _INT_FORMATS:
1618 int_format = _INT_FORMATS[int_base]
1619 str_key = int_format(key)
1620 else:
1621 # User typed a string but this key is a number.
1622 if is_user_prefix_numeric:
1623 continue
1624 str_key = key
1625 try:
1626 if not str_key.startswith(prefix_str):
1627 continue
1628 except (AttributeError, TypeError, UnicodeError):
1629 # Python 3+ TypeError on b'a'.startswith('a') or vice-versa
1630 continue
1631
1632 # reformat remainder of key to begin with prefix
1633 rem = str_key[len(prefix_str) :]
1634 # force repr wrapped in '
1635 rem_repr = repr(rem + '"') if isinstance(rem, str) else repr(rem + b'"')
1636 rem_repr = rem_repr[1 + rem_repr.index("'"):-2]
1637 if quote == '"':
1638 # The entered prefix is quoted with ",
1639 # but the match is quoted with '.
1640 # A contained " hence needs escaping for comparison:
1641 rem_repr = rem_repr.replace('"', '\\"')
1642
1643 # then reinsert prefix from start of token
1644 match = "%s%s" % (token_prefix, rem_repr)
1645
1646 matched[match] = filtered_key_is_final[key]
1647 return quote, token_start, matched
1648
1649
1650def cursor_to_position(text:str, line:int, column:int)->int:
1651 """
1652 Convert the (line,column) position of the cursor in text to an offset in a
1653 string.
1654
1655 Parameters
1656 ----------
1657 text : str
1658 The text in which to calculate the cursor offset
1659 line : int
1660 Line of the cursor; 0-indexed
1661 column : int
1662 Column of the cursor 0-indexed
1663
1664 Returns
1665 -------
1666 Position of the cursor in ``text``, 0-indexed.
1667
1668 See Also
1669 --------
1670 position_to_cursor : reciprocal of this function
1671
1672 """
1673 lines = text.split('\n')
1674 assert line <= len(lines), '{} <= {}'.format(str(line), str(len(lines)))
1675
1676 return sum(len(line) + 1 for line in lines[:line]) + column
1677
1678
1679def position_to_cursor(text: str, offset: int) -> tuple[int, int]:
1680 """
1681 Convert the position of the cursor in text (0 indexed) to a line
1682 number(0-indexed) and a column number (0-indexed) pair
1683
1684 Position should be a valid position in ``text``.
1685
1686 Parameters
1687 ----------
1688 text : str
1689 The text in which to calculate the cursor offset
1690 offset : int
1691 Position of the cursor in ``text``, 0-indexed.
1692
1693 Returns
1694 -------
1695 (line, column) : (int, int)
1696 Line of the cursor; 0-indexed, column of the cursor 0-indexed
1697
1698 See Also
1699 --------
1700 cursor_to_position : reciprocal of this function
1701
1702 """
1703
1704 assert 0 <= offset <= len(text) , "0 <= %s <= %s" % (offset , len(text))
1705
1706 before = text[:offset]
1707 blines = before.split('\n') # ! splitnes trim trailing \n
1708 line = before.count('\n')
1709 col = len(blines[-1])
1710 return line, col
1711
1712
1713def _safe_isinstance(obj, module, class_name, *attrs):
1714 """Checks if obj is an instance of module.class_name if loaded
1715 """
1716 if module in sys.modules:
1717 m = sys.modules[module]
1718 for attr in [class_name, *attrs]:
1719 m = getattr(m, attr)
1720 return isinstance(obj, m)
1721
1722
1723@context_matcher()
1724def back_unicode_name_matcher(context: CompletionContext):
1725 """Match Unicode characters back to Unicode name
1726
1727 Same as :any:`back_unicode_name_matches`, but adopted to new Matcher API.
1728 """
1729 fragment, matches = back_unicode_name_matches(context.text_until_cursor)
1730 return _convert_matcher_v1_result_to_v2(
1731 matches, type="unicode", fragment=fragment, suppress_if_matches=True
1732 )
1733
1734
1735def back_unicode_name_matches(text: str) -> tuple[str, Sequence[str]]:
1736 """Match Unicode characters back to Unicode name
1737
1738 This does ``☃`` -> ``\\snowman``
1739
1740 Note that snowman is not a valid python3 combining character but will be expanded.
1741 Though it will not recombine back to the snowman character by the completion machinery.
1742
1743 This will not either back-complete standard sequences like \\n, \\b ...
1744
1745 .. deprecated:: 8.6
1746 You can use :meth:`back_unicode_name_matcher` instead.
1747
1748 Returns
1749 =======
1750
1751 Return a tuple with two elements:
1752
1753 - The Unicode character that was matched (preceded with a backslash), or
1754 empty string,
1755 - a sequence (of 1), name for the match Unicode character, preceded by
1756 backslash, or empty if no match.
1757 """
1758 if len(text)<2:
1759 return '', ()
1760 maybe_slash = text[-2]
1761 if maybe_slash != '\\':
1762 return '', ()
1763
1764 char = text[-1]
1765 # no expand on quote for completion in strings.
1766 # nor backcomplete standard ascii keys
1767 if char in string.ascii_letters or char in ('"',"'"):
1768 return '', ()
1769 try :
1770 unic = unicodedata.name(char)
1771 return '\\'+char,('\\'+unic,)
1772 except KeyError:
1773 pass
1774 return '', ()
1775
1776
1777@context_matcher()
1778def back_latex_name_matcher(context: CompletionContext) -> SimpleMatcherResult:
1779 """Match latex characters back to unicode name
1780
1781 This does ``\\ℵ`` -> ``\\aleph``
1782 """
1783
1784 text = context.text_until_cursor
1785 no_match = {
1786 "completions": [],
1787 "suppress": False,
1788 }
1789
1790 if len(text)<2:
1791 return no_match
1792 maybe_slash = text[-2]
1793 if maybe_slash != '\\':
1794 return no_match
1795
1796 char = text[-1]
1797 # no expand on quote for completion in strings.
1798 # nor backcomplete standard ascii keys
1799 if char in string.ascii_letters or char in ('"',"'"):
1800 return no_match
1801 try :
1802 latex = reverse_latex_symbol[char]
1803 # '\\' replace the \ as well
1804 return {
1805 "completions": [SimpleCompletion(text=latex, type="latex")],
1806 "suppress": True,
1807 "matched_fragment": "\\" + char,
1808 }
1809 except KeyError:
1810 pass
1811
1812 return no_match
1813
1814def _formatparamchildren(parameter) -> str:
1815 """
1816 Get parameter name and value from Jedi Private API
1817
1818 Jedi does not expose a simple way to get `param=value` from its API.
1819
1820 Parameters
1821 ----------
1822 parameter
1823 Jedi's function `Param`
1824
1825 Returns
1826 -------
1827 A string like 'a', 'b=1', '*args', '**kwargs'
1828
1829 """
1830 description = parameter.description
1831 if not description.startswith('param '):
1832 raise ValueError('Jedi function parameter description have change format.'
1833 'Expected "param ...", found %r".' % description)
1834 return description[6:]
1835
1836def _make_signature(completion)-> str:
1837 """
1838 Make the signature from a jedi completion
1839
1840 Parameters
1841 ----------
1842 completion : jedi.Completion
1843 object does not complete a function type
1844
1845 Returns
1846 -------
1847 a string consisting of the function signature, with the parenthesis but
1848 without the function name. example:
1849 `(a, *args, b=1, **kwargs)`
1850
1851 """
1852
1853 # it looks like this might work on jedi 0.17
1854 if hasattr(completion, 'get_signatures'):
1855 signatures = completion.get_signatures()
1856 if not signatures:
1857 return '(?)'
1858
1859 c0 = completion.get_signatures()[0]
1860 return '('+c0.to_string().split('(', maxsplit=1)[1]
1861
1862 return '(%s)'% ', '.join([f for f in (_formatparamchildren(p) for signature in completion.get_signatures()
1863 for p in signature.defined_names()) if f])
1864
1865
1866_CompleteResult = dict[str, MatcherResult]
1867
1868
1869DICT_MATCHER_REGEX = re.compile(
1870 r"""(?x)
1871( # match dict-referring - or any get item object - expression
1872 .+
1873)
1874\[ # open bracket
1875\s* # and optional whitespace
1876# Capture any number of serializable objects (e.g. "a", "b", 'c')
1877# and slices
1878((?:(?:
1879 (?: # closed string
1880 [uUbB]? # string prefix (r not handled)
1881 (?:
1882 '(?:[^']|(?<!\\)\\')*'
1883 |
1884 "(?:[^"]|(?<!\\)\\")*"
1885 )
1886 )
1887 |
1888 # capture integers and slices
1889 (?:[-+]?\d+)?(?::(?:[-+]?\d+)?){0,2}
1890 |
1891 # integer in bin/hex/oct notation
1892 0[bBxXoO]_?(?:\w|\d)+
1893 )
1894 \s*,\s*
1895)*)
1896((?:
1897 (?: # unclosed string
1898 [uUbB]? # string prefix (r not handled)
1899 (?:
1900 '(?:[^']|(?<!\\)\\')*
1901 |
1902 "(?:[^"]|(?<!\\)\\")*
1903 )
1904 )
1905 |
1906 # unfinished integer
1907 (?:[-+]?\d+)
1908 |
1909 # integer in bin/hex/oct notation
1910 0[bBxXoO]_?(?:\w|\d)+
1911 )
1912)?
1913$
1914"""
1915)
1916
1917
1918def _convert_matcher_v1_result_to_v2_no_no(
1919 matches: Sequence[str],
1920 type: str,
1921) -> SimpleMatcherResult:
1922 """same as _convert_matcher_v1_result_to_v2 but fragment=None, and suppress_if_matches is False by construction"""
1923 return SimpleMatcherResult(
1924 completions=[SimpleCompletion(text=match, type=type) for match in matches],
1925 suppress=False,
1926 )
1927
1928
1929def _convert_matcher_v1_result_to_v2(
1930 matches: Sequence[str],
1931 type: str,
1932 fragment: Optional[str] = None,
1933 suppress_if_matches: bool = False,
1934) -> SimpleMatcherResult:
1935 """Utility to help with transition"""
1936 result = {
1937 "completions": [SimpleCompletion(text=match, type=type) for match in matches],
1938 "suppress": (True if matches else False) if suppress_if_matches else False,
1939 }
1940 if fragment is not None:
1941 result["matched_fragment"] = fragment
1942 return cast(SimpleMatcherResult, result)
1943
1944
1945class IPCompleter(Completer):
1946 """Extension of the completer class with IPython-specific features"""
1947
1948 @observe('greedy')
1949 def _greedy_changed(self, change):
1950 """update the splitter and readline delims when greedy is changed"""
1951 if change["new"]:
1952 self.evaluation = "unsafe"
1953 self.auto_close_dict_keys = True
1954 self.splitter.delims = GREEDY_DELIMS
1955 else:
1956 self.evaluation = "limited"
1957 self.auto_close_dict_keys = False
1958 self.splitter.delims = DELIMS
1959
1960 dict_keys_only = Bool(
1961 False,
1962 help="""
1963 Whether to show dict key matches only.
1964
1965 (disables all matchers except for `IPCompleter.dict_key_matcher`).
1966 """,
1967 )
1968
1969 suppress_competing_matchers = UnionTrait(
1970 [Bool(allow_none=True), DictTrait(Bool(None, allow_none=True))],
1971 default_value=None,
1972 help="""
1973 Whether to suppress completions from other *Matchers*.
1974
1975 When set to ``None`` (default) the matchers will attempt to auto-detect
1976 whether suppression of other matchers is desirable. For example, at
1977 the beginning of a line followed by `%` we expect a magic completion
1978 to be the only applicable option, and after ``my_dict['`` we usually
1979 expect a completion with an existing dictionary key.
1980
1981 If you want to disable this heuristic and see completions from all matchers,
1982 set ``IPCompleter.suppress_competing_matchers = False``.
1983 To disable the heuristic for specific matchers provide a dictionary mapping:
1984 ``IPCompleter.suppress_competing_matchers = {'IPCompleter.dict_key_matcher': False}``.
1985
1986 Set ``IPCompleter.suppress_competing_matchers = True`` to limit
1987 completions to the set of matchers with the highest priority;
1988 this is equivalent to ``IPCompleter.merge_completions`` and
1989 can be beneficial for performance, but will sometimes omit relevant
1990 candidates from matchers further down the priority list.
1991 """,
1992 ).tag(config=True)
1993
1994 merge_completions = Bool(
1995 True,
1996 help="""Whether to merge completion results into a single list
1997
1998 If False, only the completion results from the first non-empty
1999 completer will be returned.
2000
2001 As of version 8.6.0, setting the value to ``False`` is an alias for:
2002 ``IPCompleter.suppress_competing_matchers = True.``.
2003 """,
2004 ).tag(config=True)
2005
2006 disable_matchers = ListTrait(
2007 Unicode(),
2008 help="""List of matchers to disable.
2009
2010 The list should contain matcher identifiers (see :any:`completion_matcher`).
2011 """,
2012 ).tag(config=True)
2013
2014 omit__names = Enum(
2015 (0, 1, 2),
2016 default_value=2,
2017 help="""Instruct the completer to omit private method names
2018
2019 Specifically, when completing on ``object.<tab>``.
2020
2021 When 2 [default]: all names that start with '_' will be excluded.
2022
2023 When 1: all 'magic' names (``__foo__``) will be excluded.
2024
2025 When 0: nothing will be excluded.
2026 """
2027 ).tag(config=True)
2028 limit_to__all__ = Bool(False,
2029 help="""
2030 DEPRECATED as of version 5.0.
2031
2032 Instruct the completer to use __all__ for the completion
2033
2034 Specifically, when completing on ``object.<tab>``.
2035
2036 When True: only those names in obj.__all__ will be included.
2037
2038 When False [default]: the __all__ attribute is ignored
2039 """,
2040 ).tag(config=True)
2041
2042 profile_completions = Bool(
2043 default_value=False,
2044 help="If True, emit profiling data for completion subsystem using cProfile."
2045 ).tag(config=True)
2046
2047 profiler_output_dir = Unicode(
2048 default_value=".completion_profiles",
2049 help="Template for path at which to output profile data for completions."
2050 ).tag(config=True)
2051
2052 @observe('limit_to__all__')
2053 def _limit_to_all_changed(self, change):
2054 warnings.warn('`IPython.core.IPCompleter.limit_to__all__` configuration '
2055 'value has been deprecated since IPython 5.0, will be made to have '
2056 'no effects and then removed in future version of IPython.',
2057 UserWarning)
2058
2059 def __init__(
2060 self, shell=None, namespace=None, global_namespace=None, config=None, **kwargs
2061 ):
2062 """IPCompleter() -> completer
2063
2064 Return a completer object.
2065
2066 Parameters
2067 ----------
2068 shell
2069 a pointer to the ipython shell itself. This is needed
2070 because this completer knows about magic functions, and those can
2071 only be accessed via the ipython instance.
2072 namespace : dict, optional
2073 an optional dict where completions are performed.
2074 global_namespace : dict, optional
2075 secondary optional dict for completions, to
2076 handle cases (such as IPython embedded inside functions) where
2077 both Python scopes are visible.
2078 config : Config
2079 traitlet's config object
2080 **kwargs
2081 passed to super class unmodified.
2082 """
2083
2084 self.magic_escape = ESC_MAGIC
2085 self.splitter = CompletionSplitter()
2086
2087 # _greedy_changed() depends on splitter and readline being defined:
2088 super().__init__(
2089 namespace=namespace,
2090 global_namespace=global_namespace,
2091 config=config,
2092 **kwargs,
2093 )
2094
2095 # List where completion matches will be stored
2096 self.matches = []
2097 self.shell = shell
2098 # Regexp to split filenames with spaces in them
2099 self.space_name_re = re.compile(r'([^\\] )')
2100 # Hold a local ref. to glob.glob for speed
2101 self.glob = glob.glob
2102
2103 # Determine if we are running on 'dumb' terminals, like (X)Emacs
2104 # buffers, to avoid completion problems.
2105 term = os.environ.get('TERM','xterm')
2106 self.dumb_terminal = term in ['dumb','emacs']
2107
2108 # Special handling of backslashes needed in win32 platforms
2109 if sys.platform == "win32":
2110 self.clean_glob = self._clean_glob_win32
2111 else:
2112 self.clean_glob = self._clean_glob
2113
2114 #regexp to parse docstring for function signature
2115 self.docstring_sig_re = re.compile(r'^[\w|\s.]+\(([^)]*)\).*')
2116 self.docstring_kwd_re = re.compile(r'[\s|\[]*(\w+)(?:\s*=\s*.*)')
2117 #use this if positional argument name is also needed
2118 #= re.compile(r'[\s|\[]*(\w+)(?:\s*=?\s*.*)')
2119
2120 self.magic_arg_matchers = [
2121 self.magic_config_matcher,
2122 self.magic_color_matcher,
2123 ]
2124
2125 # This is set externally by InteractiveShell
2126 self.custom_completers = None
2127
2128 # This is a list of names of unicode characters that can be completed
2129 # into their corresponding unicode value. The list is large, so we
2130 # lazily initialize it on first use. Consuming code should access this
2131 # attribute through the `@unicode_names` property.
2132 self._unicode_names = None
2133
2134 self._backslash_combining_matchers = [
2135 self.latex_name_matcher,
2136 self.unicode_name_matcher,
2137 back_latex_name_matcher,
2138 back_unicode_name_matcher,
2139 self.fwd_unicode_matcher,
2140 ]
2141
2142 if not self.backslash_combining_completions:
2143 for matcher in self._backslash_combining_matchers:
2144 self.disable_matchers.append(_get_matcher_id(matcher))
2145
2146 if not self.merge_completions:
2147 self.suppress_competing_matchers = True
2148
2149 @property
2150 def matchers(self) -> list[Matcher]:
2151 """All active matcher routines for completion"""
2152 if self.dict_keys_only:
2153 return [self.dict_key_matcher]
2154
2155 if self.use_jedi:
2156 return [
2157 *self.custom_matchers,
2158 *self._backslash_combining_matchers,
2159 *self.magic_arg_matchers,
2160 self.custom_completer_matcher,
2161 self.magic_matcher,
2162 self._jedi_matcher,
2163 self.dict_key_matcher,
2164 self.file_matcher,
2165 ]
2166 else:
2167 return [
2168 *self.custom_matchers,
2169 *self._backslash_combining_matchers,
2170 *self.magic_arg_matchers,
2171 self.custom_completer_matcher,
2172 self.dict_key_matcher,
2173 self.magic_matcher,
2174 self.python_matcher,
2175 self.file_matcher,
2176 self.python_func_kw_matcher,
2177 ]
2178
2179 def all_completions(self, text: str) -> list[str]:
2180 """
2181 Wrapper around the completion methods for the benefit of emacs.
2182 """
2183 prefix = text.rpartition('.')[0]
2184 with provisionalcompleter():
2185 return ['.'.join([prefix, c.text]) if prefix and self.use_jedi else c.text
2186 for c in self.completions(text, len(text))]
2187
2188 return self.complete(text)[1]
2189
2190 def _clean_glob(self, text:str):
2191 return self.glob("%s*" % text)
2192
2193 def _clean_glob_win32(self, text:str):
2194 return [f.replace("\\","/")
2195 for f in self.glob("%s*" % text)]
2196
2197 @context_matcher()
2198 def file_matcher(self, context: CompletionContext) -> SimpleMatcherResult:
2199 """Match filenames, expanding ~USER type strings.
2200
2201 Most of the seemingly convoluted logic in this completer is an
2202 attempt to handle filenames with spaces in them. And yet it's not
2203 quite perfect, because Python's readline doesn't expose all of the
2204 GNU readline details needed for this to be done correctly.
2205
2206 For a filename with a space in it, the printed completions will be
2207 only the parts after what's already been typed (instead of the
2208 full completions, as is normally done). I don't think with the
2209 current (as of Python 2.3) Python readline it's possible to do
2210 better.
2211 """
2212 # TODO: add a heuristic for suppressing (e.g. if it has OS-specific delimiter,
2213 # starts with `/home/`, `C:\`, etc)
2214
2215 text = context.token
2216 code_until_cursor = self._extract_code(context.text_until_cursor)
2217 completion_type = self._determine_completion_context(code_until_cursor)
2218 in_cli_context = self._is_completing_in_cli_context(code_until_cursor)
2219 if (
2220 completion_type == self._CompletionContextType.ATTRIBUTE
2221 and not in_cli_context
2222 ):
2223 return {
2224 "completions": [],
2225 "suppress": False,
2226 }
2227
2228 # chars that require escaping with backslash - i.e. chars
2229 # that readline treats incorrectly as delimiters, but we
2230 # don't want to treat as delimiters in filename matching
2231 # when escaped with backslash
2232 if text.startswith('!'):
2233 text = text[1:]
2234 text_prefix = u'!'
2235 else:
2236 text_prefix = u''
2237
2238 text_until_cursor = self.text_until_cursor
2239 # track strings with open quotes
2240 open_quotes = has_open_quotes(text_until_cursor)
2241
2242 if '(' in text_until_cursor or '[' in text_until_cursor:
2243 lsplit = text
2244 else:
2245 try:
2246 # arg_split ~ shlex.split, but with unicode bugs fixed by us
2247 lsplit = arg_split(text_until_cursor)[-1]
2248 except ValueError:
2249 # typically an unmatched ", or backslash without escaped char.
2250 if open_quotes:
2251 lsplit = text_until_cursor.split(open_quotes)[-1]
2252 else:
2253 return {
2254 "completions": [],
2255 "suppress": False,
2256 }
2257 except IndexError:
2258 # tab pressed on empty line
2259 lsplit = ""
2260
2261 if not open_quotes and lsplit != protect_filename(lsplit):
2262 # if protectables are found, do matching on the whole escaped name
2263 has_protectables = True
2264 text0,text = text,lsplit
2265 else:
2266 has_protectables = False
2267 text = os.path.expanduser(text)
2268
2269 if text == "":
2270 return {
2271 "completions": [
2272 SimpleCompletion(
2273 text=text_prefix + protect_filename(f), type="path"
2274 )
2275 for f in self.glob("*")
2276 ],
2277 "suppress": False,
2278 }
2279
2280 # Compute the matches from the filesystem
2281 if sys.platform == 'win32':
2282 m0 = self.clean_glob(text)
2283 else:
2284 m0 = self.clean_glob(text.replace('\\', ''))
2285
2286 if has_protectables:
2287 # If we had protectables, we need to revert our changes to the
2288 # beginning of filename so that we don't double-write the part
2289 # of the filename we have so far
2290 len_lsplit = len(lsplit)
2291 matches = [text_prefix + text0 +
2292 protect_filename(f[len_lsplit:]) for f in m0]
2293 else:
2294 if open_quotes:
2295 # if we have a string with an open quote, we don't need to
2296 # protect the names beyond the quote (and we _shouldn't_, as
2297 # it would cause bugs when the filesystem call is made).
2298 matches = m0 if sys.platform == "win32" else\
2299 [protect_filename(f, open_quotes) for f in m0]
2300 else:
2301 matches = [text_prefix +
2302 protect_filename(f) for f in m0]
2303
2304 # Mark directories in input list by appending '/' to their names.
2305 return {
2306 "completions": [
2307 SimpleCompletion(text=x + "/" if os.path.isdir(x) else x, type="path")
2308 for x in matches
2309 ],
2310 "suppress": False,
2311 }
2312
2313 def _extract_code(self, line: str) -> str:
2314 """Extract code from magics if any."""
2315
2316 if not line:
2317 return line
2318 maybe_magic, *rest = line.split(maxsplit=1)
2319 if not rest:
2320 return line
2321 args = rest[0]
2322 known_magics = self.shell.magics_manager.lsmagic()
2323 line_magics = known_magics["line"]
2324 magic_name = maybe_magic.lstrip(self.magic_escape)
2325 if magic_name not in line_magics:
2326 return line
2327
2328 if not maybe_magic.startswith(self.magic_escape):
2329 all_variables = [*self.namespace.keys(), *self.global_namespace.keys()]
2330 if magic_name in all_variables:
2331 # short circuit if we see a line starting with say `time`
2332 # but time is defined as a variable (in addition to being
2333 # a magic). In these cases users need to use explicit `%time`.
2334 return line
2335
2336 magic_method = line_magics[magic_name]
2337
2338 try:
2339 if magic_name == "timeit":
2340 opts, stmt = magic_method.__self__.parse_options(
2341 args,
2342 "n:r:tcp:qov:",
2343 posix=False,
2344 strict=False,
2345 preserve_non_opts=True,
2346 )
2347 return stmt
2348 elif magic_name == "prun":
2349 opts, stmt = magic_method.__self__.parse_options(
2350 args, "D:l:rs:T:q", list_all=True, posix=False
2351 )
2352 return stmt
2353 elif hasattr(magic_method, "parser") and getattr(
2354 magic_method, "has_arguments", False
2355 ):
2356 # e.g. %debug, %time
2357 args, extra = magic_method.parser.parse_argstring(args, partial=True)
2358 return " ".join(extra)
2359 except UsageError:
2360 return line
2361
2362 return line
2363
2364 @context_matcher()
2365 def magic_matcher(self, context: CompletionContext) -> SimpleMatcherResult:
2366 """Match magics."""
2367
2368 # Get all shell magics now rather than statically, so magics loaded at
2369 # runtime show up too.
2370 text = context.token
2371 lsm = self.shell.magics_manager.lsmagic()
2372 line_magics = lsm['line']
2373 cell_magics = lsm['cell']
2374 pre = self.magic_escape
2375 pre2 = pre + pre
2376
2377 explicit_magic = text.startswith(pre)
2378
2379 # Completion logic:
2380 # - user gives %%: only do cell magics
2381 # - user gives %: do both line and cell magics
2382 # - no prefix: do both
2383 # In other words, line magics are skipped if the user gives %% explicitly
2384 #
2385 # We also exclude magics that match any currently visible names:
2386 # https://github.com/ipython/ipython/issues/4877, unless the user has
2387 # typed a %:
2388 # https://github.com/ipython/ipython/issues/10754
2389 bare_text = text.lstrip(pre)
2390 global_matches = self.global_matches(bare_text)
2391 if not explicit_magic:
2392 def matches(magic):
2393 """
2394 Filter magics, in particular remove magics that match
2395 a name present in global namespace.
2396 """
2397 return ( magic.startswith(bare_text) and
2398 magic not in global_matches )
2399 else:
2400 def matches(magic):
2401 return magic.startswith(bare_text)
2402
2403 completions = [pre2 + m for m in cell_magics if matches(m)]
2404 if not text.startswith(pre2):
2405 completions += [pre + m for m in line_magics if matches(m)]
2406
2407 is_magic_prefix = len(text) > 0 and text[0] == "%"
2408
2409 return {
2410 "completions": [
2411 SimpleCompletion(text=comp, type="magic") for comp in completions
2412 ],
2413 "suppress": is_magic_prefix and len(completions) > 0,
2414 }
2415
2416 @context_matcher()
2417 def magic_config_matcher(self, context: CompletionContext) -> SimpleMatcherResult:
2418 """Match class names and attributes for %config magic."""
2419 # NOTE: uses `line_buffer` equivalent for compatibility
2420 matches = self.magic_config_matches(context.line_with_cursor)
2421 return _convert_matcher_v1_result_to_v2_no_no(matches, type="param")
2422
2423 def magic_config_matches(self, text: str) -> list[str]:
2424 """Match class names and attributes for %config magic.
2425
2426 .. deprecated:: 8.6
2427 You can use :meth:`magic_config_matcher` instead.
2428 """
2429 texts = text.strip().split()
2430
2431 if len(texts) > 0 and (texts[0] == 'config' or texts[0] == '%config'):
2432 # get all configuration classes
2433 classes = sorted(set([ c for c in self.shell.configurables
2434 if c.__class__.class_traits(config=True)
2435 ]), key=lambda x: x.__class__.__name__)
2436 classnames = [ c.__class__.__name__ for c in classes ]
2437
2438 # return all classnames if config or %config is given
2439 if len(texts) == 1:
2440 return classnames
2441
2442 # match classname
2443 classname_texts = texts[1].split('.')
2444 classname = classname_texts[0]
2445 classname_matches = [ c for c in classnames
2446 if c.startswith(classname) ]
2447
2448 # return matched classes or the matched class with attributes
2449 if texts[1].find('.') < 0:
2450 return classname_matches
2451 elif len(classname_matches) == 1 and \
2452 classname_matches[0] == classname:
2453 cls = classes[classnames.index(classname)].__class__
2454 help = cls.class_get_help()
2455 # strip leading '--' from cl-args:
2456 help = re.sub(re.compile(r'^--', re.MULTILINE), '', help)
2457 return [ attr.split('=')[0]
2458 for attr in help.strip().splitlines()
2459 if attr.startswith(texts[1]) ]
2460 return []
2461
2462 @context_matcher()
2463 def magic_color_matcher(self, context: CompletionContext) -> SimpleMatcherResult:
2464 """Match color schemes for %colors magic."""
2465 text = context.line_with_cursor
2466 texts = text.split()
2467 if text.endswith(' '):
2468 # .split() strips off the trailing whitespace. Add '' back
2469 # so that: '%colors ' -> ['%colors', '']
2470 texts.append('')
2471
2472 if len(texts) == 2 and (texts[0] == 'colors' or texts[0] == '%colors'):
2473 prefix = texts[1]
2474 return SimpleMatcherResult(
2475 completions=[
2476 SimpleCompletion(color, type="param")
2477 for color in theme_table.keys()
2478 if color.startswith(prefix)
2479 ],
2480 suppress=False,
2481 )
2482 return SimpleMatcherResult(
2483 completions=[],
2484 suppress=False,
2485 )
2486
2487 @context_matcher(identifier="IPCompleter.jedi_matcher")
2488 def _jedi_matcher(self, context: CompletionContext) -> _JediMatcherResult:
2489 matches = self._jedi_matches(
2490 cursor_column=context.cursor_position,
2491 cursor_line=context.cursor_line,
2492 text=context.full_text,
2493 )
2494 return {
2495 "completions": matches,
2496 # static analysis should not suppress other matcher
2497 # NOTE: file_matcher is automatically suppressed on attribute completions
2498 "suppress": False,
2499 }
2500
2501 def _jedi_matches(
2502 self, cursor_column: int, cursor_line: int, text: str
2503 ) -> Iterator[_JediCompletionLike]:
2504 """
2505 Return a list of :any:`jedi.api.Completion`\\s object from a ``text`` and
2506 cursor position.
2507
2508 Parameters
2509 ----------
2510 cursor_column : int
2511 column position of the cursor in ``text``, 0-indexed.
2512 cursor_line : int
2513 line position of the cursor in ``text``, 0-indexed
2514 text : str
2515 text to complete
2516
2517 Notes
2518 -----
2519 If ``IPCompleter.debug`` is ``True`` may return a :any:`_FakeJediCompletion`
2520 object containing a string with the Jedi debug information attached.
2521
2522 .. deprecated:: 8.6
2523 You can use :meth:`_jedi_matcher` instead.
2524 """
2525 namespaces = [self.namespace]
2526 if self.global_namespace is not None:
2527 namespaces.append(self.global_namespace)
2528
2529 completion_filter = lambda x:x
2530 offset = cursor_to_position(text, cursor_line, cursor_column)
2531 # filter output if we are completing for object members
2532 if offset:
2533 pre = text[offset-1]
2534 if pre == '.':
2535 if self.omit__names == 2:
2536 completion_filter = lambda c:not c.name.startswith('_')
2537 elif self.omit__names == 1:
2538 completion_filter = lambda c:not (c.name.startswith('__') and c.name.endswith('__'))
2539 elif self.omit__names == 0:
2540 completion_filter = lambda x:x
2541 else:
2542 raise ValueError("Don't understand self.omit__names == {}".format(self.omit__names))
2543
2544 interpreter = jedi.Interpreter(text[:offset], namespaces)
2545 try_jedi = True
2546
2547 try:
2548 # find the first token in the current tree -- if it is a ' or " then we are in a string
2549 completing_string = False
2550 try:
2551 first_child = next(c for c in interpreter._get_module().tree_node.children if hasattr(c, 'value'))
2552 except StopIteration:
2553 pass
2554 else:
2555 # note the value may be ', ", or it may also be ''' or """, or
2556 # in some cases, """what/you/typed..., but all of these are
2557 # strings.
2558 completing_string = len(first_child.value) > 0 and first_child.value[0] in {"'", '"'}
2559
2560 # if we are in a string jedi is likely not the right candidate for
2561 # now. Skip it.
2562 try_jedi = not completing_string
2563 except Exception as e:
2564 # many of things can go wrong, we are using private API just don't crash.
2565 if self.debug:
2566 print("Error detecting if completing a non-finished string :", e, '|')
2567
2568 if not try_jedi:
2569 return iter([])
2570 try:
2571 return filter(completion_filter, interpreter.complete(column=cursor_column, line=cursor_line + 1))
2572 except Exception as e:
2573 if self.debug:
2574 return iter(
2575 [
2576 _FakeJediCompletion(
2577 'Oops Jedi has crashed, please report a bug with the following:\n"""\n%s\ns"""'
2578 % (e)
2579 )
2580 ]
2581 )
2582 else:
2583 return iter([])
2584
2585 class _CompletionContextType(enum.Enum):
2586 ATTRIBUTE = "attribute" # For attribute completion
2587 GLOBAL = "global" # For global completion
2588
2589 def _determine_completion_context(self, line):
2590 """
2591 Determine whether the cursor is in an attribute or global completion context.
2592 """
2593 # Cursor in string/comment → GLOBAL.
2594 is_string, is_in_expression = self._is_in_string_or_comment(line)
2595 if is_string and not is_in_expression:
2596 return self._CompletionContextType.GLOBAL
2597
2598 # If we're in a template string expression, handle specially
2599 if is_string and is_in_expression:
2600 # Extract the expression part - look for the last { that isn't closed
2601 expr_start = line.rfind("{")
2602 if expr_start >= 0:
2603 # We're looking at the expression inside a template string
2604 expr = line[expr_start + 1 :]
2605 # Recursively determine the context of the expression
2606 return self._determine_completion_context(expr)
2607
2608 # Handle plain number literals - should be global context
2609 # Ex: 3. -42.14 but not 3.1.
2610 if re.search(r"(?<!\w)(?<!\d\.)([-+]?\d+\.(\d+)?)(?!\w)$", line):
2611 return self._CompletionContextType.GLOBAL
2612
2613 # Handle all other attribute matches np.ran, d[0].k, (a,b).count
2614 chain_match = re.search(r".*(.+(?<!\s)\.(?:[a-zA-Z]\w*)?)$", line)
2615 if chain_match:
2616 return self._CompletionContextType.ATTRIBUTE
2617
2618 return self._CompletionContextType.GLOBAL
2619
2620 def _is_completing_in_cli_context(self, text: str) -> bool:
2621 """
2622 Determine if we are completing in a CLI alias, line magic, or bang expression context.
2623 """
2624 stripped = text.lstrip()
2625 if stripped.startswith("!") or stripped.startswith("%"):
2626 return True
2627 # Check for CLI aliases
2628 try:
2629 tokens = stripped.split(None, 1)
2630 if not tokens:
2631 return False
2632 first_token = tokens[0]
2633
2634 # Must have arguments after the command for this to apply
2635 if len(tokens) < 2:
2636 return False
2637
2638 # Check if first token is a known alias
2639 if not any(
2640 alias[0] == first_token for alias in self.shell.alias_manager.aliases
2641 ):
2642 return False
2643
2644 try:
2645 if first_token in self.shell.user_ns:
2646 # There's a variable defined, so the alias is overshadowed
2647 return False
2648 except (AttributeError, KeyError):
2649 pass
2650
2651 return True
2652 except Exception:
2653 return False
2654
2655 def _is_in_string_or_comment(self, text):
2656 """
2657 Determine if the cursor is inside a string or comment.
2658 Returns (is_string, is_in_expression) tuple:
2659 - is_string: True if in any kind of string
2660 - is_in_expression: True if inside an f-string/t-string expression
2661 """
2662 in_single_quote = False
2663 in_double_quote = False
2664 in_triple_single = False
2665 in_triple_double = False
2666 in_template_string = False # Covers both f-strings and t-strings
2667 in_expression = False # For expressions in f/t-strings
2668 expression_depth = 0 # Track nested braces in expressions
2669 i = 0
2670
2671 while i < len(text):
2672 # Check for f-string or t-string start
2673 if (
2674 i + 1 < len(text)
2675 and text[i] in ("f", "t")
2676 and (text[i + 1] == '"' or text[i + 1] == "'")
2677 and not (
2678 in_single_quote
2679 or in_double_quote
2680 or in_triple_single
2681 or in_triple_double
2682 )
2683 ):
2684 in_template_string = True
2685 i += 1 # Skip the 'f' or 't'
2686
2687 # Handle triple quotes
2688 if i + 2 < len(text):
2689 if (
2690 text[i : i + 3] == '"""'
2691 and not in_single_quote
2692 and not in_triple_single
2693 ):
2694 in_triple_double = not in_triple_double
2695 if not in_triple_double:
2696 in_template_string = False
2697 i += 3
2698 continue
2699 if (
2700 text[i : i + 3] == "'''"
2701 and not in_double_quote
2702 and not in_triple_double
2703 ):
2704 in_triple_single = not in_triple_single
2705 if not in_triple_single:
2706 in_template_string = False
2707 i += 3
2708 continue
2709
2710 # Handle escapes
2711 if text[i] == "\\" and i + 1 < len(text):
2712 i += 2
2713 continue
2714
2715 # Handle nested braces within f-strings
2716 if in_template_string:
2717 # Special handling for consecutive opening braces
2718 if i + 1 < len(text) and text[i : i + 2] == "{{":
2719 i += 2
2720 continue
2721
2722 # Detect start of an expression
2723 if text[i] == "{":
2724 # Only increment depth and mark as expression if not already in an expression
2725 # or if we're at a top-level nested brace
2726 if not in_expression or (in_expression and expression_depth == 0):
2727 in_expression = True
2728 expression_depth += 1
2729 i += 1
2730 continue
2731
2732 # Detect end of an expression
2733 if text[i] == "}":
2734 expression_depth -= 1
2735 if expression_depth <= 0:
2736 in_expression = False
2737 expression_depth = 0
2738 i += 1
2739 continue
2740
2741 in_triple_quote = in_triple_single or in_triple_double
2742
2743 # Handle quotes - also reset template string when closing quotes are encountered
2744 if text[i] == '"' and not in_single_quote and not in_triple_quote:
2745 in_double_quote = not in_double_quote
2746 if not in_double_quote and not in_triple_quote:
2747 in_template_string = False
2748 elif text[i] == "'" and not in_double_quote and not in_triple_quote:
2749 in_single_quote = not in_single_quote
2750 if not in_single_quote and not in_triple_quote:
2751 in_template_string = False
2752
2753 # Check for comment
2754 if text[i] == "#" and not (
2755 in_single_quote or in_double_quote or in_triple_quote
2756 ):
2757 return True, False
2758
2759 i += 1
2760
2761 is_string = (
2762 in_single_quote or in_double_quote or in_triple_single or in_triple_double
2763 )
2764
2765 # Return tuple (is_string, is_in_expression)
2766 return (
2767 is_string or (in_template_string and not in_expression),
2768 in_expression and expression_depth > 0,
2769 )
2770
2771 @context_matcher()
2772 def python_matcher(self, context: CompletionContext) -> SimpleMatcherResult:
2773 """Match attributes or global python names"""
2774 text = context.text_until_cursor
2775 text = self._extract_code(text)
2776 in_cli_context = self._is_completing_in_cli_context(text)
2777 if in_cli_context:
2778 completion_type = self._CompletionContextType.GLOBAL
2779 else:
2780 completion_type = self._determine_completion_context(text)
2781 if completion_type == self._CompletionContextType.ATTRIBUTE:
2782 try:
2783 matches, fragment = self._attr_matches(
2784 text, include_prefix=False, context=context
2785 )
2786 if text.endswith(".") and self.omit__names:
2787 if self.omit__names == 1:
2788 # true if txt is _not_ a __ name, false otherwise:
2789 no__name = lambda txt: re.match(r".*\.__.*?__", txt) is None
2790 else:
2791 # true if txt is _not_ a _ name, false otherwise:
2792 no__name = (
2793 lambda txt: re.match(r"\._.*?", txt[txt.rindex(".") :])
2794 is None
2795 )
2796 matches = filter(no__name, matches)
2797 matches = _convert_matcher_v1_result_to_v2(
2798 matches, type="attribute", fragment=fragment
2799 )
2800 return matches
2801 except NameError:
2802 # catches <undefined attributes>.<tab>
2803 return SimpleMatcherResult(completions=[], suppress=False)
2804 else:
2805 try:
2806 matches = self.global_matches(context.token, context=context)
2807 except TypeError:
2808 matches = self.global_matches(context.token)
2809 # TODO: maybe distinguish between functions, modules and just "variables"
2810 return SimpleMatcherResult(
2811 completions=[
2812 SimpleCompletion(text=match, type="variable") for match in matches
2813 ],
2814 suppress=False,
2815 )
2816
2817 @completion_matcher(api_version=1)
2818 def python_matches(self, text: str) -> Iterable[str]:
2819 """Match attributes or global python names.
2820
2821 .. deprecated:: 8.27
2822 You can use :meth:`python_matcher` instead."""
2823 if "." in text:
2824 try:
2825 matches = self.attr_matches(text)
2826 if text.endswith('.') and self.omit__names:
2827 if self.omit__names == 1:
2828 # true if txt is _not_ a __ name, false otherwise:
2829 no__name = (lambda txt:
2830 re.match(r'.*\.__.*?__',txt) is None)
2831 else:
2832 # true if txt is _not_ a _ name, false otherwise:
2833 no__name = (lambda txt:
2834 re.match(r'\._.*?',txt[txt.rindex('.'):]) is None)
2835 matches = filter(no__name, matches)
2836 except NameError:
2837 # catches <undefined attributes>.<tab>
2838 matches = []
2839 else:
2840 matches = self.global_matches(text)
2841 return matches
2842
2843 def _default_arguments_from_docstring(self, doc):
2844 """Parse the first line of docstring for call signature.
2845
2846 Docstring should be of the form 'min(iterable[, key=func])\n'.
2847 It can also parse cython docstring of the form
2848 'Minuit.migrad(self, int ncall=10000, resume=True, int nsplit=1)'.
2849 """
2850 if doc is None:
2851 return []
2852
2853 #care only the firstline
2854 line = doc.lstrip().splitlines()[0]
2855
2856 #p = re.compile(r'^[\w|\s.]+\(([^)]*)\).*')
2857 #'min(iterable[, key=func])\n' -> 'iterable[, key=func]'
2858 sig = self.docstring_sig_re.search(line)
2859 if sig is None:
2860 return []
2861 # iterable[, key=func]' -> ['iterable[' ,' key=func]']
2862 sig = sig.groups()[0].split(',')
2863 ret = []
2864 for s in sig:
2865 #re.compile(r'[\s|\[]*(\w+)(?:\s*=\s*.*)')
2866 ret += self.docstring_kwd_re.findall(s)
2867 return ret
2868
2869 def _default_arguments(self, obj):
2870 """Return the list of default arguments of obj if it is callable,
2871 or empty list otherwise."""
2872 call_obj = obj
2873 ret = []
2874 if inspect.isbuiltin(obj):
2875 pass
2876 elif not (inspect.isfunction(obj) or inspect.ismethod(obj)):
2877 if inspect.isclass(obj):
2878 #for cython embedsignature=True the constructor docstring
2879 #belongs to the object itself not __init__
2880 ret += self._default_arguments_from_docstring(
2881 getattr(obj, '__doc__', ''))
2882 # for classes, check for __init__,__new__
2883 call_obj = (getattr(obj, '__init__', None) or
2884 getattr(obj, '__new__', None))
2885 # for all others, check if they are __call__able
2886 elif hasattr(obj, '__call__'):
2887 call_obj = obj.__call__
2888 ret += self._default_arguments_from_docstring(
2889 getattr(call_obj, '__doc__', ''))
2890
2891 _keeps = (inspect.Parameter.KEYWORD_ONLY,
2892 inspect.Parameter.POSITIONAL_OR_KEYWORD)
2893
2894 try:
2895 sig = inspect.signature(obj)
2896 ret.extend(k for k, v in sig.parameters.items() if
2897 v.kind in _keeps)
2898 except ValueError:
2899 pass
2900
2901 return list(set(ret))
2902
2903 @context_matcher()
2904 def python_func_kw_matcher(self, context: CompletionContext) -> SimpleMatcherResult:
2905 """Match named parameters (kwargs) of the last open function."""
2906 matches = self.python_func_kw_matches(context.token)
2907 return _convert_matcher_v1_result_to_v2_no_no(matches, type="param")
2908
2909 def python_func_kw_matches(self, text):
2910 """Match named parameters (kwargs) of the last open function.
2911
2912 .. deprecated:: 8.6
2913 You can use :meth:`python_func_kw_matcher` instead.
2914 """
2915
2916 if "." in text: # a parameter cannot be dotted
2917 return []
2918 try: regexp = self.__funcParamsRegex
2919 except AttributeError:
2920 regexp = self.__funcParamsRegex = re.compile(r'''
2921 '.*?(?<!\\)' | # single quoted strings or
2922 ".*?(?<!\\)" | # double quoted strings or
2923 \w+ | # identifier
2924 \S # other characters
2925 ''', re.VERBOSE | re.DOTALL)
2926 # 1. find the nearest identifier that comes before an unclosed
2927 # parenthesis before the cursor
2928 # e.g. for "foo (1+bar(x), pa<cursor>,a=1)", the candidate is "foo"
2929 tokens = regexp.findall(self.text_until_cursor)
2930 iterTokens = reversed(tokens)
2931 openPar = 0
2932
2933 for token in iterTokens:
2934 if token == ')':
2935 openPar -= 1
2936 elif token == '(':
2937 openPar += 1
2938 if openPar > 0:
2939 # found the last unclosed parenthesis
2940 break
2941 else:
2942 return []
2943 # 2. Concatenate dotted names ("foo.bar" for "foo.bar(x, pa" )
2944 ids = []
2945 isId = re.compile(r'\w+$').match
2946
2947 while True:
2948 try:
2949 ids.append(next(iterTokens))
2950 if not isId(ids[-1]):
2951 ids.pop()
2952 break
2953 if not next(iterTokens) == '.':
2954 break
2955 except StopIteration:
2956 break
2957
2958 # Find all named arguments already assigned to, as to avoid suggesting
2959 # them again
2960 usedNamedArgs = set()
2961 par_level = -1
2962 for token, next_token in zip(tokens, tokens[1:]):
2963 if token == '(':
2964 par_level += 1
2965 elif token == ')':
2966 par_level -= 1
2967
2968 if par_level != 0:
2969 continue
2970
2971 if next_token != '=':
2972 continue
2973
2974 usedNamedArgs.add(token)
2975
2976 argMatches = []
2977 try:
2978 callableObj = '.'.join(ids[::-1])
2979 namedArgs = self._default_arguments(eval(callableObj,
2980 self.namespace))
2981
2982 # Remove used named arguments from the list, no need to show twice
2983 for namedArg in set(namedArgs) - usedNamedArgs:
2984 if namedArg.startswith(text):
2985 argMatches.append("%s=" %namedArg)
2986 except:
2987 pass
2988
2989 return argMatches
2990
2991 @staticmethod
2992 def _get_keys(obj: Any) -> list[Any]:
2993 # Objects can define their own completions by defining an
2994 # _ipy_key_completions_() method.
2995 method = get_real_method(obj, '_ipython_key_completions_')
2996 if method is not None:
2997 return method()
2998
2999 # Special case some common in-memory dict-like types
3000 if isinstance(obj, dict) or _safe_isinstance(obj, "pandas", "DataFrame"):
3001 try:
3002 return list(obj.keys())
3003 except Exception:
3004 return []
3005 elif _safe_isinstance(obj, "pandas", "core", "indexing", "_LocIndexer"):
3006 try:
3007 return list(obj.obj.keys())
3008 except Exception:
3009 return []
3010 elif _safe_isinstance(obj, 'numpy', 'ndarray') or\
3011 _safe_isinstance(obj, 'numpy', 'void'):
3012 return obj.dtype.names or []
3013 return []
3014
3015 @context_matcher()
3016 def dict_key_matcher(self, context: CompletionContext) -> SimpleMatcherResult:
3017 """Match string keys in a dictionary, after e.g. ``foo[``."""
3018 matches = self.dict_key_matches(context.token)
3019 return _convert_matcher_v1_result_to_v2(
3020 matches, type="dict key", suppress_if_matches=True
3021 )
3022
3023 def dict_key_matches(self, text: str) -> list[str]:
3024 """Match string keys in a dictionary, after e.g. ``foo[``.
3025
3026 .. deprecated:: 8.6
3027 You can use :meth:`dict_key_matcher` instead.
3028 """
3029
3030 # Short-circuit on closed dictionary (regular expression would
3031 # not match anyway, but would take quite a while).
3032 if self.text_until_cursor.strip().endswith("]"):
3033 return []
3034
3035 match = DICT_MATCHER_REGEX.search(self.text_until_cursor)
3036
3037 if match is None:
3038 return []
3039
3040 expr, prior_tuple_keys, key_prefix = match.groups()
3041
3042 obj = self._evaluate_expr(expr)
3043
3044 if obj is not_found:
3045 return []
3046
3047 keys = self._get_keys(obj)
3048 if not keys:
3049 return keys
3050
3051 tuple_prefix = guarded_eval(
3052 prior_tuple_keys,
3053 EvaluationContext(
3054 globals=self.global_namespace,
3055 locals=self.namespace,
3056 evaluation=self.evaluation, # type: ignore
3057 in_subscript=True,
3058 auto_import=self._auto_import,
3059 policy_overrides=self.policy_overrides,
3060 ),
3061 )
3062
3063 closing_quote, token_offset, matches = match_dict_keys(
3064 keys, key_prefix, self.splitter.delims, extra_prefix=tuple_prefix
3065 )
3066 if not matches:
3067 return []
3068
3069 # get the cursor position of
3070 # - the text being completed
3071 # - the start of the key text
3072 # - the start of the completion
3073 text_start = len(self.text_until_cursor) - len(text)
3074 if key_prefix:
3075 key_start = match.start(3)
3076 completion_start = key_start + token_offset
3077 else:
3078 key_start = completion_start = match.end()
3079
3080 # grab the leading prefix, to make sure all completions start with `text`
3081 if text_start > key_start:
3082 leading = ''
3083 else:
3084 leading = text[text_start:completion_start]
3085
3086 # append closing quote and bracket as appropriate
3087 # this is *not* appropriate if the opening quote or bracket is outside
3088 # the text given to this method, e.g. `d["""a\nt
3089 can_close_quote = False
3090 can_close_bracket = False
3091
3092 continuation = self.line_buffer[len(self.text_until_cursor) :].strip()
3093
3094 if continuation.startswith(closing_quote):
3095 # do not close if already closed, e.g. `d['a<tab>'`
3096 continuation = continuation[len(closing_quote) :]
3097 else:
3098 can_close_quote = True
3099
3100 continuation = continuation.strip()
3101
3102 # e.g. `pandas.DataFrame` has different tuple indexer behaviour,
3103 # handling it is out of scope, so let's avoid appending suffixes.
3104 has_known_tuple_handling = isinstance(obj, dict)
3105
3106 can_close_bracket = (
3107 not continuation.startswith("]") and self.auto_close_dict_keys
3108 )
3109 can_close_tuple_item = (
3110 not continuation.startswith(",")
3111 and has_known_tuple_handling
3112 and self.auto_close_dict_keys
3113 )
3114 can_close_quote = can_close_quote and self.auto_close_dict_keys
3115
3116 # fast path if closing quote should be appended but not suffix is allowed
3117 if not can_close_quote and not can_close_bracket and closing_quote:
3118 return [leading + k for k in matches]
3119
3120 results = []
3121
3122 end_of_tuple_or_item = _DictKeyState.END_OF_TUPLE | _DictKeyState.END_OF_ITEM
3123
3124 for k, state_flag in matches.items():
3125 result = leading + k
3126 if can_close_quote and closing_quote:
3127 result += closing_quote
3128
3129 if state_flag == end_of_tuple_or_item:
3130 # We do not know which suffix to add,
3131 # e.g. both tuple item and string
3132 # match this item.
3133 pass
3134
3135 if state_flag in end_of_tuple_or_item and can_close_bracket:
3136 result += "]"
3137 if state_flag == _DictKeyState.IN_TUPLE and can_close_tuple_item:
3138 result += ", "
3139 results.append(result)
3140 return results
3141
3142 @context_matcher()
3143 def unicode_name_matcher(self, context: CompletionContext) -> SimpleMatcherResult:
3144 """Match Latex-like syntax for unicode characters base
3145 on the name of the character.
3146
3147 This does ``\\GREEK SMALL LETTER ETA`` -> ``η``
3148
3149 Works only on valid python 3 identifier, or on combining characters that
3150 will combine to form a valid identifier.
3151 """
3152
3153 text = context.text_until_cursor
3154
3155 slashpos = text.rfind('\\')
3156 if slashpos > -1:
3157 s = text[slashpos+1:]
3158 try :
3159 unic = unicodedata.lookup(s)
3160 # allow combining chars
3161 if ('a'+unic).isidentifier():
3162 return {
3163 "completions": [SimpleCompletion(text=unic, type="unicode")],
3164 "suppress": True,
3165 "matched_fragment": "\\" + s,
3166 }
3167 except KeyError:
3168 pass
3169 return {
3170 "completions": [],
3171 "suppress": False,
3172 }
3173
3174 @context_matcher()
3175 def latex_name_matcher(self, context: CompletionContext):
3176 """Match Latex syntax for unicode characters.
3177
3178 This does both ``\\alp`` -> ``\\alpha`` and ``\\alpha`` -> ``α``
3179 """
3180 fragment, matches = self.latex_matches(context.text_until_cursor)
3181 return _convert_matcher_v1_result_to_v2(
3182 matches, type="latex", fragment=fragment, suppress_if_matches=True
3183 )
3184
3185 def latex_matches(self, text: str) -> tuple[str, Sequence[str]]:
3186 """Match Latex syntax for unicode characters.
3187
3188 This does both ``\\alp`` -> ``\\alpha`` and ``\\alpha`` -> ``α``
3189
3190 .. deprecated:: 8.6
3191 You can use :meth:`latex_name_matcher` instead.
3192 """
3193 slashpos = text.rfind('\\')
3194 if slashpos > -1:
3195 s = text[slashpos:]
3196 if s in latex_symbols:
3197 # Try to complete a full latex symbol to unicode
3198 # \\alpha -> α
3199 return s, [latex_symbols[s]]
3200 else:
3201 # If a user has partially typed a latex symbol, give them
3202 # a full list of options \al -> [\aleph, \alpha]
3203 matches = [k for k in latex_symbols if k.startswith(s)]
3204 if matches:
3205 return s, matches
3206 return '', ()
3207
3208 @context_matcher()
3209 def custom_completer_matcher(self, context):
3210 """Dispatch custom completer.
3211
3212 If a match is found, suppresses all other matchers except for Jedi.
3213 """
3214 matches = self.dispatch_custom_completer(context.token) or []
3215 result = _convert_matcher_v1_result_to_v2(
3216 matches, type=_UNKNOWN_TYPE, suppress_if_matches=True
3217 )
3218 result["ordered"] = True
3219 result["do_not_suppress"] = {_get_matcher_id(self._jedi_matcher)}
3220 return result
3221
3222 def dispatch_custom_completer(self, text):
3223 """
3224 .. deprecated:: 8.6
3225 You can use :meth:`custom_completer_matcher` instead.
3226 """
3227 if not self.custom_completers:
3228 return
3229
3230 line = self.line_buffer
3231 if not line.strip():
3232 return None
3233
3234 # Create a little structure to pass all the relevant information about
3235 # the current completion to any custom completer.
3236 event = SimpleNamespace()
3237 event.line = line
3238 event.symbol = text
3239 cmd = line.split(None,1)[0]
3240 event.command = cmd
3241 event.text_until_cursor = self.text_until_cursor
3242
3243 # for foo etc, try also to find completer for %foo
3244 if not cmd.startswith(self.magic_escape):
3245 try_magic = self.custom_completers.s_matches(
3246 self.magic_escape + cmd)
3247 else:
3248 try_magic = []
3249
3250 for c in itertools.chain(self.custom_completers.s_matches(cmd),
3251 try_magic,
3252 self.custom_completers.flat_matches(self.text_until_cursor)):
3253 try:
3254 res = c(event)
3255 if res:
3256 # first, try case sensitive match
3257 withcase = [r for r in res if r.startswith(text)]
3258 if withcase:
3259 return withcase
3260 # if none, then case insensitive ones are ok too
3261 text_low = text.lower()
3262 return [r for r in res if r.lower().startswith(text_low)]
3263 except TryNext:
3264 pass
3265 except KeyboardInterrupt:
3266 """
3267 If custom completer take too long,
3268 let keyboard interrupt abort and return nothing.
3269 """
3270 break
3271
3272 return None
3273
3274 def completions(self, text: str, offset: int)->Iterator[Completion]:
3275 """
3276 Returns an iterator over the possible completions
3277
3278 .. warning::
3279
3280 Unstable
3281
3282 This function is unstable, API may change without warning.
3283 It will also raise unless use in proper context manager.
3284
3285 Parameters
3286 ----------
3287 text : str
3288 Full text of the current input, multi line string.
3289 offset : int
3290 Integer representing the position of the cursor in ``text``. Offset
3291 is 0-based indexed.
3292
3293 Yields
3294 ------
3295 Completion
3296
3297 Notes
3298 -----
3299 The cursor on a text can either be seen as being "in between"
3300 characters or "On" a character depending on the interface visible to
3301 the user. For consistency the cursor being on "in between" characters X
3302 and Y is equivalent to the cursor being "on" character Y, that is to say
3303 the character the cursor is on is considered as being after the cursor.
3304
3305 Combining characters may span more that one position in the
3306 text.
3307
3308 .. note::
3309
3310 If ``IPCompleter.debug`` is :any:`True` will yield a ``--jedi/ipython--``
3311 fake Completion token to distinguish completion returned by Jedi
3312 and usual IPython completion.
3313
3314 .. note::
3315
3316 Completions are not completely deduplicated yet. If identical
3317 completions are coming from different sources this function does not
3318 ensure that each completion object will only be present once.
3319 """
3320 warnings.warn("_complete is a provisional API (as of IPython 6.0). "
3321 "It may change without warnings. "
3322 "Use in corresponding context manager.",
3323 category=ProvisionalCompleterWarning, stacklevel=2)
3324
3325 seen = set()
3326 profiler:Optional[cProfile.Profile]
3327 try:
3328 if self.profile_completions:
3329 import cProfile
3330 profiler = cProfile.Profile()
3331 profiler.enable()
3332 else:
3333 profiler = None
3334
3335 for c in self._completions(text, offset, _timeout=self.jedi_compute_type_timeout/1000):
3336 if c and (c in seen):
3337 continue
3338 yield c
3339 seen.add(c)
3340 except KeyboardInterrupt:
3341 """if completions take too long and users send keyboard interrupt,
3342 do not crash and return ASAP. """
3343 pass
3344 finally:
3345 if profiler is not None:
3346 profiler.disable()
3347 ensure_dir_exists(self.profiler_output_dir)
3348 output_path = os.path.join(self.profiler_output_dir, str(uuid.uuid4()))
3349 print("Writing profiler output to", output_path)
3350 profiler.dump_stats(output_path)
3351
3352 def _completions(self, full_text: str, offset: int, *, _timeout) -> Iterator[Completion]:
3353 """
3354 Core completion module.Same signature as :any:`completions`, with the
3355 extra `timeout` parameter (in seconds).
3356
3357 Computing jedi's completion ``.type`` can be quite expensive (it is a
3358 lazy property) and can require some warm-up, more warm up than just
3359 computing the ``name`` of a completion. The warm-up can be :
3360
3361 - Long warm-up the first time a module is encountered after
3362 install/update: actually build parse/inference tree.
3363
3364 - first time the module is encountered in a session: load tree from
3365 disk.
3366
3367 We don't want to block completions for tens of seconds so we give the
3368 completer a "budget" of ``_timeout`` seconds per invocation to compute
3369 completions types, the completions that have not yet been computed will
3370 be marked as "unknown" an will have a chance to be computed next round
3371 are things get cached.
3372
3373 Keep in mind that Jedi is not the only thing treating the completion so
3374 keep the timeout short-ish as if we take more than 0.3 second we still
3375 have lots of processing to do.
3376
3377 """
3378 deadline = time.monotonic() + _timeout
3379
3380 before = full_text[:offset]
3381 cursor_line, cursor_column = position_to_cursor(full_text, offset)
3382
3383 jedi_matcher_id = _get_matcher_id(self._jedi_matcher)
3384
3385 def is_non_jedi_result(
3386 result: MatcherResult, identifier: str
3387 ) -> TypeGuard[SimpleMatcherResult]:
3388 return identifier != jedi_matcher_id
3389
3390 results = self._complete(
3391 full_text=full_text, cursor_line=cursor_line, cursor_pos=cursor_column
3392 )
3393
3394 non_jedi_results: dict[str, SimpleMatcherResult] = {
3395 identifier: result
3396 for identifier, result in results.items()
3397 if is_non_jedi_result(result, identifier)
3398 }
3399
3400 jedi_matches = (
3401 cast(_JediMatcherResult, results[jedi_matcher_id])["completions"]
3402 if jedi_matcher_id in results
3403 else ()
3404 )
3405
3406 iter_jm = iter(jedi_matches)
3407 if _timeout:
3408 for jm in iter_jm:
3409 try:
3410 type_ = jm.type
3411 except Exception:
3412 if self.debug:
3413 print("Error in Jedi getting type of ", jm)
3414 type_ = None
3415 delta = len(jm.name_with_symbols) - len(jm.complete)
3416 if type_ == 'function':
3417 signature = _make_signature(jm)
3418 else:
3419 signature = ''
3420 yield Completion(start=offset - delta,
3421 end=offset,
3422 text=jm.name_with_symbols,
3423 type=type_,
3424 signature=signature,
3425 _origin='jedi')
3426
3427 if time.monotonic() > deadline:
3428 break
3429
3430 for jm in iter_jm:
3431 delta = len(jm.name_with_symbols) - len(jm.complete)
3432 yield Completion(
3433 start=offset - delta,
3434 end=offset,
3435 text=jm.name_with_symbols,
3436 type=_UNKNOWN_TYPE, # don't compute type for speed
3437 _origin="jedi",
3438 signature="",
3439 )
3440
3441 # TODO:
3442 # Suppress this, right now just for debug.
3443 if jedi_matches and non_jedi_results and self.debug:
3444 some_start_offset = before.rfind(
3445 next(iter(non_jedi_results.values()))["matched_fragment"]
3446 )
3447 yield Completion(
3448 start=some_start_offset,
3449 end=offset,
3450 text="--jedi/ipython--",
3451 _origin="debug",
3452 type="none",
3453 signature="",
3454 )
3455
3456 ordered: list[Completion] = []
3457 sortable: list[Completion] = []
3458
3459 for origin, result in non_jedi_results.items():
3460 matched_text = result["matched_fragment"]
3461 start_offset = before.rfind(matched_text)
3462 is_ordered = result.get("ordered", False)
3463 container = ordered if is_ordered else sortable
3464
3465 # I'm unsure if this is always true, so let's assert and see if it
3466 # crash
3467 assert before.endswith(matched_text)
3468
3469 for simple_completion in result["completions"]:
3470 completion = Completion(
3471 start=start_offset,
3472 end=offset,
3473 text=simple_completion.text,
3474 _origin=origin,
3475 signature="",
3476 type=simple_completion.type or _UNKNOWN_TYPE,
3477 )
3478 container.append(completion)
3479
3480 yield from list(self._deduplicate(ordered + self._sort(sortable)))[
3481 :MATCHES_LIMIT
3482 ]
3483
3484 def complete(
3485 self, text=None, line_buffer=None, cursor_pos=None
3486 ) -> tuple[str, Sequence[str]]:
3487 """Find completions for the given text and line context.
3488
3489 Note that both the text and the line_buffer are optional, but at least
3490 one of them must be given.
3491
3492 Parameters
3493 ----------
3494 text : string, optional
3495 Text to perform the completion on. If not given, the line buffer
3496 is split using the instance's CompletionSplitter object.
3497 line_buffer : string, optional
3498 If not given, the completer attempts to obtain the current line
3499 buffer via readline. This keyword allows clients which are
3500 requesting for text completions in non-readline contexts to inform
3501 the completer of the entire text.
3502 cursor_pos : int, optional
3503 Index of the cursor in the full line buffer. Should be provided by
3504 remote frontends where kernel has no access to frontend state.
3505
3506 Returns
3507 -------
3508 Tuple of two items:
3509 text : str
3510 Text that was actually used in the completion.
3511 matches : list
3512 A list of completion matches.
3513
3514 Notes
3515 -----
3516 This API is likely to be deprecated and replaced by
3517 :any:`IPCompleter.completions` in the future.
3518
3519 """
3520 warnings.warn('`Completer.complete` is pending deprecation since '
3521 'IPython 6.0 and will be replaced by `Completer.completions`.',
3522 PendingDeprecationWarning)
3523 # potential todo, FOLD the 3rd throw away argument of _complete
3524 # into the first 2 one.
3525 # TODO: Q: does the above refer to jedi completions (i.e. 0-indexed?)
3526 # TODO: should we deprecate now, or does it stay?
3527
3528 results = self._complete(
3529 line_buffer=line_buffer, cursor_pos=cursor_pos, text=text, cursor_line=0
3530 )
3531
3532 jedi_matcher_id = _get_matcher_id(self._jedi_matcher)
3533
3534 return self._arrange_and_extract(
3535 results,
3536 # TODO: can we confirm that excluding Jedi here was a deliberate choice in previous version?
3537 skip_matchers={jedi_matcher_id},
3538 # this API does not support different start/end positions (fragments of token).
3539 abort_if_offset_changes=True,
3540 )
3541
3542 def _arrange_and_extract(
3543 self,
3544 results: dict[str, MatcherResult],
3545 skip_matchers: set[str],
3546 abort_if_offset_changes: bool,
3547 ):
3548 sortable: list[AnyMatcherCompletion] = []
3549 ordered: list[AnyMatcherCompletion] = []
3550 most_recent_fragment = None
3551 for identifier, result in results.items():
3552 if identifier in skip_matchers:
3553 continue
3554 if not result["completions"]:
3555 continue
3556 if not most_recent_fragment:
3557 most_recent_fragment = result["matched_fragment"]
3558 if (
3559 abort_if_offset_changes
3560 and result["matched_fragment"] != most_recent_fragment
3561 ):
3562 break
3563 if result.get("ordered", False):
3564 ordered.extend(result["completions"])
3565 else:
3566 sortable.extend(result["completions"])
3567
3568 if not most_recent_fragment:
3569 most_recent_fragment = "" # to satisfy typechecker (and just in case)
3570
3571 return most_recent_fragment, [
3572 m.text for m in self._deduplicate(ordered + self._sort(sortable))
3573 ]
3574
3575 def _complete(self, *, cursor_line, cursor_pos, line_buffer=None, text=None,
3576 full_text=None) -> _CompleteResult:
3577 """
3578 Like complete but can also returns raw jedi completions as well as the
3579 origin of the completion text. This could (and should) be made much
3580 cleaner but that will be simpler once we drop the old (and stateful)
3581 :any:`complete` API.
3582
3583 With current provisional API, cursor_pos act both (depending on the
3584 caller) as the offset in the ``text`` or ``line_buffer``, or as the
3585 ``column`` when passing multiline strings this could/should be renamed
3586 but would add extra noise.
3587
3588 Parameters
3589 ----------
3590 cursor_line
3591 Index of the line the cursor is on. 0 indexed.
3592 cursor_pos
3593 Position of the cursor in the current line/line_buffer/text. 0
3594 indexed.
3595 line_buffer : optional, str
3596 The current line the cursor is in, this is mostly due to legacy
3597 reason that readline could only give a us the single current line.
3598 Prefer `full_text`.
3599 text : str
3600 The current "token" the cursor is in, mostly also for historical
3601 reasons. as the completer would trigger only after the current line
3602 was parsed.
3603 full_text : str
3604 Full text of the current cell.
3605
3606 Returns
3607 -------
3608 An ordered dictionary where keys are identifiers of completion
3609 matchers and values are ``MatcherResult``s.
3610 """
3611
3612 # if the cursor position isn't given, the only sane assumption we can
3613 # make is that it's at the end of the line (the common case)
3614 if cursor_pos is None:
3615 cursor_pos = len(line_buffer) if text is None else len(text)
3616
3617 if self.use_main_ns:
3618 self.namespace = __main__.__dict__
3619
3620 # if text is either None or an empty string, rely on the line buffer
3621 if (not line_buffer) and full_text:
3622 line_buffer = full_text.split('\n')[cursor_line]
3623 if not text: # issue #11508: check line_buffer before calling split_line
3624 text = (
3625 self.splitter.split_line(line_buffer, cursor_pos) if line_buffer else ""
3626 )
3627
3628 # If no line buffer is given, assume the input text is all there was
3629 if line_buffer is None:
3630 line_buffer = text
3631
3632 # deprecated - do not use `line_buffer` in new code.
3633 self.line_buffer = line_buffer
3634 self.text_until_cursor = self.line_buffer[:cursor_pos]
3635
3636 if not full_text:
3637 full_text = line_buffer
3638
3639 context = CompletionContext(
3640 full_text=full_text,
3641 cursor_position=cursor_pos,
3642 cursor_line=cursor_line,
3643 token=self._extract_code(text),
3644 limit=MATCHES_LIMIT,
3645 )
3646
3647 # Start with a clean slate of completions
3648 results: dict[str, MatcherResult] = {}
3649
3650 jedi_matcher_id = _get_matcher_id(self._jedi_matcher)
3651
3652 suppressed_matchers: set[str] = set()
3653
3654 matchers = {
3655 _get_matcher_id(matcher): matcher
3656 for matcher in sorted(
3657 self.matchers, key=_get_matcher_priority, reverse=True
3658 )
3659 }
3660
3661 for matcher_id, matcher in matchers.items():
3662 matcher_id = _get_matcher_id(matcher)
3663
3664 if matcher_id in self.disable_matchers:
3665 continue
3666
3667 if matcher_id in results:
3668 warnings.warn(f"Duplicate matcher ID: {matcher_id}.")
3669
3670 if matcher_id in suppressed_matchers:
3671 continue
3672
3673 result: MatcherResult
3674 try:
3675 if _is_matcher_v1(matcher):
3676 result = _convert_matcher_v1_result_to_v2_no_no(
3677 matcher(text), type=_UNKNOWN_TYPE
3678 )
3679 elif _is_matcher_v2(matcher):
3680 result = matcher(context)
3681 else:
3682 api_version = _get_matcher_api_version(matcher)
3683 raise ValueError(f"Unsupported API version {api_version}")
3684 except BaseException:
3685 # Show the ugly traceback if the matcher causes an
3686 # exception, but do NOT crash the kernel!
3687 sys.excepthook(*sys.exc_info())
3688 continue
3689
3690 # set default value for matched fragment if suffix was not selected.
3691 result["matched_fragment"] = result.get("matched_fragment", context.token)
3692
3693 if not suppressed_matchers:
3694 suppression_recommended: Union[bool, set[str]] = result.get(
3695 "suppress", False
3696 )
3697
3698 suppression_config = (
3699 self.suppress_competing_matchers.get(matcher_id, None)
3700 if isinstance(self.suppress_competing_matchers, dict)
3701 else self.suppress_competing_matchers
3702 )
3703 should_suppress = (
3704 (suppression_config is True)
3705 or (suppression_recommended and (suppression_config is not False))
3706 ) and has_any_completions(result)
3707
3708 if should_suppress:
3709 suppression_exceptions: set[str] = result.get(
3710 "do_not_suppress", set()
3711 )
3712 if isinstance(suppression_recommended, Iterable):
3713 to_suppress = set(suppression_recommended)
3714 else:
3715 to_suppress = set(matchers)
3716 suppressed_matchers = to_suppress - suppression_exceptions
3717
3718 new_results = {}
3719 for previous_matcher_id, previous_result in results.items():
3720 if previous_matcher_id not in suppressed_matchers:
3721 new_results[previous_matcher_id] = previous_result
3722 results = new_results
3723
3724 results[matcher_id] = result
3725
3726 _, matches = self._arrange_and_extract(
3727 results,
3728 # TODO Jedi completions non included in legacy stateful API; was this deliberate or omission?
3729 # if it was omission, we can remove the filtering step, otherwise remove this comment.
3730 skip_matchers={jedi_matcher_id},
3731 abort_if_offset_changes=False,
3732 )
3733
3734 # populate legacy stateful API
3735 self.matches = matches
3736
3737 return results
3738
3739 @staticmethod
3740 def _deduplicate(
3741 matches: Sequence[AnyCompletion],
3742 ) -> Iterable[AnyCompletion]:
3743 filtered_matches: dict[str, AnyCompletion] = {}
3744 for match in matches:
3745 text = match.text
3746 if (
3747 text not in filtered_matches
3748 or filtered_matches[text].type == _UNKNOWN_TYPE
3749 ):
3750 filtered_matches[text] = match
3751
3752 return filtered_matches.values()
3753
3754 @staticmethod
3755 def _sort(matches: Sequence[AnyCompletion]):
3756 return sorted(matches, key=lambda x: completions_sorting_key(x.text))
3757
3758 @context_matcher()
3759 def fwd_unicode_matcher(self, context: CompletionContext):
3760 """Same as :any:`fwd_unicode_match`, but adopted to new Matcher API."""
3761 # TODO: use `context.limit` to terminate early once we matched the maximum
3762 # number that will be used downstream; can be added as an optional to
3763 # `fwd_unicode_match(text: str, limit: int = None)` or we could re-implement here.
3764 fragment, matches = self.fwd_unicode_match(context.text_until_cursor)
3765 return _convert_matcher_v1_result_to_v2(
3766 matches, type="unicode", fragment=fragment, suppress_if_matches=True
3767 )
3768
3769 def fwd_unicode_match(self, text: str) -> tuple[str, Sequence[str]]:
3770 """
3771 Forward match a string starting with a backslash with a list of
3772 potential Unicode completions.
3773
3774 Will compute list of Unicode character names on first call and cache it.
3775
3776 .. deprecated:: 8.6
3777 You can use :meth:`fwd_unicode_matcher` instead.
3778
3779 Returns
3780 -------
3781 At tuple with:
3782 - matched text (empty if no matches)
3783 - list of potential completions, empty tuple otherwise)
3784 """
3785 # TODO: self.unicode_names is here a list we traverse each time with ~100k elements.
3786 # We could do a faster match using a Trie.
3787
3788 # Using pygtrie the following seem to work:
3789
3790 # s = PrefixSet()
3791
3792 # for c in range(0,0x10FFFF + 1):
3793 # try:
3794 # s.add(unicodedata.name(chr(c)))
3795 # except ValueError:
3796 # pass
3797 # [''.join(k) for k in s.iter(prefix)]
3798
3799 # But need to be timed and adds an extra dependency.
3800
3801 slashpos = text.rfind('\\')
3802 # if text starts with slash
3803 if slashpos > -1:
3804 # PERF: It's important that we don't access self._unicode_names
3805 # until we're inside this if-block. _unicode_names is lazily
3806 # initialized, and it takes a user-noticeable amount of time to
3807 # initialize it, so we don't want to initialize it unless we're
3808 # actually going to use it.
3809 s = text[slashpos + 1 :]
3810 sup = s.upper()
3811 candidates = [x for x in self.unicode_names if x.startswith(sup)]
3812 if candidates:
3813 return s, candidates
3814 candidates = [x for x in self.unicode_names if sup in x]
3815 if candidates:
3816 return s, candidates
3817 splitsup = sup.split(" ")
3818 candidates = [
3819 x for x in self.unicode_names if all(u in x for u in splitsup)
3820 ]
3821 if candidates:
3822 return s, candidates
3823
3824 return "", ()
3825
3826 # if text does not start with slash
3827 else:
3828 return '', ()
3829
3830 @property
3831 def unicode_names(self) -> list[str]:
3832 """List of names of unicode code points that can be completed.
3833
3834 The list is lazily initialized on first access.
3835 """
3836 if self._unicode_names is None:
3837 names = []
3838 for c in range(0,0x10FFFF + 1):
3839 try:
3840 names.append(unicodedata.name(chr(c)))
3841 except ValueError:
3842 pass
3843 self._unicode_names = _unicode_name_compute(_UNICODE_RANGES)
3844
3845 return self._unicode_names
3846
3847
3848def _unicode_name_compute(ranges: list[tuple[int, int]]) -> list[str]:
3849 names = []
3850 for start,stop in ranges:
3851 for c in range(start, stop) :
3852 try:
3853 names.append(unicodedata.name(chr(c)))
3854 except ValueError:
3855 pass
3856 return names