from __future__ import annotations

_shared_docs: dict[str, str] = {}

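# The templates below are filled in by their callers before being attached to
# a docstring. Two placeholder styles appear in this module: ``str.format``
# fields such as ``{klass}`` (literal braces are escaped by doubling, as in
# ``{{...}}``) and printf-style fields such as ``%(klass)s`` or ``%s``. A
# minimal sketch of each substitution (values here are illustrative only):
#
#     _shared_docs["transform"].format(klass="DataFrame", axis="")
#     _shared_docs["groupby"] % {"klass": "DataFrame"}
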
_shared_docs[
    "aggregate"
] = """
Aggregate using one or more operations over the specified axis.

Parameters
----------
func : function, str, list or dict
    Function to use for aggregating the data. If a function, must either
    work when passed a {klass} or when passed to {klass}.apply.

    Accepted combinations are:

    - function
    - string function name
    - list of functions and/or function names, e.g. ``[np.sum, 'mean']``
    - dict of axis labels -> functions, function names or list of such.
{axis}
*args
    Positional arguments to pass to `func`.
**kwargs
    Keyword arguments to pass to `func`.

Returns
-------
scalar, Series or DataFrame

    The return can be:

    * scalar : when Series.agg is called with a single function
    * Series : when DataFrame.agg is called with a single function
    * DataFrame : when DataFrame.agg is called with several functions
{see_also}
Notes
-----
`agg` is an alias for `aggregate`. Use the alias.

Functions that mutate the passed object can produce unexpected
behavior or errors and are not supported. See :ref:`gotchas.udf-mutation`
for more details.

A passed user-defined function will be passed a Series for evaluation.
{examples}"""

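# For orientation, a sketch of the kind of call the completed docstring
# describes (illustrative, not part of the template itself):
#
#     >>> df = pd.DataFrame({"A": [1, 2], "B": [3, 4]})
#     >>> df.agg(['sum', 'min'])
#          A  B
#     sum  3  7
#     min  1  3
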
_shared_docs[
    "compare"
] = """
Compare to another {klass} and show the differences.

.. versionadded:: 1.1.0

Parameters
----------
other : {klass}
    Object to compare with.

align_axis : {{0 or 'index', 1 or 'columns'}}, default 1
    Determine which axis to align the comparison on.

    * 0, or 'index' : Resulting differences are stacked vertically
      with rows drawn alternately from self and other.
    * 1, or 'columns' : Resulting differences are aligned horizontally
      with columns drawn alternately from self and other.

keep_shape : bool, default False
    If True, all rows and columns are kept.
    Otherwise, only the ones with different values are kept.

keep_equal : bool, default False
    If True, the result keeps values that are equal.
    Otherwise, equal values are shown as NaNs.

result_names : tuple, default ('self', 'other')
    Set the dataframes' names in the comparison.

    .. versionadded:: 1.5.0
"""

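# The "compare" template carries no Examples section; a hedged sketch of the
# default (``align_axis=1``) behavior, with the repr approximated:
#
#     >>> df = pd.DataFrame({"a": [1.0, 2.0], "b": [3.0, 4.0]})
#     >>> other = df.copy()
#     >>> other.loc[0, "a"] = 9.0
#     >>> df.compare(other)          # differing cells only
#           a
#        self other
#     0   1.0   9.0
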
_shared_docs[
    "groupby"
] = """
Group %(klass)s using a mapper or by a Series of columns.

A groupby operation involves some combination of splitting the
object, applying a function, and combining the results. This can be
used to group large amounts of data and compute operations on these
groups.

Parameters
----------
by : mapping, function, label, pd.Grouper or list of such
    Used to determine the groups for the groupby.
    If ``by`` is a function, it's called on each value of the object's
    index. If a dict or Series is passed, the Series or dict VALUES
    will be used to determine the groups (the Series' values are first
    aligned; see ``.align()`` method). If a list or ndarray of length
    equal to the selected axis is passed (see the `groupby user guide
    <https://pandas.pydata.org/pandas-docs/stable/user_guide/groupby.html#splitting-an-object-into-groups>`_),
    the values are used as-is to determine the groups. A label or list
    of labels may be passed to group by the columns in ``self``.
    Notice that a tuple is interpreted as a (single) key.
axis : {0 or 'index', 1 or 'columns'}, default 0
    Split along rows (0) or columns (1). For `Series` this parameter
    is unused and defaults to 0.
level : int, level name, or sequence of such, default None
    If the axis is a MultiIndex (hierarchical), group by a particular
    level or levels. Do not specify both ``by`` and ``level``.
as_index : bool, default True
    For aggregated output, return object with group labels as the
    index. Only relevant for DataFrame input. as_index=False is
    effectively "SQL-style" grouped output.
sort : bool, default True
    Sort group keys. Get better performance by turning this off.
    Note this does not influence the order of observations within each
    group. Groupby preserves the order of rows within each group.

    .. versionchanged:: 2.0.0

        Specifying ``sort=False`` with an ordered categorical grouper will no
        longer sort the values.

group_keys : bool, default True
    When calling apply and the ``by`` argument produces a like-indexed
    (i.e. :ref:`a transform <groupby.transform>`) result, add group keys to
    index to identify pieces. By default group keys are not included
    when the result's index (and column) labels match the inputs, and
    are included otherwise.

    .. versionchanged:: 1.5.0

       Warns that ``group_keys`` will no longer be ignored when the
       result from ``apply`` is a like-indexed Series or DataFrame.
       Specify ``group_keys`` explicitly to include the group keys or
       not.

    .. versionchanged:: 2.0.0

       ``group_keys`` now defaults to ``True``.

observed : bool, default False
    This only applies if any of the groupers are Categoricals.
    If True: only show observed values for categorical groupers.
    If False: show all values for categorical groupers.
dropna : bool, default True
    If True, and if group keys contain NA values, NA values together
    with row/column will be dropped.
    If False, NA values will also be treated as the key in groups.

    .. versionadded:: 1.1.0

Returns
-------
%(klass)sGroupBy
    Returns a groupby object that contains information about the groups.

See Also
--------
resample : Convenience method for frequency conversion and resampling
    of time series.

Notes
-----
See the `user guide
<https://pandas.pydata.org/pandas-docs/stable/groupby.html>`__ for more
detailed usage and examples, including splitting an object into groups,
iterating through groups, selecting a group, aggregation, and more.
"""

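# The "groupby" template likewise has no Examples section; the canonical
# split-apply-combine illustration from the pandas documentation:
#
#     >>> df = pd.DataFrame({"Animal": ["Falcon", "Falcon", "Parrot", "Parrot"],
#     ...                    "Max Speed": [380., 370., 24., 26.]})
#     >>> df.groupby("Animal").mean()
#             Max Speed
#     Animal
#     Falcon      375.0
#     Parrot       25.0
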
_shared_docs[
    "melt"
] = """
Unpivot a DataFrame from wide to long format, optionally leaving identifiers set.

This function is useful to massage a DataFrame into a format where one
or more columns are identifier variables (`id_vars`), while all other
columns, considered measured variables (`value_vars`), are "unpivoted" to
the row axis, leaving just two non-identifier columns, 'variable' and
'value'.

Parameters
----------
id_vars : tuple, list, or ndarray, optional
    Column(s) to use as identifier variables.
value_vars : tuple, list, or ndarray, optional
    Column(s) to unpivot. If not specified, uses all columns that
    are not set as `id_vars`.
var_name : scalar, default None
    Name to use for the 'variable' column. If None it uses
    ``frame.columns.name`` or 'variable'.
value_name : scalar, default 'value'
    Name to use for the 'value' column.
col_level : int or str, optional
    If columns are a MultiIndex then use this level to melt.
ignore_index : bool, default True
    If True, original index is ignored. If False, the original index is retained.
    Index labels will be repeated as necessary.

    .. versionadded:: 1.1.0

Returns
-------
DataFrame
    Unpivoted DataFrame.

See Also
--------
%(other)s : Identical method.
pivot_table : Create a spreadsheet-style pivot table as a DataFrame.
DataFrame.pivot : Return reshaped DataFrame organized
    by given index / column values.
DataFrame.explode : Explode a DataFrame from list-like
    columns to long format.

Notes
-----
Reference :ref:`the user guide <reshaping.melt>` for more examples.

Examples
--------
>>> df = pd.DataFrame({'A': {0: 'a', 1: 'b', 2: 'c'},
...                    'B': {0: 1, 1: 3, 2: 5},
...                    'C': {0: 2, 1: 4, 2: 6}})
>>> df
   A  B  C
0  a  1  2
1  b  3  4
2  c  5  6

>>> %(caller)sid_vars=['A'], value_vars=['B'])
   A variable  value
0  a        B      1
1  b        B      3
2  c        B      5

>>> %(caller)sid_vars=['A'], value_vars=['B', 'C'])
   A variable  value
0  a        B      1
1  b        B      3
2  c        B      5
3  a        C      2
4  b        C      4
5  c        C      6

The names of 'variable' and 'value' columns can be customized:

>>> %(caller)sid_vars=['A'], value_vars=['B'],
...         var_name='myVarname', value_name='myValname')
   A myVarname  myValname
0  a         B          1
1  b         B          3
2  c         B          5

Original index values can be kept around:

>>> %(caller)sid_vars=['A'], value_vars=['B', 'C'], ignore_index=False)
   A variable  value
0  a        B      1
1  b        B      3
2  c        B      5
0  a        C      2
1  b        C      4
2  c        C      6

If you have multi-index columns:

>>> df.columns = [list('ABC'), list('DEF')]
>>> df
   A  B  C
   D  E  F
0  a  1  2
1  b  3  4
2  c  5  6

>>> %(caller)scol_level=0, id_vars=['A'], value_vars=['B'])
   A variable  value
0  a        B      1
1  b        B      3
2  c        B      5

>>> %(caller)sid_vars=[('A', 'D')], value_vars=[('B', 'E')])
  (A, D) variable_0 variable_1  value
0      a          B          E      1
1      b          B          E      3
2      c          B          E      5
"""

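# How ``%(caller)s`` above is meant to be filled (an assumption about the
# call sites, which live elsewhere in the codebase): the method flavor
# substitutes ``"df.melt("`` and the top-level function flavor
# ``"pd.melt(df, "``, e.g.
#
#     _shared_docs["melt"] % {"caller": "df.melt(", "other": "melt"}
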
_shared_docs[
    "transform"
] = """
Call ``func`` on self producing a {klass} with the same axis shape as self.

Parameters
----------
func : function, str, list-like or dict-like
    Function to use for transforming the data. If a function, must either
    work when passed a {klass} or when passed to {klass}.apply. If func
    is both list-like and dict-like, dict-like behavior takes precedence.

    Accepted combinations are:

    - function
    - string function name
    - list-like of functions and/or function names, e.g. ``[np.exp, 'sqrt']``
    - dict-like of axis labels -> functions, function names or list-like of such.
{axis}
*args
    Positional arguments to pass to `func`.
**kwargs
    Keyword arguments to pass to `func`.

Returns
-------
{klass}
    A {klass} that must have the same length as self.

Raises
------
ValueError : If the returned {klass} has a different length than self.

See Also
--------
{klass}.agg : Only perform aggregating type operations.
{klass}.apply : Invoke function on a {klass}.

Notes
-----
Functions that mutate the passed object can produce unexpected
behavior or errors and are not supported. See :ref:`gotchas.udf-mutation`
for more details.

Examples
--------
>>> df = pd.DataFrame({{'A': range(3), 'B': range(1, 4)}})
>>> df
   A  B
0  0  1
1  1  2
2  2  3
>>> df.transform(lambda x: x + 1)
   A  B
0  1  2
1  2  3
2  3  4

Even though the resulting {klass} must have the same length as the
input {klass}, it is possible to provide several input functions:

>>> s = pd.Series(range(3))
>>> s
0    0
1    1
2    2
dtype: int64
>>> s.transform([np.sqrt, np.exp])
       sqrt        exp
0  0.000000   1.000000
1  1.000000   2.718282
2  1.414214   7.389056

You can call transform on a GroupBy object:

>>> df = pd.DataFrame({{
...     "Date": [
...         "2015-05-08", "2015-05-07", "2015-05-06", "2015-05-05",
...         "2015-05-08", "2015-05-07", "2015-05-06", "2015-05-05"],
...     "Data": [5, 8, 6, 1, 50, 100, 60, 120],
... }})
>>> df
         Date  Data
0  2015-05-08     5
1  2015-05-07     8
2  2015-05-06     6
3  2015-05-05     1
4  2015-05-08    50
5  2015-05-07   100
6  2015-05-06    60
7  2015-05-05   120
>>> df.groupby('Date')['Data'].transform('sum')
0     55
1    108
2     66
3    121
4     55
5    108
6     66
7    121
Name: Data, dtype: int64

>>> df = pd.DataFrame({{
...     "c": [1, 1, 1, 2, 2, 2, 2],
...     "type": ["m", "n", "o", "m", "m", "n", "n"]
... }})
>>> df
   c type
0  1    m
1  1    n
2  1    o
3  2    m
4  2    m
5  2    n
6  2    n
>>> df['size'] = df.groupby('c')['type'].transform(len)
>>> df
   c type size
0  1    m    3
1  1    n    3
2  1    o    3
3  2    m    4
4  2    m    4
5  2    n    4
6  2    n    4
"""

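# The dict-like form of ``func`` described above is not exercised by the
# examples in the template; a brief sketch (column names illustrative):
#
#     >>> df = pd.DataFrame({"A": range(3), "B": range(1, 4)})
#     >>> df.transform({"A": lambda x: x + 1, "B": "sqrt"})
#        A         B
#     0  1  1.000000
#     1  2  1.414214
#     2  3  1.732051
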
_shared_docs[
    "storage_options"
] = """storage_options : dict, optional
    Extra options that make sense for a particular storage connection, e.g.
    host, port, username, password, etc. For HTTP(S) URLs the key-value pairs
    are forwarded to ``urllib.request.Request`` as header options. For other
    URLs (e.g. starting with "s3://", and "gcs://") the key-value pairs are
    forwarded to ``fsspec.open``. Please see ``fsspec`` and ``urllib`` for more
    details, and for more examples on storage options refer `here
    <https://pandas.pydata.org/docs/user_guide/io.html?
    highlight=storage_options#reading-writing-remote-files>`_."""

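# Illustrative use of ``storage_options`` (bucket path hypothetical; valid
# keys depend on the ``fsspec`` backend; ``anon`` is an s3fs option):
#
#     pd.read_csv("s3://bucket/data.csv", storage_options={"anon": True})
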
_shared_docs[
    "compression_options"
] = """compression : str or dict, default 'infer'
    For on-the-fly compression of the output data. If 'infer' and '%s' is
    path-like, then detect compression from the following extensions: '.gz',
    '.bz2', '.zip', '.xz', '.zst', '.tar', '.tar.gz', '.tar.xz' or '.tar.bz2'
    (otherwise no compression).
    Set to ``None`` for no compression.
    Can also be a dict with key ``'method'`` set
    to one of {``'zip'``, ``'gzip'``, ``'bz2'``, ``'zstd'``, ``'tar'``} and other
    key-value pairs are forwarded to
    ``zipfile.ZipFile``, ``gzip.GzipFile``,
    ``bz2.BZ2File``, ``zstandard.ZstdCompressor`` or
    ``tarfile.TarFile``, respectively.
    As an example, the following could be passed for faster compression and to create
    a reproducible gzip archive:
    ``compression={'method': 'gzip', 'compresslevel': 1, 'mtime': 1}``.

    .. versionadded:: 1.5.0
        Added support for `.tar` files."""

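# The dict form described above, spelled out as a call (file name
# hypothetical); this is the reproducible-gzip example from the text:
#
#     df.to_csv(
#         "out.csv.gz",
#         compression={"method": "gzip", "compresslevel": 1, "mtime": 1},
#     )
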
_shared_docs[
    "decompression_options"
] = """compression : str or dict, default 'infer'
    For on-the-fly decompression of on-disk data. If 'infer' and '%s' is
    path-like, then detect compression from the following extensions: '.gz',
    '.bz2', '.zip', '.xz', '.zst', '.tar', '.tar.gz', '.tar.xz' or '.tar.bz2'
    (otherwise no compression).
    If using 'zip' or 'tar', the archive must contain only one data file
    to be read in.
    Set to ``None`` for no decompression.
    Can also be a dict with key ``'method'`` set
    to one of {``'zip'``, ``'gzip'``, ``'bz2'``, ``'zstd'``, ``'tar'``} and other
    key-value pairs are forwarded to
    ``zipfile.ZipFile``, ``gzip.GzipFile``,
    ``bz2.BZ2File``, ``zstandard.ZstdDecompressor`` or
    ``tarfile.TarFile``, respectively.
    As an example, the following could be passed for Zstandard decompression using a
    custom compression dictionary:
    ``compression={'method': 'zstd', 'dict_data': my_compression_dict}``.

    .. versionadded:: 1.5.0
        Added support for `.tar` files."""

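# Illustrative read-side counterpart (file name hypothetical). With the
# default ``compression='infer'`` the dict is unnecessary for recognized
# extensions:
#
#     pd.read_csv("data.csv.zst", compression={"method": "zstd"})
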
_shared_docs[
    "replace"
] = """
    Replace values given in `to_replace` with `value`.

    Values of the {klass} are replaced with other values dynamically.
    {replace_iloc}

    Parameters
    ----------
    to_replace : str, regex, list, dict, Series, int, float, or None
        How to find the values that will be replaced.

        * numeric, str or regex:

            - numeric: numeric values equal to `to_replace` will be
              replaced with `value`
            - str: string exactly matching `to_replace` will be replaced
              with `value`
            - regex: regexs matching `to_replace` will be replaced with
              `value`

        * list of str, regex, or numeric:

            - First, if `to_replace` and `value` are both lists, they
              **must** be the same length.
            - Second, if ``regex=True`` then all of the strings in **both**
              lists will be interpreted as regexs otherwise they will match
              directly. This doesn't matter much for `value` since there
              are only a few possible substitution regexes you can use.
            - str, regex and numeric rules apply as above.

        * dict:

            - Dicts can be used to specify different replacement values
              for different existing values. For example,
              ``{{'a': 'b', 'y': 'z'}}`` replaces the value 'a' with 'b' and
              'y' with 'z'. To use a dict in this way, the optional `value`
              parameter should not be given.
            - For a DataFrame a dict can specify that different values
              should be replaced in different columns. For example,
              ``{{'a': 1, 'b': 'z'}}`` looks for the value 1 in column 'a'
              and the value 'z' in column 'b' and replaces these values
              with whatever is specified in `value`. The `value` parameter
              should not be ``None`` in this case. You can treat this as a
              special case of passing two lists except that you are
              specifying the column to search in.
            - For a DataFrame nested dictionaries, e.g.,
              ``{{'a': {{'b': np.nan}}}}``, are read as follows: look in column
              'a' for the value 'b' and replace it with NaN. The optional `value`
              parameter should not be specified to use a nested dict in this
              way. You can nest regular expressions as well. Note that
              column names (the top-level dictionary keys in a nested
              dictionary) **cannot** be regular expressions.

        * None:

            - This means that the `regex` argument must be a string,
              compiled regular expression, or list, dict, ndarray or
              Series of such elements. If `value` is also ``None`` then
              this **must** be a nested dictionary or Series.

        See the examples section for examples of each of these.
    value : scalar, dict, list, str, regex, default None
        Value to replace any values matching `to_replace` with.
        For a DataFrame a dict of values can be used to specify which
        value to use for each column (columns not in the dict will not be
        filled). Regular expressions, strings and lists or dicts of such
        objects are also allowed.
    {inplace}
    limit : int, default None
        Maximum size gap to forward or backward fill.
    regex : bool or same types as `to_replace`, default False
        Whether to interpret `to_replace` and/or `value` as regular
        expressions. If this is ``True`` then `to_replace` *must* be a
        string. Alternatively, this could be a regular expression or a
        list, dict, or array of regular expressions in which case
        `to_replace` must be ``None``.
    method : {{'pad', 'ffill', 'bfill'}}
        The method to use for replacement when `to_replace` is a
        scalar, list or tuple and `value` is ``None``.

    Returns
    -------
    {klass}
        Object after replacement.

    Raises
    ------
    AssertionError
        * If `regex` is not a ``bool`` and `to_replace` is not
          ``None``.

    TypeError
        * If `to_replace` is not a scalar, array-like, ``dict``, or ``None``
        * If `to_replace` is a ``dict`` and `value` is not a ``list``,
          ``dict``, ``ndarray``, or ``Series``
        * If `to_replace` is ``None`` and `regex` is not compilable
          into a regular expression or is a list, dict, ndarray, or
          Series.
        * When replacing multiple ``bool`` or ``datetime64`` objects and
          the arguments to `to_replace` do not match the type of the
          value being replaced

    ValueError
        * If a ``list`` or an ``ndarray`` is passed to `to_replace` and
          `value` but they are not the same length.

    See Also
    --------
    {klass}.fillna : Fill NA values.
    {klass}.where : Replace values based on boolean condition.
    Series.str.replace : Simple string replacement.

    Notes
    -----
    * Regex substitution is performed under the hood with ``re.sub``. The
      rules for substitution for ``re.sub`` are the same.
    * Regular expressions will only substitute on strings, meaning you
      cannot provide, for example, a regular expression matching floating
      point numbers and expect the columns in your frame that have a
      numeric dtype to be matched. However, if those floating point
      numbers *are* strings, then you can do this.
    * This method has *a lot* of options. You are encouraged to experiment
      and play with this method to gain intuition about how it works.
    * When a dict is used as the `to_replace` value, the dict's keys act as
      the `to_replace` part and the dict's values act as the `value`
      parameter.

    Examples
    --------

    **Scalar `to_replace` and `value`**

    >>> s = pd.Series([1, 2, 3, 4, 5])
    >>> s.replace(1, 5)
    0    5
    1    2
    2    3
    3    4
    4    5
    dtype: int64

    >>> df = pd.DataFrame({{'A': [0, 1, 2, 3, 4],
    ...                    'B': [5, 6, 7, 8, 9],
    ...                    'C': ['a', 'b', 'c', 'd', 'e']}})
    >>> df.replace(0, 5)
       A  B  C
    0  5  5  a
    1  1  6  b
    2  2  7  c
    3  3  8  d
    4  4  9  e

    **List-like `to_replace`**

    >>> df.replace([0, 1, 2, 3], 4)
       A  B  C
    0  4  5  a
    1  4  6  b
    2  4  7  c
    3  4  8  d
    4  4  9  e

    >>> df.replace([0, 1, 2, 3], [4, 3, 2, 1])
       A  B  C
    0  4  5  a
    1  3  6  b
    2  2  7  c
    3  1  8  d
    4  4  9  e

    >>> s.replace([1, 2], method='bfill')
    0    3
    1    3
    2    3
    3    4
    4    5
    dtype: int64

    **dict-like `to_replace`**

    >>> df.replace({{0: 10, 1: 100}})
         A  B  C
    0   10  5  a
    1  100  6  b
    2    2  7  c
    3    3  8  d
    4    4  9  e

    >>> df.replace({{'A': 0, 'B': 5}}, 100)
         A    B  C
    0  100  100  a
    1    1    6  b
    2    2    7  c
    3    3    8  d
    4    4    9  e

    >>> df.replace({{'A': {{0: 100, 4: 400}}}})
         A  B  C
    0  100  5  a
    1    1  6  b
    2    2  7  c
    3    3  8  d
    4  400  9  e

    **Regular expression `to_replace`**

    >>> df = pd.DataFrame({{'A': ['bat', 'foo', 'bait'],
    ...                    'B': ['abc', 'bar', 'xyz']}})
    >>> df.replace(to_replace=r'^ba.$', value='new', regex=True)
          A    B
    0   new  abc
    1   foo  new
    2  bait  xyz

    >>> df.replace({{'A': r'^ba.$'}}, {{'A': 'new'}}, regex=True)
          A    B
    0   new  abc
    1   foo  bar
    2  bait  xyz

    >>> df.replace(regex=r'^ba.$', value='new')
          A    B
    0   new  abc
    1   foo  new
    2  bait  xyz

    >>> df.replace(regex={{r'^ba.$': 'new', 'foo': 'xyz'}})
          A    B
    0   new  abc
    1   xyz  new
    2  bait  xyz

    >>> df.replace(regex=[r'^ba.$', 'foo'], value='new')
          A    B
    0   new  abc
    1   new  new
    2  bait  xyz

    Compare the behavior of ``s.replace({{'a': None}})`` and
    ``s.replace('a', None)`` to understand the peculiarities
    of the `to_replace` parameter:

    >>> s = pd.Series([10, 'a', 'a', 'b', 'a'])

    When one uses a dict as the `to_replace` value, it is like the
    value(s) in the dict are equal to the `value` parameter.
    ``s.replace({{'a': None}})`` is equivalent to
    ``s.replace(to_replace={{'a': None}}, value=None, method=None)``:

    >>> s.replace({{'a': None}})
    0      10
    1    None
    2    None
    3       b
    4    None
    dtype: object

    When ``value`` is not explicitly passed and `to_replace` is a scalar, list
    or tuple, `replace` uses the method parameter (default 'pad') to do the
    replacement. This is why the 'a' values are replaced by 10
    in rows 1 and 2, and by 'b' in row 4 in this case.

    >>> s.replace('a')
    0    10
    1    10
    2    10
    3     b
    4     b
    dtype: object

    On the other hand, if ``None`` is explicitly passed for ``value``, it will
    be respected:

    >>> s.replace('a', None)
    0      10
    1    None
    2    None
    3       b
    4    None
    dtype: object

        .. versionchanged:: 1.4.0
            Previously the explicit ``None`` was silently ignored.
"""

_shared_docs[
    "idxmin"
] = """
    Return index of first occurrence of minimum over requested axis.

    NA/null values are excluded.

    Parameters
    ----------
    axis : {{0 or 'index', 1 or 'columns'}}, default 0
        The axis to use. 0 or 'index' for row-wise, 1 or 'columns' for column-wise.
    skipna : bool, default True
        Exclude NA/null values. If an entire row/column is NA, the result
        will be NA.
    numeric_only : bool, default {numeric_only_default}
        Include only `float`, `int` or `boolean` data.

        .. versionadded:: 1.5.0

    Returns
    -------
    Series
        Indexes of minima along the specified axis.

    Raises
    ------
    ValueError
        * If the row/column is empty

    See Also
    --------
    Series.idxmin : Return index of the minimum element.

    Notes
    -----
    This method is the DataFrame version of ``ndarray.argmin``.

    Examples
    --------
    Consider a dataset containing food consumption in Argentina.

    >>> df = pd.DataFrame({{'consumption': [10.51, 103.11, 55.48],
    ...                    'co2_emissions': [37.2, 19.66, 1712]}},
    ...                   index=['Pork', 'Wheat Products', 'Beef'])

    >>> df
                    consumption  co2_emissions
    Pork                  10.51         37.20
    Wheat Products       103.11         19.66
    Beef                  55.48       1712.00

    By default, it returns the index for the minimum value in each column.

    >>> df.idxmin()
    consumption                Pork
    co2_emissions    Wheat Products
    dtype: object

    To return the index for the minimum value in each row, use ``axis="columns"``.

    >>> df.idxmin(axis="columns")
    Pork                consumption
    Wheat Products    co2_emissions
    Beef                consumption
    dtype: object
"""

_shared_docs[
    "idxmax"
] = """
    Return index of first occurrence of maximum over requested axis.

    NA/null values are excluded.

    Parameters
    ----------
    axis : {{0 or 'index', 1 or 'columns'}}, default 0
        The axis to use. 0 or 'index' for row-wise, 1 or 'columns' for column-wise.
    skipna : bool, default True
        Exclude NA/null values. If an entire row/column is NA, the result
        will be NA.
    numeric_only : bool, default {numeric_only_default}
        Include only `float`, `int` or `boolean` data.

        .. versionadded:: 1.5.0

    Returns
    -------
    Series
        Indexes of maxima along the specified axis.

    Raises
    ------
    ValueError
        * If the row/column is empty

    See Also
    --------
    Series.idxmax : Return index of the maximum element.

    Notes
    -----
    This method is the DataFrame version of ``ndarray.argmax``.

    Examples
    --------
    Consider a dataset containing food consumption in Argentina.

    >>> df = pd.DataFrame({{'consumption': [10.51, 103.11, 55.48],
    ...                    'co2_emissions': [37.2, 19.66, 1712]}},
    ...                   index=['Pork', 'Wheat Products', 'Beef'])

    >>> df
                    consumption  co2_emissions
    Pork                  10.51         37.20
    Wheat Products       103.11         19.66
    Beef                  55.48       1712.00

    By default, it returns the index for the maximum value in each column.

    >>> df.idxmax()
    consumption     Wheat Products
    co2_emissions             Beef
    dtype: object

    To return the index for the maximum value in each row, use ``axis="columns"``.

    >>> df.idxmax(axis="columns")
    Pork              co2_emissions
    Wheat Products      consumption
    Beef              co2_emissions
    dtype: object
"""