Coverage for /pythoncovmergedfiles/medio/medio/usr/local/lib/python3.8/site-packages/scipy/optimize/_linprog_doc.py: 54%
1# -*- coding: utf-8 -*-
2"""
3Created on Sat Aug 22 19:49:17 2020
5@author: matth
6"""
9def _linprog_highs_doc(c, A_ub=None, b_ub=None, A_eq=None, b_eq=None,
10 bounds=None, method='highs', callback=None,
11 maxiter=None, disp=False, presolve=True,
12 time_limit=None,
13 dual_feasibility_tolerance=None,
14 primal_feasibility_tolerance=None,
15 ipm_optimality_tolerance=None,
16 simplex_dual_edge_weight_strategy=None,
17 mip_rel_gap=None,
18 **unknown_options):
19 r"""
20 Linear programming: minimize a linear objective function subject to linear
21 equality and inequality constraints using one of the HiGHS solvers.
23 Linear programming solves problems of the following form:
25 .. math::
27 \min_x \ & c^T x \\
28 \mbox{such that} \ & A_{ub} x \leq b_{ub},\\
29 & A_{eq} x = b_{eq},\\
30 & l \leq x \leq u ,
32 where :math:`x` is a vector of decision variables; :math:`c`,
33 :math:`b_{ub}`, :math:`b_{eq}`, :math:`l`, and :math:`u` are vectors; and
34 :math:`A_{ub}` and :math:`A_{eq}` are matrices.
36 Alternatively, that's:
38 minimize::
40 c @ x
42 such that::
44 A_ub @ x <= b_ub
45 A_eq @ x == b_eq
46 lb <= x <= ub
48 Note that by default ``lb = 0`` and ``ub = None`` unless specified with
49 ``bounds``.
51 Parameters
52 ----------
53 c : 1-D array
54 The coefficients of the linear objective function to be minimized.
55 A_ub : 2-D array, optional
56 The inequality constraint matrix. Each row of ``A_ub`` specifies the
57 coefficients of a linear inequality constraint on ``x``.
58 b_ub : 1-D array, optional
59 The inequality constraint vector. Each element represents an
60 upper bound on the corresponding value of ``A_ub @ x``.
61 A_eq : 2-D array, optional
62 The equality constraint matrix. Each row of ``A_eq`` specifies the
63 coefficients of a linear equality constraint on ``x``.
64 b_eq : 1-D array, optional
65 The equality constraint vector. Each element of ``A_eq @ x`` must equal
66 the corresponding element of ``b_eq``.
67 bounds : sequence, optional
68 A sequence of ``(min, max)`` pairs for each element in ``x``, defining
69 the minimum and maximum values of that decision variable. Use ``None``
70 to indicate that there is no bound. By default, bounds are
71 ``(0, None)`` (all decision variables are non-negative).
72 If a single tuple ``(min, max)`` is provided, then ``min`` and
73 ``max`` will serve as bounds for all decision variables.
74 method : str
76 This is the method-specific documentation for 'highs', which chooses
77 automatically between
78 :ref:`'highs-ds' <optimize.linprog-highs-ds>` and
79 :ref:`'highs-ipm' <optimize.linprog-highs-ipm>`.
80 :ref:`'interior-point' <optimize.linprog-interior-point>` (default),
81 :ref:`'revised simplex' <optimize.linprog-revised_simplex>`, and
82 :ref:`'simplex' <optimize.linprog-simplex>` (legacy)
83 are also available.
84 integrality : 1-D array or int, optional
85 Indicates the type of integrality constraint on each decision variable.
87 ``0`` : Continuous variable; no integrality constraint.
89 ``1`` : Integer variable; decision variable must be an integer
90 within `bounds`.
92 ``2`` : Semi-continuous variable; decision variable must be within
93 `bounds` or take value ``0``.
95 ``3`` : Semi-integer variable; decision variable must be an integer
96 within `bounds` or take value ``0``.
98 By default, all variables are continuous.
100 For mixed integrality constraints, supply an array of shape `c.shape`.
101 To infer a constraint on each decision variable from shorter inputs,
102 the argument will be broadcast to `c.shape` using `np.broadcast_to`.
104 This argument is currently used only by the ``'highs'`` method and
105 ignored otherwise.
107 Options
108 -------
109 maxiter : int
110 The maximum number of iterations to perform in either phase.
111 For :ref:`'highs-ipm' <optimize.linprog-highs-ipm>`, this does not
112 include the number of crossover iterations. Default is the largest
113 possible value for an ``int`` on the platform.
114 disp : bool (default: ``False``)
115 Set to ``True`` if indicators of optimization status are to be
116 printed to the console during optimization.
117 presolve : bool (default: ``True``)
118 Presolve attempts to identify trivial infeasibilities,
119 identify trivial unboundedness, and simplify the problem before
120 sending it to the main solver. It is generally recommended
121 to keep the default setting ``True``; set to ``False`` if
122 presolve is to be disabled.
123 time_limit : float
124 The maximum time in seconds allotted to solve the problem;
125 default is the largest possible value for a ``double`` on the
126 platform.
127 dual_feasibility_tolerance : double (default: 1e-07)
128 Dual feasibility tolerance for
129 :ref:`'highs-ds' <optimize.linprog-highs-ds>`.
130 The minimum of this and ``primal_feasibility_tolerance``
131 is used for the feasibility tolerance of
132 :ref:`'highs-ipm' <optimize.linprog-highs-ipm>`.
133 primal_feasibility_tolerance : double (default: 1e-07)
134 Primal feasibility tolerance for
135 :ref:`'highs-ds' <optimize.linprog-highs-ds>`.
136 The minimum of this and ``dual_feasibility_tolerance``
137 is used for the feasibility tolerance of
138 :ref:`'highs-ipm' <optimize.linprog-highs-ipm>`.
139 ipm_optimality_tolerance : double (default: ``1e-08``)
140 Optimality tolerance for
141 :ref:`'highs-ipm' <optimize.linprog-highs-ipm>`.
142 Minimum allowable value is 1e-12.
143 simplex_dual_edge_weight_strategy : str (default: None)
144 Strategy for simplex dual edge weights. The default, ``None``,
145 automatically selects one of the following.
147 ``'dantzig'`` uses Dantzig's original strategy of choosing the most
148 negative reduced cost.
150 ``'devex'`` uses the strategy described in [15]_.
152 ``'steepest'`` uses the exact steepest edge strategy as described in
153 [16]_.
155 ``'steepest-devex'`` begins with the exact steepest edge strategy
156 until the computation is too costly or inexact and then switches to
157 the devex method.
159 Currently, ``None`` always selects ``'steepest-devex'``, but this
160 may change as new options become available.
161 mip_rel_gap : double (default: None)
162 Termination criterion for MIP solver: solver will terminate when the
163 gap between the primal objective value and the dual objective bound,
164 scaled by the primal objective value, is less than or equal to ``mip_rel_gap``.
165 unknown_options : dict
166 Optional arguments not used by this particular solver. If
167 ``unknown_options`` is non-empty, a warning is issued listing
168 all unused options.
170 Returns
171 -------
172 res : OptimizeResult
173 A :class:`scipy.optimize.OptimizeResult` consisting of the fields:
175 x : 1D array
176 The values of the decision variables that minimize the
177 objective function while satisfying the constraints.
178 fun : float
179 The optimal value of the objective function ``c @ x``.
180 slack : 1D array
181 The (nominally positive) values of the slack,
182 ``b_ub - A_ub @ x``.
183 con : 1D array
184 The (nominally zero) residuals of the equality constraints,
185 ``b_eq - A_eq @ x``.
186 success : bool
187 ``True`` when the algorithm succeeds in finding an optimal
188 solution.
189 status : int
190 An integer representing the exit status of the algorithm.
192 ``0`` : Optimization terminated successfully.
194 ``1`` : Iteration or time limit reached.
196 ``2`` : Problem appears to be infeasible.
198 ``3`` : Problem appears to be unbounded.
200 ``4`` : The HiGHS solver ran into a problem.
202 message : str
203 A string descriptor of the exit status of the algorithm.
204 nit : int
205 The total number of iterations performed.
206 For the HiGHS simplex method, this includes iterations in all
207 phases. For the HiGHS interior-point method, this does not include
208 crossover iterations.
209 crossover_nit : int
210 The number of primal/dual pushes performed during the
211 crossover routine for the HiGHS interior-point method.
212 This is ``0`` for the HiGHS simplex method.
213 ineqlin : OptimizeResult
214 Solution and sensitivity information corresponding to the
215 inequality constraints, `b_ub`. A dictionary consisting of the
216 fields:
218 residual : np.ndarray
219 The (nominally positive) values of the slack variables,
220 ``b_ub - A_ub @ x``. This quantity is also commonly
221 referred to as "slack".
223 marginals : np.ndarray
224 The sensitivity (partial derivative) of the objective
225 function with respect to the right-hand side of the
226 inequality constraints, `b_ub`.
228 eqlin : OptimizeResult
229 Solution and sensitivity information corresponding to the
230 equality constraints, `b_eq`. A dictionary consisting of the
231 fields:
233 residual : np.ndarray
234 The (nominally zero) residuals of the equality constraints,
235 ``b_eq - A_eq @ x``.
237 marginals : np.ndarray
238 The sensitivity (partial derivative) of the objective
239 function with respect to the right-hand side of the
240 equality constraints, `b_eq`.
242 lower, upper : OptimizeResult
243 Solution and sensitivity information corresponding to the
244 lower and upper bounds on decision variables, `bounds`.
246 residual : np.ndarray
247 The (nominally positive) values of the quantity
248 ``x - lb`` (lower) or ``ub - x`` (upper).
250 marginals : np.ndarray
251 The sensitivity (partial derivative) of the objective
252 function with respect to the lower and upper
253 `bounds`.
255 Notes
256 -----
258 Method :ref:`'highs-ds' <optimize.linprog-highs-ds>` is a wrapper
259 of the C++ high performance dual revised simplex implementation (HSOL)
260 [13]_, [14]_. Method :ref:`'highs-ipm' <optimize.linprog-highs-ipm>`
261 is a wrapper of a C++ implementation of an **i**\ nterior-\ **p**\ oint
262 **m**\ ethod [13]_; it features a crossover routine, so it is as accurate
263 as a simplex solver. Method :ref:`'highs' <optimize.linprog-highs>` chooses
264 between the two automatically. For new code involving `linprog`, we
265 recommend explicitly choosing one of these three method values instead of
266 :ref:`'interior-point' <optimize.linprog-interior-point>` (default),
267 :ref:`'revised simplex' <optimize.linprog-revised_simplex>`, and
268 :ref:`'simplex' <optimize.linprog-simplex>` (legacy).
270 The result fields `ineqlin`, `eqlin`, `lower`, and `upper` all contain
271 `marginals`, or partial derivatives of the objective function with respect
272 to the right-hand side of each constraint. These partial derivatives are
273 also referred to as "Lagrange multipliers", "dual values", and
274 "shadow prices". The sign convention of `marginals` is opposite that
275 of Lagrange multipliers produced by many nonlinear solvers.
277 References
278 ----------
279 .. [13] Huangfu, Q., Galabova, I., Feldmeier, M., and Hall, J. A. J.
280 "HiGHS - high performance software for linear optimization."
281 https://highs.dev/
282 .. [14] Huangfu, Q. and Hall, J. A. J. "Parallelizing the dual revised
283 simplex method." Mathematical Programming Computation, 10 (1),
284 119-142, 2018. DOI: 10.1007/s12532-017-0130-5
285 .. [15] Harris, Paula MJ. "Pivot selection methods of the Devex LP code."
286 Mathematical Programming 5.1 (1973): 1-28.
287 .. [16] Goldfarb, Donald, and John Ker Reid. "A practicable steepest-edge
288 simplex algorithm." Mathematical Programming 12.1 (1977): 361-371.
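Examples
--------
A minimal usage sketch (illustrative only; it assumes the standard
``scipy.optimize.linprog`` interface, is not executed by this documentation
stub, and the problem data are invented for demonstration):

>>> import numpy as np
>>> from scipy.optimize import linprog
>>> c = np.array([-1, 4])
>>> A_ub = np.array([[-3, 1], [1, 2]])
>>> b_ub = np.array([6, 4])
>>> res = linprog(c, A_ub=A_ub, b_ub=b_ub, method='highs')
>>> res.status
0
>>> res.ineqlin.marginals.shape  # dual values, one per inequality row
(2,)
>>> # Integrality constraints (assumes SciPy >= 1.9, where `integrality`
>>> # is accepted by `linprog` and used only by the 'highs' method):
>>> res_milp = linprog(c, A_ub=A_ub, b_ub=b_ub, integrality=[1, 1],
...                    method='highs')
>>> res_milp.status
0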
289 """
290 pass
293def _linprog_highs_ds_doc(c, A_ub=None, b_ub=None, A_eq=None, b_eq=None,
294 bounds=None, method='highs-ds', callback=None,
295 maxiter=None, disp=False, presolve=True,
296 time_limit=None,
297 dual_feasibility_tolerance=None,
298 primal_feasibility_tolerance=None,
299 simplex_dual_edge_weight_strategy=None,
300 **unknown_options):
301 r"""
302 Linear programming: minimize a linear objective function subject to linear
303 equality and inequality constraints using the HiGHS dual simplex solver.
305 Linear programming solves problems of the following form:
307 .. math::
309 \min_x \ & c^T x \\
310 \mbox{such that} \ & A_{ub} x \leq b_{ub},\\
311 & A_{eq} x = b_{eq},\\
312 & l \leq x \leq u ,
314 where :math:`x` is a vector of decision variables; :math:`c`,
315 :math:`b_{ub}`, :math:`b_{eq}`, :math:`l`, and :math:`u` are vectors; and
316 :math:`A_{ub}` and :math:`A_{eq}` are matrices.
318 Alternatively, that's:
320 minimize::
322 c @ x
324 such that::
326 A_ub @ x <= b_ub
327 A_eq @ x == b_eq
328 lb <= x <= ub
330 Note that by default ``lb = 0`` and ``ub = None`` unless specified with
331 ``bounds``.
333 Parameters
334 ----------
335 c : 1-D array
336 The coefficients of the linear objective function to be minimized.
337 A_ub : 2-D array, optional
338 The inequality constraint matrix. Each row of ``A_ub`` specifies the
339 coefficients of a linear inequality constraint on ``x``.
340 b_ub : 1-D array, optional
341 The inequality constraint vector. Each element represents an
342 upper bound on the corresponding value of ``A_ub @ x``.
343 A_eq : 2-D array, optional
344 The equality constraint matrix. Each row of ``A_eq`` specifies the
345 coefficients of a linear equality constraint on ``x``.
346 b_eq : 1-D array, optional
347 The equality constraint vector. Each element of ``A_eq @ x`` must equal
348 the corresponding element of ``b_eq``.
349 bounds : sequence, optional
350 A sequence of ``(min, max)`` pairs for each element in ``x``, defining
351 the minimum and maximum values of that decision variable. Use ``None``
352 to indicate that there is no bound. By default, bounds are
353 ``(0, None)`` (all decision variables are non-negative).
354 If a single tuple ``(min, max)`` is provided, then ``min`` and
355 ``max`` will serve as bounds for all decision variables.
356 method : str
358 This is the method-specific documentation for 'highs-ds'.
359 :ref:`'highs' <optimize.linprog-highs>`,
360 :ref:`'highs-ipm' <optimize.linprog-highs-ipm>`,
361 :ref:`'interior-point' <optimize.linprog-interior-point>` (default),
362 :ref:`'revised simplex' <optimize.linprog-revised_simplex>`, and
363 :ref:`'simplex' <optimize.linprog-simplex>` (legacy)
364 are also available.
366 Options
367 -------
368 maxiter : int
369 The maximum number of iterations to perform in either phase.
370 Default is the largest possible value for an ``int`` on the platform.
371 disp : bool (default: ``False``)
372 Set to ``True`` if indicators of optimization status are to be
373 printed to the console during optimization.
374 presolve : bool (default: ``True``)
375 Presolve attempts to identify trivial infeasibilities,
376 identify trivial unboundedness, and simplify the problem before
377 sending it to the main solver. It is generally recommended
378 to keep the default setting ``True``; set to ``False`` if
379 presolve is to be disabled.
380 time_limit : float
381 The maximum time in seconds allotted to solve the problem;
382 default is the largest possible value for a ``double`` on the
383 platform.
384 dual_feasibility_tolerance : double (default: 1e-07)
385 Dual feasibility tolerance for
386 :ref:`'highs-ds' <optimize.linprog-highs-ds>`.
387 primal_feasibility_tolerance : double (default: 1e-07)
388 Primal feasibility tolerance for
389 :ref:`'highs-ds' <optimize.linprog-highs-ds>`.
390 simplex_dual_edge_weight_strategy : str (default: None)
391 Strategy for simplex dual edge weights. The default, ``None``,
392 automatically selects one of the following.
394 ``'dantzig'`` uses Dantzig's original strategy of choosing the most
395 negative reduced cost.
397 ``'devex'`` uses the strategy described in [15]_.
399 ``'steepest'`` uses the exact steepest edge strategy as described in
400 [16]_.
402 ``'steepest-devex'`` begins with the exact steepest edge strategy
403 until the computation is too costly or inexact and then switches to
404 the devex method.
406 Currently, ``None`` always selects ``'steepest-devex'``, but this
407 may change as new options become available.
408 unknown_options : dict
409 Optional arguments not used by this particular solver. If
410 ``unknown_options`` is non-empty, a warning is issued listing
411 all unused options.
413 Returns
414 -------
415 res : OptimizeResult
416 A :class:`scipy.optimize.OptimizeResult` consisting of the fields:
418 x : 1D array
419 The values of the decision variables that minimize the
420 objective function while satisfying the constraints.
421 fun : float
422 The optimal value of the objective function ``c @ x``.
423 slack : 1D array
424 The (nominally positive) values of the slack,
425 ``b_ub - A_ub @ x``.
426 con : 1D array
427 The (nominally zero) residuals of the equality constraints,
428 ``b_eq - A_eq @ x``.
429 success : bool
430 ``True`` when the algorithm succeeds in finding an optimal
431 solution.
432 status : int
433 An integer representing the exit status of the algorithm.
435 ``0`` : Optimization terminated successfully.
437 ``1`` : Iteration or time limit reached.
439 ``2`` : Problem appears to be infeasible.
441 ``3`` : Problem appears to be unbounded.
443 ``4`` : The HiGHS solver ran into a problem.
445 message : str
446 A string descriptor of the exit status of the algorithm.
447 nit : int
448 The total number of iterations performed. This includes iterations
449 in all phases.
450 crossover_nit : int
451 This is always ``0`` for the HiGHS simplex method.
452 For the HiGHS interior-point method, this is the number of
453 primal/dual pushes performed during the crossover routine.
454 ineqlin : OptimizeResult
455 Solution and sensitivity information corresponding to the
456 inequality constraints, `b_ub`. A dictionary consisting of the
457 fields:
459 residual : np.ndarray
460 The (nominally positive) values of the slack variables,
461 ``b_ub - A_ub @ x``. This quantity is also commonly
462 referred to as "slack".
464 marginals : np.ndarray
465 The sensitivity (partial derivative) of the objective
466 function with respect to the right-hand side of the
467 inequality constraints, `b_ub`.
469 eqlin : OptimizeResult
470 Solution and sensitivity information corresponding to the
471 equality constraints, `b_eq`. A dictionary consisting of the
472 fields:
474 residual : np.ndarray
475 The (nominally zero) residuals of the equality constraints,
476 ``b_eq - A_eq @ x``.
478 marginals : np.ndarray
479 The sensitivity (partial derivative) of the objective
480 function with respect to the right-hand side of the
481 equality constraints, `b_eq`.
483 lower, upper : OptimizeResult
484 Solution and sensitivity information corresponding to the
485 lower and upper bounds on decision variables, `bounds`.
487 residual : np.ndarray
488 The (nominally positive) values of the quantity
489 ``x - lb`` (lower) or ``ub - x`` (upper).
491 marginals : np.ndarray
492 The sensitivity (partial derivative) of the objective
493 function with respect to the lower and upper
494 `bounds`.
496 Notes
497 -----
499 Method :ref:`'highs-ds' <optimize.linprog-highs-ds>` is a wrapper
500 of the C++ high performance dual revised simplex implementation (HSOL)
501 [13]_, [14]_. Method :ref:`'highs-ipm' <optimize.linprog-highs-ipm>`
502 is a wrapper of a C++ implementation of an **i**\ nterior-\ **p**\ oint
503 **m**\ ethod [13]_; it features a crossover routine, so it is as accurate
504 as a simplex solver. Method :ref:`'highs' <optimize.linprog-highs>` chooses
505 between the two automatically. For new code involving `linprog`, we
506 recommend explicitly choosing one of these three method values instead of
507 :ref:`'interior-point' <optimize.linprog-interior-point>` (default),
508 :ref:`'revised simplex' <optimize.linprog-revised_simplex>`, and
509 :ref:`'simplex' <optimize.linprog-simplex>` (legacy).
511 The result fields `ineqlin`, `eqlin`, `lower`, and `upper` all contain
512 `marginals`, or partial derivatives of the objective function with respect
513 to the right-hand side of each constraint. These partial derivatives are
514 also referred to as "Lagrange multipliers", "dual values", and
515 "shadow prices". The sign convention of `marginals` is opposite that
516 of Lagrange multipliers produced by many nonlinear solvers.
518 References
519 ----------
520 .. [13] Huangfu, Q., Galabova, I., Feldmeier, M., and Hall, J. A. J.
521 "HiGHS - high performance software for linear optimization."
522 https://highs.dev/
523 .. [14] Huangfu, Q. and Hall, J. A. J. "Parallelizing the dual revised
524 simplex method." Mathematical Programming Computation, 10 (1),
525 119-142, 2018. DOI: 10.1007/s12532-017-0130-5
526 .. [15] Harris, Paula MJ. "Pivot selection methods of the Devex LP code."
527 Mathematical Programming 5.1 (1973): 1-28.
528 .. [16] Goldfarb, Donald, and John Ker Reid. "A practicable steepest-edge
529 simplex algorithm." Mathematical Programming 12.1 (1977): 361-371.
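Examples
--------
A minimal usage sketch (illustrative only, assuming the standard
``scipy.optimize.linprog`` interface; method-specific options such as
``simplex_dual_edge_weight_strategy`` are passed through the ``options``
dictionary, and the problem data are invented for demonstration):

>>> import numpy as np
>>> from scipy.optimize import linprog
>>> c = np.array([-1, 4])
>>> A_ub = np.array([[-3, 1], [1, 2]])
>>> b_ub = np.array([6, 4])
>>> res = linprog(c, A_ub=A_ub, b_ub=b_ub, method='highs-ds',
...               options={'simplex_dual_edge_weight_strategy': 'devex'})
>>> res.status
0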
530 """
531 pass
534def _linprog_highs_ipm_doc(c, A_ub=None, b_ub=None, A_eq=None, b_eq=None,
535 bounds=None, method='highs-ipm', callback=None,
536 maxiter=None, disp=False, presolve=True,
537 time_limit=None,
538 dual_feasibility_tolerance=None,
539 primal_feasibility_tolerance=None,
540 ipm_optimality_tolerance=None,
541 **unknown_options):
542 r"""
543 Linear programming: minimize a linear objective function subject to linear
544 equality and inequality constraints using the HiGHS interior point solver.
546 Linear programming solves problems of the following form:
548 .. math::
550 \min_x \ & c^T x \\
551 \mbox{such that} \ & A_{ub} x \leq b_{ub},\\
552 & A_{eq} x = b_{eq},\\
553 & l \leq x \leq u ,
555 where :math:`x` is a vector of decision variables; :math:`c`,
556 :math:`b_{ub}`, :math:`b_{eq}`, :math:`l`, and :math:`u` are vectors; and
557 :math:`A_{ub}` and :math:`A_{eq}` are matrices.
559 Alternatively, that's:
561 minimize::
563 c @ x
565 such that::
567 A_ub @ x <= b_ub
568 A_eq @ x == b_eq
569 lb <= x <= ub
571 Note that by default ``lb = 0`` and ``ub = None`` unless specified with
572 ``bounds``.
574 Parameters
575 ----------
576 c : 1-D array
577 The coefficients of the linear objective function to be minimized.
578 A_ub : 2-D array, optional
579 The inequality constraint matrix. Each row of ``A_ub`` specifies the
580 coefficients of a linear inequality constraint on ``x``.
581 b_ub : 1-D array, optional
582 The inequality constraint vector. Each element represents an
583 upper bound on the corresponding value of ``A_ub @ x``.
584 A_eq : 2-D array, optional
585 The equality constraint matrix. Each row of ``A_eq`` specifies the
586 coefficients of a linear equality constraint on ``x``.
587 b_eq : 1-D array, optional
588 The equality constraint vector. Each element of ``A_eq @ x`` must equal
589 the corresponding element of ``b_eq``.
590 bounds : sequence, optional
591 A sequence of ``(min, max)`` pairs for each element in ``x``, defining
592 the minimum and maximum values of that decision variable. Use ``None``
593 to indicate that there is no bound. By default, bounds are
594 ``(0, None)`` (all decision variables are non-negative).
595 If a single tuple ``(min, max)`` is provided, then ``min`` and
596 ``max`` will serve as bounds for all decision variables.
597 method : str
599 This is the method-specific documentation for 'highs-ipm'.
600 :ref:`'highs' <optimize.linprog-highs>`,
601 :ref:`'highs-ds' <optimize.linprog-highs-ds>`,
602 :ref:`'interior-point' <optimize.linprog-interior-point>` (default),
603 :ref:`'revised simplex' <optimize.linprog-revised_simplex>`, and
604 :ref:`'simplex' <optimize.linprog-simplex>` (legacy)
605 are also available.
607 Options
608 -------
609 maxiter : int
610 The maximum number of iterations to perform in either phase.
611 For :ref:`'highs-ipm' <optimize.linprog-highs-ipm>`, this does not
612 include the number of crossover iterations. Default is the largest
613 possible value for an ``int`` on the platform.
614 disp : bool (default: ``False``)
615 Set to ``True`` if indicators of optimization status are to be
616 printed to the console during optimization.
617 presolve : bool (default: ``True``)
618 Presolve attempts to identify trivial infeasibilities,
619 identify trivial unboundedness, and simplify the problem before
620 sending it to the main solver. It is generally recommended
621 to keep the default setting ``True``; set to ``False`` if
622 presolve is to be disabled.
623 time_limit : float
624 The maximum time in seconds allotted to solve the problem;
625 default is the largest possible value for a ``double`` on the
626 platform.
627 dual_feasibility_tolerance : double (default: 1e-07)
628 The minimum of this and ``primal_feasibility_tolerance``
629 is used for the feasibility tolerance of
630 :ref:`'highs-ipm' <optimize.linprog-highs-ipm>`.
631 primal_feasibility_tolerance : double (default: 1e-07)
632 The minimum of this and ``dual_feasibility_tolerance``
633 is used for the feasibility tolerance of
634 :ref:`'highs-ipm' <optimize.linprog-highs-ipm>`.
635 ipm_optimality_tolerance : double (default: ``1e-08``)
636 Optimality tolerance for
637 :ref:`'highs-ipm' <optimize.linprog-highs-ipm>`.
638 Minimum allowable value is 1e-12.
639 unknown_options : dict
640 Optional arguments not used by this particular solver. If
641 ``unknown_options`` is non-empty, a warning is issued listing
642 all unused options.
644 Returns
645 -------
646 res : OptimizeResult
647 A :class:`scipy.optimize.OptimizeResult` consisting of the fields:
649 x : 1D array
650 The values of the decision variables that minimize the
651 objective function while satisfying the constraints.
652 fun : float
653 The optimal value of the objective function ``c @ x``.
654 slack : 1D array
655 The (nominally positive) values of the slack,
656 ``b_ub - A_ub @ x``.
657 con : 1D array
658 The (nominally zero) residuals of the equality constraints,
659 ``b_eq - A_eq @ x``.
660 success : bool
661 ``True`` when the algorithm succeeds in finding an optimal
662 solution.
663 status : int
664 An integer representing the exit status of the algorithm.
666 ``0`` : Optimization terminated successfully.
668 ``1`` : Iteration or time limit reached.
670 ``2`` : Problem appears to be infeasible.
672 ``3`` : Problem appears to be unbounded.
674 ``4`` : The HiGHS solver ran into a problem.
676 message : str
677 A string descriptor of the exit status of the algorithm.
678 nit : int
679 The total number of iterations performed.
680 For the HiGHS interior-point method, this does not include
681 crossover iterations.
682 crossover_nit : int
683 The number of primal/dual pushes performed during the
684 crossover routine for the HiGHS interior-point method.
685 ineqlin : OptimizeResult
686 Solution and sensitivity information corresponding to the
687 inequality constraints, `b_ub`. A dictionary consisting of the
688 fields:
690 residual : np.ndarray
691 The (nominally positive) values of the slack variables,
692 ``b_ub - A_ub @ x``. This quantity is also commonly
693 referred to as "slack".
695 marginals : np.ndarray
696 The sensitivity (partial derivative) of the objective
697 function with respect to the right-hand side of the
698 inequality constraints, `b_ub`.
700 eqlin : OptimizeResult
701 Solution and sensitivity information corresponding to the
702 equality constraints, `b_eq`. A dictionary consisting of the
703 fields:
705 residual : np.ndarray
706 The (nominally zero) residuals of the equality constraints,
707 ``b_eq - A_eq @ x``.
709 marginals : np.ndarray
710 The sensitivity (partial derivative) of the objective
711 function with respect to the right-hand side of the
712 equality constraints, `b_eq`.
714 lower, upper : OptimizeResult
715 Solution and sensitivity information corresponding to the
716 lower and upper bounds on decision variables, `bounds`.
718 residual : np.ndarray
719 The (nominally positive) values of the quantity
720 ``x - lb`` (lower) or ``ub - x`` (upper).
722 marginals : np.ndarray
723 The sensitivity (partial derivative) of the objective
724 function with respect to the lower and upper
725 `bounds`.
727 Notes
728 -----
730 Method :ref:`'highs-ipm' <optimize.linprog-highs-ipm>`
731 is a wrapper of a C++ implementation of an **i**\ nterior-\ **p**\ oint
732 **m**\ ethod [13]_; it features a crossover routine, so it is as accurate
733 as a simplex solver.
734 Method :ref:`'highs-ds' <optimize.linprog-highs-ds>` is a wrapper
735 of the C++ high performance dual revised simplex implementation (HSOL)
736 [13]_, [14]_. Method :ref:`'highs' <optimize.linprog-highs>` chooses
737 between the two automatically. For new code involving `linprog`, we
738 recommend explicitly choosing one of these three method values instead of
739 :ref:`'interior-point' <optimize.linprog-interior-point>` (default),
740 :ref:`'revised simplex' <optimize.linprog-revised_simplex>`, and
741 :ref:`'simplex' <optimize.linprog-simplex>` (legacy).
743 The result fields `ineqlin`, `eqlin`, `lower`, and `upper` all contain
744 `marginals`, or partial derivatives of the objective function with respect
745 to the right-hand side of each constraint. These partial derivatives are
746 also referred to as "Lagrange multipliers", "dual values", and
747 "shadow prices". The sign convention of `marginals` is opposite that
748 of Lagrange multipliers produced by many nonlinear solvers.
750 References
751 ----------
752 .. [13] Huangfu, Q., Galabova, I., Feldmeier, M., and Hall, J. A. J.
753 "HiGHS - high performance software for linear optimization."
754 https://highs.dev/
755 .. [14] Huangfu, Q. and Hall, J. A. J. "Parallelizing the dual revised
756 simplex method." Mathematical Programming Computation, 10 (1),
757 119-142, 2018. DOI: 10.1007/s12532-017-0130-5
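Examples
--------
A minimal usage sketch (illustrative only, assuming the standard
``scipy.optimize.linprog`` interface; the tolerance values below are passed
via the ``options`` dictionary and are chosen arbitrarily for demonstration):

>>> import numpy as np
>>> from scipy.optimize import linprog
>>> c = np.array([-1, 4])
>>> A_ub = np.array([[-3, 1], [1, 2]])
>>> b_ub = np.array([6, 4])
>>> res = linprog(c, A_ub=A_ub, b_ub=b_ub, method='highs-ipm',
...               options={'primal_feasibility_tolerance': 1e-9,
...                        'dual_feasibility_tolerance': 1e-9})
>>> res.status
0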
758 """
759 pass
762def _linprog_ip_doc(c, A_ub=None, b_ub=None, A_eq=None, b_eq=None,
763 bounds=None, method='interior-point', callback=None,
764 maxiter=1000, disp=False, presolve=True,
765 tol=1e-8, autoscale=False, rr=True,
766 alpha0=.99995, beta=0.1, sparse=False,
767 lstsq=False, sym_pos=True, cholesky=True, pc=True,
768 ip=False, permc_spec='MMD_AT_PLUS_A', **unknown_options):
769 r"""
770 Linear programming: minimize a linear objective function subject to linear
771 equality and inequality constraints using the interior-point method of
772 [4]_.
774 .. deprecated:: 1.9.0
775 `method='interior-point'` will be removed in SciPy 1.11.0.
776 It is replaced by `method='highs'` because the latter is
777 faster and more robust.
779 Linear programming solves problems of the following form:
781 .. math::
783 \min_x \ & c^T x \\
784 \mbox{such that} \ & A_{ub} x \leq b_{ub},\\
785 & A_{eq} x = b_{eq},\\
786 & l \leq x \leq u ,
788 where :math:`x` is a vector of decision variables; :math:`c`,
789 :math:`b_{ub}`, :math:`b_{eq}`, :math:`l`, and :math:`u` are vectors; and
790 :math:`A_{ub}` and :math:`A_{eq}` are matrices.
792 Alternatively, that's:
794 minimize::
796 c @ x
798 such that::
800 A_ub @ x <= b_ub
801 A_eq @ x == b_eq
802 lb <= x <= ub
804 Note that by default ``lb = 0`` and ``ub = None`` unless specified with
805 ``bounds``.
807 Parameters
808 ----------
809 c : 1-D array
810 The coefficients of the linear objective function to be minimized.
811 A_ub : 2-D array, optional
812 The inequality constraint matrix. Each row of ``A_ub`` specifies the
813 coefficients of a linear inequality constraint on ``x``.
814 b_ub : 1-D array, optional
815 The inequality constraint vector. Each element represents an
816 upper bound on the corresponding value of ``A_ub @ x``.
817 A_eq : 2-D array, optional
818 The equality constraint matrix. Each row of ``A_eq`` specifies the
819 coefficients of a linear equality constraint on ``x``.
820 b_eq : 1-D array, optional
821 The equality constraint vector. Each element of ``A_eq @ x`` must equal
822 the corresponding element of ``b_eq``.
823 bounds : sequence, optional
824 A sequence of ``(min, max)`` pairs for each element in ``x``, defining
825 the minimum and maximum values of that decision variable. Use ``None``
826 to indicate that there is no bound. By default, bounds are
827 ``(0, None)`` (all decision variables are non-negative).
828 If a single tuple ``(min, max)`` is provided, then ``min`` and
829 ``max`` will serve as bounds for all decision variables.
830 method : str
831 This is the method-specific documentation for 'interior-point'.
832 :ref:`'highs' <optimize.linprog-highs>`,
833 :ref:`'highs-ds' <optimize.linprog-highs-ds>`,
834 :ref:`'highs-ipm' <optimize.linprog-highs-ipm>`,
835 :ref:`'revised simplex' <optimize.linprog-revised_simplex>`, and
836 :ref:`'simplex' <optimize.linprog-simplex>` (legacy)
837 are also available.
838 callback : callable, optional
839 Callback function to be executed once per iteration.
841 Options
842 -------
843 maxiter : int (default: 1000)
844 The maximum number of iterations of the algorithm.
845 disp : bool (default: False)
846 Set to ``True`` if indicators of optimization status are to be printed
847 to the console each iteration.
848 presolve : bool (default: True)
849 Presolve attempts to identify trivial infeasibilities,
850 identify trivial unboundedness, and simplify the problem before
851 sending it to the main solver. It is generally recommended
852 to keep the default setting ``True``; set to ``False`` if
853 presolve is to be disabled.
854 tol : float (default: 1e-8)
855 Termination tolerance to be used for all termination criteria;
856 see [4]_ Section 4.5.
857 autoscale : bool (default: False)
858 Set to ``True`` to automatically perform equilibration.
859 Consider using this option if the numerical values in the
860 constraints are separated by several orders of magnitude.
861 rr : bool (default: True)
862 Set to ``False`` to disable automatic redundancy removal.
863 alpha0 : float (default: 0.99995)
864 The maximal step size for Mehrotra's predictor-corrector search
865 direction; see :math:`\beta_{3}` of [4]_ Table 8.1.
866 beta : float (default: 0.1)
867 The desired reduction of the path parameter :math:`\mu` (see [6]_)
868 when Mehrotra's predictor-corrector is not in use (uncommon).
869 sparse : bool (default: False)
870 Set to ``True`` if the problem is to be treated as sparse after
871 presolve. If either ``A_eq`` or ``A_ub`` is a sparse matrix,
872 this option will automatically be set ``True``, and the problem
873 will be treated as sparse even during presolve. If your constraint
874 matrices contain mostly zeros and the problem is not very small (less
875 than about 100 constraints or variables), consider setting ``True``
876 or providing ``A_eq`` and ``A_ub`` as sparse matrices.
877 lstsq : bool (default: ``False``)
878 Set to ``True`` if the problem is expected to be very poorly
879 conditioned. This should always be left ``False`` unless severe
880 numerical difficulties are encountered. Leave this at the default
881 unless you receive a warning message suggesting otherwise.
882 sym_pos : bool (default: True)
883 Leave ``True`` if the problem is expected to yield a well conditioned
884 symmetric positive definite normal equation matrix
885 (almost always). Leave this at the default unless you receive
886 a warning message suggesting otherwise.
887 cholesky : bool (default: True)
888 Set to ``True`` if the normal equations are to be solved by explicit
889 Cholesky decomposition followed by explicit forward/backward
890 substitution. This is typically faster for problems
891 that are numerically well-behaved.
892 pc : bool (default: True)
893 Leave ``True`` if the predictor-corrector method of Mehrotra is to be
894 used. This is almost always (if not always) beneficial.
895 ip : bool (default: False)
896 Set to ``True`` if the improved initial point suggestion due to [4]_
897 Section 4.3 is desired. Whether this is beneficial or not
898 depends on the problem.
899 permc_spec : str (default: 'MMD_AT_PLUS_A')
900 (Has effect only with ``sparse = True``, ``lstsq = False``, ``sym_pos =
901 True``, and no SuiteSparse.)
902 A matrix is factorized in each iteration of the algorithm.
903 This option specifies how to permute the columns of the matrix for
904 sparsity preservation. Acceptable values are:
906 - ``NATURAL``: natural ordering.
907 - ``MMD_ATA``: minimum degree ordering on the structure of A^T A.
908 - ``MMD_AT_PLUS_A``: minimum degree ordering on the structure of A^T+A.
909 - ``COLAMD``: approximate minimum degree column ordering.
911 This option can impact the convergence of the
912 interior point algorithm; test different values to determine which
913 performs best for your problem. For more information, refer to
914 ``scipy.sparse.linalg.splu``.
915 unknown_options : dict
916 Optional arguments not used by this particular solver. If
917 `unknown_options` is non-empty a warning is issued listing all
918 unused options.
920 Returns
921 -------
922 res : OptimizeResult
923 A :class:`scipy.optimize.OptimizeResult` consisting of the fields:
925 x : 1-D array
926 The values of the decision variables that minimize the
927 objective function while satisfying the constraints.
928 fun : float
929 The optimal value of the objective function ``c @ x``.
930 slack : 1-D array
931 The (nominally positive) values of the slack variables,
932 ``b_ub - A_ub @ x``.
933 con : 1-D array
934 The (nominally zero) residuals of the equality constraints,
935 ``b_eq - A_eq @ x``.
936 success : bool
937 ``True`` when the algorithm succeeds in finding an optimal
938 solution.
939 status : int
940 An integer representing the exit status of the algorithm.
942 ``0`` : Optimization terminated successfully.
944 ``1`` : Iteration limit reached.
946 ``2`` : Problem appears to be infeasible.
948 ``3`` : Problem appears to be unbounded.
950 ``4`` : Numerical difficulties encountered.
952 message : str
953 A string descriptor of the exit status of the algorithm.
954 nit : int
955 The total number of iterations performed in all phases.
958 Notes
959 -----
960 This method implements the algorithm outlined in [4]_ with ideas from [8]_
961 and a structure inspired by the simpler methods of [6]_.
963 The primal-dual path following method begins with initial 'guesses' of
964 the primal and dual variables of the standard form problem and iteratively
965 attempts to solve the (nonlinear) Karush-Kuhn-Tucker conditions for the
966 problem with a gradually reduced logarithmic barrier term added to the
967 objective. This particular implementation uses a homogeneous self-dual
968 formulation, which provides certificates of infeasibility or unboundedness
969 where applicable.
971 The default initial point for the primal and dual variables is that
972 defined in [4]_ Section 4.4 Equation 8.22. Optionally (by setting initial
973 point option ``ip=True``), an alternate (potentially improved) starting
974 point can be calculated according to the additional recommendations of
975 [4]_ Section 4.4.
977 A search direction is calculated using the predictor-corrector method
978 (single correction) proposed by Mehrotra and detailed in [4]_ Section 4.1.
979 (A potential improvement would be to implement the method of multiple
980 corrections described in [4]_ Section 4.2.) In practice, this is
981 accomplished by solving the normal equations, [4]_ Section 5.1 Equations
982 8.31 and 8.32, derived from the Newton equations [4]_ Section 5 Equations
983 8.25 (compare to [4]_ Section 4 Equations 8.6-8.8). The advantage of
984 solving the normal equations rather than 8.25 directly is that the
985 matrices involved are symmetric positive definite, so Cholesky
986 decomposition can be used rather than the more expensive LU factorization.
988 With default options, the solver used to perform the factorization depends
989 on third-party software availability and the conditioning of the problem.
991 For dense problems, solvers are tried in the following order:
993 1. ``scipy.linalg.cho_factor``
995 2. ``scipy.linalg.solve`` with option ``sym_pos=True``
997 3. ``scipy.linalg.solve`` with option ``sym_pos=False``
999 4. ``scipy.linalg.lstsq``
1001 For sparse problems:
1003 1. ``sksparse.cholmod.cholesky`` (if scikit-sparse and SuiteSparse are
1004 installed)
1006 2. ``scipy.sparse.linalg.factorized`` (if scikit-umfpack and SuiteSparse
1007 are installed)
1009 3. ``scipy.sparse.linalg.splu`` (which uses SuperLU distributed with SciPy)
1011 4. ``scipy.sparse.linalg.lsqr``
1013 If the solver fails for any reason, successively more robust (but slower)
1014 solvers are attempted in the order indicated. Attempting, failing, and
1015 re-starting factorization can be time consuming, so if the problem is
1016 numerically challenging, options can be set to bypass solvers that are
1017 failing. Setting ``cholesky=False`` skips to solver 2,
1018 ``sym_pos=False`` skips to solver 3, and ``lstsq=True`` skips
1019 to solver 4 for both sparse and dense problems.
1021 Potential improvements for combatting issues associated with dense
1022 columns in otherwise sparse problems are outlined in [4]_ Section 5.3 and
1023 [10]_ Section 4.1-4.2; the latter also discusses the alleviation of
1024 accuracy issues associated with the substitution approach to free
1025 variables.
1027 After calculating the search direction, the maximum possible step size
1028 that does not activate the non-negativity constraints is calculated, and
1029 the smaller of this step size and unity is applied (as in [4]_ Section
1030 4.1.) [4]_ Section 4.3 suggests improvements for choosing the step size.
1032 The new point is tested according to the termination conditions of [4]_
1033 Section 4.5. The same tolerance, which can be set using the ``tol`` option,
1034 is used for all checks. (A potential improvement would be to expose
1035 the different tolerances to be set independently.) If optimality,
1036 unboundedness, or infeasibility is detected, the solve procedure
1037 terminates; otherwise it repeats.
1039 The top level ``linprog`` module expects a problem of the form:
1041 Minimize::
1043 c @ x
1045 Subject to::
1047 A_ub @ x <= b_ub
1048 A_eq @ x == b_eq
1049 lb <= x <= ub
1051 where ``lb = 0`` and ``ub = None`` unless set in ``bounds``. The problem
1052 is automatically converted to the form:
1054 Minimize::
1056 c @ x
1058 Subject to::
1060 A @ x == b
1061 x >= 0
1063 for solution. That is, the original problem contains equality, upper-bound
1064 and variable constraints whereas the method specific solver requires
1065 equality constraints and variable non-negativity. ``linprog`` converts the
1066 original problem to standard form by converting the simple bounds to upper
1067 bound constraints, introducing non-negative slack variables for inequality
1068 constraints, and expressing unbounded variables as the difference between
1069 two non-negative variables. The problem is converted back to the original
1070 form before results are reported.
1072 References
1073 ----------
1074 .. [4] Andersen, Erling D., and Knud D. Andersen. "The MOSEK interior point
1075 optimizer for linear programming: an implementation of the
1076 homogeneous algorithm." High performance optimization. Springer US,
1077 2000. 197-232.
1078 .. [6] Freund, Robert M. "Primal-Dual Interior-Point Methods for Linear
1079 Programming based on Newton's Method." Unpublished Course Notes,
1080 March 2004. Available 2/25/2017 at
1081 https://ocw.mit.edu/courses/sloan-school-of-management/15-084j-nonlinear-programming-spring-2004/lecture-notes/lec14_int_pt_mthd.pdf
1082 .. [8] Andersen, Erling D., and Knud D. Andersen. "Presolving in linear
1083 programming." Mathematical Programming 71.2 (1995): 221-245.
1084 .. [9] Bertsimas, Dimitris, and J. Tsitsiklis. "Introduction to linear
1085 programming." Athena Scientific 1 (1997): 997.
1086 .. [10] Andersen, Erling D., et al. Implementation of interior point
1087 methods for large scale linear programming. HEC/Universite de
1088 Geneve, 1996.
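Examples
--------
A minimal usage sketch (illustrative only, assuming a SciPy version in which
``method='interior-point'`` is still available, i.e. before 1.11; a
``DeprecationWarning`` is expected on 1.9/1.10). Supplying a sparse
constraint matrix enables the sparse code path described in the Notes:

>>> import numpy as np
>>> from scipy.optimize import linprog
>>> from scipy.sparse import csr_matrix
>>> c = np.array([-1, 4])
>>> A_ub = csr_matrix([[-3, 1], [1, 2]])
>>> b_ub = np.array([6, 4])
>>> res = linprog(c, A_ub=A_ub, b_ub=b_ub, method='interior-point',
...               options={'tol': 1e-9})  # doctest: +SKIP
>>> res.status  # doctest: +SKIP
0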
1089 """
1090 pass
1093def _linprog_rs_doc(c, A_ub=None, b_ub=None, A_eq=None, b_eq=None,
1094 bounds=None, method='revised simplex', callback=None,
1095 x0=None, maxiter=5000, disp=False, presolve=True,
1096 tol=1e-12, autoscale=False, rr=True, maxupdate=10,
1097 mast=False, pivot="mrc", **unknown_options):
1098 r"""
1099 Linear programming: minimize a linear objective function subject to linear
1100 equality and inequality constraints using the revised simplex method.
1102 .. deprecated:: 1.9.0
1103 `method='revised simplex'` will be removed in SciPy 1.11.0.
1104 It is replaced by `method='highs'` because the latter is
1105 faster and more robust.
1107 Linear programming solves problems of the following form:
1109 .. math::
1111 \min_x \ & c^T x \\
1112 \mbox{such that} \ & A_{ub} x \leq b_{ub},\\
1113 & A_{eq} x = b_{eq},\\
1114 & l \leq x \leq u ,
1116 where :math:`x` is a vector of decision variables; :math:`c`,
1117 :math:`b_{ub}`, :math:`b_{eq}`, :math:`l`, and :math:`u` are vectors; and
1118 :math:`A_{ub}` and :math:`A_{eq}` are matrices.
1120 Alternatively, that's:
1122 minimize::
1124 c @ x
1126 such that::
1128 A_ub @ x <= b_ub
1129 A_eq @ x == b_eq
1130 lb <= x <= ub
1132 Note that by default ``lb = 0`` and ``ub = None`` unless specified with
1133 ``bounds``.
1135 Parameters
1136 ----------
1137 c : 1-D array
1138 The coefficients of the linear objective function to be minimized.
1139 A_ub : 2-D array, optional
1140 The inequality constraint matrix. Each row of ``A_ub`` specifies the
1141 coefficients of a linear inequality constraint on ``x``.
1142 b_ub : 1-D array, optional
1143 The inequality constraint vector. Each element represents an
1144 upper bound on the corresponding value of ``A_ub @ x``.
1145 A_eq : 2-D array, optional
1146 The equality constraint matrix. Each row of ``A_eq`` specifies the
1147 coefficients of a linear equality constraint on ``x``.
1148 b_eq : 1-D array, optional
1149 The equality constraint vector. Each element of ``A_eq @ x`` must equal
1150 the corresponding element of ``b_eq``.
1151 bounds : sequence, optional
1152 A sequence of ``(min, max)`` pairs for each element in ``x``, defining
1153 the minimum and maximum values of that decision variable. Use ``None``
1154 to indicate that there is no bound. By default, bounds are
1155 ``(0, None)`` (all decision variables are non-negative).
1156 If a single tuple ``(min, max)`` is provided, then ``min`` and
1157 ``max`` will serve as bounds for all decision variables.
1158 method : str
1159 This is the method-specific documentation for 'revised simplex'.
1160 :ref:`'highs' <optimize.linprog-highs>`,
1161 :ref:`'highs-ds' <optimize.linprog-highs-ds>`,
1162 :ref:`'highs-ipm' <optimize.linprog-highs-ipm>`,
1163 :ref:`'interior-point' <optimize.linprog-interior-point>` (default),
1164 and :ref:`'simplex' <optimize.linprog-simplex>` (legacy)
1165 are also available.
1166 callback : callable, optional
1167 Callback function to be executed once per iteration.
1168 x0 : 1-D array, optional
1169 Guess values of the decision variables, which will be refined by
1170 the optimization algorithm. This argument is currently used only by the
1171 'revised simplex' method, and can only be used if `x0` represents a
1172 basic feasible solution.
1174 Options
1175 -------
1176 maxiter : int (default: 5000)
1177 The maximum number of iterations to perform in either phase.
1178 disp : bool (default: False)
1179 Set to ``True`` if indicators of optimization status are to be printed
1180 to the console each iteration.
1181 presolve : bool (default: True)
1182 Presolve attempts to identify trivial infeasibilities,
1183 identify trivial unboundedness, and simplify the problem before
1184 sending it to the main solver. It is generally recommended
1185 to keep the default setting ``True``; set to ``False`` if
1186 presolve is to be disabled.
1187 tol : float (default: 1e-12)
1188 The tolerance which determines when a solution is "close enough" to
1189 zero in Phase 1 to be considered a basic feasible solution or close
1190 enough to positive to serve as an optimal solution.
1191 autoscale : bool (default: False)
1192 Set to ``True`` to automatically perform equilibration.
1193 Consider using this option if the numerical values in the
1194 constraints are separated by several orders of magnitude.
1195 rr : bool (default: True)
1196 Set to ``False`` to disable automatic redundancy removal.
1197 maxupdate : int (default: 10)
1198 The maximum number of updates performed on the LU factorization.
1199 After this many updates is reached, the basis matrix is factorized
1200 from scratch.
1201 mast : bool (default: False)
1202 Minimize Amortized Solve Time. If enabled, the average time to solve
1203 a linear system using the basis factorization is measured. Typically,
1204 the average solve time will decrease with each successive solve after
1205 initial factorization, as factorization takes much more time than the
1206 solve operation (and updates). Eventually, however, the updated
1207 factorization becomes sufficiently complex that the average solve time
1208 begins to increase. When this is detected, the basis is refactorized
1209 from scratch. Enable this option to maximize speed at the risk of
1210 nondeterministic behavior. Ignored if ``maxupdate`` is 0.
1211 pivot : "mrc" or "bland" (default: "mrc")
1212 Pivot rule: Minimum Reduced Cost ("mrc") or Bland's rule ("bland").
1213 Choose Bland's rule if iteration limit is reached and cycling is
1214 suspected.
1215 unknown_options : dict
1216 Optional arguments not used by this particular solver. If
1217 `unknown_options` is non-empty a warning is issued listing all
1218 unused options.
1220 Returns
1221 -------
1222 res : OptimizeResult
1223 A :class:`scipy.optimize.OptimizeResult` consisting of the fields:
1225 x : 1-D array
1226 The values of the decision variables that minimize the
1227 objective function while satisfying the constraints.
1228 fun : float
1229 The optimal value of the objective function ``c @ x``.
1230 slack : 1-D array
1231 The (nominally positive) values of the slack variables,
1232 ``b_ub - A_ub @ x``.
1233 con : 1-D array
1234 The (nominally zero) residuals of the equality constraints,
1235 ``b_eq - A_eq @ x``.
1236 success : bool
1237 ``True`` when the algorithm succeeds in finding an optimal
1238 solution.
1239 status : int
1240 An integer representing the exit status of the algorithm.
1242 ``0`` : Optimization terminated successfully.
1244 ``1`` : Iteration limit reached.
1246 ``2`` : Problem appears to be infeasible.
1248 ``3`` : Problem appears to be unbounded.
1250 ``4`` : Numerical difficulties encountered.
1252 ``5`` : Problem has no constraints; turn presolve on.
1254 ``6`` : Invalid guess provided.
1256 message : str
1257 A string descriptor of the exit status of the algorithm.
1258 nit : int
1259 The total number of iterations performed in all phases.
1262 Notes
1263 -----
1264 Method *revised simplex* uses the revised simplex method as described in
1265 [9]_, except that a factorization [11]_ of the basis matrix, rather than
1266 its inverse, is efficiently maintained and used to solve the linear systems
1267 at each iteration of the algorithm.
1269 References
1270 ----------
1271 .. [9] Bertsimas, Dimitris, and J. Tsitsiklis. "Introduction to linear
1272 programming." Athena Scientific 1 (1997): 997.
1273 .. [11] Bartels, Richard H. "A stabilization of the simplex method."
1274 Numerische Mathematik 16.5 (1971): 414-434.
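Examples
--------
A minimal usage sketch (illustrative only, assuming a SciPy version in which
``method='revised simplex'`` is still available, i.e. before 1.11; the
``pivot`` option is passed via the ``options`` dictionary):

>>> import numpy as np
>>> from scipy.optimize import linprog
>>> c = np.array([-1, 4])
>>> A_ub = np.array([[-3, 1], [1, 2]])
>>> b_ub = np.array([6, 4])
>>> res = linprog(c, A_ub=A_ub, b_ub=b_ub, method='revised simplex',
...               options={'pivot': 'bland'})  # doctest: +SKIP
>>> res.status  # doctest: +SKIP
0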
1275 """
1276 pass
1279def _linprog_simplex_doc(c, A_ub=None, b_ub=None, A_eq=None, b_eq=None,
1280 bounds=None, method='simplex', callback=None,
1281 maxiter=5000, disp=False, presolve=True,
1282 tol=1e-12, autoscale=False, rr=True, bland=False,
1283 **unknown_options):
1284 r"""
1285 Linear programming: minimize a linear objective function subject to linear
1286 equality and inequality constraints using the tableau-based simplex method.
1288 .. deprecated:: 1.9.0
1289 `method='simplex'` will be removed in SciPy 1.11.0.
1290 It is replaced by `method='highs'` because the latter is
1291 faster and more robust.
1293 Linear programming solves problems of the following form:
1295 .. math::
1297 \min_x \ & c^T x \\
1298 \mbox{such that} \ & A_{ub} x \leq b_{ub},\\
1299 & A_{eq} x = b_{eq},\\
1300 & l \leq x \leq u ,
1302 where :math:`x` is a vector of decision variables; :math:`c`,
1303 :math:`b_{ub}`, :math:`b_{eq}`, :math:`l`, and :math:`u` are vectors; and
1304 :math:`A_{ub}` and :math:`A_{eq}` are matrices.
1306 Alternatively, that's:
1308 minimize::
1310 c @ x
1312 such that::
1314 A_ub @ x <= b_ub
1315 A_eq @ x == b_eq
1316 lb <= x <= ub
1318 Note that by default ``lb = 0`` and ``ub = None`` unless specified with
1319 ``bounds``.
1321 Parameters
1322 ----------
1323 c : 1-D array
1324 The coefficients of the linear objective function to be minimized.
1325 A_ub : 2-D array, optional
1326 The inequality constraint matrix. Each row of ``A_ub`` specifies the
1327 coefficients of a linear inequality constraint on ``x``.
1328 b_ub : 1-D array, optional
1329 The inequality constraint vector. Each element represents an
1330 upper bound on the corresponding value of ``A_ub @ x``.
1331 A_eq : 2-D array, optional
1332 The equality constraint matrix. Each row of ``A_eq`` specifies the
1333 coefficients of a linear equality constraint on ``x``.
1334 b_eq : 1-D array, optional
1335 The equality constraint vector. Each element of ``A_eq @ x`` must equal
1336 the corresponding element of ``b_eq``.
1337 bounds : sequence, optional
1338 A sequence of ``(min, max)`` pairs for each element in ``x``, defining
1339 the minimum and maximum values of that decision variable. Use ``None``
1340 to indicate that there is no bound. By default, bounds are
1341 ``(0, None)`` (all decision variables are non-negative).
1342 If a single tuple ``(min, max)`` is provided, then ``min`` and
1343 ``max`` will serve as bounds for all decision variables.
1344 method : str
1345 This is the method-specific documentation for 'simplex'.
1346 :ref:`'highs' <optimize.linprog-highs>`,
1347 :ref:`'highs-ds' <optimize.linprog-highs-ds>`,
1348 :ref:`'highs-ipm' <optimize.linprog-highs-ipm>`,
1349 :ref:`'interior-point' <optimize.linprog-interior-point>` (default),
1350 and :ref:`'revised simplex' <optimize.linprog-revised_simplex>`
1351 are also available.
1352 callback : callable, optional
1353 Callback function to be executed once per iteration.
1355 Options
1356 -------
1357 maxiter : int (default: 5000)
1358 The maximum number of iterations to perform in either phase.
1359 disp : bool (default: False)
1360 Set to ``True`` if indicators of optimization status are to be printed
1361 to the console each iteration.
1362 presolve : bool (default: True)
1363 Presolve attempts to identify trivial infeasibilities,
1364 identify trivial unboundedness, and simplify the problem before
1365 sending it to the main solver. It is generally recommended
1366 to keep the default setting ``True``; set to ``False`` if
1367 presolve is to be disabled.
1368 tol : float (default: 1e-12)
1369 The tolerance which determines when a solution is "close enough" to
1370 zero in Phase 1 to be considered a basic feasible solution or close
1371 enough to positive to serve as an optimal solution.
1372 autoscale : bool (default: False)
1373 Set to ``True`` to automatically perform equilibration.
1374 Consider using this option if the numerical values in the
1375 constraints are separated by several orders of magnitude.
1376 rr : bool (default: True)
1377 Set to ``False`` to disable automatic redundancy removal.
1378 bland : bool
1379 If True, use Bland's anti-cycling rule [3]_ to choose pivots to
1380 prevent cycling. If False, choose pivots which should lead to a
1381 converged solution more quickly. The latter method is subject to
1382 cycling (non-convergence) in rare instances.
1383 unknown_options : dict
1384 Optional arguments not used by this particular solver. If
1385 `unknown_options` is non-empty a warning is issued listing all
1386 unused options.
1388 Returns
1389 -------
1390 res : OptimizeResult
1391 A :class:`scipy.optimize.OptimizeResult` consisting of the fields:
1393 x : 1-D array
1394 The values of the decision variables that minimize the
1395 objective function while satisfying the constraints.
1396 fun : float
1397 The optimal value of the objective function ``c @ x``.
1398 slack : 1-D array
1399 The (nominally positive) values of the slack variables,
1400 ``b_ub - A_ub @ x``.
1401 con : 1-D array
1402 The (nominally zero) residuals of the equality constraints,
1403 ``b_eq - A_eq @ x``.
1404 success : bool
1405 ``True`` when the algorithm succeeds in finding an optimal
1406 solution.
1407 status : int
1408 An integer representing the exit status of the algorithm.
1410 ``0`` : Optimization terminated successfully.
1412 ``1`` : Iteration limit reached.
1414 ``2`` : Problem appears to be infeasible.
1416 ``3`` : Problem appears to be unbounded.
1418 ``4`` : Numerical difficulties encountered.
1420 message : str
1421 A string descriptor of the exit status of the algorithm.
1422 nit : int
1423 The total number of iterations performed in all phases.
1425 References
1426 ----------
1427 .. [1] Dantzig, George B., Linear programming and extensions. Rand
1428 Corporation Research Study Princeton Univ. Press, Princeton, NJ,
1429 1963.
1430 .. [2] Hillier, S.H. and Lieberman, G.J. (1995), "Introduction to
1431 Mathematical Programming", McGraw-Hill, Chapter 4.
1432 .. [3] Bland, Robert G. New finite pivoting rules for the simplex method.
1433 Mathematics of Operations Research (2), 1977: pp. 103-107.
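Examples
--------
A minimal usage sketch (illustrative only, assuming a SciPy version in which
``method='simplex'`` is still available, i.e. before 1.11; Bland's
anti-cycling rule is requested via the ``options`` dictionary):

>>> import numpy as np
>>> from scipy.optimize import linprog
>>> c = np.array([-1, 4])
>>> A_ub = np.array([[-3, 1], [1, 2]])
>>> b_ub = np.array([6, 4])
>>> res = linprog(c, A_ub=A_ub, b_ub=b_ub, method='simplex',
...               options={'bland': True})  # doctest: +SKIP
>>> res.status  # doctest: +SKIP
0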
1434 """
1435 pass