1""" 

2Unified interfaces to root finding algorithms. 

3 

4Functions 

5--------- 

6- root : find a root of a vector function. 

7""" 

8__all__ = ['root'] 

9 

10import numpy as np 

11 

12ROOT_METHODS = ['hybr', 'lm', 'broyden1', 'broyden2', 'anderson', 

13 'linearmixing', 'diagbroyden', 'excitingmixing', 'krylov', 

14 'df-sane'] 

15 

16from warnings import warn 

17 

18from ._optimize import MemoizeJac, OptimizeResult, _check_unknown_options 

19from ._minpack_py import _root_hybr, leastsq 

20from ._spectral import _root_df_sane 

21from . import _nonlin as nonlin 

22 

23 

24def root(fun, x0, args=(), method='hybr', jac=None, tol=None, callback=None, 

25 options=None): 

26 r""" 

27 Find a root of a vector function. 

28 

29 Parameters 

30 ---------- 

31 fun : callable 

32 A vector function to find a root of. 

33 x0 : ndarray 

34 Initial guess. 

35 args : tuple, optional 

36 Extra arguments passed to the objective function and its Jacobian. 

37 method : str, optional 

38 Type of solver. Should be one of 

39 

40 - 'hybr' :ref:`(see here) <optimize.root-hybr>` 

41 - 'lm' :ref:`(see here) <optimize.root-lm>` 

42 - 'broyden1' :ref:`(see here) <optimize.root-broyden1>` 

43 - 'broyden2' :ref:`(see here) <optimize.root-broyden2>` 

44 - 'anderson' :ref:`(see here) <optimize.root-anderson>` 

45 - 'linearmixing' :ref:`(see here) <optimize.root-linearmixing>` 

46 - 'diagbroyden' :ref:`(see here) <optimize.root-diagbroyden>` 

47 - 'excitingmixing' :ref:`(see here) <optimize.root-excitingmixing>` 

48 - 'krylov' :ref:`(see here) <optimize.root-krylov>` 

49 - 'df-sane' :ref:`(see here) <optimize.root-dfsane>` 

50 

51 jac : bool or callable, optional 

52 If `jac` is a Boolean and is True, `fun` is assumed to return the 

53 value of Jacobian along with the objective function. If False, the 

54 Jacobian will be estimated numerically. 

55 `jac` can also be a callable returning the Jacobian of `fun`. In 

56 this case, it must accept the same arguments as `fun`. 

57 tol : float, optional 

58 Tolerance for termination. For detailed control, use solver-specific 

59 options. 

60 callback : function, optional 

61 Optional callback function. It is called on every iteration as 

62 ``callback(x, f)`` where `x` is the current solution and `f` 

63 the corresponding residual. For all methods but 'hybr' and 'lm'. 

64 options : dict, optional 

65 A dictionary of solver options. E.g., `xtol` or `maxiter`, see 

66 :obj:`show_options()` for details. 

67 

    Returns
    -------
    sol : OptimizeResult
        The solution represented as an ``OptimizeResult`` object.
        Important attributes are: ``x``, the solution array; ``success``, a
        Boolean flag indicating if the algorithm exited successfully; and
        ``message``, which describes the cause of the termination. See
        `OptimizeResult` for a description of other attributes.

    See also
    --------
    show_options : Additional options accepted by the solvers

    Notes
    -----
    This section describes the available solvers that can be selected by the
    'method' parameter. The default method is *hybr*.

    Method *hybr* uses a modification of the Powell hybrid method as
    implemented in MINPACK [1]_.

    Method *lm* solves the system of nonlinear equations in a least squares
    sense using a modification of the Levenberg-Marquardt algorithm as
    implemented in MINPACK [1]_.

    Method *df-sane* is a derivative-free spectral method [3]_.

    Methods *broyden1*, *broyden2*, *anderson*, *linearmixing*,
    *diagbroyden*, *excitingmixing*, and *krylov* are inexact Newton methods,
    with backtracking or full line searches [2]_. Each method corresponds
    to a particular Jacobian approximation.

    - Method *broyden1* uses Broyden's first Jacobian approximation; it is
      known as Broyden's good method.
    - Method *broyden2* uses Broyden's second Jacobian approximation; it
      is known as Broyden's bad method.
    - Method *anderson* uses (extended) Anderson mixing.
    - Method *krylov* uses a Krylov approximation for the inverse Jacobian.
      It is suitable for large-scale problems.
    - Method *diagbroyden* uses a diagonal Broyden Jacobian approximation.
    - Method *linearmixing* uses a scalar Jacobian approximation.
    - Method *excitingmixing* uses a tuned diagonal Jacobian
      approximation.

    .. warning::

        The algorithms implemented for methods *diagbroyden*,
        *linearmixing* and *excitingmixing* may be useful for specific
        problems, but whether they will work may depend strongly on the
        problem.

    .. versionadded:: 0.11.0

    References
    ----------
    .. [1] More, Jorge J., Burton S. Garbow, and Kenneth E. Hillstrom.
       1980. User Guide for MINPACK-1.
    .. [2] C. T. Kelley. 1995. Iterative Methods for Linear and Nonlinear
       Equations. Society for Industrial and Applied Mathematics.
       <https://archive.siam.org/books/kelley/fr16/>
    .. [3] W. La Cruz, J.M. Martinez, M. Raydan. Math. Comp. 75, 1429 (2006).

    Examples
    --------
    The following functions define a system of nonlinear equations and its
    Jacobian.

    >>> import numpy as np
    >>> def fun(x):
    ...     return [x[0] + 0.5 * (x[0] - x[1])**3 - 1.0,
    ...             0.5 * (x[1] - x[0])**3 + x[1]]

    >>> def jac(x):
    ...     return np.array([[1 + 1.5 * (x[0] - x[1])**2,
    ...                       -1.5 * (x[0] - x[1])**2],
    ...                      [-1.5 * (x[1] - x[0])**2,
    ...                       1 + 1.5 * (x[1] - x[0])**2]])

    A solution can be obtained as follows.

    >>> from scipy import optimize
    >>> sol = optimize.root(fun, [0, 0], jac=jac, method='hybr')
    >>> sol.x
    array([0.8411639, 0.1588361])
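
    The same system can also be solved in a least-squares sense with the
    *lm* method. This is an illustrative sketch; it assumes *lm* converges
    from the same starting point:

    >>> sol_lm = optimize.root(fun, [0, 0], jac=jac, method='lm')
    >>> np.allclose(fun(sol_lm.x), [0.0, 0.0], atol=1e-8)
    True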

    **Large problem**

    Suppose that we needed to solve the following integrodifferential
    equation on the square :math:`[0,1]\times[0,1]`:

    .. math::

       \nabla^2 P = 10 \left(\int_0^1\int_0^1\cosh(P)\,dx\,dy\right)^2

    with :math:`P(x,1) = 1` and :math:`P=0` elsewhere on the boundary of
    the square.

    The solution can be found using the ``method='krylov'`` solver:

    >>> from scipy import optimize
    >>> # parameters
    >>> nx, ny = 75, 75
    >>> hx, hy = 1./(nx-1), 1./(ny-1)

    >>> P_left, P_right = 0, 0
    >>> P_top, P_bottom = 1, 0

    >>> def residual(P):
    ...     d2x = np.zeros_like(P)
    ...     d2y = np.zeros_like(P)
    ...
    ...     d2x[1:-1] = (P[2:] - 2*P[1:-1] + P[:-2]) / hx/hx
    ...     d2x[0] = (P[1] - 2*P[0] + P_left)/hx/hx
    ...     d2x[-1] = (P_right - 2*P[-1] + P[-2])/hx/hx
    ...
    ...     d2y[:,1:-1] = (P[:,2:] - 2*P[:,1:-1] + P[:,:-2])/hy/hy
    ...     d2y[:,0] = (P[:,1] - 2*P[:,0] + P_bottom)/hy/hy
    ...     d2y[:,-1] = (P_top - 2*P[:,-1] + P[:,-2])/hy/hy
    ...
    ...     return d2x + d2y - 10*np.cosh(P).mean()**2

    >>> guess = np.zeros((nx, ny), float)
    >>> sol = optimize.root(residual, guess, method='krylov')
    >>> print('Residual: %g' % abs(residual(sol.x)).max())
    Residual: 5.7972e-06  # may vary

    >>> import matplotlib.pyplot as plt
    >>> x, y = np.mgrid[0:1:(nx*1j), 0:1:(ny*1j)]
    >>> plt.pcolormesh(x, y, sol.x, shading='gouraud')
    >>> plt.colorbar()
    >>> plt.show()

    """
    if not isinstance(args, tuple):
        args = (args,)

    meth = method.lower()
    if options is None:
        options = {}

    if callback is not None and meth in ('hybr', 'lm'):
        warn('Method %s does not accept callback.' % method,
             RuntimeWarning)

    # fun also returns the Jacobian: memoize it so that a single call to
    # fun provides both the residual and the derivative
    if not callable(jac) and meth in ('hybr', 'lm'):
        if bool(jac):
            fun = MemoizeJac(fun)
            jac = fun.derivative
        else:
            jac = None

    # set default tolerances
    if tol is not None:
        options = dict(options)
        if meth in ('hybr', 'lm'):
            options.setdefault('xtol', tol)
        elif meth in ('df-sane',):
            options.setdefault('ftol', tol)
        elif meth in ('broyden1', 'broyden2', 'anderson', 'linearmixing',
                      'diagbroyden', 'excitingmixing', 'krylov'):
            options.setdefault('xtol', tol)
            options.setdefault('xatol', np.inf)
            options.setdefault('ftol', np.inf)
            options.setdefault('fatol', np.inf)

    if meth == 'hybr':
        sol = _root_hybr(fun, x0, args=args, jac=jac, **options)
    elif meth == 'lm':
        sol = _root_leastsq(fun, x0, args=args, jac=jac, **options)
    elif meth == 'df-sane':
        _warn_jac_unused(jac, method)
        sol = _root_df_sane(fun, x0, args=args, callback=callback,
                            **options)
    elif meth in ('broyden1', 'broyden2', 'anderson', 'linearmixing',
                  'diagbroyden', 'excitingmixing', 'krylov'):
        _warn_jac_unused(jac, method)
        sol = _root_nonlin_solve(fun, x0, args=args, jac=jac,
                                 _method=meth, _callback=callback,
                                 **options)
    else:
        raise ValueError('Unknown solver %s' % method)

    return sol
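
# Example (sketch): as the tolerance-mapping branch above shows, a single
# ``tol`` is translated into solver-specific defaults, so for the nonlin
# family these two hypothetical calls (with some residual function ``f``
# and initial guess ``x0``) are equivalent:
#
#     root(f, x0, method='broyden1', tol=1e-10)
#     root(f, x0, method='broyden1',
#          options={'xtol': 1e-10, 'xatol': np.inf,
#                   'ftol': np.inf, 'fatol': np.inf})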


def _warn_jac_unused(jac, method):
    if jac is not None:
        warn('Method %s does not use the jacobian (jac).' % (method,),
             RuntimeWarning)


def _root_leastsq(fun, x0, args=(), jac=None,
                  col_deriv=0, xtol=1.49012e-08, ftol=1.49012e-08,
                  gtol=0.0, maxiter=0, eps=0.0, factor=100, diag=None,
                  **unknown_options):
    """
    Solve for least squares with Levenberg-Marquardt

    Options
    -------
    col_deriv : bool
        Non-zero to specify that the Jacobian function computes derivatives
        down the columns (faster, because there is no transpose operation).
    ftol : float
        Relative error desired in the sum of squares.
    xtol : float
        Relative error desired in the approximate solution.
    gtol : float
        Orthogonality desired between the function vector and the columns
        of the Jacobian.
    maxiter : int
        The maximum number of calls to the function. If zero, then
        100*(N+1) is the maximum, where N is the number of elements in x0.
    eps : float
        A suitable step length for the forward-difference approximation of
        the Jacobian (when `jac` is None). If `eps` is less than the
        machine precision, it is assumed that the relative errors in the
        functions are of the order of the machine precision.
    factor : float
        A parameter determining the initial step bound
        (``factor * || diag * x||``). Should be in the interval
        ``(0.1, 100)``.
    diag : sequence
        N positive entries that serve as scale factors for the variables.
    """

    _check_unknown_options(unknown_options)
    x, cov_x, info, msg, ier = leastsq(fun, x0, args=args, Dfun=jac,
                                       full_output=True,
                                       col_deriv=col_deriv, xtol=xtol,
                                       ftol=ftol, gtol=gtol,
                                       maxfev=maxiter, epsfcn=eps,
                                       factor=factor, diag=diag)
    sol = OptimizeResult(x=x, message=msg, status=ier,
                         success=ier in (1, 2, 3, 4), cov_x=cov_x,
                         fun=info.pop('fvec'))
    sol.update(info)
    return sol
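
# Example (sketch): ``root(..., method='lm')`` dispatches here, and the
# options documented above arrive via the ``options`` dict; e.g., a
# hypothetical call tightening the sum-of-squares tolerance:
#
#     root(fun, x0, jac=jac, method='lm',
#          options={'ftol': 1e-12, 'maxiter': 200})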


def _root_nonlin_solve(fun, x0, args=(), jac=None,
                       _callback=None, _method=None,
                       nit=None, disp=False, maxiter=None,
                       ftol=None, fatol=None, xtol=None, xatol=None,
                       tol_norm=None, line_search='armijo', jac_options=None,
                       **unknown_options):
    _check_unknown_options(unknown_options)

    # translate the public option names to nonlin_solve keyword names
    f_tol = fatol
    f_rtol = ftol
    x_tol = xatol
    x_rtol = xtol
    verbose = disp
    if jac_options is None:
        jac_options = dict()

    jacobian = {'broyden1': nonlin.BroydenFirst,
                'broyden2': nonlin.BroydenSecond,
                'anderson': nonlin.Anderson,
                'linearmixing': nonlin.LinearMixing,
                'diagbroyden': nonlin.DiagBroyden,
                'excitingmixing': nonlin.ExcitingMixing,
                'krylov': nonlin.KrylovJacobian
                }[_method]

    if args:
        if jac:
            def f(x):
                return fun(x, *args)[0]
        else:
            def f(x):
                return fun(x, *args)
    else:
        f = fun

    x, info = nonlin.nonlin_solve(f, x0, jacobian=jacobian(**jac_options),
                                  iter=nit, verbose=verbose,
                                  maxiter=maxiter, f_tol=f_tol,
                                  f_rtol=f_rtol, x_tol=x_tol,
                                  x_rtol=x_rtol, tol_norm=tol_norm,
                                  line_search=line_search,
                                  callback=_callback, full_output=True,
                                  raise_exception=False)
    sol = OptimizeResult(x=x)
    sol.update(info)
    return sol
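
# Example (sketch): entries of ``jac_options`` are forwarded verbatim to the
# Jacobian-approximation class selected from the mapping above; e.g., a
# hypothetical call configuring the Krylov approximation:
#
#     root(f, x0, method='krylov',
#          options={'jac_options': {'method': 'lgmres', 'rdiff': 1e-8}})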


def _root_broyden1_doc():
    """
    Options
    -------
    nit : int, optional
        Number of iterations to make. If omitted (default), make as many
        as required to meet tolerances.
    disp : bool, optional
        Print status to stdout on every iteration.
    maxiter : int, optional
        Maximum number of iterations to make. If more are needed to
        meet convergence, `NoConvergence` is raised.
    ftol : float, optional
        Relative tolerance for the residual. If omitted, not used.
    fatol : float, optional
        Absolute tolerance (in max-norm) for the residual.
        If omitted, default is 6e-6.
    xtol : float, optional
        Relative minimum step size. If omitted, not used.
    xatol : float, optional
        Absolute minimum step size, as determined from the Jacobian
        approximation. If the step size is smaller than this, optimization
        is terminated as successful. If omitted, not used.
    tol_norm : function(vector) -> scalar, optional
        Norm to use in convergence check. Default is the maximum norm.
    line_search : {None, 'armijo' (default), 'wolfe'}, optional
        Which type of line search to use to determine the step size in
        the direction given by the Jacobian approximation. Defaults to
        'armijo'.
    jac_options : dict, optional
        Options for the respective Jacobian approximation.

        alpha : float, optional
            Initial guess for the Jacobian is (-1/alpha).
        reduction_method : str or tuple, optional
            Method used in ensuring that the rank of the Broyden
            matrix stays low. Can either be a string giving the
            name of the method, or a tuple of the form ``(method,
            param1, param2, ...)`` that gives the name of the
            method and values for additional parameters.

            Methods available:

            - ``restart``
                Drop all matrix columns. Has no
                extra parameters.
            - ``simple``
                Drop oldest matrix column. Has no
                extra parameters.
            - ``svd``
                Keep only the most significant SVD
                components.

                Extra parameters:

                - ``to_retain``
                    Number of SVD components to
                    retain when rank reduction is done.
                    Default is ``max_rank - 2``.
        max_rank : int, optional
            Maximum rank for the Broyden matrix.
            Default is infinity (i.e., no rank reduction).

    Examples
    --------
    >>> import numpy as np
    >>> def func(x):
    ...     return np.cos(x) + x[::-1] - [1, 2, 3, 4]
    ...
    >>> from scipy import optimize
    >>> res = optimize.root(func, [1, 1, 1, 1], method='broyden1', tol=1e-14)
    >>> x = res.x
    >>> x
    array([4.04674914, 3.91158389, 2.71791677, 1.61756251])
    >>> np.cos(x) + x[::-1]
    array([1., 2., 3., 4.])

    """
    pass


def _root_broyden2_doc():
    """
    Options
    -------
    nit : int, optional
        Number of iterations to make. If omitted (default), make as many
        as required to meet tolerances.
    disp : bool, optional
        Print status to stdout on every iteration.
    maxiter : int, optional
        Maximum number of iterations to make. If more are needed to
        meet convergence, `NoConvergence` is raised.
    ftol : float, optional
        Relative tolerance for the residual. If omitted, not used.
    fatol : float, optional
        Absolute tolerance (in max-norm) for the residual.
        If omitted, default is 6e-6.
    xtol : float, optional
        Relative minimum step size. If omitted, not used.
    xatol : float, optional
        Absolute minimum step size, as determined from the Jacobian
        approximation. If the step size is smaller than this, optimization
        is terminated as successful. If omitted, not used.
    tol_norm : function(vector) -> scalar, optional
        Norm to use in convergence check. Default is the maximum norm.
    line_search : {None, 'armijo' (default), 'wolfe'}, optional
        Which type of line search to use to determine the step size in
        the direction given by the Jacobian approximation. Defaults to
        'armijo'.
    jac_options : dict, optional
        Options for the respective Jacobian approximation.

        alpha : float, optional
            Initial guess for the Jacobian is (-1/alpha).
        reduction_method : str or tuple, optional
            Method used in ensuring that the rank of the Broyden
            matrix stays low. Can either be a string giving the
            name of the method, or a tuple of the form ``(method,
            param1, param2, ...)`` that gives the name of the
            method and values for additional parameters.

            Methods available:

            - ``restart``
                Drop all matrix columns. Has no
                extra parameters.
            - ``simple``
                Drop oldest matrix column. Has no
                extra parameters.
            - ``svd``
                Keep only the most significant SVD
                components.

                Extra parameters:

                - ``to_retain``
                    Number of SVD components to
                    retain when rank reduction is done.
                    Default is ``max_rank - 2``.
        max_rank : int, optional
            Maximum rank for the Broyden matrix.
            Default is infinity (i.e., no rank reduction).
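
    Examples
    --------
    A minimal sketch, reusing the system from the `root` docstring and
    assuming *broyden2* converges from the origin:

    >>> import numpy as np
    >>> from scipy import optimize
    >>> def fun(x):
    ...     return [x[0] + 0.5 * (x[0] - x[1])**3 - 1.0,
    ...             0.5 * (x[1] - x[0])**3 + x[1]]
    ...
    >>> sol = optimize.root(fun, [0, 0], method='broyden2', tol=1e-10)
    >>> np.allclose(fun(sol.x), [0.0, 0.0], atol=1e-8)
    True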

495 """ 

496 pass 


def _root_anderson_doc():
    """
    Options
    -------
    nit : int, optional
        Number of iterations to make. If omitted (default), make as many
        as required to meet tolerances.
    disp : bool, optional
        Print status to stdout on every iteration.
    maxiter : int, optional
        Maximum number of iterations to make. If more are needed to
        meet convergence, `NoConvergence` is raised.
    ftol : float, optional
        Relative tolerance for the residual. If omitted, not used.
    fatol : float, optional
        Absolute tolerance (in max-norm) for the residual.
        If omitted, default is 6e-6.
    xtol : float, optional
        Relative minimum step size. If omitted, not used.
    xatol : float, optional
        Absolute minimum step size, as determined from the Jacobian
        approximation. If the step size is smaller than this, optimization
        is terminated as successful. If omitted, not used.
    tol_norm : function(vector) -> scalar, optional
        Norm to use in convergence check. Default is the maximum norm.
    line_search : {None, 'armijo' (default), 'wolfe'}, optional
        Which type of line search to use to determine the step size in
        the direction given by the Jacobian approximation. Defaults to
        'armijo'.
    jac_options : dict, optional
        Options for the respective Jacobian approximation.

        alpha : float, optional
            Initial guess for the Jacobian is (-1/alpha).
        M : int, optional
            Number of previous vectors to retain. Defaults to 5.
        w0 : float, optional
            Regularization parameter for numerical stability.
            Good values are of the order of 0.01, compared to unity.
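
    Examples
    --------
    An illustrative sketch showing how the ``M`` and ``w0`` entries of
    ``jac_options`` are passed through; whether Anderson mixing converges
    here depends on the problem, so check ``res.success``:

    >>> import numpy as np
    >>> from scipy import optimize
    >>> def func(x):
    ...     return np.cos(x) + x[::-1] - [1, 2, 3, 4]
    ...
    >>> res = optimize.root(func, [1, 1, 1, 1], method='anderson',
    ...                     options={'jac_options': {'M': 5, 'w0': 0.01}})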

537 """ 

538 pass 


def _root_linearmixing_doc():
    """
    Options
    -------
    nit : int, optional
        Number of iterations to make. If omitted (default), make as many
        as required to meet tolerances.
    disp : bool, optional
        Print status to stdout on every iteration.
    maxiter : int, optional
        Maximum number of iterations to make. If more are needed to
        meet convergence, `NoConvergence` is raised.
    ftol : float, optional
        Relative tolerance for the residual. If omitted, not used.
    fatol : float, optional
        Absolute tolerance (in max-norm) for the residual.
        If omitted, default is 6e-6.
    xtol : float, optional
        Relative minimum step size. If omitted, not used.
    xatol : float, optional
        Absolute minimum step size, as determined from the Jacobian
        approximation. If the step size is smaller than this, optimization
        is terminated as successful. If omitted, not used.
    tol_norm : function(vector) -> scalar, optional
        Norm to use in convergence check. Default is the maximum norm.
    line_search : {None, 'armijo' (default), 'wolfe'}, optional
        Which type of line search to use to determine the step size in
        the direction given by the Jacobian approximation. Defaults to
        'armijo'.
    jac_options : dict, optional
        Options for the respective Jacobian approximation.

        alpha : float, optional
            Initial guess for the Jacobian is (-1/alpha).
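
    Examples
    --------
    An illustrative sketch of passing ``alpha`` through ``jac_options``; as
    the warning in `root` notes, whether *linearmixing* converges is
    strongly problem-dependent, so ``res.success`` should be checked:

    >>> import numpy as np
    >>> from scipy import optimize
    >>> def func(x):
    ...     return np.cos(x) + x[::-1] - [1, 2, 3, 4]
    ...
    >>> res = optimize.root(func, [1, 1, 1, 1], method='linearmixing',
    ...                     options={'maxiter': 200,
    ...                              'jac_options': {'alpha': 0.5}})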

574 """ 

575 pass 


def _root_diagbroyden_doc():
    """
    Options
    -------
    nit : int, optional
        Number of iterations to make. If omitted (default), make as many
        as required to meet tolerances.
    disp : bool, optional
        Print status to stdout on every iteration.
    maxiter : int, optional
        Maximum number of iterations to make. If more are needed to
        meet convergence, `NoConvergence` is raised.
    ftol : float, optional
        Relative tolerance for the residual. If omitted, not used.
    fatol : float, optional
        Absolute tolerance (in max-norm) for the residual.
        If omitted, default is 6e-6.
    xtol : float, optional
        Relative minimum step size. If omitted, not used.
    xatol : float, optional
        Absolute minimum step size, as determined from the Jacobian
        approximation. If the step size is smaller than this, optimization
        is terminated as successful. If omitted, not used.
    tol_norm : function(vector) -> scalar, optional
        Norm to use in convergence check. Default is the maximum norm.
    line_search : {None, 'armijo' (default), 'wolfe'}, optional
        Which type of line search to use to determine the step size in
        the direction given by the Jacobian approximation. Defaults to
        'armijo'.
    jac_options : dict, optional
        Options for the respective Jacobian approximation.

        alpha : float, optional
            Initial guess for the Jacobian is (-1/alpha).
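
    Examples
    --------
    An illustrative sketch of passing ``alpha`` through ``jac_options``; as
    the warning in `root` notes, whether *diagbroyden* converges is
    strongly problem-dependent, so ``res.success`` should be checked:

    >>> import numpy as np
    >>> from scipy import optimize
    >>> def func(x):
    ...     return np.cos(x) + x[::-1] - [1, 2, 3, 4]
    ...
    >>> res = optimize.root(func, [1, 1, 1, 1], method='diagbroyden',
    ...                     options={'jac_options': {'alpha': 1.0}})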

611 """ 

612 pass 


def _root_excitingmixing_doc():
    """
    Options
    -------
    nit : int, optional
        Number of iterations to make. If omitted (default), make as many
        as required to meet tolerances.
    disp : bool, optional
        Print status to stdout on every iteration.
    maxiter : int, optional
        Maximum number of iterations to make. If more are needed to
        meet convergence, `NoConvergence` is raised.
    ftol : float, optional
        Relative tolerance for the residual. If omitted, not used.
    fatol : float, optional
        Absolute tolerance (in max-norm) for the residual.
        If omitted, default is 6e-6.
    xtol : float, optional
        Relative minimum step size. If omitted, not used.
    xatol : float, optional
        Absolute minimum step size, as determined from the Jacobian
        approximation. If the step size is smaller than this, optimization
        is terminated as successful. If omitted, not used.
    tol_norm : function(vector) -> scalar, optional
        Norm to use in convergence check. Default is the maximum norm.
    line_search : {None, 'armijo' (default), 'wolfe'}, optional
        Which type of line search to use to determine the step size in
        the direction given by the Jacobian approximation. Defaults to
        'armijo'.
    jac_options : dict, optional
        Options for the respective Jacobian approximation.

        alpha : float, optional
            Initial Jacobian approximation is (-1/alpha).
        alphamax : float, optional
            The entries of the diagonal Jacobian are kept in the range
            ``[alpha, alphamax]``.
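
    Examples
    --------
    An illustrative sketch of passing ``alpha`` and ``alphamax`` through
    ``jac_options``; as the warning in `root` notes, whether
    *excitingmixing* converges is strongly problem-dependent, so
    ``res.success`` should be checked:

    >>> import numpy as np
    >>> from scipy import optimize
    >>> def func(x):
    ...     return np.cos(x) + x[::-1] - [1, 2, 3, 4]
    ...
    >>> res = optimize.root(func, [1, 1, 1, 1], method='excitingmixing',
    ...                     options={'jac_options': {'alpha': 0.5,
    ...                                              'alphamax': 2.0}})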

651 """ 

652 pass 


def _root_krylov_doc():
    """
    Options
    -------
    nit : int, optional
        Number of iterations to make. If omitted (default), make as many
        as required to meet tolerances.
    disp : bool, optional
        Print status to stdout on every iteration.
    maxiter : int, optional
        Maximum number of iterations to make. If more are needed to
        meet convergence, `NoConvergence` is raised.
    ftol : float, optional
        Relative tolerance for the residual. If omitted, not used.
    fatol : float, optional
        Absolute tolerance (in max-norm) for the residual.
        If omitted, default is 6e-6.
    xtol : float, optional
        Relative minimum step size. If omitted, not used.
    xatol : float, optional
        Absolute minimum step size, as determined from the Jacobian
        approximation. If the step size is smaller than this, optimization
        is terminated as successful. If omitted, not used.
    tol_norm : function(vector) -> scalar, optional
        Norm to use in convergence check. Default is the maximum norm.
    line_search : {None, 'armijo' (default), 'wolfe'}, optional
        Which type of line search to use to determine the step size in
        the direction given by the Jacobian approximation. Defaults to
        'armijo'.
    jac_options : dict, optional
        Options for the respective Jacobian approximation.

        rdiff : float, optional
            Relative step size to use in numerical differentiation.
        method : str or callable, optional
            Krylov method to use to approximate the Jacobian. Can be a
            string, or a function implementing the same interface as the
            iterative solvers in `scipy.sparse.linalg`. If a string, needs
            to be one of: ``'lgmres'``, ``'gmres'``, ``'bicgstab'``,
            ``'cgs'``, ``'minres'``, ``'tfqmr'``.

            The default is `scipy.sparse.linalg.lgmres`.
        inner_M : LinearOperator or InverseJacobian
            Preconditioner for the inner Krylov iteration.
            Note that you can also use inverse Jacobians as (adaptive)
            preconditioners. For example,

            >>> jac = BroydenFirst()
            >>> kjac = KrylovJacobian(inner_M=jac.inverse)

            If the preconditioner has a method named 'update', it will
            be called as ``update(x, f)`` after each nonlinear step,
            with ``x`` giving the current point, and ``f`` the current
            function value.
        inner_tol, inner_maxiter, ...
            Parameters to pass on to the "inner" Krylov solver.
            See `scipy.sparse.linalg.gmres` for details.
        outer_k : int, optional
            Size of the subspace kept across LGMRES nonlinear
            iterations.

            See `scipy.sparse.linalg.lgmres` for details.
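
    Examples
    --------
    An illustrative sketch selecting the inner Krylov solver and the
    LGMRES subspace size through ``jac_options``; convergence is
    problem-dependent and should be checked via ``res.success``:

    >>> import numpy as np
    >>> from scipy import optimize
    >>> def func(x):
    ...     return np.cos(x) + x[::-1] - [1, 2, 3, 4]
    ...
    >>> res = optimize.root(func, [1, 1, 1, 1], method='krylov',
    ...                     options={'jac_options': {'method': 'lgmres',
    ...                                              'outer_k': 10}})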

716 """ 

717 pass