# sqlite/pysqlite.py
# Copyright (C) 2005-2023 the SQLAlchemy authors and contributors
# <see AUTHORS file>
#
# This module is part of SQLAlchemy and is released under
# the MIT License: https://www.opensource.org/licenses/mit-license.php

r"""
.. dialect:: sqlite+pysqlite
    :name: pysqlite
    :dbapi: sqlite3
    :connectstring: sqlite+pysqlite:///file_path
    :url: https://docs.python.org/library/sqlite3.html

    Note that ``pysqlite`` is the same driver as the ``sqlite3``
    module included with the Python distribution.

Driver
------

The ``sqlite3`` Python DBAPI is standard on all modern Python versions;
for CPython and PyPy, no additional installation is necessary.
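
A quick way to confirm which driver is in use and which SQLite library it was
built against is to query the ``sqlite3`` module directly (a small sketch; both
attributes are part of the standard library)::

    import sqlite3

    # version of the sqlite3 (pysqlite) DBAPI module itself
    print(sqlite3.version)
    # version of the underlying SQLite library
    print(sqlite3.sqlite_version)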

Connect Strings
---------------

The file specification for the SQLite database is taken as the "database"
portion of the URL. Note that the format of a SQLAlchemy url is::

    driver://user:pass@host/database

This means that the actual filename to be used starts with the characters to
the **right** of the third slash. So connecting to a relative filepath
looks like::

    # relative path
    e = create_engine('sqlite:///path/to/database.db')

An absolute path, which is denoted by starting with a slash, means you
need **four** slashes::

    # absolute path
    e = create_engine('sqlite:////path/to/database.db')

To use a Windows path, regular drive specifications and backslashes can be
used; within a Python string literal, each backslash must be doubled
(escaped)::

    # absolute path on Windows
    e = create_engine('sqlite:///C:\\path\\to\\database.db')

The sqlite ``:memory:`` identifier is the default if no filepath is
present. Specify ``sqlite://`` and nothing else::

    # in-memory database
    e = create_engine('sqlite://')

.. _pysqlite_uri_connections:

URI Connections
^^^^^^^^^^^^^^^

Modern versions of SQLite support an alternative system of connecting using a
`driver level URI <https://www.sqlite.org/uri.html>`_, which has the advantage
that additional driver-level arguments can be passed including options such as
"read only". The Python sqlite3 driver supports this mode under modern Python
3 versions. The SQLAlchemy pysqlite driver supports this mode of use by
specifying "uri=true" in the URL query string. The SQLite-level "URI" is kept
as the "database" portion of the SQLAlchemy url (that is, following a slash)::

    e = create_engine("sqlite:///file:path/to/database?mode=ro&uri=true")

.. note:: The "uri=true" parameter must appear in the **query string**
   of the URL. It will not currently work as expected if it is only
   present in the :paramref:`_sa.create_engine.connect_args`
   parameter dictionary.

The logic reconciles the simultaneous presence of SQLAlchemy's query string and
SQLite's query string by separating out the parameters that belong to the
Python sqlite3 driver vs. those that belong to the SQLite URI. This is
achieved through the use of a fixed list of parameters known to be accepted by
the Python side of the driver. For example, to include a URL that indicates
the Python sqlite3 "timeout" and "check_same_thread" parameters, along with the
SQLite "mode" and "nolock" parameters, they can all be passed together on the
query string::

    e = create_engine(
        "sqlite:///file:path/to/database?"
        "check_same_thread=true&timeout=10&mode=ro&nolock=1&uri=true"
    )

Above, the pysqlite / sqlite3 DBAPI would be passed arguments as::

    sqlite3.connect(
        "file:path/to/database?mode=ro&nolock=1",
        check_same_thread=True, timeout=10, uri=True
    )

Regarding future parameters added to either the Python or native drivers: new
parameter names added to the SQLite URI scheme should be automatically
accommodated by this scheme. New parameter names added to the Python driver
side can be accommodated by specifying them in the
:paramref:`_sa.create_engine.connect_args` dictionary, until dialect support is
added by SQLAlchemy. For the less likely case that the native SQLite driver
adds a new parameter name that overlaps with one of the existing, known Python
driver parameters (such as "timeout" perhaps), SQLAlchemy's dialect would
require adjustment for the URL scheme to continue to support this.
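
For example, a hypothetical new driver-level parameter that is not yet part of
the dialect's known list could be passed straight through to
``sqlite3.connect()`` using :paramref:`_sa.create_engine.connect_args`
(a sketch only; ``some_new_parameter`` is an illustrative, made-up name)::

    engine = create_engine(
        "sqlite:///path/to/database.db",
        # hypothetical parameter, shown only to illustrate the mechanism
        connect_args={"some_new_parameter": "some_value"},
    )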

As is always the case for all SQLAlchemy dialects, the entire "URL" process
can be bypassed in :func:`_sa.create_engine` through the use of the
:paramref:`_sa.create_engine.creator` parameter, which allows for a custom
callable that creates a Python sqlite3 driver level connection directly.
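
As a brief sketch of that approach, the callable below opens the sqlite3
connection itself, here using a read-only SQLite URI (the file path is
illustrative)::

    import sqlite3

    def creator():
        return sqlite3.connect("file:path/to/database?mode=ro", uri=True)

    engine = create_engine("sqlite://", creator=creator)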

.. versionadded:: 1.3.9

.. seealso::

    `Uniform Resource Identifiers <https://www.sqlite.org/uri.html>`_ - in
    the SQLite documentation

.. _pysqlite_regexp:

Regular Expression Support
---------------------------

.. versionadded:: 1.4

Support for the :meth:`_sql.ColumnOperators.regexp_match` operator is provided
using Python's re.search_ function. SQLite itself does not include a working
regular expression operator; instead, it includes a non-implemented placeholder
operator ``REGEXP`` that calls a user-defined function that must be provided.

SQLAlchemy's implementation makes use of the pysqlite create_function_ hook
as follows::


    def regexp(a, b):
        return re.search(a, b) is not None

    sqlite_connection.create_function(
        "regexp", 2, regexp,
    )

There is currently no support for regular expression flags as a separate
argument, as these are not supported by SQLite's REGEXP operator; however, they
may be included inline within the regular expression string. See
`Python regular expressions`_ for details.
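
For example, a case-insensitive match can be requested with an inline ``(?i)``
flag inside the pattern itself (a sketch; ``my_table`` is a hypothetical
table)::

    from sqlalchemy import select

    stmt = select(my_table).where(my_table.c.name.regexp_match(r"(?i)^sql"))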

.. seealso::

    `Python regular expressions`_: Documentation for Python's regular expression syntax.

.. _create_function: https://docs.python.org/3/library/sqlite3.html#sqlite3.Connection.create_function

.. _re.search: https://docs.python.org/3/library/re.html#re.search

.. _Python regular expressions: https://docs.python.org/3/library/re.html#re.search


Compatibility with sqlite3 "native" date and datetime types
-----------------------------------------------------------

The pysqlite driver includes the sqlite3.PARSE_DECLTYPES and
sqlite3.PARSE_COLNAMES options, which have the effect that any column
or expression explicitly cast as "date" or "timestamp" will be converted
to a Python date or datetime object. The date and datetime types provided
with the pysqlite dialect are not currently compatible with these options,
since they render the ISO date/datetime including microseconds, which
pysqlite's driver does not. Additionally, SQLAlchemy does not at
this time automatically render the "cast" syntax required for the
freestanding functions "current_timestamp" and "current_date" to return
datetime/date types natively. Unfortunately, pysqlite
does not provide the standard DBAPI types in ``cursor.description``,
leaving SQLAlchemy with no way to detect these types on the fly
without expensive per-row type checks.

Keeping in mind that pysqlite's parsing option is not recommended,
nor should it be necessary, for use with SQLAlchemy, usage of PARSE_DECLTYPES
can be forced if one configures "native_datetime=True" on create_engine()::

    engine = create_engine(
        'sqlite://',
        connect_args={
            'detect_types': sqlite3.PARSE_DECLTYPES | sqlite3.PARSE_COLNAMES
        },
        native_datetime=True,
    )

With this flag enabled, the DATE and TIMESTAMP types (but note - not the
DATETIME or TIME types...confused yet?) will not perform any bind parameter
or result processing. Execution of "func.current_date()" will return a string.
"func.current_timestamp()" is registered as returning a DATETIME type in
SQLAlchemy, so this function still receives SQLAlchemy-level result
processing.
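
A short sketch of that distinction, assuming an engine configured with
``native_datetime=True`` as in the previous example (illustrative only)::

    from sqlalchemy import func, select

    with engine.connect() as conn:
        # returned as a plain string; DATE performs no result processing here
        print(conn.scalar(select(func.current_date())))

        # returned as a Python datetime; DATETIME result processing still applies
        print(conn.scalar(select(func.current_timestamp())))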

.. _pysqlite_threading_pooling:

Threading/Pooling Behavior
---------------------------

Pysqlite's default behavior is to prohibit the usage of a single connection
in more than one thread. This is originally intended to work with older
versions of SQLite that did not support multithreaded operation under
various circumstances. In particular, older SQLite versions
did not allow a ``:memory:`` database to be used in multiple threads
under any circumstances.

Pysqlite does include a now-undocumented flag known as
``check_same_thread`` which will disable this check; however, note that
pysqlite connections are still not safe to use concurrently in multiple
threads. In particular, any statement execution calls would need to be
externally mutexed, as Pysqlite does not provide for thread-safe propagation
of error messages among other things. So while even ``:memory:`` databases
can be shared among threads in modern SQLite, Pysqlite doesn't provide enough
thread-safety to make this usage worth it.

SQLAlchemy sets up pooling to work with Pysqlite's default behavior, as
illustrated in the sketch following this list:

* When a ``:memory:`` SQLite database is specified, the dialect by default
  will use :class:`.SingletonThreadPool`. This pool maintains a single
  connection per thread, so that all access to the engine within the current
  thread uses the same ``:memory:`` database - other threads would access a
  different ``:memory:`` database.
* When a file-based database is specified, the dialect will use
  :class:`.NullPool` as the source of connections. This pool closes and
  discards connections which are returned to the pool immediately. SQLite
  file-based connections have extremely low overhead, so pooling is not
  necessary. The scheme also prevents a connection from being used again in
  a different thread and works best with SQLite's coarse-grained file locking.
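
A minimal sketch of this default selection, using the ``Engine.pool``
attribute (the engines below are only constructed; no connection is opened
and no database file is created at this point)::

    from sqlalchemy import create_engine
    from sqlalchemy.pool import NullPool, SingletonThreadPool

    memory_engine = create_engine("sqlite://")
    file_engine = create_engine("sqlite:///some_file.db")

    assert isinstance(memory_engine.pool, SingletonThreadPool)
    assert isinstance(file_engine.pool, NullPool)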

Using a Memory Database in Multiple Threads
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

To use a ``:memory:`` database in a multithreaded scenario, the same
connection object must be shared among threads, since the database exists
only within the scope of that connection. The
:class:`.StaticPool` implementation will maintain a single connection
globally, and the ``check_same_thread`` flag can be passed to Pysqlite
as ``False``::

    from sqlalchemy.pool import StaticPool

    engine = create_engine(
        'sqlite://',
        connect_args={'check_same_thread': False},
        poolclass=StaticPool,
    )

Note that using a ``:memory:`` database in multiple threads requires a recent
version of SQLite.

Using Temporary Tables with SQLite
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

Due to the way SQLite deals with temporary tables, if you wish to use a
temporary table in a file-based SQLite database across multiple checkouts
from the connection pool, such as when using an ORM :class:`.Session` where
the temporary table should continue to remain after :meth:`.Session.commit` or
:meth:`.Session.rollback` is called, a pool which maintains a single
connection must be used. Use :class:`.SingletonThreadPool` if the scope is
only needed within the current thread, or :class:`.StaticPool` if scope is
needed within multiple threads for this case::

    # maintain the same connection per thread
    from sqlalchemy.pool import SingletonThreadPool
    engine = create_engine('sqlite:///mydb.db',
                           poolclass=SingletonThreadPool)


    # maintain the same connection across all threads
    from sqlalchemy.pool import StaticPool
    engine = create_engine('sqlite:///mydb.db',
                           poolclass=StaticPool)

Note that :class:`.SingletonThreadPool` should be configured for the number
of threads that are to be used; beyond that number, connections will be
closed out in a non-deterministic way.
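
As a sketch of that configuration, the pool size may be passed through
``create_engine()``, which forwards it to the pool (assuming here that up to
ten threads will use the engine concurrently)::

    from sqlalchemy.pool import SingletonThreadPool

    engine = create_engine(
        'sqlite:///mydb.db',
        poolclass=SingletonThreadPool,
        pool_size=10,
    )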

Unicode
-------

The pysqlite driver only returns Python ``unicode`` objects in result sets,
never plain strings, and accommodates ``unicode`` objects within bound
parameter values in all cases. Regardless of the SQLAlchemy string type in
use, string-based result values will be Python ``unicode`` in Python 2.
The :class:`.Unicode` type should still be used to indicate those columns that
require unicode, however, so that non-``unicode`` values passed inadvertently
will emit a warning. Pysqlite will emit an error if a non-``unicode`` string
is passed containing non-ASCII characters.

Dealing with Mixed String / Binary Columns in Python 3
------------------------------------------------------

The SQLite database is weakly typed, and as such it is possible when using
binary values, which in Python 3 are represented as ``b'some string'``, that a
particular SQLite database can have data values within different rows where
some of them will be returned as a ``b''`` value by the Pysqlite driver, and
others will be returned as Python strings, e.g. ``''`` values. This situation
is not known to occur if the SQLAlchemy :class:`.LargeBinary` datatype is used
consistently, however if a particular SQLite database has data that was
inserted using the Pysqlite driver directly, or when using the SQLAlchemy
:class:`.String` type which was later changed to :class:`.LargeBinary`, the
table will not be consistently readable because SQLAlchemy's
:class:`.LargeBinary` datatype does not handle strings so it has no way of
"encoding" a value that is in string format.

To deal with a SQLite table that has mixed string / binary data in the
same column, use a custom type that will check each row individually::

    # note this is Python 3 only

    from sqlalchemy import String
    from sqlalchemy import TypeDecorator

    class MixedBinary(TypeDecorator):
        impl = String
        cache_ok = True

        def process_result_value(self, value, dialect):
            if isinstance(value, str):
                value = bytes(value, 'utf-8')
            elif value is not None:
                value = bytes(value)

            return value

Then use the above ``MixedBinary`` datatype in the place where
:class:`.LargeBinary` would normally be used.
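
For example, in a hypothetical table definition (the table and column names
are illustrative only)::

    from sqlalchemy import Column, Integer, MetaData, Table

    metadata_obj = MetaData()

    documents = Table(
        "documents",
        metadata_obj,
        Column("id", Integer, primary_key=True),
        # column whose existing rows may contain either str or bytes values
        Column("data", MixedBinary),
    )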

.. _pysqlite_serializable:

Serializable isolation / Savepoints / Transactional DDL
-------------------------------------------------------

In the section :ref:`sqlite_concurrency`, we refer to the pysqlite
driver's assortment of issues that prevent several features of SQLite
from working correctly. The pysqlite DBAPI driver has several
long-standing bugs which impact the correctness of its transactional
behavior. In its default mode of operation, SQLite features such as
SERIALIZABLE isolation, transactional DDL, and SAVEPOINT support are
non-functional, and in order to use these features, workarounds must
be taken.

The issue is essentially that the driver attempts to second-guess the user's
intent, failing to start transactions and sometimes ending them prematurely, in
an effort to minimize the SQLite database's file locking behavior, even
though SQLite itself uses "shared" locks for read-only activities.

SQLAlchemy chooses to not alter this behavior by default, as it is the
long-expected behavior of the pysqlite driver; if and when the pysqlite
driver repairs these issues, that will help drive SQLAlchemy's defaults as
well.

The good news is that with a few events, we can implement transactional
support fully, by disabling pysqlite's transactional handling entirely and
emitting BEGIN ourselves. This is achieved using two event listeners::

    from sqlalchemy import create_engine, event

    engine = create_engine("sqlite:///myfile.db")

    @event.listens_for(engine, "connect")
    def do_connect(dbapi_connection, connection_record):
        # disable pysqlite's emitting of the BEGIN statement entirely.
        # also stops it from emitting COMMIT before any DDL.
        dbapi_connection.isolation_level = None

    @event.listens_for(engine, "begin")
    def do_begin(conn):
        # emit our own BEGIN
        conn.exec_driver_sql("BEGIN")

.. warning:: When using the above recipe, it is advised to not use the
   :paramref:`.Connection.execution_options.isolation_level` setting on
   :class:`_engine.Connection` and :func:`_sa.create_engine`
   with the SQLite driver,
   as this function necessarily will also alter the ".isolation_level" setting.

Above, we intercept a new pysqlite connection and disable any transactional
integration. Then, at the point at which SQLAlchemy knows that transaction
scope is to begin, we emit ``"BEGIN"`` ourselves.

When we take control of ``"BEGIN"``, we can also directly control SQLite's
locking modes, introduced at
`BEGIN TRANSACTION <https://sqlite.org/lang_transaction.html>`_,
by adding the desired locking mode to our ``"BEGIN"``::

    @event.listens_for(engine, "begin")
    def do_begin(conn):
        conn.exec_driver_sql("BEGIN EXCLUSIVE")

.. seealso::

    `BEGIN TRANSACTION <https://sqlite.org/lang_transaction.html>`_ -
    on the SQLite site

    `sqlite3 SELECT does not BEGIN a transaction <https://bugs.python.org/issue9924>`_ -
    on the Python bug tracker

    `sqlite3 module breaks transactions and potentially corrupts data <https://bugs.python.org/issue10740>`_ -
    on the Python bug tracker

.. _pysqlite_udfs:

User-Defined Functions
----------------------

pysqlite supports a `create_function() <https://docs.python.org/3/library/sqlite3.html#sqlite3.Connection.create_function>`_
method that allows us to create our own user-defined functions (UDFs) in Python
and use them directly in SQLite queries. These functions are registered with a
specific DBAPI Connection.

SQLAlchemy uses connection pooling with file-based SQLite databases, so we need
to ensure that the UDF is attached to the connection when it is created. That
is accomplished with an event listener::

    from sqlalchemy import create_engine
    from sqlalchemy import event
    from sqlalchemy import text


    def udf():
        return "udf-ok"


    engine = create_engine("sqlite:///./db_file")


    @event.listens_for(engine, "connect")
    def connect(conn, rec):
        conn.create_function("udf", 0, udf)


    for i in range(5):
        with engine.connect() as conn:
            print(conn.scalar(text("SELECT UDF()")))


"""  # noqa

import os
import re

from .base import DATE
from .base import DATETIME
from .base import SQLiteDialect
from ... import exc
from ... import pool
from ... import types as sqltypes
from ... import util


class _SQLite_pysqliteTimeStamp(DATETIME):
    def bind_processor(self, dialect):
        if dialect.native_datetime:
            return None
        else:
            return DATETIME.bind_processor(self, dialect)

    def result_processor(self, dialect, coltype):
        if dialect.native_datetime:
            return None
        else:
            return DATETIME.result_processor(self, dialect, coltype)


class _SQLite_pysqliteDate(DATE):
    def bind_processor(self, dialect):
        if dialect.native_datetime:
            return None
        else:
            return DATE.bind_processor(self, dialect)

    def result_processor(self, dialect, coltype):
        if dialect.native_datetime:
            return None
        else:
            return DATE.result_processor(self, dialect, coltype)


class SQLiteDialect_pysqlite(SQLiteDialect):
    default_paramstyle = "qmark"
    supports_statement_cache = True

    colspecs = util.update_copy(
        SQLiteDialect.colspecs,
        {
            sqltypes.Date: _SQLite_pysqliteDate,
            sqltypes.TIMESTAMP: _SQLite_pysqliteTimeStamp,
        },
    )

    if not util.py2k:
        description_encoding = None

    driver = "pysqlite"

    @classmethod
    def dbapi(cls):
        if util.py2k:
            try:
                from pysqlite2 import dbapi2 as sqlite
            except ImportError:
                try:
                    from sqlite3 import dbapi2 as sqlite
                except ImportError as e:
                    raise e
        else:
            from sqlite3 import dbapi2 as sqlite
        return sqlite

    @classmethod
    def _is_url_file_db(cls, url):
        if (url.database and url.database != ":memory:") and (
            url.query.get("mode", None) != "memory"
        ):
            return True
        else:
            return False

    @classmethod
    def get_pool_class(cls, url):
        if cls._is_url_file_db(url):
            return pool.NullPool
        else:
            return pool.SingletonThreadPool

    def _get_server_version_info(self, connection):
        return self.dbapi.sqlite_version_info

    _isolation_lookup = SQLiteDialect._isolation_lookup.union(
        {
            "AUTOCOMMIT": None,
        }
    )

    def set_isolation_level(self, connection, level):
        if hasattr(connection, "dbapi_connection"):
            dbapi_connection = connection.dbapi_connection
        else:
            dbapi_connection = connection

        if level == "AUTOCOMMIT":
            dbapi_connection.isolation_level = None
        else:
            dbapi_connection.isolation_level = ""
            return super(SQLiteDialect_pysqlite, self).set_isolation_level(
                connection, level
            )

    def on_connect(self):
        connect = super(SQLiteDialect_pysqlite, self).on_connect()

        def regexp(a, b):
            if b is None:
                return None
            return re.search(a, b) is not None

        def set_regexp(connection):
            if hasattr(connection, "dbapi_connection"):
                dbapi_connection = connection.dbapi_connection
            else:
                dbapi_connection = connection
            dbapi_connection.create_function(
                "regexp",
                2,
                regexp,
            )

        fns = [set_regexp]

        if self.isolation_level is not None:

            def iso_level(conn):
                self.set_isolation_level(conn, self.isolation_level)

            fns.append(iso_level)

        def connect(conn):
            for fn in fns:
                fn(conn)

        return connect

    def create_connect_args(self, url):
        if url.username or url.password or url.host or url.port:
            raise exc.ArgumentError(
                "Invalid SQLite URL: %s\n"
                "Valid SQLite URL forms are:\n"
                " sqlite:///:memory: (or, sqlite://)\n"
                " sqlite:///relative/path/to/file.db\n"
                " sqlite:////absolute/path/to/file.db" % (url,)
            )

        # theoretically, this list can be augmented, at least as far as
        # parameter names accepted by sqlite3/pysqlite, using
        # inspect.getfullargspec().  for the moment this seems like overkill
        # as these parameters don't change very often, and as always,
        # parameters passed to connect_args will always go to the
        # sqlite3/pysqlite driver.
        pysqlite_args = [
            ("uri", bool),
            ("timeout", float),
            ("isolation_level", str),
            ("detect_types", int),
            ("check_same_thread", bool),
            ("cached_statements", int),
        ]
        opts = url.query
        pysqlite_opts = {}
        for key, type_ in pysqlite_args:
            util.coerce_kw_type(opts, key, type_, dest=pysqlite_opts)

        if pysqlite_opts.get("uri", False):
            uri_opts = dict(opts)
            # here, we are actually separating the parameters that go to
            # sqlite3/pysqlite vs. those that go the SQLite URI.  What if
            # two names conflict?  again, this seems to be not the case right
            # now, and in the case that new names are added to
            # either side which overlap, again the sqlite3/pysqlite parameters
            # can be passed through connect_args instead of in the URL.
            # If SQLite native URIs add a parameter like "timeout" that
            # we already have listed here for the python driver, then we need
            # to adjust for that here.
            for key, type_ in pysqlite_args:
                uri_opts.pop(key, None)
            filename = url.database
            if uri_opts:
                # sorting of keys is for unit test support
                filename += "?" + (
                    "&".join(
                        "%s=%s" % (key, uri_opts[key])
                        for key in sorted(uri_opts)
                    )
                )
        else:
            filename = url.database or ":memory:"
            if filename != ":memory:":
                filename = os.path.abspath(filename)

        return ([filename], pysqlite_opts)

    def is_disconnect(self, e, connection, cursor):
        return isinstance(
            e, self.dbapi.ProgrammingError
        ) and "Cannot operate on a closed database." in str(e)


dialect = SQLiteDialect_pysqlite