Coverage for /pythoncovmergedfiles/medio/medio/usr/local/lib/python3.11/site-packages/SQLAlchemy-1.3.25.dev0-py3.11-linux-x86_64.egg/sqlalchemy/dialects/sqlite/base.py: 35%


635 statements  

1# sqlite/base.py 

2# Copyright (C) 2005-2021 the SQLAlchemy authors and contributors 

3# <see AUTHORS file> 

4# 

5# This module is part of SQLAlchemy and is released under 

6# the MIT License: http://www.opensource.org/licenses/mit-license.php 

7 

8r""" 

9.. dialect:: sqlite 

10 :name: SQLite 

11 

12.. _sqlite_datetime: 

13 

14Date and Time Types 

15------------------- 

16 

17SQLite does not have built-in DATE, TIME, or DATETIME types, and pysqlite does 

18not provide out-of-the-box functionality for translating values between Python 

19`datetime` objects and a SQLite-supported format. SQLAlchemy's own 

20:class:`~sqlalchemy.types.DateTime` and related types provide date formatting 

21and parsing functionality when SQLite is used. The implementation classes are 

22:class:`_sqlite.DATETIME`, :class:`_sqlite.DATE` and :class:`_sqlite.TIME`. 

23These types represent dates and times as ISO formatted strings, which also 

24nicely support ordering. There's no reliance on typical "libc" internals for 

25these functions so historical dates are fully supported. 
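
As a minimal sketch using the standard library ``sqlite3`` driver directly (table and values are arbitrary), ISO-formatted strings compare lexicographically in the same order as the dates they represent, including historical dates:

```python
import datetime
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE events (stamp DATETIME)")
stamps = [
    datetime.datetime(2021, 5, 1, 8, 30),
    datetime.datetime(1752, 1, 1),  # historical dates work as well
    datetime.datetime(2021, 5, 1, 8, 5),
]
conn.executemany(
    "INSERT INTO events (stamp) VALUES (?)",
    [(s.isoformat(" "),) for s in stamps],
)
# Lexicographic ordering of the stored strings is chronological ordering.
rows = [r[0] for r in conn.execute("SELECT stamp FROM events ORDER BY stamp")]
assert rows == sorted(rows)
assert rows[0].startswith("1752")
```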

26 

27Ensuring Text affinity 

28^^^^^^^^^^^^^^^^^^^^^^ 

29 

30The DDL rendered for these types is the standard ``DATE``, ``TIME`` 

31and ``DATETIME`` indicators. However, custom storage formats can also be 

32applied to these types. When the 

33storage format is detected as consisting only of numeric characters, the DDL for 

34these types is rendered as ``DATE_CHAR``, ``TIME_CHAR``, and ``DATETIME_CHAR``, 

35so that the column continues to have textual affinity. 
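
The effect can be observed with the stdlib ``sqlite3`` driver alone; the following sketch (hypothetical table and column names) shows that a value under an all-digit storage format survives intact only when the type name carries text affinity:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (a DATETIME, b DATETIME_CHAR)")
# "DATETIME" matches no affinity keyword, giving NUMERIC affinity, so an
# all-digit string is coerced to an integer; "DATETIME_CHAR" contains
# "CHAR" and therefore has TEXT affinity, preserving the string.
conn.execute("INSERT INTO t (a, b) VALUES ('20210501', '20210501')")
row = conn.execute("SELECT typeof(a), typeof(b) FROM t").fetchone()
assert row == ("integer", "text")
```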

36 

37.. seealso:: 

38 

39 `Type Affinity <http://www.sqlite.org/datatype3.html#affinity>`_ - 

40 in the SQLite documentation 

41 

42.. _sqlite_autoincrement: 

43 

44SQLite Auto Incrementing Behavior 

45---------------------------------- 

46 

47Background on SQLite's autoincrement is at: http://sqlite.org/autoinc.html 

48 

49Key concepts: 

50 

51* SQLite has an implicit "auto increment" feature that takes place for any 

52 non-composite primary-key column that is specifically created using 

53 "INTEGER PRIMARY KEY" for the type + primary key. 

54 

55* SQLite also has an explicit "AUTOINCREMENT" keyword that is **not** 

56 equivalent to the implicit autoincrement feature; this keyword is not 

57 recommended for general use. SQLAlchemy does not render this keyword 

58 unless a special SQLite-specific directive is used (see below). However, 

59 it still requires that the column's type is named "INTEGER". 
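
The implicit behavior can be demonstrated with the stdlib ``sqlite3`` driver (a sketch; names are arbitrary):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
# "INTEGER PRIMARY KEY" makes the column an alias for the rowid, so
# omitting it (or passing NULL) generates the next key automatically.
conn.execute("CREATE TABLE t (id INTEGER PRIMARY KEY, data TEXT)")
cur = conn.execute("INSERT INTO t (data) VALUES ('a')")
assert cur.lastrowid == 1
cur = conn.execute("INSERT INTO t (id, data) VALUES (NULL, 'b')")
assert cur.lastrowid == 2
```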

60 

61Using the AUTOINCREMENT Keyword 

62^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ 

63 

64To specifically render the AUTOINCREMENT keyword on the primary key column 

65when rendering DDL, add the flag ``sqlite_autoincrement=True`` to the Table 

66construct:: 

67 

68 Table('sometable', metadata, 

69 Column('id', Integer, primary_key=True), 

70 sqlite_autoincrement=True) 

71 

72Allowing autoincrement behavior with SQLAlchemy types other than Integer/INTEGER 

73^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ 

74 

75SQLite's typing model is based on naming conventions. Among other things, this 

76means that any type name which contains the substring ``"INT"`` will be 

77determined to be of "integer affinity". A type named ``"BIGINT"``, 

78``"SPECIAL_INT"`` or even ``"XYZINTQPR"``, will be considered by SQLite to be 

79of "integer" affinity. However, **the SQLite autoincrement feature, whether 

80implicitly or explicitly enabled, requires that the name of the column's type 

81is exactly the string "INTEGER"**. Therefore, if an application uses a type 

82like :class:`.BigInteger` for a primary key, on SQLite this type will need to 

83be rendered as the name ``"INTEGER"`` when emitting the initial ``CREATE 

84TABLE`` statement in order for the autoincrement behavior to be available. 
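
A sketch with the stdlib ``sqlite3`` driver shows why: a ``BIGINT`` primary key is not a rowid alias, so SQLite neither generates a key nor rejects the NULL:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
# "BIGINT" has integer affinity but is not the exact string "INTEGER",
# so no rowid aliasing occurs; SQLite even permits the NULL key here.
conn.execute("CREATE TABLE t1 (id BIGINT PRIMARY KEY)")
conn.execute("INSERT INTO t1 (id) VALUES (NULL)")
assert conn.execute("SELECT id FROM t1").fetchone() == (None,)

# The exact name "INTEGER" enables the implicit autoincrement.
conn.execute("CREATE TABLE t2 (id INTEGER PRIMARY KEY)")
conn.execute("INSERT INTO t2 (id) VALUES (NULL)")
assert conn.execute("SELECT id FROM t2").fetchone() == (1,)
```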

85 

86One approach to achieve this is to use :class:`.Integer` on SQLite 

87only using :meth:`.TypeEngine.with_variant`:: 

88 

89 table = Table( 

90 "my_table", metadata, 

91 Column("id", BigInteger().with_variant(Integer, "sqlite"), primary_key=True) 

92 ) 

93 

94Another is to use a subclass of :class:`.BigInteger` that overrides its DDL 

95name to be ``INTEGER`` when compiled against SQLite:: 

96 

97 from sqlalchemy import BigInteger 

98 from sqlalchemy.ext.compiler import compiles 

99 

100 class SLBigInteger(BigInteger): 

101 pass 

102 

103 @compiles(SLBigInteger, 'sqlite') 

104 def bi_c(element, compiler, **kw): 

105 return "INTEGER" 

106 

107 @compiles(SLBigInteger) 

108 def bi_c(element, compiler, **kw): 

109 return compiler.visit_BIGINT(element, **kw) 

110 

111 

112 table = Table( 

113 "my_table", metadata, 

114 Column("id", SLBigInteger(), primary_key=True) 

115 ) 

116 

117.. seealso:: 

118 

119 :meth:`.TypeEngine.with_variant` 

120 

121 :ref:`sqlalchemy.ext.compiler_toplevel` 

122 

123 `Datatypes In SQLite Version 3 <http://sqlite.org/datatype3.html>`_ 

124 

125.. _sqlite_concurrency: 

126 

127Database Locking Behavior / Concurrency 

128--------------------------------------- 

129 

130SQLite is not designed for a high level of write concurrency. The database 

131itself, being a file, is locked completely during write operations within 

132transactions, meaning exactly one "connection" (in reality a file handle) 

133has exclusive access to the database during this period - all other 

134"connections" will be blocked during this time. 

135 

136The Python DBAPI specification also calls for a connection model that is 

137always in a transaction; there is no ``connection.begin()`` method, 

138only ``connection.commit()`` and ``connection.rollback()``, upon which a 

139new transaction is to be begun immediately. This may seem to imply 

140that the SQLite driver would in theory allow only a single filehandle on a 

141particular database file at any time; however, there are several 

142factors both within SQLite itself as well as within the pysqlite driver 

143which loosen this restriction significantly. 

144 

145However, no matter what locking modes are used, SQLite will still always 

146lock the database file once a transaction is started and DML (e.g. INSERT, 

147UPDATE, DELETE) has at least been emitted, and this will block 

148other transactions at least at the point that they also attempt to emit DML. 

149By default, the length of time on this block is very short before it times out 

150with an error. 

151 

152This behavior becomes more critical when used in conjunction with the 

153SQLAlchemy ORM. SQLAlchemy's :class:`.Session` object by default runs 

154within a transaction, and with its autoflush model, may emit DML preceding 

155any SELECT statement. This may lead to a SQLite database that locks 

156more quickly than is expected. The locking mode of SQLite and the pysqlite 

157driver can be manipulated to some degree; however, it should be noted that 

158achieving a high degree of write-concurrency with SQLite is a losing battle. 

159 

160For more information on SQLite's lack of write concurrency by design, please 

161see 

162`Situations Where Another RDBMS May Work Better - High Concurrency 

163<http://www.sqlite.org/whentouse.html>`_ near the bottom of the page. 

164 

165The following subsections introduce areas that are impacted by SQLite's 

166file-based architecture and additionally will usually require workarounds to 

167work when using the pysqlite driver. 

168 

169.. _sqlite_isolation_level: 

170 

171Transaction Isolation Level / Autocommit 

172---------------------------------------- 

173 

174SQLite supports "transaction isolation" in a non-standard way, along two 

175axes. One is that of the 

176`PRAGMA read_uncommitted <http://www.sqlite.org/pragma.html#pragma_read_uncommitted>`_ 

177instruction. This setting can essentially switch SQLite between its 

178default mode of ``SERIALIZABLE`` isolation, and a "dirty read" isolation 

179mode normally referred to as ``READ UNCOMMITTED``. 
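
The PRAGMA itself can be exercised directly with the stdlib ``sqlite3`` driver; setting it to 1 is effectively what the ``"READ UNCOMMITTED"`` isolation level arranges on each connection:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
# The default is SERIALIZABLE, reported as 0.
assert conn.execute("PRAGMA read_uncommitted").fetchone() == (0,)
# "READ UNCOMMITTED" corresponds to a value of 1.
conn.execute("PRAGMA read_uncommitted = 1")
assert conn.execute("PRAGMA read_uncommitted").fetchone() == (1,)
```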

180 

181SQLAlchemy ties into this PRAGMA statement using the 

182:paramref:`_sa.create_engine.isolation_level` parameter of 

183:func:`_sa.create_engine`. 

184Valid values for this parameter when used with SQLite are ``"SERIALIZABLE"`` 

185and ``"READ UNCOMMITTED"`` corresponding to a value of 0 and 1, respectively. 

186SQLite defaults to ``SERIALIZABLE``, however its behavior is impacted by 

187the pysqlite driver's default behavior. 

188 

189When using the pysqlite driver, the ``"AUTOCOMMIT"`` isolation level is also 

190available, which will alter the pysqlite connection using the ``.isolation_level`` 

191attribute on the DBAPI connection and set it to None for the duration 

192of the setting. 

193 

194.. versionadded:: 1.3.16 added support for SQLite AUTOCOMMIT isolation level 

195 when using the pysqlite / sqlite3 SQLite driver. 

196 

197 

198The other axis along which SQLite's transactional locking is impacted is 

199via the nature of the ``BEGIN`` statement used. The three varieties 

200are "deferred", "immediate", and "exclusive", as described at 

201`BEGIN TRANSACTION <http://sqlite.org/lang_transaction.html>`_. A straight 

202``BEGIN`` statement uses the "deferred" mode, where the database file is 

203not locked until the first read or write operation, and read access remains 

204open to other transactions until the first write operation. But again, 

205it is critical to note that the pysqlite driver interferes with this behavior 

206by *not even emitting BEGIN* until the first write operation. 
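
This deferral is observable through the stdlib driver's ``Connection.in_transaction`` attribute (a sketch; the behavior shown is that of the default, transaction-managing mode):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (x INTEGER)")
conn.commit()

# A SELECT alone does not cause pysqlite to emit BEGIN...
conn.execute("SELECT x FROM t").fetchall()
assert not conn.in_transaction

# ...only the first DML statement implicitly begins the transaction.
conn.execute("INSERT INTO t (x) VALUES (1)")
assert conn.in_transaction
```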

207 

208.. warning:: 

209 

210 SQLite's transactional scope is impacted by unresolved 

211 issues in the pysqlite driver, which defers BEGIN statements to a greater 

212 degree than is often feasible. See the section :ref:`pysqlite_serializable` 

213 for techniques to work around this behavior. 

214 

215.. seealso:: 

216 

217 :ref:`dbapi_autocommit` 

218 

219SAVEPOINT Support 

220---------------------------- 

221 

222SQLite supports SAVEPOINTs, which only function once a transaction is 

223begun. SQLAlchemy's SAVEPOINT support is available using the 

224:meth:`_engine.Connection.begin_nested` method at the Core level, and 

225:meth:`.Session.begin_nested` at the ORM level. However, SAVEPOINTs 

226won't work at all with pysqlite unless workarounds are taken. 
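
One such workaround, sketched here with the stdlib ``sqlite3`` driver, is to disable pysqlite's transaction management entirely via ``isolation_level=None`` and issue ``BEGIN`` and ``SAVEPOINT`` by hand:

```python
import sqlite3

# With isolation_level=None, pysqlite stops managing transactions
# itself, so BEGIN / SAVEPOINT can be issued explicitly.
conn = sqlite3.connect(":memory:", isolation_level=None)
conn.execute("CREATE TABLE t (x INTEGER)")
conn.execute("BEGIN")
conn.execute("INSERT INTO t (x) VALUES (1)")
conn.execute("SAVEPOINT sp1")
conn.execute("INSERT INTO t (x) VALUES (2)")
conn.execute("ROLLBACK TO SAVEPOINT sp1")  # undoes only the second insert
conn.execute("COMMIT")
rows = conn.execute("SELECT x FROM t").fetchall()
assert rows == [(1,)]
```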

227 

228.. warning:: 

229 

230 SQLite's SAVEPOINT feature is impacted by unresolved 

231 issues in the pysqlite driver, which defers BEGIN statements to a greater 

232 degree than is often feasible. See the section :ref:`pysqlite_serializable` 

233 for techniques to work around this behavior. 

234 

235Transactional DDL 

236---------------------------- 

237 

238The SQLite database supports transactional :term:`DDL` as well. 

239In this case, the pysqlite driver not only fails to start transactions, 

240it also ends any existing transaction when DDL is detected, so again, 

241workarounds are required. 

242 

243.. warning:: 

244 

245 SQLite's transactional DDL is impacted by unresolved issues 

246 in the pysqlite driver, which fails to emit BEGIN and additionally 

247 forces a COMMIT to cancel any transaction when DDL is encountered. 

248 See the section :ref:`pysqlite_serializable` 

249 for techniques to work around this behavior. 

250 

251.. _sqlite_foreign_keys: 

252 

253Foreign Key Support 

254------------------- 

255 

256SQLite supports FOREIGN KEY syntax when emitting CREATE statements for tables, 

257however by default these constraints have no effect on the operation of the 

258table. 

259 

260Constraint checking on SQLite has three prerequisites: 

261 

262* At least version 3.6.19 of SQLite must be in use 

263* The SQLite library must be compiled *without* the SQLITE_OMIT_FOREIGN_KEY 

264 or SQLITE_OMIT_TRIGGER symbols enabled. 

265* The ``PRAGMA foreign_keys = ON`` statement must be emitted on all 

266 connections before use. 
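
The effect of the third prerequisite can be seen with the stdlib ``sqlite3`` driver alone (``isolation_level=None`` keeps the PRAGMA outside of any transaction, where it would otherwise be a no-op):

```python
import sqlite3

conn = sqlite3.connect(":memory:", isolation_level=None)  # autocommit mode
conn.execute("CREATE TABLE parent (id INTEGER PRIMARY KEY)")
conn.execute(
    "CREATE TABLE child ("
    "id INTEGER PRIMARY KEY, parent_id INTEGER REFERENCES parent(id))"
)
# With the pragma off (the default), a dangling reference is accepted.
conn.execute("INSERT INTO child (parent_id) VALUES (42)")
# Once enabled on this connection, the same insert is rejected.
conn.execute("PRAGMA foreign_keys = ON")
try:
    conn.execute("INSERT INTO child (parent_id) VALUES (99)")
    raised = False
except sqlite3.IntegrityError:
    raised = True
assert raised
```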

267 

268SQLAlchemy allows for the ``PRAGMA`` statement to be emitted automatically for 

269new connections through the usage of events:: 

270 

271 from sqlalchemy.engine import Engine 

272 from sqlalchemy import event 

273 

274 @event.listens_for(Engine, "connect") 

275 def set_sqlite_pragma(dbapi_connection, connection_record): 

276 cursor = dbapi_connection.cursor() 

277 cursor.execute("PRAGMA foreign_keys=ON") 

278 cursor.close() 

279 

280.. warning:: 

281 

282 When SQLite foreign keys are enabled, it is **not possible** 

283 to emit CREATE or DROP statements for tables that contain 

284 mutually-dependent foreign key constraints; 

285 to emit the DDL for these tables requires that ALTER TABLE be used to 

286 create or drop these constraints separately, for which SQLite has 

287 no support. 

288 

289.. seealso:: 

290 

291 `SQLite Foreign Key Support <http://www.sqlite.org/foreignkeys.html>`_ 

292 - on the SQLite web site. 

293 

294 :ref:`event_toplevel` - SQLAlchemy event API. 

295 

296 :ref:`use_alter` - more information on SQLAlchemy's facilities for handling 

297 mutually-dependent foreign key constraints. 

298 

299.. _sqlite_on_conflict_ddl: 

300 

301ON CONFLICT support for constraints 

302----------------------------------- 

303 

304SQLite supports a non-standard clause known as ON CONFLICT which can be applied 

305to primary key, unique, check, and not null constraints. In DDL, it is 

306rendered either within the "CONSTRAINT" clause or within the column definition 

307itself depending on the location of the target constraint. To render this 

308clause within DDL, the extension parameter ``sqlite_on_conflict`` can be 

309specified with a string conflict resolution algorithm within the 

310:class:`.PrimaryKeyConstraint`, :class:`.UniqueConstraint`, 

311:class:`.CheckConstraint` objects. Within the :class:`_schema.Column` object, 

312there 

313are individual parameters ``sqlite_on_conflict_not_null``, 

314``sqlite_on_conflict_primary_key``, and ``sqlite_on_conflict_unique``, which 

315each correspond to the three types of constraints that can be 

316indicated from a :class:`_schema.Column` object. 

317 

318.. seealso:: 

319 

320 `ON CONFLICT <https://www.sqlite.org/lang_conflict.html>`_ - in the SQLite 

321 documentation 

322 

323.. versionadded:: 1.3 

324 

325 

326The ``sqlite_on_conflict`` parameters accept a string argument which is just 

327the resolution name to be chosen, which on SQLite can be one of ROLLBACK, 

328ABORT, FAIL, IGNORE, and REPLACE. For example, to add a UNIQUE constraint 

329that specifies the IGNORE algorithm:: 

330 

331 some_table = Table( 

332 'some_table', metadata, 

333 Column('id', Integer, primary_key=True), 

334 Column('data', Integer), 

335 UniqueConstraint('id', 'data', sqlite_on_conflict='IGNORE') 

336 ) 

337 

338The above renders CREATE TABLE DDL as:: 

339 

340 CREATE TABLE some_table ( 

341 id INTEGER NOT NULL, 

342 data INTEGER, 

343 PRIMARY KEY (id), 

344 UNIQUE (id, data) ON CONFLICT IGNORE 

345 ) 
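
The resulting behavior can be verified with the stdlib ``sqlite3`` driver (hypothetical table; the primary key is omitted so that only the UNIQUE constraint is exercised):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (data INTEGER, UNIQUE (data) ON CONFLICT IGNORE)")
conn.execute("INSERT INTO t (data) VALUES (1)")
# The duplicate row is silently skipped rather than raising an error.
conn.execute("INSERT INTO t (data) VALUES (1)")
count = conn.execute("SELECT count(*) FROM t").fetchone()[0]
assert count == 1
```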

346 

347 

348When using the :paramref:`_schema.Column.unique` 

349flag to add a UNIQUE constraint 

350to a single column, the ``sqlite_on_conflict_unique`` parameter can 

351be added to the :class:`_schema.Column` as well, which will be added to the 

352UNIQUE constraint in the DDL:: 

353 

354 some_table = Table( 

355 'some_table', metadata, 

356 Column('id', Integer, primary_key=True), 

357 Column('data', Integer, unique=True, 

358 sqlite_on_conflict_unique='IGNORE') 

359 ) 

360 

361rendering:: 

362 

363 CREATE TABLE some_table ( 

364 id INTEGER NOT NULL, 

365 data INTEGER, 

366 PRIMARY KEY (id), 

367 UNIQUE (data) ON CONFLICT IGNORE 

368 ) 

369 

370To apply the FAIL algorithm for a NOT NULL constraint, 

371``sqlite_on_conflict_not_null`` is used:: 

372 

373 some_table = Table( 

374 'some_table', metadata, 

375 Column('id', Integer, primary_key=True), 

376 Column('data', Integer, nullable=False, 

377 sqlite_on_conflict_not_null='FAIL') 

378 ) 

379 

380this renders the column inline ON CONFLICT phrase:: 

381 

382 CREATE TABLE some_table ( 

383 id INTEGER NOT NULL, 

384 data INTEGER NOT NULL ON CONFLICT FAIL, 

385 PRIMARY KEY (id) 

386 ) 

387 

388 

389Similarly, for an inline primary key, use ``sqlite_on_conflict_primary_key``:: 

390 

391 some_table = Table( 

392 'some_table', metadata, 

393 Column('id', Integer, primary_key=True, 

394 sqlite_on_conflict_primary_key='FAIL') 

395 ) 

396 

397SQLAlchemy renders the PRIMARY KEY constraint separately, so the conflict 

398resolution algorithm is applied to the constraint itself:: 

399 

400 CREATE TABLE some_table ( 

401 id INTEGER NOT NULL, 

402 PRIMARY KEY (id) ON CONFLICT FAIL 

403 ) 

404 

405.. _sqlite_type_reflection: 

406 

407Type Reflection 

408--------------- 

409 

410SQLite types are unlike those of most other database backends, in that 

411the string name of the type usually does not correspond to a "type" in a 

412one-to-one fashion. Instead, SQLite links per-column typing behavior 

413to one of five so-called "type affinities" based on a string matching 

414pattern for the type. 

415 

416SQLAlchemy's reflection process, when inspecting types, uses a simple 

417lookup table to link the keywords returned to provided SQLAlchemy types. 

418This lookup table is present within the SQLite dialect as it is for all 

419other dialects. However, the SQLite dialect has a different "fallback" 

420routine for when a particular type name is not located in the lookup map; 

421it instead implements the SQLite "type affinity" scheme located at 

422http://www.sqlite.org/datatype3.html section 2.1. 

423 

424The provided typemap will make direct associations from an exact string 

425name match for the following types: 

426 

427:class:`_types.BIGINT`, :class:`_types.BLOB`, 

428:class:`_types.BOOLEAN`, :class:`_types.CHAR`, 

429:class:`_types.DATE`, :class:`_types.DATETIME`, 

430:class:`_types.DECIMAL`, :class:`_types.FLOAT`, 

431:class:`_types.INTEGER`, :class:`_types.NUMERIC`, 

432:class:`_types.REAL`, :class:`_types.SMALLINT`, 

433:class:`_types.TEXT`, :class:`_types.TIME`, 

434:class:`_types.TIMESTAMP`, :class:`_types.VARCHAR`, 

435:class:`_types.NVARCHAR`, :class:`_types.NCHAR` 

438 

439When a type name does not match one of the above types, the "type affinity" 

440lookup is used instead: 

441 

442* :class:`_types.INTEGER` is returned if the type name includes the 

443 string ``INT`` 

444* :class:`_types.TEXT` is returned if the type name includes the 

445 string ``CHAR``, ``CLOB`` or ``TEXT`` 

446* :class:`_types.NullType` is returned if the type name includes the 

447 string ``BLOB`` 

448* :class:`_types.REAL` is returned if the type name includes the string 

449 ``REAL``, ``FLOA`` or ``DOUB``. 

450* Otherwise, the :class:`_types.NUMERIC` type is used. 
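
The fallback rules above can be sketched as a small standalone function (an illustration only, not the dialect's actual implementation):

```python
import re

def sqlite_affinity(type_name):
    # Rules applied in order, per the SQLite datatype documentation.
    name = type_name.upper()
    if "INT" in name:
        return "INTEGER"
    if "CHAR" in name or "CLOB" in name or "TEXT" in name:
        return "TEXT"
    if "BLOB" in name:
        return "BLOB"  # the dialect maps this case to NullType
    if re.search(r"REAL|FLOA|DOUB", name):
        return "REAL"
    return "NUMERIC"

assert sqlite_affinity("XYZINTQPR") == "INTEGER"
assert sqlite_affinity("MEDIUMTEXT") == "TEXT"
assert sqlite_affinity("DOUBLE PRECISION") == "REAL"
assert sqlite_affinity("DECIMAL(10, 2)") == "NUMERIC"
```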

451 

452.. versionadded:: 0.9.3 Support for SQLite type affinity rules when reflecting 

453 columns. 

454 

455 

456.. _sqlite_partial_index: 

457 

458Partial Indexes 

459--------------- 

460 

461A partial index, e.g. one which uses a WHERE clause, can be specified 

462with the DDL system using the argument ``sqlite_where``:: 

463 

464 tbl = Table('testtbl', m, Column('data', Integer)) 

465 idx = Index('test_idx1', tbl.c.data, 

466 sqlite_where=and_(tbl.c.data > 5, tbl.c.data < 10)) 

467 

468The index will be rendered at create time as:: 

469 

470 CREATE INDEX test_idx1 ON testtbl (data) 

471 WHERE data > 5 AND data < 10 
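
Equivalent DDL can be issued through the stdlib ``sqlite3`` driver, and the WHERE clause is preserved in ``sqlite_master`` (partial indexes require SQLite 3.8.0 or later):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE testtbl (data INTEGER)")
conn.execute(
    "CREATE INDEX test_idx1 ON testtbl (data) "
    "WHERE data > 5 AND data < 10"
)
# sqlite_master stores the original CREATE INDEX text verbatim.
sql = conn.execute(
    "SELECT sql FROM sqlite_master WHERE name = 'test_idx1'"
).fetchone()[0]
assert "WHERE data > 5 AND data < 10" in sql
```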

472 

473.. versionadded:: 0.9.9 

474 

475.. _sqlite_dotted_column_names: 

476 

477Dotted Column Names 

478------------------- 

479 

480Using table or column names that explicitly have periods in them is 

481**not recommended**. While this is a bad idea for relational databases 

482in general, as the dot is a syntactically significant character, 

483SQLite itself up until version **3.10.0** has a bug which requires 

484that SQLAlchemy filter out these dots in result sets. 

485 

486.. versionchanged:: 1.1 

487 

488 The following SQLite issue has been resolved as of version 3.10.0 

489 of SQLite. SQLAlchemy as of **1.1** automatically disables its internal 

490 workarounds based on detection of this version. 

491 

492The bug, entirely outside of SQLAlchemy, can be illustrated thusly:: 

493 

494 import sqlite3 

495 

496 assert sqlite3.sqlite_version_info < (3, 10, 0), "bug is fixed in this version" 

497 

498 conn = sqlite3.connect(":memory:") 

499 cursor = conn.cursor() 

500 

501 cursor.execute("create table x (a integer, b integer)") 

502 cursor.execute("insert into x (a, b) values (1, 1)") 

503 cursor.execute("insert into x (a, b) values (2, 2)") 

504 

505 cursor.execute("select x.a, x.b from x") 

506 assert [c[0] for c in cursor.description] == ['a', 'b'] 

507 

508 cursor.execute(''' 

509 select x.a, x.b from x where a=1 

510 union 

511 select x.a, x.b from x where a=2 

512 ''') 

513 assert [c[0] for c in cursor.description] == ['a', 'b'], \ 

514 [c[0] for c in cursor.description] 

515 

516The second assertion fails:: 

517 

518 Traceback (most recent call last): 

519 File "test.py", line 19, in <module> 

520 [c[0] for c in cursor.description] 

521 AssertionError: ['x.a', 'x.b'] 

522 

523Where above, the driver incorrectly reports the names of the columns 

524including the name of the table, which is entirely inconsistent with its 

525behavior when the UNION is not present. 

526 

527SQLAlchemy relies upon column names being predictable in how they match 

528to the original statement, so the SQLAlchemy dialect has no choice but 

529to filter these out:: 

530 

531 

532 from sqlalchemy import create_engine 

533 

534 eng = create_engine("sqlite://") 

535 conn = eng.connect() 

536 

537 conn.execute("create table x (a integer, b integer)") 

538 conn.execute("insert into x (a, b) values (1, 1)") 

539 conn.execute("insert into x (a, b) values (2, 2)") 

540 

541 result = conn.execute("select x.a, x.b from x") 

542 assert result.keys() == ["a", "b"] 

543 

544 result = conn.execute(''' 

545 select x.a, x.b from x where a=1 

546 union 

547 select x.a, x.b from x where a=2 

548 ''') 

549 assert result.keys() == ["a", "b"] 

550 

551Note that above, even though SQLAlchemy filters out the dots, *both 

552names are still addressable*:: 

553 

554 >>> row = result.first() 

555 >>> row["a"] 

556 1 

557 >>> row["x.a"] 

558 1 

559 >>> row["b"] 

560 1 

561 >>> row["x.b"] 

562 1 

563 

564Therefore, the workaround applied by SQLAlchemy only impacts 

565:meth:`_engine.ResultProxy.keys` and :meth:`.RowProxy.keys()` 

566in the public API. In 

567the very specific case where an application is forced to use column names that 

568contain dots, and the functionality of :meth:`_engine.ResultProxy.keys` and 

569:meth:`.RowProxy.keys()` is required to return these dotted names unmodified, 

570the ``sqlite_raw_colnames`` execution option may be provided, either on a 

571per-:class:`_engine.Connection` basis:: 

572 

573 result = conn.execution_options(sqlite_raw_colnames=True).execute(''' 

574 select x.a, x.b from x where a=1 

575 union 

576 select x.a, x.b from x where a=2 

577 ''') 

578 assert result.keys() == ["x.a", "x.b"] 

579 

580or on a per-:class:`_engine.Engine` basis:: 

581 

582 engine = create_engine("sqlite://", execution_options={"sqlite_raw_colnames": True}) 

583 

584When using the per-:class:`_engine.Engine` execution option, note that 

585**Core and ORM queries that use UNION may not function properly**. 

586 

587SQLite-specific table options 

588----------------------------- 

589 

590One option for CREATE TABLE is supported directly by the SQLite 

591dialect in conjunction with the :class:`_schema.Table` construct: 

592 

593* ``WITHOUT ROWID``:: 

594 

595 Table("some_table", metadata, ..., sqlite_with_rowid=False) 

596 

597.. seealso:: 

598 

599 `SQLite CREATE TABLE options 

600 <https://www.sqlite.org/lang_createtable.html>`_ 

601 

602""" # noqa 

603 

604import datetime 

605import numbers 

606import re 

607 

608from .json import JSON 

609from .json import JSONIndexType 

610from .json import JSONPathType 

611from ... import exc 

612from ... import processors 

613from ... import schema as sa_schema 

614from ... import sql 

615from ... import types as sqltypes 

616from ... import util 

617from ...engine import default 

618from ...engine import reflection 

619from ...sql import ColumnElement 

620from ...sql import compiler 

621from ...sql import schema 

622from ...types import BLOB # noqa 

623from ...types import BOOLEAN # noqa 

624from ...types import CHAR # noqa 

625from ...types import DECIMAL # noqa 

626from ...types import FLOAT # noqa 

627from ...types import INTEGER # noqa 

628from ...types import NUMERIC # noqa 

629from ...types import REAL # noqa 

630from ...types import SMALLINT # noqa 

631from ...types import TEXT # noqa 

632from ...types import TIMESTAMP # noqa 

633from ...types import VARCHAR # noqa 

634 

635 

636class _SQliteJson(JSON): 

637 def result_processor(self, dialect, coltype): 

638 default_processor = super(_SQliteJson, self).result_processor( 

639 dialect, coltype 

640 ) 

641 

642 def process(value): 

643 try: 

644 return default_processor(value) 

645 except TypeError: 

646 if isinstance(value, numbers.Number): 

647 return value 

648 else: 

649 raise 

650 

651 return process 

652 

653 

654class _DateTimeMixin(object): 

655 _reg = None 

656 _storage_format = None 

657 

658 def __init__(self, storage_format=None, regexp=None, **kw): 

659 super(_DateTimeMixin, self).__init__(**kw) 

660 if regexp is not None: 

661 self._reg = re.compile(regexp) 

662 if storage_format is not None: 

663 self._storage_format = storage_format 

664 

665 @property 

666 def format_is_text_affinity(self): 

667 """return True if the storage format will automatically imply 

668 a TEXT affinity. 

669 

670 If the storage format contains no non-numeric characters, 

671 it will imply a NUMERIC storage format on SQLite; in this case, 

672 the type will generate its DDL as DATE_CHAR, DATETIME_CHAR, 

673 TIME_CHAR. 

674 

675 .. versionadded:: 1.0.0 

676 

677 """ 

678 spec = self._storage_format % { 

679 "year": 0, 

680 "month": 0, 

681 "day": 0, 

682 "hour": 0, 

683 "minute": 0, 

684 "second": 0, 

685 "microsecond": 0, 

686 } 

687 return bool(re.search(r"[^0-9]", spec)) 

688 

689 def adapt(self, cls, **kw): 

690 if issubclass(cls, _DateTimeMixin): 

691 if self._storage_format: 

692 kw["storage_format"] = self._storage_format 

693 if self._reg: 

694 kw["regexp"] = self._reg 

695 return super(_DateTimeMixin, self).adapt(cls, **kw) 

696 

697 def literal_processor(self, dialect): 

698 bp = self.bind_processor(dialect) 

699 

700 def process(value): 

701 return "'%s'" % bp(value) 

702 

703 return process 

704 

705 

706class DATETIME(_DateTimeMixin, sqltypes.DateTime): 

707 r"""Represent a Python datetime object in SQLite using a string. 

708 

709 The default string storage format is:: 

710 

711 "%(year)04d-%(month)02d-%(day)02d %(hour)02d:%(minute)02d:%(second)02d.%(microsecond)06d" 

712 

713 e.g.:: 

714 

715 2011-03-15 12:05:57.105580 

716 

717 The storage format can be customized to some degree using the 

718 ``storage_format`` and ``regexp`` parameters, such as:: 

719 

720 import re 

721 from sqlalchemy.dialects.sqlite import DATETIME 

722 

723 dt = DATETIME(storage_format="%(year)04d/%(month)02d/%(day)02d " 

724 "%(hour)02d:%(minute)02d:%(second)02d", 

725 regexp=r"(\d+)/(\d+)/(\d+) (\d+):(\d+):(\d+)" 

726 ) 

727 

728 :param storage_format: format string which will be applied to the dict 

729 with keys year, month, day, hour, minute, second, and microsecond. 

730 

731 :param regexp: regular expression which will be applied to incoming result 

732 rows. If the regexp contains named groups, the resulting match dict is 

733 applied to the Python datetime() constructor as keyword arguments. 

734 Otherwise, if positional groups are used, the datetime() constructor 

735 is called with positional arguments via 

736 ``*map(int, match_obj.groups(0))``. 

737 

738 """ # noqa 

739 

740 _storage_format = ( 

741 "%(year)04d-%(month)02d-%(day)02d " 

742 "%(hour)02d:%(minute)02d:%(second)02d.%(microsecond)06d" 

743 ) 

744 

745 def __init__(self, *args, **kwargs): 

746 truncate_microseconds = kwargs.pop("truncate_microseconds", False) 

747 super(DATETIME, self).__init__(*args, **kwargs) 

748 if truncate_microseconds: 

749 assert "storage_format" not in kwargs, ( 

750 "You can specify only " 

751 "one of truncate_microseconds or storage_format." 

752 ) 

753 assert "regexp" not in kwargs, ( 

754 "You can specify only one of " 

755 "truncate_microseconds or regexp." 

756 ) 

757 self._storage_format = ( 

758 "%(year)04d-%(month)02d-%(day)02d " 

759 "%(hour)02d:%(minute)02d:%(second)02d" 

760 ) 

761 

762 def bind_processor(self, dialect): 

763 datetime_datetime = datetime.datetime 

764 datetime_date = datetime.date 

765 format_ = self._storage_format 

766 

767 def process(value): 

768 if value is None: 

769 return None 

770 elif isinstance(value, datetime_datetime): 

771 return format_ % { 

772 "year": value.year, 

773 "month": value.month, 

774 "day": value.day, 

775 "hour": value.hour, 

776 "minute": value.minute, 

777 "second": value.second, 

778 "microsecond": value.microsecond, 

779 } 

780 elif isinstance(value, datetime_date): 

781 return format_ % { 

782 "year": value.year, 

783 "month": value.month, 

784 "day": value.day, 

785 "hour": 0, 

786 "minute": 0, 

787 "second": 0, 

788 "microsecond": 0, 

789 } 

790 else: 

791 raise TypeError( 

792 "SQLite DateTime type only accepts Python " 

793 "datetime and date objects as input." 

794 ) 

795 

796 return process 

797 

798 def result_processor(self, dialect, coltype): 

799 if self._reg: 

800 return processors.str_to_datetime_processor_factory( 

801 self._reg, datetime.datetime 

802 ) 

803 else: 

804 return processors.str_to_datetime 

805 

806 

807class DATE(_DateTimeMixin, sqltypes.Date): 

808 r"""Represent a Python date object in SQLite using a string. 

809 

810 The default string storage format is:: 

811 

812 "%(year)04d-%(month)02d-%(day)02d" 

813 

814 e.g.:: 

815 

816 2011-03-15 

817 

818 The storage format can be customized to some degree using the 

819 ``storage_format`` and ``regexp`` parameters, such as:: 

820 

821 import re 

822 from sqlalchemy.dialects.sqlite import DATE 

823 

824 d = DATE( 

825 storage_format="%(month)02d/%(day)02d/%(year)04d", 

826 regexp=re.compile(r"(?P<month>\d+)/(?P<day>\d+)/(?P<year>\d+)") 

827 ) 

828 

829 :param storage_format: format string which will be applied to the 

830 dict with keys year, month, and day. 

831 

832 :param regexp: regular expression which will be applied to 

833 incoming result rows. If the regexp contains named groups, the 

834 resulting match dict is applied to the Python date() constructor 

835 as keyword arguments. Otherwise, if positional groups are used, the 

836 date() constructor is called with positional arguments via 

837 ``*map(int, match_obj.groups(0))``. 

838 """ 

839 

840 _storage_format = "%(year)04d-%(month)02d-%(day)02d" 

841 

842 def bind_processor(self, dialect): 

843 datetime_date = datetime.date 

844 format_ = self._storage_format 

845 

846 def process(value): 

847 if value is None: 

848 return None 

849 elif isinstance(value, datetime_date): 

850 return format_ % { 

851 "year": value.year, 

852 "month": value.month, 

853 "day": value.day, 

854 } 

855 else: 

856 raise TypeError( 

857 "SQLite Date type only accepts Python " 

858 "date objects as input." 

859 ) 

860 

861 return process 

862 

863 def result_processor(self, dialect, coltype): 

864 if self._reg: 

865 return processors.str_to_datetime_processor_factory( 

866 self._reg, datetime.date 

867 ) 

868 else: 

869 return processors.str_to_date 

870 
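The `storage_format`/`regexp` pair described in the docstring above can be sketched in plain stdlib Python. This is an illustrative stand-in for the dialect's bind and result processors, not the dialect code itself; the function names here are hypothetical.

```python
import datetime
import re

# Custom US-style storage format, as in the docstring example above.
storage_format = "%(month)02d/%(day)02d/%(year)04d"
regexp = re.compile(r"(?P<month>\d+)/(?P<day>\d+)/(?P<year>\d+)")

def to_sqlite(value):
    # mirrors DATE.bind_processor: render the date as a formatted string
    return storage_format % {
        "year": value.year,
        "month": value.month,
        "day": value.day,
    }

def from_sqlite(text):
    # mirrors the named-group branch of result processing: the match dict
    # becomes keyword arguments to date()
    match = regexp.match(text)
    return datetime.date(**{k: int(v) for k, v in match.groupdict().items()})

stored = to_sqlite(datetime.date(2011, 3, 15))
assert stored == "03/15/2011"
assert from_sqlite(stored) == datetime.date(2011, 3, 15)
```

Because the stored value is just text, the round trip works for any date, including historical dates outside typical "libc" ranges.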

871 

872class TIME(_DateTimeMixin, sqltypes.Time): 

873 r"""Represent a Python time object in SQLite using a string. 

874 

875 The default string storage format is:: 

876 

877 "%(hour)02d:%(minute)02d:%(second)02d.%(microsecond)06d" 

878 

879 e.g.:: 

880 

881 12:05:57.105580 

882 

883 The storage format can be customized to some degree using the 

884 ``storage_format`` and ``regexp`` parameters, such as:: 

885 

886 import re 

887 from sqlalchemy.dialects.sqlite import TIME 

888 

889 t = TIME(storage_format="%(hour)02d-%(minute)02d-" 

890 "%(second)02d-%(microsecond)06d", 

891 regexp=re.compile("(\d+)-(\d+)-(\d+)(?:-(\d+))?") 

892 ) 

893 

894 :param storage_format: format string which will be applied to the dict 

895 with keys hour, minute, second, and microsecond. 

896 

897 :param regexp: regular expression which will be applied to incoming result 

898 rows. If the regexp contains named groups, the resulting match dict is 

899 applied to the Python time() constructor as keyword arguments. Otherwise, 

900 if positional groups are used, the time() constructor is called with 

901 positional arguments via ``*map(int, match_obj.groups(0))``. 

902 """ 

903 

904 _storage_format = "%(hour)02d:%(minute)02d:%(second)02d.%(microsecond)06d" 

905 

906 def __init__(self, *args, **kwargs): 

907 truncate_microseconds = kwargs.pop("truncate_microseconds", False) 

908 super(TIME, self).__init__(*args, **kwargs) 

909 if truncate_microseconds: 

910 assert "storage_format" not in kwargs, ( 

911 "You can specify only " 

912 "one of truncate_microseconds or storage_format." 

913 ) 

914 assert "regexp" not in kwargs, ( 

915 "You can specify only one of " 

916 "truncate_microseconds or regexp." 

917 ) 

918 self._storage_format = "%(hour)02d:%(minute)02d:%(second)02d" 

919 

920 def bind_processor(self, dialect): 

921 datetime_time = datetime.time 

922 format_ = self._storage_format 

923 

924 def process(value): 

925 if value is None: 

926 return None 

927 elif isinstance(value, datetime_time): 

928 return format_ % { 

929 "hour": value.hour, 

930 "minute": value.minute, 

931 "second": value.second, 

932 "microsecond": value.microsecond, 

933 } 

934 else: 

935 raise TypeError( 

936 "SQLite Time type only accepts Python " 

937 "time objects as input." 

938 ) 

939 

940 return process 

941 

942 def result_processor(self, dialect, coltype): 

943 if self._reg: 

944 return processors.str_to_datetime_processor_factory( 

945 self._reg, datetime.time 

946 ) 

947 else: 

948 return processors.str_to_time 

949 

950 

951colspecs = { 

952 sqltypes.Date: DATE, 

953 sqltypes.DateTime: DATETIME, 

954 sqltypes.JSON: _SQliteJson, 

955 sqltypes.JSON.JSONIndexType: JSONIndexType, 

956 sqltypes.JSON.JSONPathType: JSONPathType, 

957 sqltypes.Time: TIME, 

958} 

959 

960ischema_names = { 

961 "BIGINT": sqltypes.BIGINT, 

962 "BLOB": sqltypes.BLOB, 

963 "BOOL": sqltypes.BOOLEAN, 

964 "BOOLEAN": sqltypes.BOOLEAN, 

965 "CHAR": sqltypes.CHAR, 

966 "DATE": sqltypes.DATE, 

967 "DATE_CHAR": sqltypes.DATE, 

968 "DATETIME": sqltypes.DATETIME, 

969 "DATETIME_CHAR": sqltypes.DATETIME, 

970 "DOUBLE": sqltypes.FLOAT, 

971 "DECIMAL": sqltypes.DECIMAL, 

972 "FLOAT": sqltypes.FLOAT, 

973 "INT": sqltypes.INTEGER, 

974 "INTEGER": sqltypes.INTEGER, 

975 "JSON": JSON, 

976 "NUMERIC": sqltypes.NUMERIC, 

977 "REAL": sqltypes.REAL, 

978 "SMALLINT": sqltypes.SMALLINT, 

979 "TEXT": sqltypes.TEXT, 

980 "TIME": sqltypes.TIME, 

981 "TIME_CHAR": sqltypes.TIME, 

982 "TIMESTAMP": sqltypes.TIMESTAMP, 

983 "VARCHAR": sqltypes.VARCHAR, 

984 "NVARCHAR": sqltypes.NVARCHAR, 

985 "NCHAR": sqltypes.NCHAR, 

986} 

987 

988 

989class SQLiteCompiler(compiler.SQLCompiler): 

990 extract_map = util.update_copy( 

991 compiler.SQLCompiler.extract_map, 

992 { 

993 "month": "%m", 

994 "day": "%d", 

995 "year": "%Y", 

996 "second": "%S", 

997 "hour": "%H", 

998 "doy": "%j", 

999 "minute": "%M", 

1000 "epoch": "%s", 

1001 "dow": "%w", 

1002 "week": "%W", 

1003 }, 

1004 ) 

1005 

1006 def visit_now_func(self, fn, **kw): 

1007 return "CURRENT_TIMESTAMP" 

1008 

1009 def visit_localtimestamp_func(self, func, **kw): 

1010 return 'DATETIME(CURRENT_TIMESTAMP, "localtime")' 

1011 

1012 def visit_true(self, expr, **kw): 

1013 return "1" 

1014 

1015 def visit_false(self, expr, **kw): 

1016 return "0" 

1017 

1018 def visit_char_length_func(self, fn, **kw): 

1019 return "length%s" % self.function_argspec(fn) 

1020 

1021 def visit_cast(self, cast, **kwargs): 

1022 if self.dialect.supports_cast: 

1023 return super(SQLiteCompiler, self).visit_cast(cast, **kwargs) 

1024 else: 

1025 return self.process(cast.clause, **kwargs) 

1026 

1027 def visit_extract(self, extract, **kw): 

1028 try: 

1029 return "CAST(STRFTIME('%s', %s) AS INTEGER)" % ( 

1030 self.extract_map[extract.field], 

1031 self.process(extract.expr, **kw), 

1032 ) 

1033 except KeyError as err: 

1034 util.raise_( 

1035 exc.CompileError( 

1036 "%s is not a valid extract argument." % extract.field 

1037 ), 

1038 replace_context=err, 

1039 ) 

1040 
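The STRFTIME-based rendering of `visit_extract` can be exercised directly against SQLite via the stdlib `sqlite3` module, a sketch of the SQL the compiler emits:

```python
import sqlite3

# EXTRACT(field FROM expr) renders as CAST(STRFTIME(fmt, expr) AS INTEGER),
# with fmt looked up in extract_map ('%Y' for year, '%j' for day-of-year).
conn = sqlite3.connect(":memory:")
year = conn.execute(
    "SELECT CAST(STRFTIME('%Y', '2011-03-15') AS INTEGER)"
).fetchone()[0]
doy = conn.execute(
    "SELECT CAST(STRFTIME('%j', '2011-03-15') AS INTEGER)"
).fetchone()[0]
assert year == 2011
assert doy == 74  # Jan (31) + Feb (28) + 15
```

The CAST is what turns STRFTIME's text result (e.g. `'074'`) into an integer.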

1041 def limit_clause(self, select, **kw): 

1042 text = "" 

1043 if select._limit_clause is not None: 

1044 text += "\n LIMIT " + self.process(select._limit_clause, **kw) 

1045 if select._offset_clause is not None: 

1046 if select._limit_clause is None: 

1047 text += "\n LIMIT " + self.process(sql.literal(-1)) 

1048 text += " OFFSET " + self.process(select._offset_clause, **kw) 

1049 else: 

1050 text += " OFFSET " + self.process(sql.literal(0), **kw) 

1051 return text 

1052 
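The `LIMIT -1` branch above exists because SQLite rejects OFFSET without LIMIT; `-1` means "no limit". A quick stdlib check of the SQL shape the compiler produces:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (x INTEGER)")
conn.executemany("INSERT INTO t VALUES (?)", [(i,) for i in range(5)])

# OFFSET alone is a syntax error in SQLite, so the dialect emits LIMIT -1
rows = conn.execute("SELECT x FROM t ORDER BY x LIMIT -1 OFFSET 3").fetchall()
assert [r[0] for r in rows] == [3, 4]
```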

1053 def for_update_clause(self, select, **kw): 

1054 # sqlite has no "FOR UPDATE" AFAICT 

1055 return "" 

1056 

1057 def visit_is_distinct_from_binary(self, binary, operator, **kw): 

1058 return "%s IS NOT %s" % ( 

1059 self.process(binary.left), 

1060 self.process(binary.right), 

1061 ) 

1062 

1063 def visit_isnot_distinct_from_binary(self, binary, operator, **kw): 

1064 return "%s IS %s" % ( 

1065 self.process(binary.left), 

1066 self.process(binary.right), 

1067 ) 

1068 
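The mapping of `IS [NOT] DISTINCT FROM` onto plain `IS` / `IS NOT` works because SQLite's `IS` operator is a NULL-safe comparison, unlike `=`:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
# IS treats two NULLs as equal; = yields NULL when either side is NULL
assert conn.execute("SELECT NULL IS NULL").fetchone()[0] == 1
assert conn.execute("SELECT 1 IS NOT NULL").fetchone()[0] == 1
assert conn.execute("SELECT 1 = NULL").fetchone()[0] is None
```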

1069 def visit_json_getitem_op_binary(self, binary, operator, **kw): 

1070 if binary.type._type_affinity is sqltypes.JSON: 

1071 expr = "JSON_QUOTE(JSON_EXTRACT(%s, %s))" 

1072 else: 

1073 expr = "JSON_EXTRACT(%s, %s)" 

1074 

1075 return expr % ( 

1076 self.process(binary.left, **kw), 

1077 self.process(binary.right, **kw), 

1078 ) 

1079 

1080 def visit_json_path_getitem_op_binary(self, binary, operator, **kw): 

1081 if binary.type._type_affinity is sqltypes.JSON: 

1082 expr = "JSON_QUOTE(JSON_EXTRACT(%s, %s))" 

1083 else: 

1084 expr = "JSON_EXTRACT(%s, %s)" 

1085 

1086 return expr % ( 

1087 self.process(binary.left, **kw), 

1088 self.process(binary.right, **kw), 

1089 ) 

1090 

1091 def visit_empty_set_expr(self, element_types): 

1092 return "SELECT %s FROM (SELECT %s) WHERE 1!=1" % ( 

1093 ", ".join("1" for type_ in element_types or [INTEGER()]), 

1094 ", ".join("1" for type_ in element_types or [INTEGER()]), 

1095 ) 

1096 
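An empty IN list cannot be rendered as `IN ()`, so `visit_empty_set_expr` substitutes a SELECT that is guaranteed to return no rows; the shape it emits can be verified with stdlib `sqlite3`:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
# "WHERE 1!=1" makes the subquery produce zero rows, so membership tests
# against it are always false without being a syntax error
rows = conn.execute("SELECT 1 FROM (SELECT 1) WHERE 1!=1").fetchall()
assert rows == []
```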

1097 

1098class SQLiteDDLCompiler(compiler.DDLCompiler): 

1099 def get_column_specification(self, column, **kwargs): 

1100 

1101 coltype = self.dialect.type_compiler.process( 

1102 column.type, type_expression=column 

1103 ) 

1104 colspec = self.preparer.format_column(column) + " " + coltype 

1105 default = self.get_column_default_string(column) 

1106 if default is not None: 

1107 if isinstance(column.server_default.arg, ColumnElement): 

1108 default = "(" + default + ")" 

1109 colspec += " DEFAULT " + default 

1110 

1111 if not column.nullable: 

1112 colspec += " NOT NULL" 

1113 

1114 on_conflict_clause = column.dialect_options["sqlite"][ 

1115 "on_conflict_not_null" 

1116 ] 

1117 if on_conflict_clause is not None: 

1118 colspec += " ON CONFLICT " + on_conflict_clause 

1119 

1120 if column.primary_key: 

1121 if ( 

1122 column.autoincrement is True 

1123 and len(column.table.primary_key.columns) != 1 

1124 ): 

1125 raise exc.CompileError( 

1126 "SQLite does not support autoincrement for " 

1127 "composite primary keys" 

1128 ) 

1129 

1130 if ( 

1131 column.table.dialect_options["sqlite"]["autoincrement"] 

1132 and len(column.table.primary_key.columns) == 1 

1133 and issubclass(column.type._type_affinity, sqltypes.Integer) 

1134 and not column.foreign_keys 

1135 ): 

1136 colspec += " PRIMARY KEY" 

1137 

1138 on_conflict_clause = column.dialect_options["sqlite"][ 

1139 "on_conflict_primary_key" 

1140 ] 

1141 if on_conflict_clause is not None: 

1142 colspec += " ON CONFLICT " + on_conflict_clause 

1143 

1144 colspec += " AUTOINCREMENT" 

1145 

1146 if column.computed is not None: 

1147 colspec += " " + self.process(column.computed) 

1148 

1149 return colspec 

1150 
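For the single-integer-primary-key case with `sqlite_autoincrement=True`, the method above emits DDL of the following shape, which behaves as expected against stock SQLite:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
# "INTEGER PRIMARY KEY AUTOINCREMENT" rendered inline with the column,
# as get_column_specification does for a lone integer primary key
conn.execute("CREATE TABLE t (id INTEGER PRIMARY KEY AUTOINCREMENT, x TEXT)")
conn.execute("INSERT INTO t (x) VALUES ('a')")
conn.execute("INSERT INTO t (x) VALUES ('b')")
ids = [r[0] for r in conn.execute("SELECT id FROM t ORDER BY id")]
assert ids == [1, 2]
```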

1151 def visit_primary_key_constraint(self, constraint): 

1152 # for columns with sqlite_autoincrement=True, 

1153 # the PRIMARY KEY constraint can only be inline 

1154 # with the column itself. 

1155 if len(constraint.columns) == 1: 

1156 c = list(constraint)[0] 

1157 if ( 

1158 c.primary_key 

1159 and c.table.dialect_options["sqlite"]["autoincrement"] 

1160 and issubclass(c.type._type_affinity, sqltypes.Integer) 

1161 and not c.foreign_keys 

1162 ): 

1163 return None 

1164 

1165 text = super(SQLiteDDLCompiler, self).visit_primary_key_constraint( 

1166 constraint 

1167 ) 

1168 

1169 on_conflict_clause = constraint.dialect_options["sqlite"][ 

1170 "on_conflict" 

1171 ] 

1172 if on_conflict_clause is None and len(constraint.columns) == 1: 

1173 on_conflict_clause = list(constraint)[0].dialect_options["sqlite"][ 

1174 "on_conflict_primary_key" 

1175 ] 

1176 

1177 if on_conflict_clause is not None: 

1178 text += " ON CONFLICT " + on_conflict_clause 

1179 

1180 return text 

1181 

1182 def visit_unique_constraint(self, constraint): 

1183 text = super(SQLiteDDLCompiler, self).visit_unique_constraint( 

1184 constraint 

1185 ) 

1186 

1187 on_conflict_clause = constraint.dialect_options["sqlite"][ 

1188 "on_conflict" 

1189 ] 

1190 if on_conflict_clause is None and len(constraint.columns) == 1: 

1191 col1 = list(constraint)[0] 

1192 if isinstance(col1, schema.SchemaItem): 

1193 on_conflict_clause = list(constraint)[0].dialect_options[ 

1194 "sqlite" 

1195 ]["on_conflict_unique"] 

1196 

1197 if on_conflict_clause is not None: 

1198 text += " ON CONFLICT " + on_conflict_clause 

1199 

1200 return text 

1201 
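A UNIQUE constraint that carries an ON CONFLICT resolution, as rendered by the method above, changes how SQLite reacts to duplicate inserts; with `IGNORE` the statement succeeds and the duplicate row is silently skipped:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (x INTEGER UNIQUE ON CONFLICT IGNORE)")
conn.execute("INSERT INTO t VALUES (1)")
conn.execute("INSERT INTO t VALUES (1)")  # ignored, no IntegrityError raised
assert conn.execute("SELECT COUNT(*) FROM t").fetchone()[0] == 1
```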

1202 def visit_check_constraint(self, constraint): 

1203 text = super(SQLiteDDLCompiler, self).visit_check_constraint( 

1204 constraint 

1205 ) 

1206 

1207 on_conflict_clause = constraint.dialect_options["sqlite"][ 

1208 "on_conflict" 

1209 ] 

1210 

1211 if on_conflict_clause is not None: 

1212 text += " ON CONFLICT " + on_conflict_clause 

1213 

1214 return text 

1215 

1216 def visit_column_check_constraint(self, constraint): 

1217 text = super(SQLiteDDLCompiler, self).visit_column_check_constraint( 

1218 constraint 

1219 ) 

1220 

1221 if constraint.dialect_options["sqlite"]["on_conflict"] is not None: 

1222 raise exc.CompileError( 

1223 "SQLite does not support on conflict clause for " 

1224 "column check constraint" 

1225 ) 

1226 

1227 return text 

1228 

1229 def visit_foreign_key_constraint(self, constraint): 

1230 

1231 local_table = constraint.elements[0].parent.table 

1232 remote_table = constraint.elements[0].column.table 

1233 

1234 if local_table.schema != remote_table.schema: 

1235 return None 

1236 else: 

1237 return super(SQLiteDDLCompiler, self).visit_foreign_key_constraint( 

1238 constraint 

1239 ) 

1240 

1241 def define_constraint_remote_table(self, constraint, table, preparer): 

1242 """Format the remote table clause of a CREATE CONSTRAINT clause.""" 

1243 

1244 return preparer.format_table(table, use_schema=False) 

1245 

1246 def visit_create_index( 

1247 self, create, include_schema=False, include_table_schema=True 

1248 ): 

1249 index = create.element 

1250 self._verify_index_table(index) 

1251 preparer = self.preparer 

1252 text = "CREATE " 

1253 if index.unique: 

1254 text += "UNIQUE " 

1255 text += "INDEX %s ON %s (%s)" % ( 

1256 self._prepared_index_name(index, include_schema=True), 

1257 preparer.format_table(index.table, use_schema=False), 

1258 ", ".join( 

1259 self.sql_compiler.process( 

1260 expr, include_table=False, literal_binds=True 

1261 ) 

1262 for expr in index.expressions 

1263 ), 

1264 ) 

1265 

1266 whereclause = index.dialect_options["sqlite"]["where"] 

1267 if whereclause is not None: 

1268 where_compiled = self.sql_compiler.process( 

1269 whereclause, include_table=False, literal_binds=True 

1270 ) 

1271 text += " WHERE " + where_compiled 

1272 

1273 return text 

1274 
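A `sqlite_where` argument on an Index makes `visit_create_index` append a WHERE clause, producing a partial index; rows outside the predicate are not constrained by it:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (x INTEGER)")
# partial unique index: only rows with x > 0 participate in uniqueness
conn.execute("CREATE UNIQUE INDEX ix_t_x ON t (x) WHERE x > 0")
conn.execute("INSERT INTO t VALUES (0)")
conn.execute("INSERT INTO t VALUES (0)")  # allowed: outside the predicate
names = [row[1] for row in conn.execute("PRAGMA index_list(t)")]
assert "ix_t_x" in names
```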

1275 def post_create_table(self, table): 

1276 if table.dialect_options["sqlite"]["with_rowid"] is False: 

1277 return "\n WITHOUT ROWID" 

1278 return "" 

1279 
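`sqlite_with_rowid=False` appends `WITHOUT ROWID` to the CREATE TABLE statement; such a table must declare an explicit (non-AUTOINCREMENT) primary key, which then serves as its storage key:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE kv (k TEXT PRIMARY KEY, v TEXT) WITHOUT ROWID")
conn.execute("INSERT INTO kv VALUES ('a', '1')")
assert conn.execute("SELECT v FROM kv WHERE k = 'a'").fetchone()[0] == "1"
```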

1280 

1281class SQLiteTypeCompiler(compiler.GenericTypeCompiler): 

1282 def visit_large_binary(self, type_, **kw): 

1283 return self.visit_BLOB(type_) 

1284 

1285 def visit_DATETIME(self, type_, **kw): 

1286 if ( 

1287 not isinstance(type_, _DateTimeMixin) 

1288 or type_.format_is_text_affinity 

1289 ): 

1290 return super(SQLiteTypeCompiler, self).visit_DATETIME(type_) 

1291 else: 

1292 return "DATETIME_CHAR" 

1293 

1294 def visit_DATE(self, type_, **kw): 

1295 if ( 

1296 not isinstance(type_, _DateTimeMixin) 

1297 or type_.format_is_text_affinity 

1298 ): 

1299 return super(SQLiteTypeCompiler, self).visit_DATE(type_) 

1300 else: 

1301 return "DATE_CHAR" 

1302 

1303 def visit_TIME(self, type_, **kw): 

1304 if ( 

1305 not isinstance(type_, _DateTimeMixin) 

1306 or type_.format_is_text_affinity 

1307 ): 

1308 return super(SQLiteTypeCompiler, self).visit_TIME(type_) 

1309 else: 

1310 return "TIME_CHAR" 

1311 

1312 def visit_JSON(self, type_, **kw): 

1313 # note this name provides NUMERIC affinity, not TEXT. 

1314 # should not be an issue unless the JSON value consists of a single 

1315 # numeric value. JSONTEXT can be used if this case is required. 

1316 return "JSON" 

1317 

1318 

1319class SQLiteIdentifierPreparer(compiler.IdentifierPreparer): 

1320 reserved_words = set( 

1321 [ 

1322 "add", 

1323 "after", 

1324 "all", 

1325 "alter", 

1326 "analyze", 

1327 "and", 

1328 "as", 

1329 "asc", 

1330 "attach", 

1331 "autoincrement", 

1332 "before", 

1333 "begin", 

1334 "between", 

1335 "by", 

1336 "cascade", 

1337 "case", 

1338 "cast", 

1339 "check", 

1340 "collate", 

1341 "column", 

1342 "commit", 

1343 "conflict", 

1344 "constraint", 

1345 "create", 

1346 "cross", 

1347 "current_date", 

1348 "current_time", 

1349 "current_timestamp", 

1350 "database", 

1351 "default", 

1352 "deferrable", 

1353 "deferred", 

1354 "delete", 

1355 "desc", 

1356 "detach", 

1357 "distinct", 

1358 "drop", 

1359 "each", 

1360 "else", 

1361 "end", 

1362 "escape", 

1363 "except", 

1364 "exclusive", 

1365 "exists", 

1366 "explain", 

1367 "false", 

1368 "fail", 

1369 "for", 

1370 "foreign", 

1371 "from", 

1372 "full", 

1373 "glob", 

1374 "group", 

1375 "having", 

1376 "if", 

1377 "ignore", 

1378 "immediate", 

1379 "in", 

1380 "index", 

1381 "indexed", 

1382 "initially", 

1383 "inner", 

1384 "insert", 

1385 "instead", 

1386 "intersect", 

1387 "into", 

1388 "is", 

1389 "isnull", 

1390 "join", 

1391 "key", 

1392 "left", 

1393 "like", 

1394 "limit", 

1395 "match", 

1396 "natural", 

1397 "not", 

1398 "notnull", 

1399 "null", 

1400 "of", 

1401 "offset", 

1402 "on", 

1403 "or", 

1404 "order", 

1405 "outer", 

1406 "plan", 

1407 "pragma", 

1408 "primary", 

1409 "query", 

1410 "raise", 

1411 "references", 

1412 "reindex", 

1413 "rename", 

1414 "replace", 

1415 "restrict", 

1416 "right", 

1417 "rollback", 

1418 "row", 

1419 "select", 

1420 "set", 

1421 "table", 

1422 "temp", 

1423 "temporary", 

1424 "then", 

1425 "to", 

1426 "transaction", 

1427 "trigger", 

1428 "true", 

1429 "union", 

1430 "unique", 

1431 "update", 

1432 "using", 

1433 "vacuum", 

1434 "values", 

1435 "view", 

1436 "virtual", 

1437 "when", 

1438 "where", 

1439 ] 

1440 ) 

1441 

1442 

1443class SQLiteExecutionContext(default.DefaultExecutionContext): 

1444 @util.memoized_property 

1445 def _preserve_raw_colnames(self): 

1446 return ( 

1447 not self.dialect._broken_dotted_colnames 

1448 or self.execution_options.get("sqlite_raw_colnames", False) 

1449 ) 

1450 

1451 def _translate_colname(self, colname): 

1452 # TODO: detect SQLite version 3.10.0 or greater; 

1453 # see [ticket:3633] 

1454 

1455 # adjust for dotted column names. SQLite 

1456 # in the case of UNION may store col names as 

1457 # "tablename.colname", or if using an attached database, 

1458 # "database.tablename.colname", in cursor.description 

1459 if not self._preserve_raw_colnames and "." in colname: 

1460 return colname.split(".")[-1], colname 

1461 else: 

1462 return colname, None 

1463 

1464 

1465class SQLiteDialect(default.DefaultDialect): 

1466 name = "sqlite" 

1467 supports_alter = False 

1468 supports_unicode_statements = True 

1469 supports_unicode_binds = True 

1470 supports_default_values = True 

1471 supports_empty_insert = False 

1472 supports_cast = True 

1473 supports_multivalues_insert = True 

1474 tuple_in_values = True 

1475 

1476 default_paramstyle = "qmark" 

1477 execution_ctx_cls = SQLiteExecutionContext 

1478 statement_compiler = SQLiteCompiler 

1479 ddl_compiler = SQLiteDDLCompiler 

1480 type_compiler = SQLiteTypeCompiler 

1481 preparer = SQLiteIdentifierPreparer 

1482 ischema_names = ischema_names 

1483 colspecs = colspecs 

1484 isolation_level = None 

1485 

1486 construct_arguments = [ 

1487 ( 

1488 sa_schema.Table, 

1489 { 

1490 "autoincrement": False, 

1491 "with_rowid": True, 

1492 }, 

1493 ), 

1494 (sa_schema.Index, {"where": None}), 

1495 ( 

1496 sa_schema.Column, 

1497 { 

1498 "on_conflict_primary_key": None, 

1499 "on_conflict_not_null": None, 

1500 "on_conflict_unique": None, 

1501 }, 

1502 ), 

1503 (sa_schema.Constraint, {"on_conflict": None}), 

1504 ] 

1505 

1506 _broken_fk_pragma_quotes = False 

1507 _broken_dotted_colnames = False 

1508 

1509 @util.deprecated_params( 

1510 _json_serializer=( 

1511 "1.3.7", 

1512 "The _json_serializer argument to the SQLite dialect has " 

1513 "been renamed to the correct name of json_serializer. The old " 

1514 "argument name will be removed in a future release.", 

1515 ), 

1516 _json_deserializer=( 

1517 "1.3.7", 

1518 "The _json_deserializer argument to the SQLite dialect has " 

1519 "been renamed to the correct name of json_deserializer. The old " 

1520 "argument name will be removed in a future release.", 

1521 ), 

1522 ) 

1523 def __init__( 

1524 self, 

1525 isolation_level=None, 

1526 native_datetime=False, 

1527 json_serializer=None, 

1528 json_deserializer=None, 

1529 _json_serializer=None, 

1530 _json_deserializer=None, 

1531 **kwargs 

1532 ): 

1533 default.DefaultDialect.__init__(self, **kwargs) 

1534 self.isolation_level = isolation_level 

1535 

1536 if _json_serializer: 

1537 json_serializer = _json_serializer 

1538 if _json_deserializer: 

1539 json_deserializer = _json_deserializer 

1540 self._json_serializer = json_serializer 

1541 self._json_deserializer = json_deserializer 

1542 

1543 # this flag used by pysqlite dialect, and perhaps others in the 

1544 # future, to indicate the driver is handling date/timestamp 

1545 # conversions (and perhaps datetime/time as well on some hypothetical 

1546 # driver?) 

1547 self.native_datetime = native_datetime 

1548 

1549 if self.dbapi is not None: 

1550 self.supports_right_nested_joins = ( 

1551 self.dbapi.sqlite_version_info >= (3, 7, 16) 

1552 ) 

1553 self._broken_dotted_colnames = self.dbapi.sqlite_version_info < ( 

1554 3, 

1555 10, 

1556 0, 

1557 ) 

1558 self.supports_default_values = self.dbapi.sqlite_version_info >= ( 

1559 3, 

1560 3, 

1561 8, 

1562 ) 

1563 self.supports_cast = self.dbapi.sqlite_version_info >= (3, 2, 3) 

1564 self.supports_multivalues_insert = ( 

1565 # http://www.sqlite.org/releaselog/3_7_11.html 

1566 self.dbapi.sqlite_version_info 

1567 >= (3, 7, 11) 

1568 ) 

1569 # see http://www.sqlalchemy.org/trac/ticket/2568 

1570 # as well as http://www.sqlite.org/src/info/600482d161 

1571 self._broken_fk_pragma_quotes = self.dbapi.sqlite_version_info < ( 

1572 3, 

1573 6, 

1574 14, 

1575 ) 

1576 

1577 _isolation_lookup = {"READ UNCOMMITTED": 1, "SERIALIZABLE": 0} 

1578 

1579 def set_isolation_level(self, connection, level): 

1580 try: 

1581 isolation_level = self._isolation_lookup[level.replace("_", " ")] 

1582 except KeyError as err: 

1583 util.raise_( 

1584 exc.ArgumentError( 

1585 "Invalid value '%s' for isolation_level. " 

1586 "Valid isolation levels for %s are %s" 

1587 % (level, self.name, ", ".join(self._isolation_lookup)) 

1588 ), 

1589 replace_context=err, 

1590 ) 

1591 cursor = connection.cursor() 

1592 cursor.execute("PRAGMA read_uncommitted = %d" % isolation_level) 

1593 cursor.close() 

1594 

1595 def get_isolation_level(self, connection): 

1596 cursor = connection.cursor() 

1597 cursor.execute("PRAGMA read_uncommitted") 

1598 res = cursor.fetchone() 

1599 if res: 

1600 value = res[0] 

1601 else: 

1602 # http://www.sqlite.org/changes.html#version_3_3_3 

1603 # "Optional READ UNCOMMITTED isolation (instead of the 

1604 # default isolation level of SERIALIZABLE) and 

1605 # table level locking when database connections 

1606 # share a common cache." 

1607 # pre-SQLite 3.3.0 default to 0 

1608 value = 0 

1609 cursor.close() 

1610 if value == 0: 

1611 return "SERIALIZABLE" 

1612 elif value == 1: 

1613 return "READ UNCOMMITTED" 

1614 else: 

1615 assert False, "Unknown isolation level %s" % value 

1616 
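The two isolation levels map onto the boolean `read_uncommitted` pragma exactly as `set_isolation_level` and `get_isolation_level` do above: `SERIALIZABLE` is `0` (the default) and `READ UNCOMMITTED` is `1`. Stdlib check:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
# default is 0 -> "SERIALIZABLE"
assert conn.execute("PRAGMA read_uncommitted").fetchone()[0] == 0
conn.execute("PRAGMA read_uncommitted = 1")
# now 1 -> "READ UNCOMMITTED"
assert conn.execute("PRAGMA read_uncommitted").fetchone()[0] == 1
```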

1617 def on_connect(self): 

1618 if self.isolation_level is not None: 

1619 

1620 def connect(conn): 

1621 self.set_isolation_level(conn, self.isolation_level) 

1622 

1623 return connect 

1624 else: 

1625 return None 

1626 

1627 @reflection.cache 

1628 def get_schema_names(self, connection, **kw): 

1629 s = "PRAGMA database_list" 

1630 dl = connection.execute(s) 

1631 

1632 return [db[1] for db in dl if db[1] != "temp"] 

1633 

1634 @reflection.cache 

1635 def get_table_names(self, connection, schema=None, **kw): 

1636 if schema is not None: 

1637 qschema = self.identifier_preparer.quote_identifier(schema) 

1638 master = "%s.sqlite_master" % qschema 

1639 else: 

1640 master = "sqlite_master" 

1641 s = ("SELECT name FROM %s " "WHERE type='table' ORDER BY name") % ( 

1642 master, 

1643 ) 

1644 rs = connection.execute(s) 

1645 return [row[0] for row in rs] 

1646 
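Table-name reflection is a plain query against `sqlite_master` (optionally schema-qualified for attached databases), as the method above shows:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE b (x INTEGER)")
conn.execute("CREATE TABLE a (y INTEGER)")
names = [
    row[0]
    for row in conn.execute(
        "SELECT name FROM sqlite_master WHERE type='table' ORDER BY name"
    )
]
assert names == ["a", "b"]
```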

1647 @reflection.cache 

1648 def get_temp_table_names(self, connection, **kw): 

1649 s = ( 

1650 "SELECT name FROM sqlite_temp_master " 

1651 "WHERE type='table' ORDER BY name " 

1652 ) 

1653 rs = connection.execute(s) 

1654 

1655 return [row[0] for row in rs] 

1656 

1657 @reflection.cache 

1658 def get_temp_view_names(self, connection, **kw): 

1659 s = ( 

1660 "SELECT name FROM sqlite_temp_master " 

1661 "WHERE type='view' ORDER BY name " 

1662 ) 

1663 rs = connection.execute(s) 

1664 

1665 return [row[0] for row in rs] 

1666 

1667 def has_table(self, connection, table_name, schema=None): 

1668 info = self._get_table_pragma( 

1669 connection, "table_info", table_name, schema=schema 

1670 ) 

1671 return bool(info) 

1672 
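`has_table` simply checks whether `PRAGMA table_info` returns any rows; the pragma yields one `(cid, name, type, notnull, dflt_value, pk)` row per column, and an empty result for a missing table rather than an error:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (x INTEGER NOT NULL DEFAULT 7)")
info = conn.execute("PRAGMA table_info(t)").fetchall()
assert info[0][1] == "x" and info[0][2] == "INTEGER"
assert info[0][3] == 1          # notnull flag
assert info[0][4] == "7"        # default is reported as SQL text
assert conn.execute("PRAGMA table_info(no_such_table)").fetchall() == []
```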

1673 @reflection.cache 

1674 def get_view_names(self, connection, schema=None, **kw): 

1675 if schema is not None: 

1676 qschema = self.identifier_preparer.quote_identifier(schema) 

1677 master = "%s.sqlite_master" % qschema 

1678 else: 

1679 master = "sqlite_master" 

1680 s = ("SELECT name FROM %s " "WHERE type='view' ORDER BY name") % ( 

1681 master, 

1682 ) 

1683 rs = connection.execute(s) 

1684 

1685 return [row[0] for row in rs] 

1686 

1687 @reflection.cache 

1688 def get_view_definition(self, connection, view_name, schema=None, **kw): 

1689 if schema is not None: 

1690 qschema = self.identifier_preparer.quote_identifier(schema) 

1691 master = "%s.sqlite_master" % qschema 

1692 s = ("SELECT sql FROM %s WHERE name = ? AND type='view'") % ( 

1693 master, 

1694 ) 

1695 rs = connection.execute(s, (view_name,)) 

1696 else: 

1697 try: 

1698 s = ( 

1699 "SELECT sql FROM " 

1700 " (SELECT * FROM sqlite_master UNION ALL " 

1701 " SELECT * FROM sqlite_temp_master) " 

1702 "WHERE name = ? " 

1703 "AND type='view'" 

1704 ) 

1705 rs = connection.execute(s, (view_name,)) 

1706 except exc.DBAPIError: 

1707 s = ( 

1708 "SELECT sql FROM sqlite_master WHERE name = ? " 

1709 "AND type='view'" 

1710 ) 

1711 rs = connection.execute(s, (view_name,)) 

1712 

1713 result = rs.fetchall() 

1714 if result: 

1715 return result[0].sql 

1716 

1717 @reflection.cache 

1718 def get_columns(self, connection, table_name, schema=None, **kw): 

1719 pragma = "table_info" 

1720 # computed columns are treated as hidden; they require table_xinfo 

1721 if self.server_version_info >= (3, 31): 

1722 pragma = "table_xinfo" 

1723 info = self._get_table_pragma( 

1724 connection, pragma, table_name, schema=schema 

1725 ) 

1726 columns = [] 

1727 tablesql = None 

1728 for row in info: 

1729 name = row[1] 

1730 type_ = row[2].upper() 

1731 nullable = not row[3] 

1732 default = row[4] 

1733 primary_key = row[5] 

1734 hidden = row[6] if pragma == "table_xinfo" else 0 

1735 

1736 # hidden has value 0 for normal columns, 1 for hidden columns, 

1737 # 2 for computed virtual columns and 3 for computed stored columns 

1738 # https://www.sqlite.org/src/info/069351b85f9a706f60d3e98fbc8aaf40c374356b967c0464aede30ead3d9d18b 

1739 if hidden == 1: 

1740 continue 

1741 

1742 generated = bool(hidden) 

1743 persisted = hidden == 3 

1744 

1745 if tablesql is None and generated: 

1746 tablesql = self._get_table_sql( 

1747 connection, table_name, schema, **kw 

1748 ) 

1749 

1750 columns.append( 

1751 self._get_column_info( 

1752 name, 

1753 type_, 

1754 nullable, 

1755 default, 

1756 primary_key, 

1757 generated, 

1758 persisted, 

1759 tablesql, 

1760 ) 

1761 ) 

1762 return columns 

1763 

1764 def _get_column_info( 

1765 self, 

1766 name, 

1767 type_, 

1768 nullable, 

1769 default, 

1770 primary_key, 

1771 generated, 

1772 persisted, 

1773 tablesql, 

1774 ): 

1775 

1776 if generated: 

1777 # the type of a column "cc INTEGER GENERATED ALWAYS AS (1 + 42)" 

1778 # somehow is "INTEGER GENERATED ALWAYS" 

1779 type_ = re.sub("generated", "", type_, flags=re.IGNORECASE) 

1780 type_ = re.sub("always", "", type_, flags=re.IGNORECASE).strip() 

1781 

1782 coltype = self._resolve_type_affinity(type_) 

1783 

1784 if default is not None: 

1785 default = util.text_type(default) 

1786 

1787 colspec = { 

1788 "name": name, 

1789 "type": coltype, 

1790 "nullable": nullable, 

1791 "default": default, 

1792 "autoincrement": "auto", 

1793 "primary_key": primary_key, 

1794 } 

1795 if generated: 

1796 sqltext = "" 

1797 if tablesql: 

1798 pattern = r"[^,]*\s+AS\s+\(([^,]*)\)\s*(?:virtual|stored)?" 

1799 match = re.search( 

1800 re.escape(name) + pattern, tablesql, re.IGNORECASE 

1801 ) 

1802 if match: 

1803 sqltext = match.group(1) 

1804 colspec["computed"] = {"sqltext": sqltext, "persisted": persisted} 

1805 return colspec 

1806 

1807 def _resolve_type_affinity(self, type_): 

1808 """Return a data type from a reflected column, using affinity tules. 

1809 

1810 SQLite's goal for universal compatibility introduces some complexity 

1811 during reflection, as a column's defined type might not actually be a 

1812 type that SQLite understands - or indeed, may not be defined *at all*. 

1813 Internally, SQLite handles this with a 'data type affinity' for each 

1814 column definition, mapping to one of 'TEXT', 'NUMERIC', 'INTEGER', 

1815 'REAL', or 'NONE' (raw bits). The algorithm that determines this is 

1816 listed in http://www.sqlite.org/datatype3.html section 2.1. 

1817 

1818 This method allows SQLAlchemy to support that algorithm, while still 

1819 providing access to smarter reflection utilities by recognizing 

1820 column definitions that SQLite only supports through affinity (like 

1821 DATE and DOUBLE). 

1822 

1823 """ 

1824 match = re.match(r"([\w ]+)(\(.*?\))?", type_) 

1825 if match: 

1826 coltype = match.group(1) 

1827 args = match.group(2) 

1828 else: 

1829 coltype = "" 

1830 args = "" 

1831 

1832 if coltype in self.ischema_names: 

1833 coltype = self.ischema_names[coltype] 

1834 elif "INT" in coltype: 

1835 coltype = sqltypes.INTEGER 

1836 elif "CHAR" in coltype or "CLOB" in coltype or "TEXT" in coltype: 

1837 coltype = sqltypes.TEXT 

1838 elif "BLOB" in coltype or not coltype: 

1839 coltype = sqltypes.NullType 

1840 elif "REAL" in coltype or "FLOA" in coltype or "DOUB" in coltype: 

1841 coltype = sqltypes.REAL 

1842 else: 

1843 coltype = sqltypes.NUMERIC 

1844 

1845 if args is not None: 

1846 args = re.findall(r"(\d+)", args) 

1847 try: 

1848 coltype = coltype(*[int(a) for a in args]) 

1849 except TypeError: 

1850 util.warn( 

1851 "Could not instantiate type %s with " 

1852 "reflected arguments %s; using no arguments." 

1853 % (coltype, args) 

1854 ) 

1855 coltype = coltype() 

1856 else: 

1857 coltype = coltype() 

1858 

1859 return coltype 

1860 
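The substring checks above follow SQLite's declared-type affinity precedence (datatype3 docs, section 2.1). A standalone sketch of just that precedence, separate from the `ischema_names` lookup the dialect tries first:

```python
def resolve_affinity(decl):
    """Return SQLite's type affinity for a declared column type."""
    decl = decl.upper()
    if "INT" in decl:
        return "INTEGER"
    if "CHAR" in decl or "CLOB" in decl or "TEXT" in decl:
        return "TEXT"
    if "BLOB" in decl or not decl:
        return "NONE"        # the dialect maps this case to NullType
    if "REAL" in decl or "FLOA" in decl or "DOUB" in decl:
        return "REAL"
    return "NUMERIC"

assert resolve_affinity("BIGINT") == "INTEGER"
assert resolve_affinity("VARCHAR(20)") == "TEXT"
assert resolve_affinity("DOUBLE PRECISION") == "REAL"
assert resolve_affinity("") == "NONE"
assert resolve_affinity("DECIMAL(10,5)") == "NUMERIC"
```

The order matters: `"INT"` is checked first, which is why e.g. `POINT` gets INTEGER affinity in SQLite itself.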

1861 @reflection.cache 

1862 def get_pk_constraint(self, connection, table_name, schema=None, **kw): 

1863 constraint_name = None 

1864 table_data = self._get_table_sql(connection, table_name, schema=schema) 

1865 if table_data: 

1866 PK_PATTERN = r"CONSTRAINT (\w+) PRIMARY KEY" 

1867 result = re.search(PK_PATTERN, table_data, re.I) 

1868 constraint_name = result.group(1) if result else None 

1869 

1870 cols = self.get_columns(connection, table_name, schema, **kw) 

1871 cols.sort(key=lambda col: col.get("primary_key")) 

1872 pkeys = [] 

1873 for col in cols: 

1874 if col["primary_key"]: 

1875 pkeys.append(col["name"]) 

1876 

1877 return {"constrained_columns": pkeys, "name": constraint_name} 

1878 
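No pragma exposes the primary key's constraint name, which is why the method above scrapes it from the saved CREATE TABLE text with `PK_PATTERN`:

```python
import re

PK_PATTERN = r"CONSTRAINT (\w+) PRIMARY KEY"
ddl = "CREATE TABLE t (id INTEGER, CONSTRAINT pk_t PRIMARY KEY (id))"
match = re.search(PK_PATTERN, ddl, re.I)
assert match.group(1) == "pk_t"

# an unnamed (inline) primary key yields no constraint name
anon = "CREATE TABLE t (id INTEGER PRIMARY KEY)"
assert re.search(PK_PATTERN, anon, re.I) is None
```

SQLite stores the DDL verbatim in `sqlite_master.sql`, so the case-insensitive flag is what makes this robust against however the user typed it.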

1879 @reflection.cache 

1880 def get_foreign_keys(self, connection, table_name, schema=None, **kw): 

1881 # sqlite makes this *extremely difficult*. 

1882 # First, use the pragma to get the actual FKs. 

1883 pragma_fks = self._get_table_pragma( 

1884 connection, "foreign_key_list", table_name, schema=schema 

1885 ) 

1886 
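The rows the loop below consumes come from `PRAGMA foreign_key_list`, which yields one `(id, seq, table, from, to, on_update, on_delete, match)` row per constrained column pair; `to` is NULL when the DDL named no referred column:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE parent (id INTEGER PRIMARY KEY)")
# no referred column named: the constraint implicitly targets parent's PK
conn.execute("CREATE TABLE child (pid INTEGER REFERENCES parent)")
row = conn.execute("PRAGMA foreign_key_list(child)").fetchone()
assert row[2] == "parent" and row[3] == "pid"
assert row[4] is None
```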

1887 fks = {} 

1888 

1889 for row in pragma_fks: 

1890 (numerical_id, rtbl, lcol, rcol) = (row[0], row[2], row[3], row[4]) 

1891 

1892 if not rcol: 

1893 # no referred column, which means it was not named in the 

1894 # original DDL. The referred columns of the foreign key 

1895 # constraint are therefore the primary key of the referred 

1896 # table. 

1897 referred_pk = self.get_pk_constraint( 

1898 connection, rtbl, schema=schema, **kw 

1899 ) 

1900 # note that if the table doesn't exist, we still get back a record, 

1901 # just it has no columns in it 

1902 referred_columns = referred_pk["constrained_columns"] 

1903 else: 

1904 # note we use this list only if this is the first column 

1905 # in the constraint. for subsequent columns we ignore the 

1906 # list and append "rcol" if present. 

1907 referred_columns = [] 

1908 

1909 if self._broken_fk_pragma_quotes: 

1910 rtbl = re.sub(r"^[\"\[`\']|[\"\]`\']$", "", rtbl) 

1911 

1912 if numerical_id in fks: 

1913 fk = fks[numerical_id] 

1914 else: 

1915 fk = fks[numerical_id] = { 

1916 "name": None, 

1917 "constrained_columns": [], 

1918 "referred_schema": schema, 

1919 "referred_table": rtbl, 

1920 "referred_columns": referred_columns, 

1921 "options": {}, 

1922 } 

1923 fks[numerical_id] = fk 

1924 

1925 fk["constrained_columns"].append(lcol) 

1926 

1927 if rcol: 

1928 fk["referred_columns"].append(rcol) 

1929 

1930 def fk_sig(constrained_columns, referred_table, referred_columns): 

1931 return ( 

1932 tuple(constrained_columns) 

1933 + (referred_table,) 

1934 + tuple(referred_columns) 

1935 ) 

1936 

1937 # then, parse the actual SQL and attempt to find DDL that matches 

1938 # the names as well. SQLite saves the DDL in whatever format 

1939 # it was typed in, so we need to be liberal here. 

1940 

1941 keys_by_signature = dict( 

1942 ( 

1943 fk_sig( 

1944 fk["constrained_columns"], 

1945 fk["referred_table"], 

1946 fk["referred_columns"], 

1947 ), 

1948 fk, 

1949 ) 

1950 for fk in fks.values() 

1951 ) 

1952 

1953 table_data = self._get_table_sql(connection, table_name, schema=schema) 

1954 if table_data is None: 

1955 # system tables, etc. 

1956 return [] 

1957 

1958 def parse_fks(): 

1959 FK_PATTERN = ( 

1960 r"(?:CONSTRAINT (\w+) +)?" 

1961 r"FOREIGN KEY *\( *(.+?) *\) +" 

1962 r'REFERENCES +(?:(?:"(.+?)")|([a-z0-9_]+)) *\((.+?)\) *' 

1963 r"((?:ON (?:DELETE|UPDATE) " 

1964 r"(?:SET NULL|SET DEFAULT|CASCADE|RESTRICT|NO ACTION) *)*)" 

1965 ) 

1966 for match in re.finditer(FK_PATTERN, table_data, re.I): 

1967 ( 

1968 constraint_name, 

1969 constrained_columns, 

1970 referred_quoted_name, 

1971 referred_name, 

1972 referred_columns, 

1973 onupdatedelete, 

1974 ) = match.group(1, 2, 3, 4, 5, 6) 

1975 constrained_columns = list( 

1976 self._find_cols_in_sig(constrained_columns) 

1977 ) 

1978 if not referred_columns: 

1979 referred_columns = constrained_columns 

1980 else: 

1981 referred_columns = list( 

1982 self._find_cols_in_sig(referred_columns) 

1983 ) 

1984 referred_name = referred_quoted_name or referred_name 

1985 options = {} 

1986 

1987 for token in re.split(r" *\bON\b *", onupdatedelete.upper()): 

1988 if token.startswith("DELETE"): 

1989 options["ondelete"] = token[6:].strip() 

1990 elif token.startswith("UPDATE"): 

1991 options["onupdate"] = token[6:].strip() 

1992 yield ( 

1993 constraint_name, 

1994 constrained_columns, 

1995 referred_name, 

1996 referred_columns, 

1997 options, 

1998 ) 

1999 

2000 fkeys = [] 

2001 

2002 for ( 

2003 constraint_name, 

2004 constrained_columns, 

2005 referred_name, 

2006 referred_columns, 

2007 options, 

2008 ) in parse_fks(): 

2009 sig = fk_sig(constrained_columns, referred_name, referred_columns) 

2010 if sig not in keys_by_signature: 

2011 util.warn( 

2012 "WARNING: SQL-parsed foreign key constraint " 

2013 "'%s' could not be located in PRAGMA " 

2014 "foreign_keys for table %s" % (sig, table_name) 

2015 ) 

2016 continue 

2017 key = keys_by_signature.pop(sig) 

2018 key["name"] = constraint_name 

2019 key["options"] = options 

2020 fkeys.append(key) 

2021 # assume the remainders are the unnamed, inline constraints; 

2022 # use them as-is, since it's extremely difficult to parse 

2023 # inline constraints 

2024 fkeys.extend(keys_by_signature.values()) 

2025 return fkeys 

2026 
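To illustrate the `FK_PATTERN` that `parse_fks()` applies above, here is a sketch run against a hypothetical CREATE TABLE statement; `group(3)` (the quoted-table-name alternative) is `None` here because the referred table name is unquoted.

```python
import re

# Same pattern as parse_fks() inside get_foreign_keys() above.
FK_PATTERN = (
    r"(?:CONSTRAINT (\w+) +)?"
    r"FOREIGN KEY *\( *(.+?) *\) +"
    r'REFERENCES +(?:(?:"(.+?)")|([a-z0-9_]+)) *\((.+?)\) *'
    r"((?:ON (?:DELETE|UPDATE) "
    r"(?:SET NULL|SET DEFAULT|CASCADE|RESTRICT|NO ACTION) *)*)"
)

# Hypothetical DDL with a named, single-column foreign key.
ddl = (
    "CREATE TABLE child (pid INTEGER, "
    "CONSTRAINT fk_p FOREIGN KEY (pid) "
    "REFERENCES parent (id) ON DELETE CASCADE)"
)
match = re.search(FK_PATTERN, ddl, re.I)
print(match.group(1, 2, 4, 5))  # ('fk_p', 'pid', 'parent', 'id')
print(match.group(6).strip())   # ON DELETE CASCADE
```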

2027 def _find_cols_in_sig(self, sig): 

2028 for match in re.finditer(r'(?:"(.+?)")|([a-z0-9_]+)', sig, re.I): 

2029 yield match.group(1) or match.group(2) 

2030 
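A standalone sketch of what `_find_cols_in_sig()` yields: the regex alternation picks out both quoted identifiers (which may contain spaces) and bare identifiers from a column signature.

```python
import re

def find_cols_in_sig(sig):
    # Mirrors _find_cols_in_sig() above: yields quoted and bare
    # identifiers, in order, from a signature such as '"a", b'.
    for match in re.finditer(r'(?:"(.+?)")|([a-z0-9_]+)', sig, re.I):
        yield match.group(1) or match.group(2)

print(list(find_cols_in_sig('"user id", name, "Email"')))
# ['user id', 'name', 'Email']
```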

2031 @reflection.cache 

2032 def get_unique_constraints( 

2033 self, connection, table_name, schema=None, **kw 

2034 ): 

2035 

2036 auto_index_by_sig = {} 

2037 for idx in self.get_indexes( 

2038 connection, 

2039 table_name, 

2040 schema=schema, 

2041 include_auto_indexes=True, 

2042 **kw 

2043 ): 

2044 if not idx["name"].startswith("sqlite_autoindex"): 

2045 continue 

2046 sig = tuple(idx["column_names"]) 

2047 auto_index_by_sig[sig] = idx 

2048 

2049 table_data = self._get_table_sql( 

2050 connection, table_name, schema=schema, **kw 

2051 ) 

2052 if not table_data: 

2053 return [] 

2054 

2055 unique_constraints = [] 

2056 

2057 def parse_uqs(): 

2058 UNIQUE_PATTERN = r'(?:CONSTRAINT "?(.+?)"? +)?UNIQUE *\((.+?)\)' 

2059 INLINE_UNIQUE_PATTERN = ( 

2060 r'(?:(".+?")|([a-z0-9]+)) ' r"+[a-z0-9_ ]+? +UNIQUE" 

2061 ) 

2062 

2063 for match in re.finditer(UNIQUE_PATTERN, table_data, re.I): 

2064 name, cols = match.group(1, 2) 

2065 yield name, list(self._find_cols_in_sig(cols)) 

2066 

2067 # we need to match inlines as well, as we seek to differentiate 

2068 # a UNIQUE constraint from a UNIQUE INDEX, even though these 

2069 # are kind of the same thing :) 

2070 for match in re.finditer(INLINE_UNIQUE_PATTERN, table_data, re.I): 

2071 cols = list( 

2072 self._find_cols_in_sig(match.group(1) or match.group(2)) 

2073 ) 

2074 yield None, cols 

2075 

2076 for name, cols in parse_uqs(): 

2077 sig = tuple(cols) 

2078 if sig in auto_index_by_sig: 

2079 auto_index_by_sig.pop(sig) 

2080 parsed_constraint = {"name": name, "column_names": cols} 

2081 unique_constraints.append(parsed_constraint) 

2082 # NOTE: auto_index_by_sig might not be empty here, 

2083 # the PRIMARY KEY may have an entry. 

2084 return unique_constraints 

2085 
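As a sketch, the `UNIQUE_PATTERN` used by `parse_uqs()` above captures both the optional constraint name and the raw column signature from a hypothetical table definition:

```python
import re

# Same pattern as parse_uqs() inside get_unique_constraints() above.
UNIQUE_PATTERN = r'(?:CONSTRAINT "?(.+?)"? +)?UNIQUE *\((.+?)\)'

ddl = 'CREATE TABLE t (a INT, b INT, CONSTRAINT uq_ab UNIQUE (a, b))'
match = re.search(UNIQUE_PATTERN, ddl, re.I)
print(match.group(1))  # uq_ab
print(match.group(2))  # a, b
```

The raw `group(2)` signature is then split into individual column names by `_find_cols_in_sig()`.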

2086 @reflection.cache 

2087 def get_check_constraints(self, connection, table_name, schema=None, **kw): 

2088 table_data = self._get_table_sql( 

2089 connection, table_name, schema=schema, **kw 

2090 ) 

2091 if not table_data: 

2092 return [] 

2093 

2094 CHECK_PATTERN = r"(?:CONSTRAINT (\w+) +)?" r"CHECK *\( *(.+) *\),? *" 

2095 check_constraints = [] 

2096 # NOTE: we aren't using re.S here because we actually are 

2097 # taking advantage of each CHECK constraint being all on one 

2098 # line in the table definition in order to delineate. This 

2099 # necessarily makes assumptions as to how the CREATE TABLE 

2100 # was emitted. 

2101 for match in re.finditer(CHECK_PATTERN, table_data, re.I): 

2102 check_constraints.append( 

2103 {"sqltext": match.group(2), "name": match.group(1)} 

2104 ) 

2105 

2106 return check_constraints 

2107 
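A sketch of `CHECK_PATTERN` against hypothetical multi-line DDL; because `re.S` is not used, `(.+)` cannot cross a newline, which is how each CHECK constraint is delineated one-per-line as the comment above describes.

```python
import re

# Same pattern as get_check_constraints() above.
CHECK_PATTERN = r"(?:CONSTRAINT (\w+) +)?" r"CHECK *\( *(.+) *\),? *"

ddl = (
    "CREATE TABLE t (\n"
    "    price INTEGER,\n"
    "    CONSTRAINT ck_price CHECK (price > 0)\n"
    ")"
)
for match in re.finditer(CHECK_PATTERN, ddl, re.I):
    print(match.group(1), match.group(2))  # ck_price price > 0
```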

2108 @reflection.cache 

2109 def get_indexes(self, connection, table_name, schema=None, **kw): 

2110 pragma_indexes = self._get_table_pragma( 

2111 connection, "index_list", table_name, schema=schema 

2112 ) 

2113 indexes = [] 

2114 

2115 include_auto_indexes = kw.pop("include_auto_indexes", False) 

2116 for row in pragma_indexes: 

2117 # ignore implicit primary key index. 

2118 # http://www.mail-archive.com/sqlite-users@sqlite.org/msg30517.html 

2119 if not include_auto_indexes and row[1].startswith( 

2120 "sqlite_autoindex" 

2121 ): 

2122 continue 

2123 indexes.append(dict(name=row[1], column_names=[], unique=row[2])) 

2124 

2125 # loop through the collected indexes to get the column names. 

2126 for idx in list(indexes): 

2127 pragma_index = self._get_table_pragma( 

2128 connection, "index_info", idx["name"] 

2129 ) 

2130 

2131 for row in pragma_index: 

2132 if row[2] is None: 

2133 util.warn( 

2134 "Skipped unsupported reflection of " 

2135 "expression-based index %s" % idx["name"] 

2136 ) 

2137 indexes.remove(idx) 

2138 break 

2139 else: 

2140 idx["column_names"].append(row[2]) 

2141 return indexes 

2142 
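The two PRAGMAs combined by `get_indexes()` can be sketched directly with the stdlib `sqlite3` driver: `index_list` supplies name and uniqueness, and `index_info` supplies the column names.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (a INTEGER, b INTEGER)")
conn.execute("CREATE UNIQUE INDEX ix_ab ON t (a, b)")

# index_list rows are (seq, name, unique, ...); index_info rows
# are (seqno, cid, column_name).
for row in conn.execute("PRAGMA index_list(t)"):
    name, unique = row[1], bool(row[2])
    cols = [r[2] for r in conn.execute("PRAGMA index_info(%s)" % name)]
    print(name, cols, unique)  # ix_ab ['a', 'b'] True
```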

2143 @reflection.cache 

2144 def _get_table_sql(self, connection, table_name, schema=None, **kw): 

2145 if schema: 

2146 schema_expr = "%s." % ( 

2147 self.identifier_preparer.quote_identifier(schema) 

2148 ) 

2149 else: 

2150 schema_expr = "" 

2151 try: 

2152 s = ( 

2153 "SELECT sql FROM " 

2154 " (SELECT * FROM %(schema)ssqlite_master UNION ALL " 

2155 " SELECT * FROM %(schema)ssqlite_temp_master) " 

2156 "WHERE name = ? " 

2157 "AND type = 'table'" % {"schema": schema_expr} 

2158 ) 

2159 rs = connection.execute(s, (table_name,)) 

2160 except exc.DBAPIError: 

2161 s = ( 

2162 "SELECT sql FROM %(schema)ssqlite_master " 

2163 "WHERE name = ? " 

2164 "AND type = 'table'" % {"schema": schema_expr} 

2165 ) 

2166 rs = connection.execute(s, (table_name,)) 

2167 return rs.scalar() 

2168 

2169 def _get_table_pragma(self, connection, pragma, table_name, schema=None): 

2170 quote = self.identifier_preparer.quote_identifier 

2171 if schema is not None: 

2172 statements = ["PRAGMA %s." % quote(schema)] 

2173 else: 

2174 # because PRAGMA looks in all attached databases if no schema 

2175 # given, need to specify "main" schema, however since we want 

2176 # 'temp' tables in the same namespace as 'main', need to run 

2177 # the PRAGMA twice 

2178 statements = ["PRAGMA main.", "PRAGMA temp."] 

2179 

2180 qtable = quote(table_name) 

2181 for statement in statements: 

2182 statement = "%s%s(%s)" % (statement, pragma, qtable) 

2183 cursor = connection.execute(statement) 

2184 if not cursor._soft_closed: 

2185 # work around SQLite issue whereby cursor.description 

2186 # is blank when PRAGMA returns no rows: 

2187 # http://www.sqlite.org/cvstrac/tktview?tn=1884 

2188 result = cursor.fetchall() 

2189 else: 

2190 result = [] 

2191 if result: 

2192 return result 

2193 else: 

2194 return []
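The two-pass schema lookup performed by `_get_table_pragma()` can be sketched with the stdlib `sqlite3` driver: a TEMP table is not visible under the "main" schema, so when the first PRAGMA returns no rows, the "temp" schema is consulted next.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
# A hypothetical TEMP table, which lives in the "temp" schema only.
conn.execute("CREATE TEMPORARY TABLE scratch (id INTEGER PRIMARY KEY)")

for schema in ("main", "temp"):
    rows = conn.execute("PRAGMA %s.table_info(scratch)" % schema).fetchall()
    if rows:
        print(schema, [row[1] for row in rows])  # temp ['id']
        break
```

An unqualified `PRAGMA table_info(...)` would also search attached databases, which is why the dialect qualifies the schema explicitly and runs the PRAGMA twice.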