Coverage for /pythoncovmergedfiles/medio/medio/usr/local/lib/python3.11/site-packages/sqlalchemy/dialects/mssql/pyodbc.py: 39%


# dialects/mssql/pyodbc.py
# Copyright (C) 2005-2025 the SQLAlchemy authors and contributors
# <see AUTHORS file>
#
# This module is part of SQLAlchemy and is released under
# the MIT License: https://www.opensource.org/licenses/mit-license.php
# mypy: ignore-errors

r"""
.. dialect:: mssql+pyodbc
    :name: PyODBC
    :dbapi: pyodbc
    :connectstring: mssql+pyodbc://<username>:<password>@<dsnname>
    :url: https://pypi.org/project/pyodbc/

Connecting to PyODBC
--------------------

The URL here is to be translated to PyODBC connection strings, as
detailed in `ConnectionStrings <https://code.google.com/p/pyodbc/wiki/ConnectionStrings>`_.

DSN Connections
^^^^^^^^^^^^^^^

A DSN connection in ODBC means that a pre-existing ODBC datasource is
configured on the client machine. The application then specifies the name
of this datasource, which encompasses details such as the specific ODBC driver
in use as well as the network address of the database. Assuming a datasource
is configured on the client, a basic DSN-based connection looks like::

    engine = create_engine("mssql+pyodbc://scott:tiger@some_dsn")

The above URL will pass the following connection string to PyODBC:

.. sourcecode:: text

    DSN=some_dsn;UID=scott;PWD=tiger

If the username and password are omitted, the DSN form will also add
the ``Trusted_Connection=yes`` directive to the ODBC string.
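The two DSN forms described above can be sketched as a small helper. This is only an illustration of the documented behavior, not the dialect's actual implementation; the function name is hypothetical:

```python
def dsn_connect_string(dsn, username=None, password=None):
    # Hypothetical sketch of the rule described above: with credentials,
    # UID/PWD are emitted; without them, Trusted_Connection=yes is added.
    parts = [f"DSN={dsn}"]
    if username:
        parts.append(f"UID={username}")
        parts.append(f"PWD={password}")
    else:
        parts.append("Trusted_Connection=yes")
    return ";".join(parts)


print(dsn_connect_string("some_dsn", "scott", "tiger"))
# DSN=some_dsn;UID=scott;PWD=tiger
print(dsn_connect_string("some_dsn"))
# DSN=some_dsn;Trusted_Connection=yes
```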

Hostname Connections
^^^^^^^^^^^^^^^^^^^^

Hostname-based connections are also supported by pyodbc. These are often
easier to use than a DSN and have the additional advantage that the specific
database name to connect towards may be specified locally in the URL, rather
than it being fixed as part of a datasource configuration.

When using a hostname connection, the driver name must also be specified in the
query parameters of the URL. As these names usually have spaces in them, the
name must be URL encoded which means using plus signs for spaces::

    engine = create_engine(
        "mssql+pyodbc://scott:tiger@myhost:port/databasename?driver=ODBC+Driver+17+for+SQL+Server"
    )

The ``driver`` keyword is significant to the pyodbc dialect and must be
specified in lowercase.
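The plus-sign encoding of the driver name is ordinary URL form-encoding, which can be produced with the standard library rather than typed by hand:

```python
from urllib.parse import quote_plus

# spaces in the ODBC driver name become plus signs in the URL query string
driver = quote_plus("ODBC Driver 17 for SQL Server")
print(driver)
# ODBC+Driver+17+for+SQL+Server
```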

Any other names passed in the query string are passed through in the pyodbc
connect string, such as ``authentication``, ``TrustServerCertificate``, etc.
Multiple keyword arguments must be separated by an ampersand (``&``); these
will be translated to semicolons when the pyodbc connect string is generated
internally::

    e = create_engine(
        "mssql+pyodbc://scott:tiger@mssql2017:1433/test?"
        "driver=ODBC+Driver+18+for+SQL+Server&TrustServerCertificate=yes"
        "&authentication=ActiveDirectoryIntegrated"
    )

The equivalent URL can be constructed using :class:`_sa.engine.URL`::

    from sqlalchemy.engine import URL

    connection_url = URL.create(
        "mssql+pyodbc",
        username="scott",
        password="tiger",
        host="mssql2017",
        port=1433,
        database="test",
        query={
            "driver": "ODBC Driver 18 for SQL Server",
            "TrustServerCertificate": "yes",
            "authentication": "ActiveDirectoryIntegrated",
        },
    )
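As a rough illustration of the ampersand-to-semicolon translation described above (this is not the dialect's actual code), the ampersand-separated query entries end up as semicolon-separated ODBC keywords:

```python
from urllib.parse import parse_qsl

# the URL query portion, ampersand-separated, as in the example above
query_string = (
    "driver=ODBC+Driver+18+for+SQL+Server"
    "&TrustServerCertificate=yes"
    "&authentication=ActiveDirectoryIntegrated"
)

# parse_qsl also decodes the plus signs back into spaces
pairs = dict(parse_qsl(query_string))

# the pairs become semicolon-separated keywords in the pyodbc connect string
odbc_fragment = ";".join(f"{k}={v}" for k, v in pairs.items())
print(odbc_fragment)
```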

Pass through exact Pyodbc string
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

A PyODBC connection string can also be sent in pyodbc's format directly, as
specified in `the PyODBC documentation
<https://github.com/mkleehammer/pyodbc/wiki/Connecting-to-databases>`_,
using the parameter ``odbc_connect``. A :class:`_sa.engine.URL` object
can help make this easier::

    from sqlalchemy.engine import URL

    connection_string = "DRIVER={SQL Server Native Client 10.0};SERVER=dagger;DATABASE=test;UID=user;PWD=password"
    connection_url = URL.create(
        "mssql+pyodbc", query={"odbc_connect": connection_string}
    )

    engine = create_engine(connection_url)

109.. _mssql_pyodbc_access_tokens: 

110 

111Connecting to databases with access tokens 

112^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ 

113 

114Some database servers are set up to only accept access tokens for login. For 

115example, SQL Server allows the use of Azure Active Directory tokens to connect 

116to databases. This requires creating a credential object using the 

117``azure-identity`` library. More information about the authentication step can be 

118found in `Microsoft's documentation 

119<https://docs.microsoft.com/en-us/azure/developer/python/azure-sdk-authenticate?tabs=bash>`_. 

120 

121After getting an engine, the credentials need to be sent to ``pyodbc.connect`` 

122each time a connection is requested. One way to do this is to set up an event 

123listener on the engine that adds the credential token to the dialect's connect 

124call. This is discussed more generally in :ref:`engines_dynamic_tokens`. For 

125SQL Server in particular, this is passed as an ODBC connection attribute with 

126a data structure `described by Microsoft 

127<https://docs.microsoft.com/en-us/sql/connect/odbc/using-azure-active-directory#authenticating-with-an-access-token>`_. 

128 

129The following code snippet will create an engine that connects to an Azure SQL 

130database using Azure credentials:: 

131 

132 import struct 

133 from sqlalchemy import create_engine, event 

134 from sqlalchemy.engine.url import URL 

135 from azure import identity 

136 

137 # Connection option for access tokens, as defined in msodbcsql.h 

138 SQL_COPT_SS_ACCESS_TOKEN = 1256 

139 TOKEN_URL = "https://database.windows.net/" # The token URL for any Azure SQL database 

140 

141 connection_string = "mssql+pyodbc://@my-server.database.windows.net/myDb?driver=ODBC+Driver+17+for+SQL+Server" 

142 

143 engine = create_engine(connection_string) 

144 

145 azure_credentials = identity.DefaultAzureCredential() 

146 

147 

148 @event.listens_for(engine, "do_connect") 

149 def provide_token(dialect, conn_rec, cargs, cparams): 

150 # remove the "Trusted_Connection" parameter that SQLAlchemy adds 

151 cargs[0] = cargs[0].replace(";Trusted_Connection=Yes", "") 

152 

153 # create token credential 

154 raw_token = azure_credentials.get_token(TOKEN_URL).token.encode( 

155 "utf-16-le" 

156 ) 

157 token_struct = struct.pack( 

158 f"<I{len(raw_token)}s", len(raw_token), raw_token 

159 ) 

160 

161 # apply it to keyword arguments 

162 cparams["attrs_before"] = {SQL_COPT_SS_ACCESS_TOKEN: token_struct} 

163 

164.. tip:: 

165 

166 The ``Trusted_Connection`` token is currently added by the SQLAlchemy 

167 pyodbc dialect when no username or password is present. This needs 

168 to be removed per Microsoft's 

169 `documentation for Azure access tokens 

170 <https://docs.microsoft.com/en-us/sql/connect/odbc/using-azure-active-directory#authenticating-with-an-access-token>`_, 

171 stating that a connection string when using an access token must not contain 

172 ``UID``, ``PWD``, ``Authentication`` or ``Trusted_Connection`` parameters. 
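The ``struct.pack`` step above can be exercised on its own; the token text here is a made-up placeholder standing in for a real azure-identity token:

```python
import struct

# made-up placeholder token; a real one comes from azure-identity
raw_token = "example-access-token".encode("utf-16-le")

# 4-byte little-endian length prefix followed by the UTF-16-LE token bytes,
# matching the access-token structure described by Microsoft
token_struct = struct.pack(f"<I{len(raw_token)}s", len(raw_token), raw_token)

print(len(raw_token), len(token_struct))
```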

.. _azure_synapse_ignore_no_transaction_on_rollback:

Avoiding transaction-related exceptions on Azure Synapse Analytics
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

Azure Synapse Analytics has a significant difference in its transaction
handling compared to plain SQL Server; in some cases an error within a Synapse
transaction can cause it to be arbitrarily terminated on the server side, which
then causes the DBAPI ``.rollback()`` method (as well as ``.commit()``) to
fail. The issue prevents the usual DBAPI contract of allowing ``.rollback()``
to pass silently if no transaction is present as the driver does not expect
this condition. The symptom of this failure is an exception with a message
resembling 'No corresponding transaction found. (111214)' when attempting to
emit a ``.rollback()`` after an operation had a failure of some kind.

This specific case can be handled by passing ``ignore_no_transaction_on_rollback=True`` to
the SQL Server dialect via the :func:`_sa.create_engine` function as follows::

    engine = create_engine(
        connection_url, ignore_no_transaction_on_rollback=True
    )

Using the above parameter, the dialect will catch ``ProgrammingError``
exceptions raised during ``connection.rollback()`` and emit a warning,
rather than raising, when the error message contains code ``111214``.

.. versionadded:: 1.4.40 Added the
    ``ignore_no_transaction_on_rollback=True`` parameter.

Enable autocommit for Azure SQL Data Warehouse (DW) connections
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

Azure SQL Data Warehouse does not support transactions,
and that can cause problems with SQLAlchemy's "autobegin" (and implicit
commit/rollback) behavior. We can avoid these problems by enabling autocommit
at both the pyodbc and engine levels::

    connection_url = sa.engine.URL.create(
        "mssql+pyodbc",
        username="scott",
        password="tiger",
        host="dw.azure.example.com",
        database="mydb",
        query={
            "driver": "ODBC Driver 17 for SQL Server",
            "autocommit": "True",
        },
    )

    engine = create_engine(connection_url).execution_options(
        isolation_level="AUTOCOMMIT"
    )

Avoiding sending large string parameters as TEXT/NTEXT
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

By default, for historical reasons, Microsoft's ODBC drivers for SQL Server
send long string parameters (greater than 4000 SBCS characters or 2000 Unicode
characters) as TEXT/NTEXT values. TEXT and NTEXT have been deprecated for many
years and are starting to cause compatibility issues with newer versions of
SQL Server / Azure. For example, see `this
issue <https://github.com/mkleehammer/pyodbc/issues/835>`_.

Starting with ODBC Driver 18 for SQL Server we can override the legacy
behavior and pass long strings as varchar(max)/nvarchar(max) using the
``LongAsMax=Yes`` connection string parameter::

    connection_url = sa.engine.URL.create(
        "mssql+pyodbc",
        username="scott",
        password="tiger",
        host="mssqlserver.example.com",
        database="mydb",
        query={
            "driver": "ODBC Driver 18 for SQL Server",
            "LongAsMax": "Yes",
        },
    )

Pyodbc Pooling / connection close behavior
------------------------------------------

PyODBC uses internal `pooling
<https://github.com/mkleehammer/pyodbc/wiki/The-pyodbc-Module#pooling>`_ by
default, which means connections will be longer lived than they are within
SQLAlchemy itself. As SQLAlchemy has its own pooling behavior, it is often
preferable to disable pyodbc's pooling. This can only be done
globally at the PyODBC module level, **before** any connections are made::

    import pyodbc

    pyodbc.pooling = False

    # don't use the engine before pooling is set to False
    engine = create_engine("mssql+pyodbc://user:pass@dsn")

If this variable is left at its default value of ``True``, **the application
will continue to maintain active database connections**, even when the
SQLAlchemy engine itself fully discards a connection or if the engine is
disposed.

.. seealso::

    `pooling <https://github.com/mkleehammer/pyodbc/wiki/The-pyodbc-Module#pooling>`_ -
    in the PyODBC documentation.

Driver / Unicode Support
-------------------------

PyODBC works best with Microsoft ODBC drivers, particularly in the area
of Unicode support on both Python 2 and Python 3.

Using the FreeTDS ODBC drivers on Linux or OSX with PyODBC is **not**
recommended; there have historically been many Unicode-related issues
in this area, including before Microsoft offered ODBC drivers for Linux
and OSX. Now that Microsoft offers drivers for all platforms, these are
the recommended drivers for PyODBC support. FreeTDS remains relevant for
non-ODBC drivers such as pymssql, where it works very well.


Rowcount Support
----------------

Previous limitations with the SQLAlchemy ORM's "versioned rows" feature with
Pyodbc have been resolved as of SQLAlchemy 2.0.5. See the notes at
:ref:`mssql_rowcount_versioning`.

.. _mssql_pyodbc_fastexecutemany:

Fast Executemany Mode
---------------------

The PyODBC driver includes support for a "fast executemany" mode of execution
which greatly reduces round trips for a DBAPI ``executemany()`` call when using
Microsoft ODBC drivers, for **limited size batches that fit in memory**. The
feature is enabled by setting the attribute ``.fast_executemany`` on the DBAPI
cursor when an executemany call is to be used. The SQLAlchemy PyODBC SQL
Server dialect supports this parameter by passing the
``fast_executemany`` parameter to
:func:`_sa.create_engine`, when using the **Microsoft ODBC driver only**::

    engine = create_engine(
        "mssql+pyodbc://scott:tiger@mssql2017:1433/test?driver=ODBC+Driver+17+for+SQL+Server",
        fast_executemany=True,
    )

.. versionchanged:: 2.0.9 - the ``fast_executemany`` parameter now has its
    intended effect of this PyODBC feature taking effect for all INSERT
    statements that are executed with multiple parameter sets, which don't
    include RETURNING. Previously, SQLAlchemy 2.0's :term:`insertmanyvalues`
    feature would cause ``fast_executemany`` to not be used in most cases
    even if specified.

.. versionadded:: 1.3

.. seealso::

    `fast executemany <https://github.com/mkleehammer/pyodbc/wiki/Features-beyond-the-DB-API#fast_executemany>`_
    - on github

.. _mssql_pyodbc_setinputsizes:

Setinputsizes Support
-----------------------

As of version 2.0, the pyodbc ``cursor.setinputsizes()`` method is used for
all statement executions, except for ``cursor.executemany()`` calls when
``fast_executemany=True``, where it is not supported (assuming
:ref:`insertmanyvalues <engine_insertmanyvalues>` is kept enabled,
"fastexecutemany" will not take place for INSERT statements in any case).

The use of ``cursor.setinputsizes()`` can be disabled by passing
``use_setinputsizes=False`` to :func:`_sa.create_engine`.

When ``use_setinputsizes`` is left at its default of ``True``, the
specific per-type symbols passed to ``cursor.setinputsizes()`` can be
programmatically customized using the :meth:`.DialectEvents.do_setinputsizes`
hook. See that method for usage examples.

.. versionchanged:: 2.0 The mssql+pyodbc dialect now defaults to using
    ``use_setinputsizes=True`` for all statement executions with the exception of
    ``cursor.executemany()`` calls when ``fast_executemany=True``. The behavior can
    be turned off by passing ``use_setinputsizes=False`` to
    :func:`_sa.create_engine`.

"""  # noqa


import datetime
import decimal
import re
import struct

from .base import _MSDateTime
from .base import _MSUnicode
from .base import _MSUnicodeText
from .base import BINARY
from .base import DATETIMEOFFSET
from .base import MSDialect
from .base import MSExecutionContext
from .base import VARBINARY
from .json import JSON as _MSJson
from .json import JSONIndexType as _MSJsonIndexType
from .json import JSONPathType as _MSJsonPathType
from ... import exc
from ... import types as sqltypes
from ... import util
from ...connectors.pyodbc import PyODBCConnector
from ...engine import cursor as _cursor


class _ms_numeric_pyodbc:
    """Turns Decimals with adjusted() < 0 or > 7 into strings.

    The routines here are needed for older pyodbc versions
    as well as current mxODBC versions.

    """

    def bind_processor(self, dialect):
        super_process = super().bind_processor(dialect)

        if not dialect._need_decimal_fix:
            return super_process

        def process(value):
            if self.asdecimal and isinstance(value, decimal.Decimal):
                adjusted = value.adjusted()
                if adjusted < 0:
                    return self._small_dec_to_string(value)
                elif adjusted > 7:
                    return self._large_dec_to_string(value)

            if super_process:
                return super_process(value)
            else:
                return value

        return process

    # these routines needed for older versions of pyodbc.
    # as of 2.1.8 this logic is integrated.

    def _small_dec_to_string(self, value):
        return "%s0.%s%s" % (
            (value < 0 and "-" or ""),
            "0" * (abs(value.adjusted()) - 1),
            "".join([str(nint) for nint in value.as_tuple()[1]]),
        )

    def _large_dec_to_string(self, value):
        _int = value.as_tuple()[1]
        if "E" in str(value):
            result = "%s%s%s" % (
                (value < 0 and "-" or ""),
                "".join([str(s) for s in _int]),
                "0" * (value.adjusted() - (len(_int) - 1)),
            )
        else:
            if (len(_int) - 1) > value.adjusted():
                result = "%s%s.%s" % (
                    (value < 0 and "-" or ""),
                    "".join([str(s) for s in _int][0 : value.adjusted() + 1]),
                    "".join([str(s) for s in _int][value.adjusted() + 1 :]),
                )
            else:
                result = "%s%s" % (
                    (value < 0 and "-" or ""),
                    "".join([str(s) for s in _int][0 : value.adjusted() + 1]),
                )
        return result


class _MSNumeric_pyodbc(_ms_numeric_pyodbc, sqltypes.Numeric):
    pass


class _MSFloat_pyodbc(_ms_numeric_pyodbc, sqltypes.Float):
    pass


class _ms_binary_pyodbc:
    """Wraps binary values in dialect-specific Binary wrapper.
    If the value is null, return a pyodbc-specific BinaryNull
    object to prevent pyODBC [and FreeTDS] from defaulting binary
    NULL types to SQLWCHAR and causing implicit conversion errors.
    """

    def bind_processor(self, dialect):
        if dialect.dbapi is None:
            return None

        DBAPIBinary = dialect.dbapi.Binary

        def process(value):
            if value is not None:
                return DBAPIBinary(value)
            else:
                # pyodbc-specific
                return dialect.dbapi.BinaryNull

        return process


class _ODBCDateTimeBindProcessor:
    """Add bind processors to handle datetimeoffset behaviors"""

    has_tz = False

    def bind_processor(self, dialect):
        def process(value):
            if value is None:
                return None
            elif isinstance(value, str):
                # if a string was passed directly, allow it through
                return value
            elif not value.tzinfo or (not self.timezone and not self.has_tz):
                # for DateTime(timezone=False)
                return value
            else:
                # for DATETIMEOFFSET or DateTime(timezone=True)
                #
                # Convert to string format required by T-SQL
                dto_string = value.strftime("%Y-%m-%d %H:%M:%S.%f %z")
                # offset needs a colon, e.g., -0700 -> -07:00
                # "UTC offset in the form (+-)HHMM[SS[.ffffff]]"
                # backend currently rejects seconds / fractional seconds
                dto_string = re.sub(
                    r"([\+\-]\d{2})([\d\.]+)$", r"\1:\2", dto_string
                )
                return dto_string

        return process


class _ODBCDateTime(_ODBCDateTimeBindProcessor, _MSDateTime):
    pass


class _ODBCDATETIMEOFFSET(_ODBCDateTimeBindProcessor, DATETIMEOFFSET):
    has_tz = True


class _VARBINARY_pyodbc(_ms_binary_pyodbc, VARBINARY):
    pass


class _BINARY_pyodbc(_ms_binary_pyodbc, BINARY):
    pass


class _String_pyodbc(sqltypes.String):
    def get_dbapi_type(self, dbapi):
        if self.length in (None, "max") or self.length >= 2000:
            return (dbapi.SQL_VARCHAR, 0, 0)
        else:
            return dbapi.SQL_VARCHAR


class _Unicode_pyodbc(_MSUnicode):
    def get_dbapi_type(self, dbapi):
        if self.length in (None, "max") or self.length >= 2000:
            return (dbapi.SQL_WVARCHAR, 0, 0)
        else:
            return dbapi.SQL_WVARCHAR


class _UnicodeText_pyodbc(_MSUnicodeText):
    def get_dbapi_type(self, dbapi):
        if self.length in (None, "max") or self.length >= 2000:
            return (dbapi.SQL_WVARCHAR, 0, 0)
        else:
            return dbapi.SQL_WVARCHAR


class _JSON_pyodbc(_MSJson):
    def get_dbapi_type(self, dbapi):
        return (dbapi.SQL_WVARCHAR, 0, 0)


class _JSONIndexType_pyodbc(_MSJsonIndexType):
    def get_dbapi_type(self, dbapi):
        return dbapi.SQL_WVARCHAR


class _JSONPathType_pyodbc(_MSJsonPathType):
    def get_dbapi_type(self, dbapi):
        return dbapi.SQL_WVARCHAR


class MSExecutionContext_pyodbc(MSExecutionContext):
    _embedded_scope_identity = False

    def pre_exec(self):
        """where appropriate, issue "select scope_identity()" in the same
        statement.

        Background on why "scope_identity()" is preferable to "@@identity":
        https://msdn.microsoft.com/en-us/library/ms190315.aspx

        Background on why we attempt to embed "scope_identity()" into the same
        statement as the INSERT:
        https://code.google.com/p/pyodbc/wiki/FAQs#How_do_I_retrieve_autogenerated/identity_values?

        """

        super().pre_exec()

        # don't embed the scope_identity select into an
        # "INSERT .. DEFAULT VALUES"
        if (
            self._select_lastrowid
            and self.dialect.use_scope_identity
            and len(self.parameters[0])
        ):
            self._embedded_scope_identity = True

            self.statement += "; select scope_identity()"

    def post_exec(self):
        if self._embedded_scope_identity:
            # Fetch the last inserted id from the manipulated statement
            # We may have to skip over a number of result sets with
            # no data (due to triggers, etc.)
            while True:
                try:
                    # fetchall() ensures the cursor is consumed
                    # without closing it (FreeTDS particularly)
                    rows = self.cursor.fetchall()
                except self.dialect.dbapi.Error:
                    # no way around this - nextset() consumes the previous set
                    # so we need to just keep flipping
                    self.cursor.nextset()
                else:
                    if not rows:
                        # async adapter drivers just return None here
                        self.cursor.nextset()
                        continue
                    row = rows[0]
                    break

            self._lastrowid = int(row[0])

            self.cursor_fetch_strategy = _cursor._NO_CURSOR_DML
        else:
            super().post_exec()


class MSDialect_pyodbc(PyODBCConnector, MSDialect):
    supports_statement_cache = True

    # note this parameter is no longer used by the ORM or default dialect
    # see #9414
    supports_sane_rowcount_returning = False

    execution_ctx_cls = MSExecutionContext_pyodbc

    colspecs = util.update_copy(
        MSDialect.colspecs,
        {
            sqltypes.Numeric: _MSNumeric_pyodbc,
            sqltypes.Float: _MSFloat_pyodbc,
            BINARY: _BINARY_pyodbc,
            # support DateTime(timezone=True)
            sqltypes.DateTime: _ODBCDateTime,
            DATETIMEOFFSET: _ODBCDATETIMEOFFSET,
            # SQL Server dialect has a VARBINARY that is just to support
            # "deprecate_large_types" w/ VARBINARY(max), but also we must
            # handle the usual SQL standard VARBINARY
            VARBINARY: _VARBINARY_pyodbc,
            sqltypes.VARBINARY: _VARBINARY_pyodbc,
            sqltypes.LargeBinary: _VARBINARY_pyodbc,
            sqltypes.String: _String_pyodbc,
            sqltypes.Unicode: _Unicode_pyodbc,
            sqltypes.UnicodeText: _UnicodeText_pyodbc,
            sqltypes.JSON: _JSON_pyodbc,
            sqltypes.JSON.JSONIndexType: _JSONIndexType_pyodbc,
            sqltypes.JSON.JSONPathType: _JSONPathType_pyodbc,
            # this excludes Enum from the string/VARCHAR thing for now
            # it looks like Enum's adaptation doesn't really support the
            # String type itself having a dialect-level impl
            sqltypes.Enum: sqltypes.Enum,
        },
    )

    def __init__(
        self,
        fast_executemany=False,
        use_setinputsizes=True,
        **params,
    ):
        super().__init__(use_setinputsizes=use_setinputsizes, **params)
        self.use_scope_identity = (
            self.use_scope_identity
            and self.dbapi
            and hasattr(self.dbapi.Cursor, "nextset")
        )
        self._need_decimal_fix = self.dbapi and self._dbapi_version() < (
            2,
            1,
            8,
        )
        self.fast_executemany = fast_executemany
        if fast_executemany:
            self.use_insertmanyvalues_wo_returning = False

    def _get_server_version_info(self, connection):
        try:
            # "Version of the instance of SQL Server, in the form
            # of 'major.minor.build.revision'"
            raw = connection.exec_driver_sql(
                "SELECT CAST(SERVERPROPERTY('ProductVersion') AS VARCHAR)"
            ).scalar()
        except exc.DBAPIError:
            # SQL Server docs indicate this function isn't present prior to
            # 2008.  Before we had the VARCHAR cast above, pyodbc would also
            # fail on this query.
            return super()._get_server_version_info(connection)
        else:
            version = []
            r = re.compile(r"[.\-]")
            for n in r.split(raw):
                try:
                    version.append(int(n))
                except ValueError:
                    pass
            return tuple(version)

    def on_connect(self):
        super_ = super().on_connect()

        def on_connect(conn):
            if super_ is not None:
                super_(conn)

            self._setup_timestampoffset_type(conn)

        return on_connect

    def _setup_timestampoffset_type(self, connection):
        # output converter function for datetimeoffset
        def _handle_datetimeoffset(dto_value):
            tup = struct.unpack("<6hI2h", dto_value)
            return datetime.datetime(
                tup[0],
                tup[1],
                tup[2],
                tup[3],
                tup[4],
                tup[5],
                tup[6] // 1000,
                datetime.timezone(
                    datetime.timedelta(hours=tup[7], minutes=tup[8])
                ),
            )

        odbc_SQL_SS_TIMESTAMPOFFSET = -155  # as defined in SQLNCLI.h
        connection.add_output_converter(
            odbc_SQL_SS_TIMESTAMPOFFSET, _handle_datetimeoffset
        )

    def do_executemany(self, cursor, statement, parameters, context=None):
        if self.fast_executemany:
            cursor.fast_executemany = True
        super().do_executemany(cursor, statement, parameters, context=context)

    def is_disconnect(self, e, connection, cursor):
        if isinstance(e, self.dbapi.Error):
            code = e.args[0]
            if code in {
                "08S01",
                "01000",
                "01002",
                "08003",
                "08007",
                "08S02",
                "08001",
                "HYT00",
                "HY010",
                "10054",
            }:
                return True
        return super().is_disconnect(e, connection, cursor)


dialect = MSDialect_pyodbc