Coverage for /pythoncovmergedfiles/medio/medio/usr/local/lib/python3.8/site-packages/imageio-2.35.1-py3.8.egg/imageio/plugins/_tifffile.py: 11%


5186 statements  

1#! /usr/bin/env python3 

2# -*- coding: utf-8 -*- 

3# tifffile.py 

4 

5# Copyright (c) 2008-2018, Christoph Gohlke 

6# Copyright (c) 2008-2018, The Regents of the University of California 

7# Produced at the Laboratory for Fluorescence Dynamics 

8# All rights reserved. 

9# 

10# Redistribution and use in source and binary forms, with or without 

11# modification, are permitted provided that the following conditions are met: 

12# 

13# * Redistributions of source code must retain the above copyright 

14# notice, this list of conditions and the following disclaimer. 

15# * Redistributions in binary form must reproduce the above copyright 

16# notice, this list of conditions and the following disclaimer in the 

17# documentation and/or other materials provided with the distribution. 

18# * Neither the name of the copyright holders nor the names of any 

19# contributors may be used to endorse or promote products derived 

20# from this software without specific prior written permission. 

21# 

22# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" 

23# AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE 

24# IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE 

25# ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE 

26# LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR 

27# CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF 

28# SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS 

29# INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN 

30# CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) 

31# ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE 

32# POSSIBILITY OF SUCH DAMAGE. 

33 

34"""Read image and meta data from (bio) TIFF(R) files. Save numpy arrays as TIFF. 

35 

36Image and metadata can be read from TIFF, BigTIFF, OME-TIFF, STK, LSM, NIH, 

37SGI, ImageJ, MicroManager, FluoView, ScanImage, SEQ, GEL, and GeoTIFF files. 

38 

39Tifffile is not a general-purpose TIFF library. 

40Only a subset of the TIFF specification is supported, mainly uncompressed and 

41losslessly compressed 1, 8, 16, 32 and 64-bit integer, 16, 32 and 64-bit float,

42grayscale and RGB(A) images, which are commonly used in scientific imaging. 

43Specifically, reading slices of image data, image trees defined via SubIFDs, 

44CCITT and OJPEG compression, chroma subsampling without JPEG compression, 

45or IPTC and XMP metadata are not implemented. 

46 

47TIFF(R), the Tagged Image File Format, is a trademark of and under the control of

48Adobe Systems Incorporated. BigTIFF allows for files greater than 4 GB. 

49STK, LSM, FluoView, SGI, SEQ, GEL, and OME-TIFF, are custom extensions 

50defined by Molecular Devices (Universal Imaging Corporation), Carl Zeiss 

51MicroImaging, Olympus, Silicon Graphics International, Media Cybernetics, 

52Molecular Dynamics, and the Open Microscopy Environment consortium 

53respectively. 

54 

55For command line usage run ``python -m tifffile --help``

56 

57:Author: 

58 `Christoph Gohlke <https://www.lfd.uci.edu/~gohlke/>`_ 

59 

60:Organization: 

61 Laboratory for Fluorescence Dynamics, University of California, Irvine 

62 

63:Version: 2018.06.15 

64 

65Requirements 

66------------ 

67* `CPython 3.6 64-bit <https://www.python.org>`_ 

68* `Numpy 1.14 <http://www.numpy.org>`_ 

69* `Matplotlib 2.2 <https://www.matplotlib.org>`_ (optional for plotting) 

70* `Tifffile.c 2018.02.10 <https://www.lfd.uci.edu/~gohlke/>`_ 

71 (recommended for faster decoding of PackBits and LZW encoded strings) 

72* `Tifffile_geodb.py 2018.02.10 <https://www.lfd.uci.edu/~gohlke/>`_ 

73 (optional enums for GeoTIFF metadata) 

74* Python 2 requires 'futures', 'enum34', 'pathlib'. 

75 

76Revisions 

77--------- 

782018.06.15 

79 Pass 2680 tests. 

80 Towards reading JPEG and other compressions via imagecodecs package (WIP). 

81 Add function to validate TIFF using 'jhove -m TIFF-hul'. 

82 Save bool arrays as bilevel TIFF. 

83 Accept pathlib.Path as filenames. 

84 Move 'software' argument from TiffWriter __init__ to save. 

85 Raise DOS limit to 16 TB. 

86 Lazy load lzma and zstd compressors and decompressors. 

87 Add option to save IJMetadata tags. 

88 Return correct number of pages for truncated series (bug fix). 

89 Move EXIF tags to TIFF.TAG as per TIFF/EP standard. 

902018.02.18 

91 Pass 2293 tests. 

92 Always save RowsPerStrip and Resolution tags as required by TIFF standard. 

93 Do not use badly typed ImageDescription. 

94 Coerce bad ASCII string tags to bytes.

95 Tuning of __str__ functions. 

96 Fix reading 'undefined' tag values (bug fix). 

97 Read and write ZSTD compressed data. 

98 Use hexdump to print byte strings. 

99 Determine TIFF byte order from data dtype in imsave. 

100 Add option to specify RowsPerStrip for compressed strips. 

101 Allow memory map of arrays with non-native byte order. 

102 Attempt to handle ScanImage <= 5.1 files. 

103 Restore TiffPageSeries.pages sequence interface. 

104 Use numpy.frombuffer instead of fromstring to read from binary data. 

105 Parse GeoTIFF metadata. 

106 Add option to apply horizontal differencing before compression. 

107 Towards reading PerkinElmer QPTIFF (no test files). 

108 Do not index out of bounds data in tifffile.c unpackbits and decodelzw. 

1092017.09.29 (tentative) 

110 Many backwards incompatible changes improving speed and resource usage: 

111 Pass 2268 tests. 

112 Add detail argument to __str__ function. Remove info functions. 

113 Fix potential issue correcting offsets of large LSM files with positions. 

114 Remove TiffFile sequence interface; use TiffFile.pages instead. 

115 Do not make tag values available as TiffPage attributes. 

116 Use str (not bytes) type for tag and metadata strings (WIP). 

117 Use documented standard tag and value names (WIP). 

118 Use enums for some documented TIFF tag values. 

119 Remove 'memmap' and 'tmpfile' options; use out='memmap' instead. 

120 Add option to specify output in asarray functions. 

121 Add option to concurrently decode image strips or tiles using threads. 

122 Add TiffPage.asrgb function (WIP). 

123 Do not apply colormap in asarray. 

124 Remove 'colormapped', 'rgbonly', and 'scale_mdgel' options from asarray. 

125 Consolidate metadata in TiffFile _metadata functions. 

126 Remove non-tag metadata properties from TiffPage. 

127 Add function to convert LSM to tiled BIN files. 

128 Align image data in file. 

129 Make TiffPage.dtype a numpy.dtype. 

130 Add 'ndim' and 'size' properties to TiffPage and TiffPageSeries. 

131 Allow imsave to write non-BigTIFF files up to ~4 GB. 

132 Only read one page for shaped series if possible. 

133 Add memmap function to create memory-mapped array stored in TIFF file. 

134 Add option to save empty arrays to TIFF files. 

135 Add option to save truncated TIFF files. 

136 Allow single tile images to be saved contiguously. 

137 Add optional movie mode for files with uniform pages. 

138 Lazy load pages. 

139 Use lightweight TiffFrame for IFDs sharing properties with key TiffPage. 

140 Move module constants to 'TIFF' namespace (speed up module import). 

141 Remove 'fastij' option from TiffFile. 

142 Remove 'pages' parameter from TiffFile. 

143 Remove TIFFfile alias. 

144 Deprecate Python 2. 

145 Require enum34 and futures packages on Python 2.7. 

146 Remove Record class and return all metadata as dict instead. 

147 Add functions to parse STK, MetaSeries, ScanImage, SVS, Pilatus metadata. 

148 Read tags from EXIF and GPS IFDs. 

149 Use pformat for tag and metadata values. 

150 Fix reading some UIC tags (bug fix). 

151 Do not modify input array in imshow (bug fix). 

152 Fix Python implementation of unpack_ints. 

1532017.05.23 

154 Pass 1961 tests. 

155 Write correct number of SampleFormat values (bug fix). 

156 Use Adobe deflate code to write ZIP compressed files. 

157 Add option to pass tag values as packed binary data for writing. 

158 Defer tag validation to attribute access. 

159 Use property instead of lazyattr decorator for simple expressions. 

1602017.03.17 

161 Write IFDs and tag values on word boundaries. 

162 Read ScanImage metadata. 

163 Remove is_rgb and is_indexed attributes from TiffFile. 

164 Create files used by doctests. 

1652017.01.12 

166 Read Zeiss SEM metadata. 

167 Read OME-TIFF with invalid references to external files. 

168 Rewrite C LZW decoder (5x faster). 

169 Read corrupted LSM files missing EOI code in LZW stream. 

1702017.01.01 

171 Add option to append images to existing TIFF files. 

172 Read files without pages. 

173 Read S-FEG and Helios NanoLab tags created by FEI software. 

174 Allow saving Color Filter Array (CFA) images. 

175 Add info functions returning more information about TiffFile and TiffPage. 

176 Add option to read specific pages only. 

177 Remove maxpages argument (backwards incompatible). 

178 Remove test_tifffile function. 

1792016.10.28 

180 Pass 1944 tests. 

181 Improve detection of ImageJ hyperstacks. 

182 Read TVIPS metadata created by EM-MENU (by Marco Oster). 

183 Add option to disable using OME-XML metadata. 

184 Allow non-integer range attributes in modulo tags (by Stuart Berg). 

1852016.06.21 

186 Do not always memmap contiguous data in page series. 

1872016.05.13 

188 Add option to specify resolution unit. 

189 Write grayscale images with extra samples when planarconfig is specified. 

190 Do not write RGB color images with 2 samples. 

191 Reorder TiffWriter.save keyword arguments (backwards incompatible). 

1922016.04.18 

193 Pass 1932 tests. 

194 TiffWriter, imread, and imsave accept open binary file streams. 

1952016.04.13 

196 Correctly handle reversed fill order in 2 and 4 bps images (bug fix). 

197 Implement reverse_bitorder in C. 

1982016.03.18 

199 Fix saving additional ImageJ metadata. 

2002016.02.22 

201 Pass 1920 tests. 

202 Write 8 bytes double tag values using offset if necessary (bug fix). 

203 Add option to disable writing second image description tag. 

204 Detect tags with incorrect counts. 

205 Disable color mapping for LSM. 

2062015.11.13 

207 Read LSM 6 mosaics. 

208 Add option to specify directory of memory-mapped files. 

209 Add command line options to specify vmin and vmax values for colormapping. 

2102015.10.06 

211 New helper function to apply colormaps. 

212 Renamed is_palette attributes to is_indexed (backwards incompatible). 

213 Color-mapped samples are now contiguous (backwards incompatible). 

214 Do not color-map ImageJ hyperstacks (backwards incompatible). 

215 Towards reading Leica SCN. 

2162015.09.25 

217 Read images with reversed bit order (FillOrder is LSB2MSB). 

2182015.09.21 

219 Read RGB OME-TIFF. 

220 Warn about malformed OME-XML. 

2212015.09.16 

222 Detect some corrupted ImageJ metadata. 

223 Better axes labels for 'shaped' files. 

224 Do not create TiffTag for default values. 

225 Chroma subsampling is not supported. 

226 Memory-map data in TiffPageSeries if possible (optional). 

2272015.08.17 

228 Pass 1906 tests. 

229 Write ImageJ hyperstacks (optional). 

230 Read and write LZMA compressed data. 

231 Specify datetime when saving (optional). 

232 Save tiled and color-mapped images (optional). 

233 Ignore void bytecounts and offsets if possible. 

234 Ignore bogus image_depth tag created by ISS Vista software. 

235 Decode floating point horizontal differencing (not tiled). 

236 Save image data contiguously if possible. 

237 Only read first IFD from ImageJ files if possible. 

238 Read ImageJ 'raw' format (files larger than 4 GB). 

239 TiffPageSeries class for pages with compatible shape and data type. 

240 Try to read incomplete tiles. 

241 Open file dialog if no filename is passed on command line. 

242 Ignore errors when decoding OME-XML. 

243 Rename decoder functions (backwards incompatible). 

2442014.08.24 

245 TiffWriter class for incrementally writing images.

246 Simplify examples. 

2472014.08.19 

248 Add memmap function to FileHandle. 

249 Add function to determine if image data in TiffPage is memory-mappable. 

250 Do not close files if multifile_close parameter is False. 

2512014.08.10 

252 Pass 1730 tests. 

253 Return all extrasamples by default (backwards incompatible). 

254 Read data from series of pages into memory-mapped array (optional). 

255 Squeeze OME dimensions (backwards incompatible). 

256 Workaround missing EOI code in strips. 

257 Support image and tile depth tags (SGI extension). 

258 Better handling of STK/UIC tags (backwards incompatible). 

259 Disable color mapping for STK. 

260 Julian to datetime converter. 

261 TIFF ASCII type may be NULL separated. 

262 Unwrap strip offsets for LSM files greater than 4 GB. 

263 Correct strip byte counts in compressed LSM files. 

264 Skip missing files in OME series. 

265 Read embedded TIFF files. 

2662014.02.05 

267 Save rational numbers as type 5 (bug fix). 

2682013.12.20 

269 Keep other files in OME multi-file series closed. 

270 FileHandle class to abstract binary file handle. 

271 Disable color mapping for bad OME-TIFF produced by bio-formats. 

272 Read bad OME-XML produced by ImageJ when cropping. 

2732013.11.03 

274 Allow zlib compression of data in imsave function (optional).

275 Memory-map contiguous image data (optional). 

2762013.10.28 

277 Read MicroManager metadata and little-endian ImageJ tag. 

278 Save extra tags in imsave function. 

279 Save tags in ascending order by code (bug fix). 

2802012.10.18 

281 Accept file like objects (read from OIB files). 

2822012.08.21 

283 Rename TIFFfile to TiffFile and TIFFpage to TiffPage. 

284 TiffSequence class for reading sequence of TIFF files. 

285 Read UltraQuant tags. 

286 Allow float numbers as resolution in imsave function. 

2872012.08.03 

288 Read MD GEL tags and NIH Image header. 

2892012.07.25 

290 Read ImageJ tags. 

291 ... 

292 

293Notes 

294----- 

295The API is not stable yet and might change between revisions. 

296 

297Tested on little-endian platforms only. 

298 

299Other Python packages and modules for reading (bio) scientific TIFF files: 

300 

301* `python-bioformats <https://github.com/CellProfiler/python-bioformats>`_ 

302* `Imread <https://github.com/luispedro/imread>`_ 

303* `PyLibTiff <https://github.com/pearu/pylibtiff>`_ 

304* `ITK <https://www.itk.org>`_ 

305* `PyLSM <https://launchpad.net/pylsm>`_ 

306* `PyMca.TiffIO.py <https://github.com/vasole/pymca>`_ (same as fabio.TiffIO) 

307* `BioImageXD.Readers <http://www.bioimagexd.net/>`_ 

308* `Cellcognition.io <http://cellcognition.org/>`_ 

309* `pymimage <https://github.com/ardoi/pymimage>`_ 

310* `pytiff <https://github.com/FZJ-INM1-BDA/pytiff>`_ 

311 

312Acknowledgements 

313---------------- 

314* Egor Zindy, University of Manchester, for lsm_scan_info specifics. 

315* Wim Lewis for a bug fix and some LSM functions. 

316* Hadrien Mary for help on reading MicroManager files. 

317* Christian Kliche for help writing tiled and color-mapped files. 

318 

319References 

320---------- 

3211) TIFF 6.0 Specification and Supplements. Adobe Systems Incorporated. 

322 http://partners.adobe.com/public/developer/tiff/ 

3232) TIFF File Format FAQ. http://www.awaresystems.be/imaging/tiff/faq.html 

3243) MetaMorph Stack (STK) Image File Format. 

325 http://support.meta.moleculardevices.com/docs/t10243.pdf 

3264) Image File Format Description LSM 5/7 Release 6.0 (ZEN 2010). 

327 Carl Zeiss MicroImaging GmbH. BioSciences. May 10, 2011 

3285) The OME-TIFF format. 

329 http://www.openmicroscopy.org/site/support/file-formats/ome-tiff 

3306) UltraQuant(r) Version 6.0 for Windows Start-Up Guide. 

331 http://www.ultralum.com/images%20ultralum/pdf/UQStart%20Up%20Guide.pdf 

3327) Micro-Manager File Formats. 

333 http://www.micro-manager.org/wiki/Micro-Manager_File_Formats 

3348) Tags for TIFF and Related Specifications. Digital Preservation. 

335 http://www.digitalpreservation.gov/formats/content/tiff_tags.shtml 

3369) ScanImage BigTiff Specification - ScanImage 2016. 

337 http://scanimage.vidriotechnologies.com/display/SI2016/ 

338 ScanImage+BigTiff+Specification 

33910) CIPA DC-008-2016: Exchangeable image file format for digital still cameras: 

340 Exif Version 2.31. 

341 http://www.cipa.jp/std/documents/e/DC-008-Translation-2016-E.pdf 

342 

343Examples 

344-------- 

345>>> # write numpy array to TIFF file 

346>>> data = numpy.random.rand(4, 301, 219) 

347>>> imsave('temp.tif', data, photometric='minisblack') 

348 

349>>> # read numpy array from TIFF file 

350>>> image = imread('temp.tif') 

351>>> numpy.testing.assert_array_equal(image, data) 

352 

353>>> # iterate over pages and tags in TIFF file 

354>>> with TiffFile('temp.tif') as tif: 

355... images = tif.asarray() 

356... for page in tif.pages: 

357... for tag in page.tags.values(): 

358... _ = tag.name, tag.value 

359... image = page.asarray() 

360 

361""" 

362 

363from __future__ import division, print_function 

364 

365import sys 

366import os 

367import io 

368import re 

369import glob 

370import math 

371import zlib 

372import time 

373import json 

374import enum 

375import struct 

376import pathlib 

377import warnings 

378import binascii 

379import tempfile 

380import datetime 

381import threading 

382import collections 

383import multiprocessing 

384import concurrent.futures 

385 

386import numpy 

387 

388# delay imports: mmap, pprint, fractions, xml, tkinter, matplotlib, lzma, zstd, 

389# subprocess 

390 

391__version__ = "2018.06.15" 

392__docformat__ = "restructuredtext en" 

393__all__ = ( 

394 "imsave", 

395 "imread", 

396 "imshow", 

397 "memmap", 

398 "TiffFile", 

399 "TiffWriter", 

400 "TiffSequence", 

401 # utility functions used by oiffile or czifile 

402 "FileHandle", 

403 "lazyattr", 

404 "natural_sorted", 

405 "decode_lzw", 

406 "stripnull", 

407 "create_output", 

408 "repeat_nd", 

409 "format_size", 

410 "product", 

411 "xml2dict", 

412) 

413 

414 

415def imread(files, **kwargs): 

416 """Return image data from TIFF file(s) as numpy array. 

417 

418 Refer to the TiffFile class and member functions for documentation. 

419 

420 Parameters 

421 ---------- 

422 files : str, binary stream, or sequence 

423 File name, seekable binary stream, glob pattern, or sequence of 

424 file names. 

425 kwargs : dict 

426 Parameters 'multifile' and 'is_ome' are passed to the TiffFile class. 

427 The 'pattern' parameter is passed to the TiffSequence class. 

428 Other parameters are passed to the asarray functions. 

429 The first image series is returned if no arguments are provided. 

430 

431 Examples 

432 -------- 

433 >>> # get image from first page 

434 >>> imsave('temp.tif', numpy.random.rand(3, 4, 301, 219)) 

435 >>> im = imread('temp.tif', key=0) 

436 >>> im.shape 

437 (4, 301, 219) 

438 

439 >>> # get images from sequence of files 

440 >>> ims = imread(['temp.tif', 'temp.tif']) 

441 >>> ims.shape 

442 (2, 3, 4, 301, 219) 

443 

444 """ 

445 kwargs_file = parse_kwargs(kwargs, "multifile", "is_ome") 

446 kwargs_seq = parse_kwargs(kwargs, "pattern") 

447 

448 if isinstance(files, basestring) and any(i in files for i in "?*"): 

449 files = glob.glob(files) 

450 if not files: 

451 raise ValueError("no files found") 

452 if not hasattr(files, "seek") and len(files) == 1: 

453 files = files[0] 

454 

455 if isinstance(files, basestring) or hasattr(files, "seek"): 

456 with TiffFile(files, **kwargs_file) as tif: 

457 return tif.asarray(**kwargs) 

458 else: 

459 with TiffSequence(files, **kwargs_seq) as imseq: 

460 return imseq.asarray(**kwargs) 

461 
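# Illustrative usage sketch (not part of tifffile): shows how a glob pattern and
# an explicit list of file names are dispatched by imread(); 'temp.tif' follows
# the docstring examples above and the expected shapes are noted as comments.
def _example_imread_usage():
    """Read a glob pattern and a sequence of file names with imread."""
    imsave('temp.tif', numpy.random.rand(4, 301, 219), photometric='minisblack')
    # a string containing '*' or '?' is expanded with glob.glob;
    # a single match is opened as one TiffFile
    image = imread('temp.tif*')
    # a list of file names is read via TiffSequence and stacked
    stack = imread(['temp.tif', 'temp.tif'])
    return image.shape, stack.shape  # expected: (4, 301, 219), (2, 4, 301, 219)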

462 

463def imsave(file, data=None, shape=None, dtype=None, bigsize=2**32 - 2**25, **kwargs): 

464 """Write numpy array to TIFF file. 

465 

466 Refer to the TiffWriter class and member functions for documentation. 

467 

468 Parameters 

469 ---------- 

470 file : str or binary stream 

471 File name or writable binary stream, such as an open file or BytesIO. 

472 data : array_like 

473 Input image. The last dimensions are assumed to be image depth, 

474 height, width, and samples. 

475 If None, an empty array of the specified shape and dtype is 

476 saved to file. 

477 Unless 'byteorder' is specified in 'kwargs', the TIFF file byte order 

478 is determined from the data's dtype or the dtype argument. 

479 shape : tuple 

480 If 'data' is None, shape of an empty array to save to the file. 

481 dtype : numpy.dtype 

482 If 'data' is None, data-type of an empty array to save to the file. 

483 bigsize : int 

484 Create a BigTIFF file if the size of data in bytes is larger than 

485 this threshold and neither 'imagej' nor 'truncate' is enabled.

486 By default, the threshold is 4 GB minus 32 MB reserved for metadata. 

487 Use the 'bigtiff' parameter to explicitly specify the type of 

488 file created. 

489 kwargs : dict 

490 Parameters 'append', 'byteorder', 'bigtiff', and 'imagej' are passed

491 to TiffWriter(). Other parameters are passed to TiffWriter.save(). 

492 

493 Returns 

494 ------- 

495 If the image data are written contiguously, return offset and bytecount 

496 of image data in the file. 

497 

498 Examples 

499 -------- 

500 >>> # save a RGB image 

501 >>> data = numpy.random.randint(0, 255, (256, 256, 3), 'uint8') 

502 >>> imsave('temp.tif', data, photometric='rgb') 

503 

504 >>> # save a random array and metadata, using compression 

505 >>> data = numpy.random.rand(2, 5, 3, 301, 219) 

506 >>> imsave('temp.tif', data, compress=6, metadata={'axes': 'TZCYX'}) 

507 

508 """ 

509 tifargs = parse_kwargs(kwargs, "append", "bigtiff", "byteorder", "imagej") 

510 if data is None: 

511 size = product(shape) * numpy.dtype(dtype).itemsize 

512 byteorder = numpy.dtype(dtype).byteorder 

513 else: 

514 try: 

515 size = data.nbytes 

516 byteorder = data.dtype.byteorder 

517 except Exception: 

518 size = 0 

519 byteorder = None 

520 if ( 

521 size > bigsize 

522 and "bigtiff" not in tifargs 

523 and not (tifargs.get("imagej", False) or tifargs.get("truncate", False)) 

524 ): 

525 tifargs["bigtiff"] = True 

526 if "byteorder" not in tifargs: 

527 tifargs["byteorder"] = byteorder 

528 

529 with TiffWriter(file, **tifargs) as tif: 

530 return tif.save(data, shape, dtype, **kwargs) 

531 
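# Illustrative sketch (not part of tifffile): passing 'shape' and 'dtype' instead
# of data reserves space for an empty image that can be filled later, e.g. via
# memmap(); the file name 'temp_empty.tif' is hypothetical.
def _example_imsave_empty():
    """Create a TIFF file holding an empty (unwritten, reads back as zero) image."""
    imsave('temp_empty.tif', shape=(64, 64), dtype='uint16')
    return imread('temp_empty.tif').shape  # expected: (64, 64)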

532 

533def memmap(filename, shape=None, dtype=None, page=None, series=0, mode="r+", **kwargs): 

534 """Return memory-mapped numpy array stored in TIFF file. 

535 

536 Memory-mapping requires data stored in native byte order, without tiling, 

537 compression, predictors, etc. 

538 If 'shape' and 'dtype' are provided, existing files will be overwritten or 

539 appended to depending on the 'append' parameter. 

540 Otherwise the image data of a specified page or series in an existing 

541 file will be memory-mapped. By default, the image data of the first page 

542 series is memory-mapped. 

543 Call flush() to write any changes in the array to the file. 

544 Raise ValueError if the image data in the file is not memory-mappable. 

545 

546 Parameters 

547 ---------- 

548 filename : str 

549 Name of the TIFF file which stores the array. 

550 shape : tuple 

551 Shape of the empty array. 

552 dtype : numpy.dtype 

553 Data-type of the empty array. 

554 page : int 

555 Index of the page whose image data to memory-map.

556 series : int 

557 Index of the page series whose image data to memory-map.

558 mode : {'r+', 'r', 'c'}, optional 

559 The file open mode. Default is to open existing file for reading and 

560 writing ('r+'). 

561 kwargs : dict 

562 Additional parameters passed to imsave() or TiffFile(). 

563 

564 Examples 

565 -------- 

566 >>> # create an empty TIFF file and write to memory-mapped image 

567 >>> im = memmap('temp.tif', shape=(256, 256), dtype='float32') 

568 >>> im[255, 255] = 1.0 

569 >>> im.flush() 

570 >>> im.shape, im.dtype 

571 ((256, 256), dtype('float32')) 

572 >>> del im 

573 

574 >>> # memory-map image data in a TIFF file 

575 >>> im = memmap('temp.tif', page=0) 

576 >>> im[255, 255] 

577 1.0 

578 

579 """ 

580 if shape is not None and dtype is not None: 

581 # create a new, empty array 

582 kwargs.update( 

583 data=None, 

584 shape=shape, 

585 dtype=dtype, 

586 returnoffset=True, 

587 align=TIFF.ALLOCATIONGRANULARITY, 

588 ) 

589 result = imsave(filename, **kwargs) 

590 if result is None: 

591 # TODO: fail before creating file or writing data 

592 raise ValueError("image data are not memory-mappable") 

593 offset = result[0] 

594 else: 

595 # use existing file 

596 with TiffFile(filename, **kwargs) as tif: 

597 if page is not None: 

598 page = tif.pages[page] 

599 if not page.is_memmappable: 

600 raise ValueError("image data are not memory-mappable") 

601 offset, _ = page.is_contiguous 

602 shape = page.shape 

603 dtype = page.dtype 

604 else: 

605 series = tif.series[series] 

606 if series.offset is None: 

607 raise ValueError("image data are not memory-mappable") 

608 shape = series.shape 

609 dtype = series.dtype 

610 offset = series.offset 

611 dtype = tif.byteorder + dtype.char 

612 return numpy.memmap(filename, dtype, mode, offset, shape, "C") 

613 
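# Illustrative sketch (not part of tifffile): memory-map an existing file
# read-only; 'temp_mm.tif' is a hypothetical name and the data must be stored
# uncompressed, contiguously, and in native byte order to be mappable.
def _example_memmap_readonly():
    """Map the image data of the first page series without loading it."""
    imsave('temp_mm.tif', numpy.zeros((64, 64), 'float32'))
    im = memmap('temp_mm.tif', series=0, mode='r')
    value = im[0, 0]
    del im  # release the underlying numpy.memmap
    return value  # expected: 0.0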

614 

615class lazyattr(object): 

616 """Attribute whose value is computed on first access.""" 

617 

618 # TODO: help() doesn't work 

619 __slots__ = ("func",) 

620 

621 def __init__(self, func): 

622 self.func = func 

623 # self.__name__ = func.__name__ 

624 # self.__doc__ = func.__doc__ 

625 # self.lock = threading.RLock() 

626 

627 def __get__(self, instance, owner): 

628 # with self.lock: 

629 if instance is None: 

630 return self 

631 try: 

632 value = self.func(instance) 

633 except AttributeError as e: 

634 raise RuntimeError(e) 

635 if value is NotImplemented: 

636 return getattr(super(owner, instance), self.func.__name__) 

637 setattr(instance, self.func.__name__, value) 

638 return value 

639 
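# Illustrative sketch (not part of tifffile): lazyattr caches its result by
# setting a plain instance attribute of the same name on first access, which
# shadows the non-data descriptor, so the decorated function runs only once.
class _LazyattrExample(object):
    """Hypothetical demo class; not used by tifffile itself."""

    @lazyattr
    def answer(self):
        # executed on first access only; afterwards 'answer' is a cached value
        return 6 * 7

# usage: _LazyattrExample().answer == 42; a second access skips lazyattr.__get__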

640 

641class TiffWriter(object): 

642 """Write numpy arrays to TIFF file. 

643 

644 TiffWriter instances must be closed using the 'close' method, which is 

645 automatically called when using the 'with' context manager. 

646 

647 TiffWriter's main purpose is saving nD numpy arrays as TIFF,

648 not to create any possible TIFF format. Specifically, JPEG compression, 

649 SubIFDs, ExifIFD, or GPSIFD tags are not supported. 

650 

651 Examples 

652 -------- 

653 >>> # successively append images to BigTIFF file 

654 >>> data = numpy.random.rand(2, 5, 3, 301, 219) 

655 >>> with TiffWriter('temp.tif', bigtiff=True) as tif: 

656 ... for i in range(data.shape[0]): 

657 ... tif.save(data[i], compress=6, photometric='minisblack') 

658 

659 """ 

660 

661 def __init__(self, file, bigtiff=False, byteorder=None, append=False, imagej=False): 

662 """Open a TIFF file for writing. 

663 

664 An empty TIFF file is created if the file does not exist; otherwise the

665 existing file is overwritten, unless 'append'

666 is true. Use bigtiff=True when creating files larger than 4 GB. 

667 

668 Parameters 

669 ---------- 

670 file : str, binary stream, or FileHandle 

671 File name or writable binary stream, such as an open file 

672 or BytesIO. 

673 bigtiff : bool 

674 If True, the BigTIFF format is used. 

675 byteorder : {'<', '>', '=', '|'} 

676 The endianness of the data in the file. 

677 By default, this is the system's native byte order. 

678 append : bool 

679 If True and 'file' is an existing standard TIFF file, image data 

680 and tags are appended to the file. 

681 Appending data may corrupt specifically formatted TIFF files 

682 such as LSM, STK, ImageJ, NIH, or FluoView. 

683 imagej : bool 

684 If True, write an ImageJ hyperstack compatible file. 

685 This format can handle data types uint8, uint16, or float32 and 

686 data shapes up to 6 dimensions in TZCYXS order. 

687 RGB images (S=3 or S=4) must be uint8. 

688 ImageJ's default byte order is big-endian but this implementation 

689 uses the system's native byte order by default. 

690 ImageJ does not support BigTIFF format or LZMA compression. 

691 The ImageJ file format is undocumented. 
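
 Examples
 --------
 >>> # illustrative sketch (not from the original docstring):
 >>> # append a second image to a plain TIFF file
 >>> imsave('temp.tif', numpy.zeros((16, 16), 'uint8'))
 >>> with TiffWriter('temp.tif', append=True) as tif:
 ...     tif.save(numpy.ones((16, 16), 'uint8'))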

692 

693 """ 

694 if append: 

695 # determine if file is an existing TIFF file that can be extended 

696 try: 

697 with FileHandle(file, mode="rb", size=0) as fh: 

698 pos = fh.tell() 

699 try: 

700 with TiffFile(fh) as tif: 

701 if append != "force" and any( 

702 getattr(tif, "is_" + a) 

703 for a in ( 

704 "lsm", 

705 "stk", 

706 "imagej", 

707 "nih", 

708 "fluoview", 

709 "micromanager", 

710 ) 

711 ): 

712 raise ValueError("file contains metadata") 

713 byteorder = tif.byteorder 

714 bigtiff = tif.is_bigtiff 

715 self._ifdoffset = tif.pages.next_page_offset 

716 except Exception as e: 

717 raise ValueError("cannot append to file: %s" % str(e)) 

718 finally: 

719 fh.seek(pos) 

720 except (IOError, FileNotFoundError): 

721 append = False 

722 

723 if byteorder in (None, "=", "|"): 

724 byteorder = "<" if sys.byteorder == "little" else ">" 

725 elif byteorder not in ("<", ">"): 

726 raise ValueError("invalid byteorder %s" % byteorder) 

727 if imagej and bigtiff: 

728 warnings.warn("writing incompatible BigTIFF ImageJ") 

729 

730 self._byteorder = byteorder 

731 self._imagej = bool(imagej) 

732 self._truncate = False 

733 self._metadata = None 

734 self._colormap = None 

735 

736 self._descriptionoffset = 0 

737 self._descriptionlen = 0 

738 self._descriptionlenoffset = 0 

739 self._tags = None 

740 self._shape = None # normalized shape of data in consecutive pages 

741 self._datashape = None # shape of data in consecutive pages 

742 self._datadtype = None # data type 

743 self._dataoffset = None # offset to data 

744 self._databytecounts = None # byte counts per plane 

745 self._tagoffsets = None # strip or tile offset tag code 

746 

747 if bigtiff: 

748 self._bigtiff = True 

749 self._offsetsize = 8 

750 self._tagsize = 20 

751 self._tagnoformat = "Q" 

752 self._offsetformat = "Q" 

753 self._valueformat = "8s" 

754 else: 

755 self._bigtiff = False 

756 self._offsetsize = 4 

757 self._tagsize = 12 

758 self._tagnoformat = "H" 

759 self._offsetformat = "I" 

760 self._valueformat = "4s" 

761 

762 if append: 

763 self._fh = FileHandle(file, mode="r+b", size=0) 

764 self._fh.seek(0, 2) 

765 else: 

766 self._fh = FileHandle(file, mode="wb", size=0) 

767 self._fh.write({"<": b"II", ">": b"MM"}[byteorder]) 

768 if bigtiff: 

769 self._fh.write(struct.pack(byteorder + "HHH", 43, 8, 0)) 

770 else: 

771 self._fh.write(struct.pack(byteorder + "H", 42)) 

772 # first IFD 

773 self._ifdoffset = self._fh.tell() 

774 self._fh.write(struct.pack(byteorder + self._offsetformat, 0)) 
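# For reference (comment added, not in the original source), the header written
# above is:
#   classic TIFF: 2-byte byte order ('II' or 'MM'), 2-byte magic number 42,
#                 4-byte offset to the first IFD
#   BigTIFF:      2-byte byte order, 2-byte magic number 43, 2-byte offset
#                 size (8), 2-byte reserved 0, 8-byte offset to the first IFD
# The IFD offset written last is patched when the first page is saved.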

775 

776 def save( 

777 self, 

778 data=None, 

779 shape=None, 

780 dtype=None, 

781 returnoffset=False, 

782 photometric=None, 

783 planarconfig=None, 

784 tile=None, 

785 contiguous=True, 

786 align=16, 

787 truncate=False, 

788 compress=0, 

789 rowsperstrip=None, 

790 predictor=False, 

791 colormap=None, 

792 description=None, 

793 datetime=None, 

794 resolution=None, 

795 software="tifffile.py", 

796 metadata={}, 

797 ijmetadata=None, 

798 extratags=(), 

799 ): 

800 """Write numpy array and tags to TIFF file. 

801 

802 The data shape's last dimensions are assumed to be image depth, 

803 height (length), width, and samples. 

804 If a colormap is provided, the data's dtype must be uint8 or uint16 

805 and the data values are indices into the last dimension of the 

806 colormap. 

807 If 'shape' and 'dtype' are specified, an empty array is saved. 

808 This option cannot be used with compression or multiple tiles. 

809 Image data are written uncompressed in one strip per plane by default. 

810 Dimensions larger than 2 to 4 (depending on photometric mode, planar 

811 configuration, and SGI mode) are flattened and saved as separate pages. 

812 The SampleFormat and BitsPerSample tags are derived from the data type. 

813 

814 Parameters 

815 ---------- 

816 data : numpy.ndarray or None 

817 Input image array. 

818 shape : tuple or None 

819 Shape of the empty array to save. Used only if 'data' is None. 

820 dtype : numpy.dtype or None 

821 Data-type of the empty array to save. Used only if 'data' is None. 

822 returnoffset : bool 

823 If True and the image data in the file is memory-mappable, return 

824 the offset and number of bytes of the image data in the file. 

825 photometric : {'MINISBLACK', 'MINISWHITE', 'RGB', 'PALETTE', 'CFA'} 

826 The color space of the image data. 

827 By default, this setting is inferred from the data shape and the 

828 value of colormap. 

829 For CFA images, DNG tags must be specified in 'extratags'. 

830 planarconfig : {'CONTIG', 'SEPARATE'} 

831 Specifies if samples are stored contiguous or in separate planes. 

832 By default, this setting is inferred from the data shape. 

833 If this parameter is set, extra samples are used to store grayscale 

834 images. 

835 'CONTIG': last dimension contains samples. 

836 'SEPARATE': third last dimension contains samples. 

837 tile : tuple of int 

838 The shape (depth, length, width) of image tiles to write. 

839 If None (default), image data are written in strips. 

840 The tile length and width must be a multiple of 16. 

841 If the tile depth is provided, the SGI ImageDepth and TileDepth 

842 tags are used to save volume data. 

843 Unless a single tile is used, tiles cannot be used to write 

844 contiguous files. 

845 Few software packages can read the SGI format, e.g. MeVisLab.

846 contiguous : bool 

847 If True (default) and the data and parameters are compatible with 

848 previous ones, if any, the image data are stored contiguously after 

849 the previous one. Parameters 'photometric' and 'planarconfig' 

850 are ignored. Parameters 'description', 'datetime', and 'extratags'

851 are written to the first page of a contiguous series only. 

852 align : int 

853 Byte boundary on which to align the image data in the file. 

854 Default 16. Use mmap.ALLOCATIONGRANULARITY for memory-mapped data. 

855 Following contiguous writes are not aligned. 

856 truncate : bool 

857 If True, only write the first page including shape metadata if 

858 possible (uncompressed, contiguous, not tiled). 

859 Other TIFF readers will only be able to read part of the data. 

860 compress : int or 'LZMA', 'ZSTD' 

861 Values from 0 to 9 controlling the level of zlib compression. 

862 If 0 (default), data are written uncompressed. 

863 Compression cannot be used to write contiguous files. 

864 If 'LZMA' or 'ZSTD', LZMA or ZSTD compression is used, which is 

865 not available on all platforms. 

866 rowsperstrip : int 

867 The number of rows per strip used for compression. 

868 Uncompressed data are written in one strip per plane. 

869 predictor : bool 

870 If True, apply horizontal differencing to integer type images 

871 before compression. 

872 colormap : numpy.ndarray 

873 RGB color values for the corresponding data value. 

874 Must be of shape (3, 2**(data.itemsize*8)) and dtype uint16. 

875 description : str 

876 The subject of the image. Must be 7-bit ASCII. Cannot be used with 

877 the ImageJ format. Saved with the first page only. 

878 datetime : datetime 

879 Date and time of image creation in '%Y:%m:%d %H:%M:%S' format. 

880 If None (default), the current date and time is used. 

881 Saved with the first page only. 

882 resolution : (float, float[, str]) or ((int, int), (int, int)[, str]) 

883 X and Y resolutions in pixels per resolution unit as float or 

884 rational numbers. A third, optional parameter specifies the 

885 resolution unit, which must be None (default for ImageJ), 

886 'INCH' (default), or 'CENTIMETER'. 

887 software : str 

888 Name of the software used to create the file. Must be 7-bit ASCII. 

889 Saved with the first page only. 

890 metadata : dict 

891 Additional metadata to be saved along with shape information

892 in JSON or ImageJ formats in an ImageDescription tag. 

893 If None, do not write a second ImageDescription tag. 

894 Strings must be 7-bit ASCII. Saved with the first page only. 

895 ijmetadata : dict 

896 Additional metadata to be saved in application-specific

897 IJMetadata and IJMetadataByteCounts tags. Refer to the 

898 imagej_metadata_tags function for valid keys and values. 

899 Saved with the first page only. 

900 extratags : sequence of tuples 

901 Additional tags as [(code, dtype, count, value, writeonce)]. 

902 

903 code : int 

904 The TIFF tag Id. 

905 dtype : str 

906 Data type of items in 'value' in Python struct format. 

907 One of B, s, H, I, 2I, b, h, i, 2i, f, d, Q, or q. 

908 count : int 

909 Number of data values. Not used for string or byte string 

910 values. 

911 value : sequence 

912 'Count' values compatible with 'dtype'. 

913 Byte strings must contain count values of dtype packed as 

914 binary data. 

915 writeonce : bool 

916 If True, the tag is written to the first page only. 
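
 For example (an illustrative sketch, not from the original docstring; tag
 code 315 is the standard ASCII 'Artist' tag):

 >>> imsave('temp.tif', numpy.zeros((16, 16), 'uint8'),
 ...        extratags=[(315, 's', 0, 'example artist', True)])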

917 

918 """ 

919 # TODO: refactor this function 

920 fh = self._fh 

921 byteorder = self._byteorder 

922 

923 if data is None: 

924 if compress: 

925 raise ValueError("cannot save compressed empty file") 

926 datashape = shape 

927 datadtype = numpy.dtype(dtype).newbyteorder(byteorder) 

928 datadtypechar = datadtype.char 

929 else: 

930 data = numpy.asarray(data, byteorder + data.dtype.char, "C") 

931 if data.size == 0: 

932 raise ValueError("cannot save empty array") 

933 datashape = data.shape 

934 datadtype = data.dtype 

935 datadtypechar = data.dtype.char 

936 

937 returnoffset = returnoffset and datadtype.isnative 

938 bilevel = datadtypechar == "?" 

939 if bilevel: 

940 index = -1 if datashape[-1] > 1 else -2 

941 datasize = product(datashape[:index]) 

942 if datashape[index] % 8: 

943 datasize *= datashape[index] // 8 + 1 

944 else: 

945 datasize *= datashape[index] // 8 

946 else: 

947 datasize = product(datashape) * datadtype.itemsize 

948 

949 # just append contiguous data if possible 

950 self._truncate = bool(truncate) 

951 if self._datashape: 

952 if ( 

953 not contiguous 

954 or self._datashape[1:] != datashape 

955 or self._datadtype != datadtype 

956 or (compress and self._tags) 

957 or tile 

958 or not numpy.array_equal(colormap, self._colormap) 

959 ): 

960 # incompatible shape, dtype, compression mode, or colormap 

961 self._write_remaining_pages() 

962 self._write_image_description() 

963 self._truncate = False 

964 self._descriptionoffset = 0 

965 self._descriptionlenoffset = 0 

966 self._datashape = None 

967 self._colormap = None 

968 if self._imagej: 

969 raise ValueError("ImageJ does not support non-contiguous data") 

970 else: 

971 # consecutive mode 

972 self._datashape = (self._datashape[0] + 1,) + datashape 

973 if not compress: 

974 # write contiguous data, write IFDs/tags later 

975 offset = fh.tell() 

976 if data is None: 

977 fh.write_empty(datasize) 

978 else: 

979 fh.write_array(data) 

980 if returnoffset: 

981 return offset, datasize 

982 return 

983 

984 input_shape = datashape 

985 tagnoformat = self._tagnoformat 

986 valueformat = self._valueformat 

987 offsetformat = self._offsetformat 

988 offsetsize = self._offsetsize 

989 tagsize = self._tagsize 

990 

991 MINISBLACK = TIFF.PHOTOMETRIC.MINISBLACK 

992 RGB = TIFF.PHOTOMETRIC.RGB 

993 CFA = TIFF.PHOTOMETRIC.CFA 

994 PALETTE = TIFF.PHOTOMETRIC.PALETTE 

995 CONTIG = TIFF.PLANARCONFIG.CONTIG 

996 SEPARATE = TIFF.PLANARCONFIG.SEPARATE 

997 

998 # parse input 

999 if photometric is not None: 

1000 photometric = enumarg(TIFF.PHOTOMETRIC, photometric) 

1001 if planarconfig: 

1002 planarconfig = enumarg(TIFF.PLANARCONFIG, planarconfig) 

1003 if not compress: 

1004 compress = False 

1005 compresstag = 1 

1006 predictor = False 

1007 else: 

1008 if isinstance(compress, (tuple, list)): 

1009 compress, compresslevel = compress 

1010 elif isinstance(compress, int): 

1011 compress, compresslevel = "ADOBE_DEFLATE", int(compress) 

1012 if not 0 <= compresslevel <= 9: 

1013 raise ValueError("invalid compression level %s" % compress) 

1014 else: 

1015 compresslevel = None 

1016 compress = compress.upper() 

1017 compresstag = enumarg(TIFF.COMPRESSION, compress) 

1018 

1019 # prepare ImageJ format 

1020 if self._imagej: 

1021 if compress in ("LZMA", "ZSTD"): 

1022 raise ValueError("ImageJ cannot handle LZMA or ZSTD compression") 

1023 if description: 

1024 warnings.warn("not writing description to ImageJ file") 

1025 description = None 

1026 volume = False 

1027 if datadtypechar not in "BHhf": 

1028 raise ValueError("ImageJ does not support data type %s" % datadtypechar) 

1029 ijrgb = photometric == RGB if photometric else None 

1030 if datadtypechar not in "B": 

1031 ijrgb = False 

1032 ijshape = imagej_shape(datashape, ijrgb) 

1033 if ijshape[-1] in (3, 4): 

1034 photometric = RGB 

1035 if datadtypechar not in "B": 

1036 raise ValueError( 

1037 "ImageJ does not support data type %s " 

1038 "for RGB" % datadtypechar 

1039 ) 

1040 elif photometric is None: 

1041 photometric = MINISBLACK 

1042 planarconfig = None 

1043 if planarconfig == SEPARATE: 

1044 raise ValueError("ImageJ does not support planar images") 

1045 else: 

1046 planarconfig = CONTIG if ijrgb else None 

1047 

1048 # define compress function 

1049 if compress: 

1050 if compresslevel is None: 

1051 compressor, compresslevel = TIFF.COMPESSORS[compresstag] 

1052 else: 

1053 compressor, _ = TIFF.COMPESSORS[compresstag] 

1054 compresslevel = int(compresslevel) 

1055 if predictor: 

1056 if datadtype.kind not in "iu": 

1057 raise ValueError("prediction not implemented for %s" % datadtype) 

1058 
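# Worked example (comment added for reference): with horizontal differencing a
# row of samples [10, 12, 15, 15] is stored as [10, 2, 3, 0], which usually
# compresses better; readers undo it after decompression (Predictor tag value 2).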

1059 def compress(data, level=compresslevel): 

1060 # horizontal differencing 

1061 diff = numpy.diff(data, axis=-2) 

1062 data = numpy.insert(diff, 0, data[..., 0, :], axis=-2) 

1063 return compressor(data, level) 

1064 

1065 else: 

1066 

1067 def compress(data, level=compresslevel): 

1068 return compressor(data, level) 

1069 

1070 # verify colormap and indices 

1071 if colormap is not None: 

1072 if datadtypechar not in "BH": 

1073 raise ValueError("invalid data dtype for palette mode") 

1074 colormap = numpy.asarray(colormap, dtype=byteorder + "H") 

1075 if colormap.shape != (3, 2 ** (datadtype.itemsize * 8)): 

1076 raise ValueError("invalid color map shape") 

1077 self._colormap = colormap 

1078 

1079 # verify tile shape 

1080 if tile: 

1081 tile = tuple(int(i) for i in tile[:3]) 

1082 volume = len(tile) == 3 

1083 if ( 

1084 len(tile) < 2 

1085 or tile[-1] % 16 

1086 or tile[-2] % 16 

1087 or any(i < 1 for i in tile) 

1088 ): 

1089 raise ValueError("invalid tile shape") 

1090 else: 

1091 tile = () 

1092 volume = False 

1093 

1094 # normalize data shape to 5D or 6D, depending on volume: 

1095 # (pages, planar_samples, [depth,] height, width, contig_samples) 

1096 datashape = reshape_nd(datashape, 3 if photometric == RGB else 2) 

1097 shape = datashape 

1098 ndim = len(datashape) 

1099 

1100 samplesperpixel = 1 

1101 extrasamples = 0 

1102 if volume and ndim < 3: 

1103 volume = False 

1104 if colormap is not None: 

1105 photometric = PALETTE 

1106 planarconfig = None 

1107 if photometric is None: 

1108 photometric = MINISBLACK 

1109 if bilevel: 

1110 photometric = TIFF.PHOTOMETRIC.MINISWHITE 

1111 elif planarconfig == CONTIG: 

1112 if ndim > 2 and shape[-1] in (3, 4): 

1113 photometric = RGB 

1114 elif planarconfig == SEPARATE: 

1115 if volume and ndim > 3 and shape[-4] in (3, 4): 

1116 photometric = RGB 

1117 elif ndim > 2 and shape[-3] in (3, 4): 

1118 photometric = RGB 

1119 elif ndim > 2 and shape[-1] in (3, 4): 

1120 photometric = RGB 

1121 elif self._imagej: 

1122 photometric = MINISBLACK 

1123 elif volume and ndim > 3 and shape[-4] in (3, 4): 

1124 photometric = RGB 

1125 elif ndim > 2 and shape[-3] in (3, 4): 

1126 photometric = RGB 

1127 if planarconfig and len(shape) <= (3 if volume else 2): 

1128 planarconfig = None 

1129 photometric = MINISBLACK 

1130 if photometric == RGB: 

1131 if len(shape) < 3: 

1132 raise ValueError("not a RGB(A) image") 

1133 if len(shape) < 4: 

1134 volume = False 

1135 if planarconfig is None: 

1136 if shape[-1] in (3, 4): 

1137 planarconfig = CONTIG 

1138 elif shape[-4 if volume else -3] in (3, 4): 

1139 planarconfig = SEPARATE 

1140 elif shape[-1] > shape[-4 if volume else -3]: 

1141 planarconfig = SEPARATE 

1142 else: 

1143 planarconfig = CONTIG 

1144 if planarconfig == CONTIG: 

1145 datashape = (-1, 1) + shape[(-4 if volume else -3) :] 

1146 samplesperpixel = datashape[-1] 

1147 else: 

1148 datashape = (-1,) + shape[(-4 if volume else -3) :] + (1,) 

1149 samplesperpixel = datashape[1] 

1150 if samplesperpixel > 3: 

1151 extrasamples = samplesperpixel - 3 

1152 elif photometric == CFA: 

1153 if len(shape) != 2: 

1154 raise ValueError("invalid CFA image") 

1155 volume = False 

1156 planarconfig = None 

1157 datashape = (-1, 1) + shape[-2:] + (1,) 

1158 if 50706 not in (et[0] for et in extratags): 

1159 raise ValueError("must specify DNG tags for CFA image") 

1160 elif planarconfig and len(shape) > (3 if volume else 2): 

1161 if planarconfig == CONTIG: 

1162 datashape = (-1, 1) + shape[(-4 if volume else -3) :] 

1163 samplesperpixel = datashape[-1] 

1164 else: 

1165 datashape = (-1,) + shape[(-4 if volume else -3) :] + (1,) 

1166 samplesperpixel = datashape[1] 

1167 extrasamples = samplesperpixel - 1 

1168 else: 

1169 planarconfig = None 

1170 # remove trailing 1s 

1171 while len(shape) > 2 and shape[-1] == 1: 

1172 shape = shape[:-1] 

1173 if len(shape) < 3: 

1174 volume = False 

1175 datashape = (-1, 1) + shape[(-3 if volume else -2) :] + (1,) 

1176 

1177 # normalize shape to 6D 

1178 assert len(datashape) in (5, 6) 

1179 if len(datashape) == 5: 

1180 datashape = datashape[:2] + (1,) + datashape[2:] 

1181 if datashape[0] == -1: 

1182 s0 = product(input_shape) // product(datashape[1:]) 

1183 datashape = (s0,) + datashape[1:] 

1184 shape = datashape 

1185 if data is not None: 

1186 data = data.reshape(shape) 

1187 

1188 if tile and not volume: 

1189 tile = (1, tile[-2], tile[-1]) 

1190 

1191 if photometric == PALETTE: 

1192 if samplesperpixel != 1 or extrasamples or shape[1] != 1 or shape[-1] != 1: 

1193 raise ValueError("invalid data shape for palette mode") 

1194 

1195 if photometric == RGB and samplesperpixel == 2: 

1196 raise ValueError("not a RGB image (samplesperpixel=2)") 

1197 

1198 if bilevel: 

1199 if compress: 

1200 raise ValueError("cannot save compressed bilevel image") 

1201 if tile: 

1202 raise ValueError("cannot save tiled bilevel image") 

1203 if photometric not in (0, 1): 

1204 raise ValueError("cannot save bilevel image as %s" % str(photometric)) 

1205 datashape = list(datashape) 

1206 if datashape[-2] % 8: 

1207 datashape[-2] = datashape[-2] // 8 + 1 

1208 else: 

1209 datashape[-2] = datashape[-2] // 8 

1210 datashape = tuple(datashape) 

1211 assert datasize == product(datashape) 

1212 if data is not None: 

1213 data = numpy.packbits(data, axis=-2) 

1214 assert datashape[-2] == data.shape[-2] 

1215 

1216 bytestr = ( 

1217 bytes 

1218 if sys.version[0] == "2" 

1219 else (lambda x: bytes(x, "ascii") if isinstance(x, str) else x) 

1220 ) 

1221 tags = [] # list of (code, ifdentry, ifdvalue, writeonce) 

1222 

1223 strip_or_tile = "Tile" if tile else "Strip" 

1224 tagbytecounts = TIFF.TAG_NAMES[strip_or_tile + "ByteCounts"] 

1225 tag_offsets = TIFF.TAG_NAMES[strip_or_tile + "Offsets"] 

1226 self._tagoffsets = tag_offsets 

1227 

1228 def pack(fmt, *val): 

1229 return struct.pack(byteorder + fmt, *val) 

1230 
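# For reference (comment added, not in the original source): each entry built by
# addtag() below follows the TIFF IFD entry layout of 2-byte tag code, 2-byte
# data type, then count and value/offset fields (4+4 bytes in classic TIFF,
# 8+8 bytes in BigTIFF), i.e. self._tagsize of 12 or 20 bytes.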

1231 def addtag(code, dtype, count, value, writeonce=False): 

1232 # Compute ifdentry & ifdvalue bytes from code, dtype, count, value 

1233 # Append (code, ifdentry, ifdvalue, writeonce) to tags list 

1234 code = int(TIFF.TAG_NAMES.get(code, code)) 

1235 try: 

1236 tifftype = TIFF.DATA_DTYPES[dtype] 

1237 except KeyError: 

1238 raise ValueError("unknown dtype %s" % dtype) 

1239 rawcount = count 

1240 

1241 if dtype == "s": 

1242 # strings 

1243 value = bytestr(value) + b"\0" 

1244 count = rawcount = len(value) 

1245 rawcount = value.find(b"\0\0") 

1246 if rawcount < 0: 

1247 rawcount = count 

1248 else: 

1249 rawcount += 1 # length of string without buffer 

1250 value = (value,) 

1251 elif isinstance(value, bytes): 

1252 # packed binary data 

1253 dtsize = struct.calcsize(dtype) 

1254 if len(value) % dtsize: 

1255 raise ValueError("invalid packed binary data") 

1256 count = len(value) // dtsize 

1257 if len(dtype) > 1: 

1258 count *= int(dtype[:-1]) 

1259 dtype = dtype[-1] 

1260 ifdentry = [pack("HH", code, tifftype), pack(offsetformat, rawcount)] 

1261 ifdvalue = None 

1262 if struct.calcsize(dtype) * count <= offsetsize: 

1263 # value(s) can be written directly 

1264 if isinstance(value, bytes): 

1265 ifdentry.append(pack(valueformat, value)) 

1266 elif count == 1: 

1267 if isinstance(value, (tuple, list, numpy.ndarray)): 

1268 value = value[0] 

1269 ifdentry.append(pack(valueformat, pack(dtype, value))) 

1270 else: 

1271 ifdentry.append(pack(valueformat, pack(str(count) + dtype, *value))) 

1272 else: 

1273 # use offset to value(s) 

1274 ifdentry.append(pack(offsetformat, 0)) 

1275 if isinstance(value, bytes): 

1276 ifdvalue = value 

1277 elif isinstance(value, numpy.ndarray): 

1278 assert value.size == count 

1279 assert value.dtype.char == dtype 

1280 ifdvalue = value.tostring() 

1281 elif isinstance(value, (tuple, list)): 

1282 ifdvalue = pack(str(count) + dtype, *value) 

1283 else: 

1284 ifdvalue = pack(dtype, value) 

1285 tags.append((code, b"".join(ifdentry), ifdvalue, writeonce)) 

1286 

1287 def rational(arg, max_denominator=1000000): 

1288 """ "Return nominator and denominator from float or two integers.""" 

1289 from fractions import Fraction # delayed import 

1290 

1291 try: 

1292 f = Fraction.from_float(arg) 

1293 except TypeError: 

1294 f = Fraction(arg[0], arg[1]) 

1295 f = f.limit_denominator(max_denominator) 

1296 return f.numerator, f.denominator 
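# examples (comment added for reference): rational(300.0) -> (300, 1),
# rational(2.5) -> (5, 2), rational((720000, 10000)) -> (72, 1)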

1297 

1298 if description: 

1299 # user provided description 

1300 addtag("ImageDescription", "s", 0, description, writeonce=True) 

1301 

1302 # write shape and metadata to ImageDescription 

1303 self._metadata = {} if not metadata else metadata.copy() 

1304 if self._imagej: 

1305 description = imagej_description( 

1306 input_shape, 

1307 shape[-1] in (3, 4), 

1308 self._colormap is not None, 

1309 **self._metadata 

1310 ) 

1311 elif metadata or metadata == {}: 

1312 if self._truncate: 

1313 self._metadata.update(truncated=True) 

1314 description = json_description(input_shape, **self._metadata) 

1315 else: 

1316 description = None 

1317 if description: 

1318 # add 64 bytes buffer 

1319 # the image description might be updated later with the final shape 

1320 description = str2bytes(description, "ascii") 

1321 description += b"\0" * 64 

1322 self._descriptionlen = len(description) 

1323 addtag("ImageDescription", "s", 0, description, writeonce=True) 

1324 

1325 if software: 

1326 addtag("Software", "s", 0, software, writeonce=True) 

1327 if datetime is None: 

1328 datetime = self._now() 

1329 addtag( 

1330 "DateTime", "s", 0, datetime.strftime("%Y:%m:%d %H:%M:%S"), writeonce=True 

1331 ) 

1332 addtag("Compression", "H", 1, compresstag) 

1333 if predictor: 

1334 addtag("Predictor", "H", 1, 2) 

1335 addtag("ImageWidth", "I", 1, shape[-2]) 

1336 addtag("ImageLength", "I", 1, shape[-3]) 

1337 if tile: 

1338 addtag("TileWidth", "I", 1, tile[-1]) 

1339 addtag("TileLength", "I", 1, tile[-2]) 

1340 if tile[0] > 1: 

1341 addtag("ImageDepth", "I", 1, shape[-4]) 

1342 addtag("TileDepth", "I", 1, tile[0]) 

1343 addtag("NewSubfileType", "I", 1, 0) 

1344 if not bilevel: 

1345 sampleformat = {"u": 1, "i": 2, "f": 3, "c": 6}[datadtype.kind] 

1346 addtag( 

1347 "SampleFormat", "H", samplesperpixel, (sampleformat,) * samplesperpixel 

1348 ) 

1349 addtag("PhotometricInterpretation", "H", 1, photometric.value) 

1350 if colormap is not None: 

1351 addtag("ColorMap", "H", colormap.size, colormap) 

1352 addtag("SamplesPerPixel", "H", 1, samplesperpixel) 

1353 if bilevel: 

1354 pass 

1355 elif planarconfig and samplesperpixel > 1: 

1356 addtag("PlanarConfiguration", "H", 1, planarconfig.value) 

1357 addtag( 

1358 "BitsPerSample", 

1359 "H", 

1360 samplesperpixel, 

1361 (datadtype.itemsize * 8,) * samplesperpixel, 

1362 ) 

1363 else: 

1364 addtag("BitsPerSample", "H", 1, datadtype.itemsize * 8) 

1365 if extrasamples: 

1366 if photometric == RGB and extrasamples == 1: 

1367 addtag("ExtraSamples", "H", 1, 1) # associated alpha channel 

1368 else: 

1369 addtag("ExtraSamples", "H", extrasamples, (0,) * extrasamples) 

1370 if resolution is not None: 

1371 addtag("XResolution", "2I", 1, rational(resolution[0])) 

1372 addtag("YResolution", "2I", 1, rational(resolution[1])) 

1373 if len(resolution) > 2: 

1374 unit = resolution[2] 

1375 unit = 1 if unit is None else enumarg(TIFF.RESUNIT, unit) 

1376 elif self._imagej: 

1377 unit = 1 

1378 else: 

1379 unit = 2 

1380 addtag("ResolutionUnit", "H", 1, unit) 

1381 elif not self._imagej: 

1382 addtag("XResolution", "2I", 1, (1, 1)) 

1383 addtag("YResolution", "2I", 1, (1, 1)) 

1384 addtag("ResolutionUnit", "H", 1, 1) 

1385 if ijmetadata: 

1386 for t in imagej_metadata_tags(ijmetadata, byteorder): 

1387 addtag(*t) 

1388 

1389 contiguous = not compress 

1390 if tile: 

1391 # one chunk per tile per plane 

1392 tiles = ( 

1393 (shape[2] + tile[0] - 1) // tile[0], 

1394 (shape[3] + tile[1] - 1) // tile[1], 

1395 (shape[4] + tile[2] - 1) // tile[2], 

1396 ) 

1397 numtiles = product(tiles) * shape[1] 

1398 stripbytecounts = [ 

1399 product(tile) * shape[-1] * datadtype.itemsize 

1400 ] * numtiles 

1401 addtag(tagbytecounts, offsetformat, numtiles, stripbytecounts) 

1402 addtag(tag_offsets, offsetformat, numtiles, [0] * numtiles) 

1403 contiguous = contiguous and product(tiles) == 1 

1404 if not contiguous: 

1405 # allocate tile buffer 

1406 chunk = numpy.empty(tile + (shape[-1],), dtype=datadtype) 

1407 elif contiguous: 

1408 # one strip per plane 

1409 if bilevel: 

1410 stripbytecounts = [product(datashape[2:])] * shape[1] 

1411 else: 

1412 stripbytecounts = [product(datashape[2:]) * datadtype.itemsize] * shape[ 

1413 1 

1414 ] 

1415 addtag(tagbytecounts, offsetformat, shape[1], stripbytecounts) 

1416 addtag(tag_offsets, offsetformat, shape[1], [0] * shape[1]) 

1417 addtag("RowsPerStrip", "I", 1, shape[-3]) 

1418 else: 

1419 # compress rowsperstrip or ~64 KB chunks 

1420 rowsize = product(shape[-2:]) * datadtype.itemsize 

1421 if rowsperstrip is None: 

1422 rowsperstrip = 65536 // rowsize 

1423 if rowsperstrip < 1: 

1424 rowsperstrip = 1 

1425 elif rowsperstrip > shape[-3]: 

1426 rowsperstrip = shape[-3] 

1427 addtag("RowsPerStrip", "I", 1, rowsperstrip) 

1428 

1429 numstrips = (shape[-3] + rowsperstrip - 1) // rowsperstrip 

1430 numstrips *= shape[1] 

1431 stripbytecounts = [0] * numstrips 

1432 addtag(tagbytecounts, offsetformat, numstrips, [0] * numstrips) 

1433 addtag(tag_offsets, offsetformat, numstrips, [0] * numstrips) 

1434 

1435 if data is None and not contiguous: 

1436 raise ValueError("cannot write non-contiguous empty file") 

1437 

1438 # add extra tags from user 

1439 for t in extratags: 

1440 addtag(*t) 

1441 

1442 # TODO: check TIFFReadDirectoryCheckOrder warning in files containing 

1443 # multiple tags of same code 

1444 # the entries in an IFD must be sorted in ascending order by tag code 

1445 tags = sorted(tags, key=lambda x: x[0]) 

1446 

1447 if not (self._bigtiff or self._imagej) and (fh.tell() + datasize > 2**31 - 1): 

1448 raise ValueError("data too large for standard TIFF file") 

1449 

1450 # if not compressed or multi-tiled, write the first IFD and then 

1451 # all data contiguously; else, write all IFDs and data interleaved 

1452 for pageindex in range(1 if contiguous else shape[0]): 

1453 # update pointer at ifd_offset 

1454 pos = fh.tell() 

1455 if pos % 2: 

1456 # location of IFD must begin on a word boundary 

1457 fh.write(b"\0") 

1458 pos += 1 

1459 fh.seek(self._ifdoffset) 

1460 fh.write(pack(offsetformat, pos)) 

1461 fh.seek(pos) 

1462 

1463 # write ifdentries 

1464 fh.write(pack(tagnoformat, len(tags))) 

1465 tag_offset = fh.tell() 

1466 fh.write(b"".join(t[1] for t in tags)) 

1467 self._ifdoffset = fh.tell() 

1468 fh.write(pack(offsetformat, 0)) # offset to next IFD 

1469 

1470 # write tag values and patch offsets in ifdentries, if necessary 

1471 for tagindex, tag in enumerate(tags): 

1472 if tag[2]: 

1473 pos = fh.tell() 

1474 if pos % 2: 

1475 # tag value is expected to begin on word boundary 

1476 fh.write(b"\0") 

1477 pos += 1 

1478 fh.seek(tag_offset + tagindex * tagsize + offsetsize + 4) 

1479 fh.write(pack(offsetformat, pos)) 

1480 fh.seek(pos) 

1481 if tag[0] == tag_offsets: 

1482 stripoffsetsoffset = pos 

1483 elif tag[0] == tagbytecounts: 

1484 strip_bytecounts_offset = pos 

1485 elif tag[0] == 270 and tag[2].endswith(b"\0\0\0\0"): 

1486 # image description buffer 

1487 self._descriptionoffset = pos 

1488 self._descriptionlenoffset = tag_offset + tagindex * tagsize + 4 

1489 fh.write(tag[2]) 

1490 

1491 # write image data 

1492 data_offset = fh.tell() 

1493 skip = align - data_offset % align 

1494 fh.seek(skip, 1) 

1495 data_offset += skip 

1496 if contiguous: 

1497 if data is None: 

1498 fh.write_empty(datasize) 

1499 else: 

1500 fh.write_array(data) 

1501 elif tile: 

1502 if data is None: 

1503 fh.write_empty(numtiles * stripbytecounts[0]) 

1504 else: 

1505 stripindex = 0 

1506 for plane in data[pageindex]: 

1507 for tz in range(tiles[0]): 

1508 for ty in range(tiles[1]): 

1509 for tx in range(tiles[2]): 

1510 c0 = min(tile[0], shape[2] - tz * tile[0]) 

1511 c1 = min(tile[1], shape[3] - ty * tile[1]) 

1512 c2 = min(tile[2], shape[4] - tx * tile[2]) 

1513 chunk[c0:, c1:, c2:] = 0 

1514 chunk[:c0, :c1, :c2] = plane[ 

1515 tz * tile[0] : tz * tile[0] + c0, 

1516 ty * tile[1] : ty * tile[1] + c1, 

1517 tx * tile[2] : tx * tile[2] + c2, 

1518 ] 

1519 if compress: 

1520 t = compress(chunk) 

1521 fh.write(t) 

1522 stripbytecounts[stripindex] = len(t) 

1523 stripindex += 1 

1524 else: 

1525 fh.write_array(chunk) 

1526 fh.flush() 

1527 elif compress: 

1528 # write one strip per rowsperstrip 

1529 assert data.shape[2] == 1 # not handling depth 

1530 numstrips = (shape[-3] + rowsperstrip - 1) // rowsperstrip 

1531 stripindex = 0 

1532 for plane in data[pageindex]: 

1533 for i in range(numstrips): 

1534 strip = plane[0, i * rowsperstrip : (i + 1) * rowsperstrip] 

1535 strip = compress(strip) 

1536 fh.write(strip) 

1537 stripbytecounts[stripindex] = len(strip) 

1538 stripindex += 1 

1539 

1540 # update strip/tile offsets and bytecounts if necessary 

1541 pos = fh.tell() 

1542 for tagindex, tag in enumerate(tags): 

1543 if tag[0] == tag_offsets: # strip/tile offsets 

1544 if tag[2]: 

1545 fh.seek(stripoffsetsoffset) 

1546 strip_offset = data_offset 

1547 for size in stripbytecounts: 

1548 fh.write(pack(offsetformat, strip_offset)) 

1549 strip_offset += size 

1550 else: 

1551 fh.seek(tag_offset + tagindex * tagsize + offsetsize + 4) 

1552 fh.write(pack(offsetformat, data_offset)) 

1553 elif tag[0] == tagbytecounts: # strip/tile bytecounts 

1554 if compress: 

1555 if tag[2]: 

1556 fh.seek(strip_bytecounts_offset) 

1557 for size in stripbytecounts: 

1558 fh.write(pack(offsetformat, size)) 

1559 else: 

1560 fh.seek(tag_offset + tagindex * tagsize + offsetsize + 4) 

1561 fh.write(pack(offsetformat, stripbytecounts[0])) 

1562 break 

1563 fh.seek(pos) 

1564 fh.flush() 

1565 

1566 # remove tags that should be written only once 

1567 if pageindex == 0: 

1568 tags = [tag for tag in tags if not tag[-1]] 

1569 

1570 self._shape = shape 

1571 self._datashape = (1,) + input_shape 

1572 self._datadtype = datadtype 

1573 self._dataoffset = data_offset 

1574 self._databytecounts = stripbytecounts 

1575 

1576 if contiguous: 

1577 # write remaining IFDs/tags later 

1578 self._tags = tags 

1579 # return offset and size of image data 

1580 if returnoffset: 

1581 return data_offset, sum(stripbytecounts) 

1582 
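A minimal usage sketch for the returnoffset path above, assuming the enclosing class is this module's TiffWriter and the method shown is its save(); the file name is hypothetical:

import numpy

with TiffWriter('temp.tif') as tif:
    # uncompressed, untiled data are written contiguously, so save() returns
    # the (data_offset, bytecount) pair computed at the end of the method
    offset, bytecount = tif.save(numpy.zeros((16, 16), 'uint16'),
                                 returnoffset=True)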

1583 def _write_remaining_pages(self): 

1584 """Write outstanding IFDs and tags to file.""" 

1585 if not self._tags or self._truncate: 

1586 return 

1587 

1588 fh = self._fh 

1589 fhpos = fh.tell() 

1590 if fhpos % 2: 

1591 fh.write(b"\0") 

1592 fhpos += 1 

1593 byteorder = self._byteorder 

1594 offsetformat = self._offsetformat 

1595 offsetsize = self._offsetsize 

1596 tagnoformat = self._tagnoformat 

1597 tagsize = self._tagsize 

1598 dataoffset = self._dataoffset 

1599 pagedatasize = sum(self._databytecounts) 

1600 pageno = self._shape[0] * self._datashape[0] - 1 

1601 

1602 def pack(fmt, *val): 

1603 return struct.pack(byteorder + fmt, *val) 

1604 

1605 # construct template IFD in memory 

1606 # need to patch offsets to next IFD and data before writing to disk 

1607 ifd = io.BytesIO() 

1608 ifd.write(pack(tagnoformat, len(self._tags))) 

1609 tagoffset = ifd.tell() 

1610 ifd.write(b"".join(t[1] for t in self._tags)) 

1611 ifdoffset = ifd.tell() 

1612 ifd.write(pack(offsetformat, 0)) # offset to next IFD 

1613 # tag values 

1614 for tagindex, tag in enumerate(self._tags): 

1615 offset2value = tagoffset + tagindex * tagsize + offsetsize + 4 

1616 if tag[2]: 

1617 pos = ifd.tell() 

1618 if pos % 2: # tag value is expected to begin on word boundary 

1619 ifd.write(b"\0") 

1620 pos += 1 

1621 ifd.seek(offset2value) 

1622 try: 

1623 ifd.write(pack(offsetformat, pos + fhpos)) 

1624 except Exception: # struct.error 

1625 if self._imagej: 

1626 warnings.warn("truncating ImageJ file") 

1627 self._truncate = True 

1628 return 

1629 raise ValueError("data too large for non-BigTIFF file") 

1630 ifd.seek(pos) 

1631 ifd.write(tag[2]) 

1632 if tag[0] == self._tagoffsets: 

1633 # save strip/tile offsets for later updates 

1634 stripoffset2offset = offset2value 

1635 stripoffset2value = pos 

1636 elif tag[0] == self._tagoffsets: 

1637 # save strip/tile offsets for later updates 

1638 stripoffset2offset = None 

1639 stripoffset2value = offset2value 

1640 # size to word boundary 

1641 if ifd.tell() % 2: 

1642 ifd.write(b"\0") 

1643 

1644 # check if all IFDs fit in file 

1645 pos = fh.tell() 

1646 if not self._bigtiff and pos + ifd.tell() * pageno > 2**32 - 256: 

1647 if self._imagej: 

1648 warnings.warn("truncating ImageJ file") 

1649 self._truncate = True 

1650 return 

1651 raise ValueError("data too large for non-BigTIFF file") 

1652 

1653 # TODO: assemble IFD chain in memory 

1654 for _ in range(pageno): 

1655 # update pointer at IFD offset 

1656 pos = fh.tell() 

1657 fh.seek(self._ifdoffset) 

1658 fh.write(pack(offsetformat, pos)) 

1659 fh.seek(pos) 

1660 self._ifdoffset = pos + ifdoffset 

1661 # update strip/tile offsets in IFD 

1662 dataoffset += pagedatasize # offset to image data 

1663 if stripoffset2offset is None: 

1664 ifd.seek(stripoffset2value) 

1665 ifd.write(pack(offsetformat, dataoffset)) 

1666 else: 

1667 ifd.seek(stripoffset2offset) 

1668 ifd.write(pack(offsetformat, pos + stripoffset2value)) 

1669 ifd.seek(stripoffset2value) 

1670 stripoffset = dataoffset 

1671 for size in self._databytecounts: 

1672 ifd.write(pack(offsetformat, stripoffset)) 

1673 stripoffset += size 

1674 # write IFD entry 

1675 fh.write(ifd.getvalue()) 

1676 

1677 self._tags = None 

1678 self._datadtype = None 

1679 self._dataoffset = None 

1680 self._databytecounts = None 

1681 # do not reset _shape or _data_shape 

1682 

1683 def _write_image_description(self): 

1684 """Write meta data to ImageDescription tag.""" 

1685 if ( 

1686 not self._datashape 

1687 or self._datashape[0] == 1 

1688 or self._descriptionoffset <= 0 

1689 ): 

1690 return 

1691 

1692 colormapped = self._colormap is not None 

1693 if self._imagej: 

1694 isrgb = self._shape[-1] in (3, 4) 

1695 description = imagej_description( 

1696 self._datashape, isrgb, colormapped, **self._metadata 

1697 ) 

1698 else: 

1699 description = json_description(self._datashape, **self._metadata) 

1700 

1701 # rewrite description and its length to file 

1702 description = description.encode("utf-8") 

1703 description = description[: self._descriptionlen - 1] 

1704 pos = self._fh.tell() 

1705 self._fh.seek(self._descriptionoffset) 

1706 self._fh.write(description) 

1707 self._fh.seek(self._descriptionlenoffset) 

1708 self._fh.write( 

1709 struct.pack(self._byteorder + self._offsetformat, len(description) + 1) 

1710 ) 

1711 self._fh.seek(pos) 

1712 

1713 self._descriptionoffset = 0 

1714 self._descriptionlenoffset = 0 

1715 self._descriptionlen = 0 

1716 
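A minimal sketch of when the patching above takes effect, assuming the module's TiffWriter and its ImageJ mode (the file name is hypothetical): only after more than one page of data has been appended and the writer is closed does the reserved ImageDescription buffer get rewritten.

import numpy

with TiffWriter('temp.tif', imagej=True) as tif:
    for _ in range(4):
        tif.save(numpy.zeros((32, 32), 'uint16'))
# close() calls _write_remaining_pages() and then _write_image_description(),
# which rewrites the reserved ImageDescription buffer with the final number
# of images and updates its length field.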

1717 def _now(self): 

1718 """Return current date and time.""" 

1719 return datetime.datetime.now() 

1720 

1721 def close(self): 

1722 """Write remaining pages and close file handle.""" 

1723 if not self._truncate: 

1724 self._write_remaining_pages() 

1725 self._write_image_description() 

1726 self._fh.close() 

1727 

1728 def __enter__(self): 

1729 return self 

1730 

1731 def __exit__(self, exc_type, exc_value, traceback): 

1732 self.close() 

1733 

1734 

1735class TiffFile(object): 

1736 """Read image and metadata from TIFF file. 

1737 

1738 TiffFile instances must be closed using the 'close' method, which is 

1739 automatically called when using the 'with' context manager. 

1740 

1741 Attributes 

1742 ---------- 

1743 pages : TiffPages 

1744 Sequence of TIFF pages in file. 

1745 series : list of TiffPageSeries 

1746 Sequences of closely related TIFF pages. These are computed 

1747 from OME, LSM, ImageJ, etc. metadata or based on similarity 

1748 of page properties such as shape, dtype, and compression. 

1749 byteorder : '>', '<' 

1750 The endianness of data in the file. 

1751 '>': big-endian (Motorola). 

1752 '<': little-endian (Intel). 

1753 is_flag : bool 

1754 If True, the file is of the corresponding format. 

1755 Flags are: bigtiff, movie, shaped, ome, imagej, stk, lsm, fluoview, 

1756 nih, vista, micromanager, metaseries, mdgel, mediacy, tvips, fei, 

1757 sem, scn, svs, scanimage, andor, epics, pilatus, qptiff. 

1758 

1759 All attributes are read-only. 

1760 

1761 Examples 

1762 -------- 

1763 >>> # read image array from TIFF file 

1764 >>> imsave('temp.tif', numpy.random.rand(5, 301, 219)) 

1765 >>> with TiffFile('temp.tif') as tif: 

1766 ... data = tif.asarray() 

1767 >>> data.shape 

1768 (5, 301, 219) 

1769 

1770 """ 

1771 

1772 def __init__( 

1773 self, 

1774 arg, 

1775 name=None, 

1776 offset=None, 

1777 size=None, 

1778 multifile=True, 

1779 movie=None, 

1780 **kwargs 

1781 ): 

1782 """Initialize instance from file. 

1783 

1784 Parameters 

1785 ---------- 

1786 arg : str or open file 

1787 Name of file or open file object. 

1788 The file objects are closed in TiffFile.close(). 

1789 name : str 

1790 Optional name of file in case 'arg' is a file handle. 

1791 offset : int 

1792 Optional start position of embedded file. By default, this is 

1793 the current file position. 

1794 size : int 

1795 Optional size of embedded file. By default, this is the number 

1796 of bytes from the 'offset' to the end of the file. 

1797 multifile : bool 

1798 If True (default), series may include pages from multiple files. 

1799 Currently applies to OME-TIFF only. 

1800 movie : bool 

1801 If True, assume that later pages differ from first page only by 

1802 data offsets and byte counts. Significantly increases speed and 

1803 reduces memory usage when reading movies with thousands of pages. 

1804 Enabling this for non-movie files will result in data corruption 

1805 or crashes. Python 3 only. 

1806 kwargs : bool 

1807 'is_ome': If False, disable processing of OME-XML metadata. 

1808 

1809 """ 

1810 if "fastij" in kwargs: 

1811 del kwargs["fastij"] 

1812 raise DeprecationWarning("the fastij option will be removed") 

1813 for key, value in kwargs.items(): 

1814 if key[:3] == "is_" and key[3:] in TIFF.FILE_FLAGS: 

1815 if value is not None and not value: 

1816 setattr(self, key, bool(value)) 

1817 else: 

1818 raise TypeError("unexpected keyword argument: %s" % key) 

1819 

1820 fh = FileHandle(arg, mode="rb", name=name, offset=offset, size=size) 

1821 self._fh = fh 

1822 self._multifile = bool(multifile) 

1823 self._files = {fh.name: self} # cache of TiffFiles 

1824 try: 

1825 fh.seek(0) 

1826 try: 

1827 byteorder = {b"II": "<", b"MM": ">"}[fh.read(2)] 

1828 except KeyError: 

1829 raise ValueError("not a TIFF file") 

1830 sys_byteorder = {"big": ">", "little": "<"}[sys.byteorder] 

1831 self.isnative = byteorder == sys_byteorder 

1832 

1833 version = struct.unpack(byteorder + "H", fh.read(2))[0] 

1834 if version == 43: 

1835 # BigTiff 

1836 self.is_bigtiff = True 

1837 offsetsize, zero = struct.unpack(byteorder + "HH", fh.read(4)) 

1838 if zero or offsetsize != 8: 

1839 raise ValueError("invalid BigTIFF file") 

1840 self.byteorder = byteorder 

1841 self.offsetsize = 8 

1842 self.offsetformat = byteorder + "Q" 

1843 self.tagnosize = 8 

1844 self.tagnoformat = byteorder + "Q" 

1845 self.tagsize = 20 

1846 self.tagformat1 = byteorder + "HH" 

1847 self.tagformat2 = byteorder + "Q8s" 

1848 elif version == 42: 

1849 self.is_bigtiff = False 

1850 self.byteorder = byteorder 

1851 self.offsetsize = 4 

1852 self.offsetformat = byteorder + "I" 

1853 self.tagnosize = 2 

1854 self.tagnoformat = byteorder + "H" 

1855 self.tagsize = 12 

1856 self.tagformat1 = byteorder + "HH" 

1857 self.tagformat2 = byteorder + "I4s" 

1858 else: 

1859 raise ValueError("invalid TIFF file") 

1860 

1861 # file handle is at offset to offset to first page 

1862 self.pages = TiffPages(self) 

1863 

1864 if self.is_lsm and ( 

1865 self.filehandle.size >= 2**32 

1866 or self.pages[0].compression != 1 

1867 or self.pages[1].compression != 1 

1868 ): 

1869 self._lsm_load_pages() 

1870 self._lsm_fix_strip_offsets() 

1871 self._lsm_fix_strip_bytecounts() 

1872 elif movie: 

1873 self.pages.useframes = True 

1874 

1875 except Exception: 

1876 fh.close() 

1877 raise 

1878 
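A short sketch of the is_* keyword handling above; per the docstring, only falsy values are honored, e.g. to disable OME-XML series detection (the file name is hypothetical):

tif = TiffFile('temp.tif', is_ome=False)
try:
    print(tif.is_ome)  # False regardless of file content; OME series are skipped
finally:
    tif.close()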

1879 @property 

1880 def filehandle(self): 

1881 """Return file handle.""" 

1882 return self._fh 

1883 

1884 @property 

1885 def filename(self): 

1886 """Return name of file handle.""" 

1887 return self._fh.name 

1888 

1889 @lazyattr 

1890 def fstat(self): 

1891 """Return status of file handle as stat_result object.""" 

1892 try: 

1893 return os.fstat(self._fh.fileno()) 

1894 except Exception: # io.UnsupportedOperation 

1895 return None 

1896 

1897 def close(self): 

1898 """Close open file handle(s).""" 

1899 for tif in self._files.values(): 

1900 tif.filehandle.close() 

1901 self._files = {} 

1902 

1903 def asarray(self, key=None, series=None, out=None, validate=True, maxworkers=1): 

1904 """Return image data from multiple TIFF pages as numpy array. 

1905 

1906 By default, the data from the first series is returned. 

1907 

1908 Parameters 

1909 ---------- 

1910 key : int, slice, or sequence of page indices 

1911 Defines which pages to return as array. 

1912 series : int or TiffPageSeries 

1913 Defines which series of pages to return as array. 

1914 out : numpy.ndarray, str, or file-like object; optional 

1915 Buffer where image data will be saved. 

1916 If None (default), a new array will be created. 

1917 If numpy.ndarray, a writable array of compatible dtype and shape. 

1918 If 'memmap', directly memory-map the image data in the TIFF file 

1919 if possible; else create a memory-mapped array in a temporary file. 

1920 If str or open file, the file name or file object used to 

1921 create a memory-map to an array stored in a binary file on disk. 

1922 validate : bool 

1923 If True (default), validate various tags. 

1924 Passed to TiffPage.asarray(). 

1925 maxworkers : int 

1926 Maximum number of threads to concurrently get data from pages. 

1927 Default is 1. If None, up to half the CPU cores are used. 

1928 Reading data from file is limited to a single thread. 

1929 Using multiple threads can significantly speed up this function 

1930 if the bottleneck is decoding compressed data, e.g. in case of 

1931 large LZW compressed LSM files. 

1932 If the bottleneck is I/O or pure Python code, using multiple 

1933 threads might be detrimental. 

1934 

1935 """ 

1936 if not self.pages: 

1937 return numpy.array([]) 

1938 if key is None and series is None: 

1939 series = 0 

1940 if series is not None: 

1941 try: 

1942 series = self.series[series] 

1943 except (KeyError, TypeError): 

1944 pass 

1945 pages = series._pages 

1946 else: 

1947 pages = self.pages 

1948 

1949 if key is None: 

1950 pass 

1951 elif isinstance(key, inttypes): 

1952 pages = [pages[key]] 

1953 elif isinstance(key, slice): 

1954 pages = pages[key] 

1955 elif isinstance(key, collections.Iterable): 

1956 pages = [pages[k] for k in key] 

1957 else: 

1958 raise TypeError("key must be an int, slice, or sequence") 

1959 

1960 if not pages: 

1961 raise ValueError("no pages selected") 

1962 

1963 if self.is_nih: 

1964 result = stack_pages(pages, out=out, maxworkers=maxworkers, squeeze=False) 

1965 elif key is None and series and series.offset: 

1966 typecode = self.byteorder + series.dtype.char 

1967 if out == "memmap" and pages[0].is_memmappable: 

1968 result = self.filehandle.memmap_array( 

1969 typecode, series.shape, series.offset 

1970 ) 

1971 else: 

1972 if out is not None: 

1973 out = create_output(out, series.shape, series.dtype) 

1974 self.filehandle.seek(series.offset) 

1975 result = self.filehandle.read_array( 

1976 typecode, product(series.shape), out=out, native=True 

1977 ) 

1978 elif len(pages) == 1: 

1979 result = pages[0].asarray(out=out, validate=validate) 

1980 else: 

1981 result = stack_pages(pages, out=out, maxworkers=maxworkers) 

1982 

1983 if result is None: 

1984 return 

1985 

1986 if key is None: 

1987 try: 

1988 result.shape = series.shape 

1989 except ValueError: 

1990 try: 

1991 warnings.warn( 

1992 "failed to reshape %s to %s" % (result.shape, series.shape) 

1993 ) 

1994 # try series of expected shapes 

1995 result.shape = (-1,) + series.shape 

1996 except ValueError: 

1997 # revert to generic shape 

1998 result.shape = (-1,) + pages[0].shape 

1999 elif len(pages) == 1: 

2000 result.shape = pages[0].shape 

2001 else: 

2002 result.shape = (-1,) + pages[0].shape 

2003 return result 

2004 
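Common calling patterns for asarray() as documented above, assuming 'temp.tif' was written as in the class docstring:

with TiffFile('temp.tif') as tif:
    data = tif.asarray()                # first series as numpy array
    page0 = tif.asarray(key=0)          # a single page by index
    mapped = tif.asarray(out='memmap')  # memory-map the data if possible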

2005 @lazyattr 

2006 def series(self): 

2007 """Return related pages as TiffPageSeries. 

2008 

2009 Side effect: after calling this function, TiffFile.pages might contain 

2010 TiffPage and TiffFrame instances. 

2011 

2012 """ 

2013 if not self.pages: 

2014 return [] 

2015 

2016 useframes = self.pages.useframes 

2017 keyframe = self.pages.keyframe 

2018 series = [] 

2019 for name in "ome imagej lsm fluoview nih mdgel shaped".split(): 

2020 if getattr(self, "is_" + name, False): 

2021 series = getattr(self, "_%s_series" % name)() 

2022 break 

2023 self.pages.useframes = useframes 

2024 self.pages.keyframe = keyframe 

2025 if not series: 

2026 series = self._generic_series() 

2027 

2028 # remove empty series, e.g. in MD Gel files 

2029 series = [s for s in series if sum(s.shape) > 0] 

2030 

2031 for i, s in enumerate(series): 

2032 s.index = i 

2033 return series 

2034 
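A sketch of inspecting the detected series without loading pixel data (same hypothetical file):

with TiffFile('temp.tif') as tif:
    for s in tif.series:
        print(s.index, s.shape, s.dtype, s.axes)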

2035 def _generic_series(self): 

2036 """Return image series in file.""" 

2037 if self.pages.useframes: 

2038 # movie mode 

2039 page = self.pages[0] 

2040 shape = page.shape 

2041 axes = page.axes 

2042 if len(self.pages) > 1: 

2043 shape = (len(self.pages),) + shape 

2044 axes = "I" + axes 

2045 return [ 

2046 TiffPageSeries(self.pages[:], shape, page.dtype, axes, stype="movie") 

2047 ] 

2048 

2049 self.pages.clear(False) 

2050 self.pages.load() 

2051 result = [] 

2052 keys = [] 

2053 series = {} 

2054 compressions = TIFF.DECOMPESSORS 

2055 for page in self.pages: 

2056 if not page.shape: 

2057 continue 

2058 key = page.shape + (page.axes, page.compression in compressions) 

2059 if key in series: 

2060 series[key].append(page) 

2061 else: 

2062 keys.append(key) 

2063 series[key] = [page] 

2064 for key in keys: 

2065 pages = series[key] 

2066 page = pages[0] 

2067 shape = page.shape 

2068 axes = page.axes 

2069 if len(pages) > 1: 

2070 shape = (len(pages),) + shape 

2071 axes = "I" + axes 

2072 result.append( 

2073 TiffPageSeries(pages, shape, page.dtype, axes, stype="Generic") 

2074 ) 

2075 

2076 return result 

2077 

2078 def _shaped_series(self): 

2079 """Return image series in "shaped" file.""" 

2080 pages = self.pages 

2081 pages.useframes = True 

2082 lenpages = len(pages) 

2083 

2084 def append_series(series, pages, axes, shape, reshape, name, truncated): 

2085 page = pages[0] 

2086 if not axes: 

2087 shape = page.shape 

2088 axes = page.axes 

2089 if len(pages) > 1: 

2090 shape = (len(pages),) + shape 

2091 axes = "Q" + axes 

2092 size = product(shape) 

2093 resize = product(reshape) 

2094 if page.is_contiguous and resize > size and resize % size == 0: 

2095 if truncated is None: 

2096 truncated = True 

2097 axes = "Q" + axes 

2098 shape = (resize // size,) + shape 

2099 try: 

2100 axes = reshape_axes(axes, shape, reshape) 

2101 shape = reshape 

2102 except ValueError as e: 

2103 warnings.warn(str(e)) 

2104 series.append( 

2105 TiffPageSeries( 

2106 pages, 

2107 shape, 

2108 page.dtype, 

2109 axes, 

2110 name=name, 

2111 stype="Shaped", 

2112 truncated=truncated, 

2113 ) 

2114 ) 

2115 

2116 keyframe = axes = shape = reshape = name = None 

2117 series = [] 

2118 index = 0 

2119 while True: 

2120 if index >= lenpages: 

2121 break 

2122 # new keyframe; start of new series 

2123 pages.keyframe = index 

2124 keyframe = pages[index] 

2125 if not keyframe.is_shaped: 

2126 warnings.warn("invalid shape metadata or corrupted file") 

2127 return 

2128 # read metadata 

2129 axes = None 

2130 shape = None 

2131 metadata = json_description_metadata(keyframe.is_shaped) 

2132 name = metadata.get("name", "") 

2133 reshape = metadata["shape"] 

2134 truncated = metadata.get("truncated", None) 

2135 if "axes" in metadata: 

2136 axes = metadata["axes"] 

2137 if len(axes) == len(reshape): 

2138 shape = reshape 

2139 else: 

2140 axes = "" 

2141 warnings.warn("axes do not match shape") 

2142 # skip pages if possible 

2143 spages = [keyframe] 

2144 size = product(reshape) 

2145 npages, mod = divmod(size, product(keyframe.shape)) 

2146 if mod: 

2147 warnings.warn("series shape does not match page shape") 

2148 return 

2149 if 1 < npages <= lenpages - index: 

2150 size *= keyframe._dtype.itemsize 

2151 if truncated: 

2152 npages = 1 

2153 elif ( 

2154 keyframe.is_final 

2155 and keyframe.offset + size < pages[index + 1].offset 

2156 ): 

2157 truncated = False 

2158 else: 

2159 # need to read all pages for series 

2160 truncated = False 

2161 for j in range(index + 1, index + npages): 

2162 page = pages[j] 

2163 page.keyframe = keyframe 

2164 spages.append(page) 

2165 append_series(series, spages, axes, shape, reshape, name, truncated) 

2166 index += npages 

2167 

2168 return series 

2169 

2170 def _imagej_series(self): 

2171 """Return image series in ImageJ file.""" 

2172 # ImageJ's dimension order is always TZCYXS 

2173 # TODO: fix loading of color, composite, or palette images 

2174 self.pages.useframes = True 

2175 self.pages.keyframe = 0 

2176 

2177 ij = self.imagej_metadata 

2178 pages = self.pages 

2179 page = pages[0] 

2180 

2181 def is_hyperstack(): 

2182 # ImageJ hyperstacks store all image metadata in the first page and 

2183 # image data are stored contiguously before the second page, if any 

2184 if not page.is_final: 

2185 return False 

2186 images = ij.get("images", 0) 

2187 if images <= 1: 

2188 return False 

2189 offset, count = page.is_contiguous 

2190 if ( 

2191 count != product(page.shape) * page.bitspersample // 8 

2192 or offset + count * images > self.filehandle.size 

2193 ): 

2194 raise ValueError() 

2195 # check that next page is stored after data 

2196 if len(pages) > 1 and offset + count * images > pages[1].offset: 

2197 return False 

2198 return True 

2199 

2200 try: 

2201 hyperstack = is_hyperstack() 

2202 except ValueError: 

2203 warnings.warn("invalid ImageJ metadata or corrupted file") 

2204 return 

2205 if hyperstack: 

2206 # no need to read other pages 

2207 pages = [page] 

2208 else: 

2209 self.pages.load() 

2210 

2211 shape = [] 

2212 axes = [] 

2213 if "frames" in ij: 

2214 shape.append(ij["frames"]) 

2215 axes.append("T") 

2216 if "slices" in ij: 

2217 shape.append(ij["slices"]) 

2218 axes.append("Z") 

2219 if "channels" in ij and not ( 

2220 page.photometric == 2 and not ij.get("hyperstack", False) 

2221 ): 

2222 shape.append(ij["channels"]) 

2223 axes.append("C") 

2224 remain = ij.get("images", len(pages)) // (product(shape) if shape else 1) 

2225 if remain > 1: 

2226 shape.append(remain) 

2227 axes.append("I") 

2228 if page.axes[0] == "I": 

2229 # contiguous multiple images 

2230 shape.extend(page.shape[1:]) 

2231 axes.extend(page.axes[1:]) 

2232 elif page.axes[:2] == "SI": 

2233 # color-mapped contiguous multiple images 

2234 shape = page.shape[0:1] + tuple(shape) + page.shape[2:] 

2235 axes = list(page.axes[0]) + axes + list(page.axes[2:]) 

2236 else: 

2237 shape.extend(page.shape) 

2238 axes.extend(page.axes) 

2239 

2240 truncated = ( 

2241 hyperstack 

2242 and len(self.pages) == 1 

2243 and page.is_contiguous[1] != product(shape) * page.bitspersample // 8 

2244 ) 

2245 

2246 return [ 

2247 TiffPageSeries( 

2248 pages, shape, page.dtype, axes, stype="ImageJ", truncated=truncated 

2249 ) 

2250 ] 

2251 

2252 def _fluoview_series(self): 

2253 """Return image series in FluoView file.""" 

2254 self.pages.useframes = True 

2255 self.pages.keyframe = 0 

2256 self.pages.load() 

2257 mm = self.fluoview_metadata 

2258 mmhd = list(reversed(mm["Dimensions"])) 

2259 axes = "".join( 

2260 TIFF.MM_DIMENSIONS.get(i[0].upper(), "Q") for i in mmhd if i[1] > 1 

2261 ) 

2262 shape = tuple(int(i[1]) for i in mmhd if i[1] > 1) 

2263 return [ 

2264 TiffPageSeries( 

2265 self.pages, 

2266 shape, 

2267 self.pages[0].dtype, 

2268 axes, 

2269 name=mm["ImageName"], 

2270 stype="FluoView", 

2271 ) 

2272 ] 

2273 

2274 def _mdgel_series(self): 

2275 """Return image series in MD Gel file.""" 

2276 # only a single page, scaled according to metadata in second page 

2277 self.pages.useframes = False 

2278 self.pages.keyframe = 0 

2279 self.pages.load() 

2280 md = self.mdgel_metadata 

2281 if md["FileTag"] in (2, 128): 

2282 dtype = numpy.dtype("float32") 

2283 scale = md["ScalePixel"] 

2284 scale = scale[0] / scale[1] # rational 

2285 if md["FileTag"] == 2: 

2286 # square root data format 

2287 def transform(a): 

2288 return a.astype("float32") ** 2 * scale 

2289 

2290 else: 

2291 

2292 def transform(a): 

2293 return a.astype("float32") * scale 

2294 

2295 else: 

2296 transform = None 

2297 page = self.pages[0] 

2298 return [ 

2299 TiffPageSeries( 

2300 [page], page.shape, dtype, page.axes, transform=transform, stype="MDGel" 

2301 ) 

2302 ] 

2303 

2304 def _nih_series(self): 

2305 """Return image series in NIH file.""" 

2306 self.pages.useframes = True 

2307 self.pages.keyframe = 0 

2308 self.pages.load() 

2309 page0 = self.pages[0] 

2310 if len(self.pages) == 1: 

2311 shape = page0.shape 

2312 axes = page0.axes 

2313 else: 

2314 shape = (len(self.pages),) + page0.shape 

2315 axes = "I" + page0.axes 

2316 return [TiffPageSeries(self.pages, shape, page0.dtype, axes, stype="NIH")] 

2317 

2318 def _ome_series(self): 

2319 """Return image series in OME-TIFF file(s).""" 

2320 from xml.etree import cElementTree as etree # delayed import 

2321 

2322 omexml = self.pages[0].description 

2323 try: 

2324 root = etree.fromstring(omexml) 

2325 except etree.ParseError as e: 

2326 # TODO: test badly encoded OME-XML 

2327 warnings.warn("ome-xml: %s" % e) 

2328 try: 

2329 # might work on Python 2 

2330 omexml = omexml.decode("utf-8", "ignore").encode("utf-8") 

2331 root = etree.fromstring(omexml) 

2332 except Exception: 

2333 return 

2334 

2335 self.pages.useframes = True 

2336 self.pages.keyframe = 0 

2337 self.pages.load() 

2338 

2339 uuid = root.attrib.get("UUID", None) 

2340 self._files = {uuid: self} 

2341 dirname = self._fh.dirname 

2342 modulo = {} 

2343 series = [] 

2344 for element in root: 

2345 if element.tag.endswith("BinaryOnly"): 

2346 # TODO: load OME-XML from master or companion file 

2347 warnings.warn("ome-xml: not an ome-tiff master file") 

2348 break 

2349 if element.tag.endswith("StructuredAnnotations"): 

2350 for annot in element: 

2351 if not annot.attrib.get("Namespace", "").endswith("modulo"): 

2352 continue 

2353 for value in annot: 

2354 for modul in value: 

2355 for along in modul: 

2356 if not along.tag[:-1].endswith("Along"): 

2357 continue 

2358 axis = along.tag[-1] 

2359 newaxis = along.attrib.get("Type", "other") 

2360 newaxis = TIFF.AXES_LABELS[newaxis] 

2361 if "Start" in along.attrib: 

2362 step = float(along.attrib.get("Step", 1)) 

2363 start = float(along.attrib["Start"]) 

2364 stop = float(along.attrib["End"]) + step 

2365 labels = numpy.arange(start, stop, step) 

2366 else: 

2367 labels = [ 

2368 label.text 

2369 for label in along 

2370 if label.tag.endswith("Label") 

2371 ] 

2372 modulo[axis] = (newaxis, labels) 

2373 

2374 if not element.tag.endswith("Image"): 

2375 continue 

2376 

2377 attr = element.attrib 

2378 name = attr.get("Name", None) 

2379 

2380 for pixels in element: 

2381 if not pixels.tag.endswith("Pixels"): 

2382 continue 

2383 attr = pixels.attrib 

2384 dtype = attr.get("PixelType", None) 

2385 axes = "".join(reversed(attr["DimensionOrder"])) 

2386 shape = list(int(attr["Size" + ax]) for ax in axes) 

2387 size = product(shape[:-2]) 

2388 ifds = None 

2389 spp = 1 # samples per pixel 

2390 # FIXME: this implementation assumes the last two 

2391 # dimensions are stored in tiff pages (shape[:-2]). 

2392 # Apparently that is not always the case. 

2393 for data in pixels: 

2394 if data.tag.endswith("Channel"): 

2395 attr = data.attrib 

2396 if ifds is None: 

2397 spp = int(attr.get("SamplesPerPixel", spp)) 

2398 ifds = [None] * (size // spp) 

2399 elif int(attr.get("SamplesPerPixel", 1)) != spp: 

2400 raise ValueError("cannot handle differing SamplesPerPixel") 

2401 continue 

2402 if ifds is None: 

2403 ifds = [None] * (size // spp) 

2404 if not data.tag.endswith("TiffData"): 

2405 continue 

2406 attr = data.attrib 

2407 ifd = int(attr.get("IFD", 0)) 

2408 num = int(attr.get("NumPlanes", 1 if "IFD" in attr else 0)) 

2409 num = int(attr.get("PlaneCount", num)) 

2410 idx = [int(attr.get("First" + ax, 0)) for ax in axes[:-2]] 

2411 try: 

2412 idx = numpy.ravel_multi_index(idx, shape[:-2]) 

2413 except ValueError: 

2414 # ImageJ produces invalid ome-xml when cropping 

2415 warnings.warn("ome-xml: invalid TiffData index") 

2416 continue 

2417 for uuid in data: 

2418 if not uuid.tag.endswith("UUID"): 

2419 continue 

2420 if uuid.text not in self._files: 

2421 if not self._multifile: 

2422 # abort reading multifile OME series 

2423 # and fall back to generic series 

2424 return [] 

2425 fname = uuid.attrib["FileName"] 

2426 try: 

2427 tif = TiffFile(os.path.join(dirname, fname)) 

2428 tif.pages.useframes = True 

2429 tif.pages.keyframe = 0 

2430 tif.pages.load() 

2431 except (IOError, FileNotFoundError, ValueError): 

2432 warnings.warn("ome-xml: failed to read '%s'" % fname) 

2433 break 

2434 self._files[uuid.text] = tif 

2435 tif.close() 

2436 pages = self._files[uuid.text].pages 

2437 try: 

2438 for i in range(num if num else len(pages)): 

2439 ifds[idx + i] = pages[ifd + i] 

2440 except IndexError: 

2441 warnings.warn("ome-xml: index out of range") 

2442 # only process first UUID 

2443 break 

2444 else: 

2445 pages = self.pages 

2446 try: 

2447 for i in range(num if num else len(pages)): 

2448 ifds[idx + i] = pages[ifd + i] 

2449 except IndexError: 

2450 warnings.warn("ome-xml: index out of range") 

2451 

2452 if all(i is None for i in ifds): 

2453 # skip images without data 

2454 continue 

2455 

2456 # set a keyframe on all IFDs 

2457 keyframe = None 

2458 for i in ifds: 

2459 # try to find a TiffPage 

2460 if i and i == i.keyframe: 

2461 keyframe = i 

2462 break 

2463 if not keyframe: 

2464 # reload a TiffPage from file 

2465 for i, keyframe in enumerate(ifds): 

2466 if keyframe: 

2467 keyframe.parent.pages.keyframe = keyframe.index 

2468 keyframe = keyframe.parent.pages[keyframe.index] 

2469 ifds[i] = keyframe 

2470 break 

2471 for i in ifds: 

2472 if i is not None: 

2473 i.keyframe = keyframe 

2474 

2475 dtype = keyframe.dtype 

2476 series.append( 

2477 TiffPageSeries( 

2478 ifds, shape, dtype, axes, parent=self, name=name, stype="OME" 

2479 ) 

2480 ) 

2481 for serie in series: 

2482 shape = list(serie.shape) 

2483 for axis, (newaxis, labels) in modulo.items(): 

2484 i = serie.axes.index(axis) 

2485 size = len(labels) 

2486 if shape[i] == size: 

2487 serie.axes = serie.axes.replace(axis, newaxis, 1) 

2488 else: 

2489 shape[i] //= size 

2490 shape.insert(i + 1, size) 

2491 serie.axes = serie.axes.replace(axis, axis + newaxis, 1) 

2492 serie.shape = tuple(shape) 

2493 # squeeze dimensions 

2494 for serie in series: 

2495 serie.shape, serie.axes = squeeze_axes(serie.shape, serie.axes) 

2496 return series 

2497 

2498 def _lsm_series(self): 

2499 """Return main image series in LSM file. Skip thumbnails.""" 

2500 lsmi = self.lsm_metadata 

2501 axes = TIFF.CZ_LSMINFO_SCANTYPE[lsmi["ScanType"]] 

2502 if self.pages[0].photometric == 2: # RGB; more than one channel 

2503 axes = axes.replace("C", "").replace("XY", "XYC") 

2504 if lsmi.get("DimensionP", 0) > 1: 

2505 axes += "P" 

2506 if lsmi.get("DimensionM", 0) > 1: 

2507 axes += "M" 

2508 axes = axes[::-1] 

2509 shape = tuple(int(lsmi[TIFF.CZ_LSMINFO_DIMENSIONS[i]]) for i in axes) 

2510 name = lsmi.get("Name", "") 

2511 self.pages.keyframe = 0 

2512 pages = self.pages[::2] 

2513 dtype = pages[0].dtype 

2514 series = [TiffPageSeries(pages, shape, dtype, axes, name=name, stype="LSM")] 

2515 

2516 if self.pages[1].is_reduced: 

2517 self.pages.keyframe = 1 

2518 pages = self.pages[1::2] 

2519 dtype = pages[0].dtype 

2520 cp, i = 1, 0 

2521 while cp < len(pages) and i < len(shape) - 2: 

2522 cp *= shape[i] 

2523 i += 1 

2524 shape = shape[:i] + pages[0].shape 

2525 axes = axes[:i] + "CYX" 

2526 series.append( 

2527 TiffPageSeries(pages, shape, dtype, axes, name=name, stype="LSMreduced") 

2528 ) 

2529 

2530 return series 

2531 

2532 def _lsm_load_pages(self): 

2533 """Load all pages from LSM file.""" 

2534 self.pages.cache = True 

2535 self.pages.useframes = True 

2536 # second series: thumbnails 

2537 self.pages.keyframe = 1 

2538 keyframe = self.pages[1] 

2539 for page in self.pages[1::2]: 

2540 page.keyframe = keyframe 

2541 # first series: data 

2542 self.pages.keyframe = 0 

2543 keyframe = self.pages[0] 

2544 for page in self.pages[::2]: 

2545 page.keyframe = keyframe 

2546 

2547 def _lsm_fix_strip_offsets(self): 

2548 """Unwrap strip offsets for LSM files greater than 4 GB. 

2549 

2550 Each series and position requires separate unwrapping (undocumented). 

2551 

2552 """ 

2553 if self.filehandle.size < 2**32: 

2554 return 

2555 

2556 pages = self.pages 

2557 npages = len(pages) 

2558 series = self.series[0] 

2559 axes = series.axes 

2560 

2561 # find positions 

2562 positions = 1 

2563 for i in 0, 1: 

2564 if series.axes[i] in "PM": 

2565 positions *= series.shape[i] 

2566 

2567 # make time axis first 

2568 if positions > 1: 

2569 ntimes = 0 

2570 for i in 1, 2: 

2571 if axes[i] == "T": 

2572 ntimes = series.shape[i] 

2573 break 

2574 if ntimes: 

2575 div, mod = divmod(npages, 2 * positions * ntimes) 

2576 assert mod == 0 

2577 shape = (positions, ntimes, div, 2) 

2578 indices = numpy.arange(product(shape)).reshape(shape) 

2579 indices = numpy.moveaxis(indices, 1, 0) 

2580 else: 

2581 indices = numpy.arange(npages).reshape(-1, 2) 

2582 

2583 # images of reduced page might be stored first 

2584 if pages[0].dataoffsets[0] > pages[1].dataoffsets[0]: 

2585 indices = indices[..., ::-1] 

2586 

2587 # unwrap offsets 

2588 wrap = 0 

2589 previousoffset = 0 

2590 for i in indices.flat: 

2591 page = pages[i] 

2592 dataoffsets = [] 

2593 for currentoffset in page.dataoffsets: 

2594 if currentoffset < previousoffset: 

2595 wrap += 2**32 

2596 dataoffsets.append(currentoffset + wrap) 

2597 previousoffset = currentoffset 

2598 page.dataoffsets = tuple(dataoffsets) 

2599 

2600 def _lsm_fix_strip_bytecounts(self): 

2601 """Set databytecounts to size of compressed data. 

2602 

2603 The StripByteCounts tag in LSM files contains the number of bytes 

2604 for the uncompressed data. 

2605 

2606 """ 

2607 pages = self.pages 

2608 if pages[0].compression == 1: 

2609 return 

2610 # sort pages by first strip offset 

2611 pages = sorted(pages, key=lambda p: p.dataoffsets[0]) 

2612 npages = len(pages) - 1 

2613 for i, page in enumerate(pages): 

2614 if page.index % 2: 

2615 continue 

2616 offsets = page.dataoffsets 

2617 bytecounts = page.databytecounts 

2618 if i < npages: 

2619 lastoffset = pages[i + 1].dataoffsets[0] 

2620 else: 

2621 # LZW compressed strips might be longer than uncompressed 

2622 lastoffset = min(offsets[-1] + 2 * bytecounts[-1], self._fh.size) 

2623 offsets = offsets + (lastoffset,) 

2624 page.databytecounts = tuple( 

2625 offsets[j + 1] - offsets[j] for j in range(len(bytecounts)) 

2626 ) 

2627 

2628 def __getattr__(self, name): 

2629 """Return 'is_flag' attributes from first page.""" 

2630 if name[3:] in TIFF.FILE_FLAGS: 

2631 if not self.pages: 

2632 return False 

2633 value = bool(getattr(self.pages[0], name)) 

2634 setattr(self, name, value) 

2635 return value 

2636 raise AttributeError( 

2637 "'%s' object has no attribute '%s'" % (self.__class__.__name__, name) 

2638 ) 

2639 

2640 def __enter__(self): 

2641 return self 

2642 

2643 def __exit__(self, exc_type, exc_value, traceback): 

2644 self.close() 

2645 

2646 def __str__(self, detail=0, width=79): 

2647 """Return string containing information about file. 

2648 

2649 The detail parameter specifies the level of detail returned: 

2650 

2651 0: file only. 

2652 1: all series, first page of series and its tags. 

2653 2: large tag values and file metadata. 

2654 3: all pages. 

2655 

2656 """ 

2657 info = [ 

2658 "TiffFile '%s'", 

2659 format_size(self._fh.size), 

2660 {"<": "LittleEndian", ">": "BigEndian"}[self.byteorder], 

2661 ] 

2662 if self.is_bigtiff: 

2663 info.append("BigTiff") 

2664 info.append("|".join(f.upper() for f in self.flags)) 

2665 if len(self.pages) > 1: 

2666 info.append("%i Pages" % len(self.pages)) 

2667 if len(self.series) > 1: 

2668 info.append("%i Series" % len(self.series)) 

2669 if len(self._files) > 1: 

2670 info.append("%i Files" % (len(self._files))) 

2671 info = " ".join(info) 

2672 info = info.replace("    ", " ").replace("   ", " ") 

2673 info = info % snipstr(self._fh.name, max(12, width + 2 - len(info))) 

2674 if detail <= 0: 

2675 return info 

2676 info = [info] 

2677 info.append("\n".join(str(s) for s in self.series)) 

2678 if detail >= 3: 

2679 info.extend( 

2680 ( 

2681 TiffPage.__str__(p, detail=detail, width=width) 

2682 for p in self.pages 

2683 if p is not None 

2684 ) 

2685 ) 

2686 else: 

2687 info.extend( 

2688 ( 

2689 TiffPage.__str__(s.pages[0], detail=detail, width=width) 

2690 for s in self.series 

2691 if s.pages[0] is not None 

2692 ) 

2693 ) 

2694 if detail >= 2: 

2695 for name in sorted(self.flags): 

2696 if hasattr(self, name + "_metadata"): 

2697 m = getattr(self, name + "_metadata") 

2698 if m: 

2699 info.append( 

2700 "%s_METADATA\n%s" 

2701 % ( 

2702 name.upper(), 

2703 pformat(m, width=width, height=detail * 12), 

2704 ) 

2705 ) 

2706 return "\n\n".join(info).replace("\n\n\n", "\n\n") 

2707 
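A sketch of the detail levels described in the docstring above:

with TiffFile('temp.tif') as tif:
    print(tif)                    # detail=0: one-line file summary
    print(tif.__str__(detail=2))  # add series, large tag values, and metadata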

2708 @lazyattr 

2709 def flags(self): 

2710 """Return set of file flags.""" 

2711 return set( 

2712 name.lower() 

2713 for name in sorted(TIFF.FILE_FLAGS) 

2714 if getattr(self, "is_" + name) 

2715 ) 

2716 
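The flags set mirrors the per-format is_* attributes resolved in __getattr__ above; a sketch (actual values depend on the file):

with TiffFile('temp.tif') as tif:
    print(tif.flags)      # e.g. {'shaped'} for files written by this module
    print(tif.is_imagej)  # False unless the first page carries ImageJ metadata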

2717 @lazyattr 

2718 def is_mdgel(self): 

2719 """File has MD Gel format.""" 

2720 try: 

2721 return self.pages[0].is_mdgel or self.pages[1].is_mdgel 

2722 except IndexError: 

2723 return False 

2724 

2725 @property 

2726 def is_movie(self): 

2727 """Return if file is a movie.""" 

2728 return self.pages.useframes 

2729 

2730 @lazyattr 

2731 def shaped_metadata(self): 

2732 """Return Tifffile metadata from JSON descriptions as dicts.""" 

2733 if not self.is_shaped: 

2734 return 

2735 return tuple( 

2736 json_description_metadata(s.pages[0].is_shaped) 

2737 for s in self.series 

2738 if s.stype.lower() == "shaped" 

2739 ) 

2740 

2741 @lazyattr 

2742 def ome_metadata(self): 

2743 """Return OME XML as dict.""" 

2744 # TODO: remove this or return XML? 

2745 if not self.is_ome: 

2746 return 

2747 return xml2dict(self.pages[0].description)["OME"] 

2748 

2749 @lazyattr 

2750 def qptiff_metadata(self): 

2751 """Return PerkinElmer-QPI-ImageDescription XML element as dict.""" 

2752 if not self.is_qptiff: 

2753 return 

2754 root = "PerkinElmer-QPI-ImageDescription" 

2755 xml = self.pages[0].description.replace(" " + root + " ", root) 

2756 return xml2dict(xml)[root] 

2757 

2758 @lazyattr 

2759 def lsm_metadata(self): 

2760 """Return LSM metadata from CZ_LSMINFO tag as dict.""" 

2761 if not self.is_lsm: 

2762 return 

2763 return self.pages[0].tags["CZ_LSMINFO"].value 

2764 

2765 @lazyattr 

2766 def stk_metadata(self): 

2767 """Return STK metadata from UIC tags as dict.""" 

2768 if not self.is_stk: 

2769 return 

2770 page = self.pages[0] 

2771 tags = page.tags 

2772 result = {} 

2773 result["NumberPlanes"] = tags["UIC2tag"].count 

2774 if page.description: 

2775 result["PlaneDescriptions"] = page.description.split("\0") 

2776 # result['plane_descriptions'] = stk_description_metadata( 

2777 # page.image_description) 

2778 if "UIC1tag" in tags: 

2779 result.update(tags["UIC1tag"].value) 

2780 if "UIC3tag" in tags: 

2781 result.update(tags["UIC3tag"].value) # wavelengths 

2782 if "UIC4tag" in tags: 

2783 result.update(tags["UIC4tag"].value) # override uic1 tags 

2784 uic2tag = tags["UIC2tag"].value 

2785 result["ZDistance"] = uic2tag["ZDistance"] 

2786 result["TimeCreated"] = uic2tag["TimeCreated"] 

2787 result["TimeModified"] = uic2tag["TimeModified"] 

2788 try: 

2789 result["DatetimeCreated"] = numpy.array( 

2790 [ 

2791 julian_datetime(*dt) 

2792 for dt in zip(uic2tag["DateCreated"], uic2tag["TimeCreated"]) 

2793 ], 

2794 dtype="datetime64[ns]", 

2795 ) 

2796 result["DatetimeModified"] = numpy.array( 

2797 [ 

2798 julian_datetime(*dt) 

2799 for dt in zip(uic2tag["DateModified"], uic2tag["TimeModified"]) 

2800 ], 

2801 dtype="datetime64[ns]", 

2802 ) 

2803 except ValueError as e: 

2804 warnings.warn("stk_metadata: %s" % e) 

2805 return result 

2806 

2807 @lazyattr 

2808 def imagej_metadata(self): 

2809 """Return consolidated ImageJ metadata as dict.""" 

2810 if not self.is_imagej: 

2811 return 

2812 page = self.pages[0] 

2813 result = imagej_description_metadata(page.is_imagej) 

2814 if "IJMetadata" in page.tags: 

2815 try: 

2816 result.update(page.tags["IJMetadata"].value) 

2817 except Exception: 

2818 pass 

2819 return result 

2820 
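A sketch of reading the consolidated ImageJ metadata; the file name is hypothetical and the available keys depend on the writer (see _imagej_series above for the keys used internally):

with TiffFile('imagej_hyperstack.tif') as tif:
    if tif.is_imagej:
        ij = tif.imagej_metadata
        print(ij.get('images'), ij.get('slices'), ij.get('frames'))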

2821 @lazyattr 

2822 def fluoview_metadata(self): 

2823 """Return consolidated FluoView metadata as dict.""" 

2824 if not self.is_fluoview: 

2825 return 

2826 result = {} 

2827 page = self.pages[0] 

2828 result.update(page.tags["MM_Header"].value) 

2829 # TODO: read stamps from all pages 

2830 result["Stamp"] = page.tags["MM_Stamp"].value 

2831 # skip parsing image description; not reliable 

2832 # try: 

2833 # t = fluoview_description_metadata(page.image_description) 

2834 # if t is not None: 

2835 # result['ImageDescription'] = t 

2836 # except Exception as e: 

2837 # warnings.warn( 

2838 # "failed to read FluoView image description: %s" % e) 

2839 return result 

2840 

2841 @lazyattr 

2842 def nih_metadata(self): 

2843 """Return NIH Image metadata from NIHImageHeader tag as dict.""" 

2844 if not self.is_nih: 

2845 return 

2846 return self.pages[0].tags["NIHImageHeader"].value 

2847 

2848 @lazyattr 

2849 def fei_metadata(self): 

2850 """Return FEI metadata from SFEG or HELIOS tags as dict.""" 

2851 if not self.is_fei: 

2852 return 

2853 tags = self.pages[0].tags 

2854 if "FEI_SFEG" in tags: 

2855 return tags["FEI_SFEG"].value 

2856 if "FEI_HELIOS" in tags: 

2857 return tags["FEI_HELIOS"].value 

2858 

2859 @lazyattr 

2860 def sem_metadata(self): 

2861 """Return SEM metadata from CZ_SEM tag as dict.""" 

2862 if not self.is_sem: 

2863 return 

2864 return self.pages[0].tags["CZ_SEM"].value 

2865 

2866 @lazyattr 

2867 def mdgel_metadata(self): 

2868 """Return consolidated metadata from MD GEL tags as dict.""" 

2869 for page in self.pages[:2]: 

2870 if "MDFileTag" in page.tags: 

2871 tags = page.tags 

2872 break 

2873 else: 

2874 return 

2875 result = {} 

2876 for code in range(33445, 33453): 

2877 name = TIFF.TAGS[code] 

2878 if name not in tags: 

2879 continue 

2880 result[name[2:]] = tags[name].value 

2881 return result 

2882 

2883 @lazyattr 

2884 def andor_metadata(self): 

2885 """Return Andor tags as dict.""" 

2886 return self.pages[0].andor_tags 

2887 

2888 @lazyattr 

2889 def epics_metadata(self): 

2890 """Return EPICS areaDetector tags as dict.""" 

2891 return self.pages[0].epics_tags 

2892 

2893 @lazyattr 

2894 def tvips_metadata(self): 

2895 """Return TVIPS tag as dict.""" 

2896 if not self.is_tvips: 

2897 return 

2898 return self.pages[0].tags["TVIPS"].value 

2899 

2900 @lazyattr 

2901 def metaseries_metadata(self): 

2902 """Return MetaSeries metadata from image description as dict.""" 

2903 if not self.is_metaseries: 

2904 return 

2905 return metaseries_description_metadata(self.pages[0].description) 

2906 

2907 @lazyattr 

2908 def pilatus_metadata(self): 

2909 """Return Pilatus metadata from image description as dict.""" 

2910 if not self.is_pilatus: 

2911 return 

2912 return pilatus_description_metadata(self.pages[0].description) 

2913 

2914 @lazyattr 

2915 def micromanager_metadata(self): 

2916 """Return consolidated MicroManager metadata as dict.""" 

2917 if not self.is_micromanager: 

2918 return 

2919 # from file header 

2920 result = read_micromanager_metadata(self._fh) 

2921 # from tag 

2922 result.update(self.pages[0].tags["MicroManagerMetadata"].value) 

2923 return result 

2924 

2925 @lazyattr 

2926 def scanimage_metadata(self): 

2927 """Return ScanImage non-varying frame and ROI metadata as dict.""" 

2928 if not self.is_scanimage: 

2929 return 

2930 result = {} 

2931 try: 

2932 framedata, roidata = read_scanimage_metadata(self._fh) 

2933 result["FrameData"] = framedata 

2934 result.update(roidata) 

2935 except ValueError: 

2936 pass 

2937 # TODO: scanimage_artist_metadata 

2938 try: 

2939 result["Description"] = scanimage_description_metadata( 

2940 self.pages[0].description 

2941 ) 

2942 except Exception as e: 

2943 warnings.warn("scanimage_description_metadata failed: %s" % e) 

2944 return result 

2945 

2946 @property 

2947 def geotiff_metadata(self): 

2948 """Return GeoTIFF metadata from first page as dict.""" 

2949 if not self.is_geotiff: 

2950 return 

2951 return self.pages[0].geotiff_tags 

2952 
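Each format flag has a matching *_metadata property returning a dict (or None when the flag is not set); a sketch with a hypothetical LSM file:

with TiffFile('scan.lsm') as tif:
    if tif.is_lsm:
        print(tif.lsm_metadata['ScanType'])  # key used by _lsm_series above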

2953 

2954class TiffPages(object): 

2955 """Sequence of TIFF image file directories.""" 

2956 

2957 def __init__(self, parent): 

2958 """Initialize instance from file. Read first TiffPage from file. 

2959 

2960 The file position must be at the location of the offset to the first TiffPage. 

2961 

2962 """ 

2963 self.parent = parent 

2964 self.pages = [] # cache of TiffPages, TiffFrames, or their offsets 

2965 self.complete = False # True if offsets to all pages were read 

2966 self._tiffpage = TiffPage # class for reading tiff pages 

2967 self._keyframe = None 

2968 self._cache = True 

2969 

2970 # read offset to first page 

2971 fh = parent.filehandle 

2972 self._nextpageoffset = fh.tell() 

2973 offset = struct.unpack(parent.offsetformat, fh.read(parent.offsetsize))[0] 

2974 

2975 if offset == 0: 

2976 # warnings.warn('file contains no pages') 

2977 self.complete = True 

2978 return 

2979 if offset >= fh.size: 

2980 warnings.warn("invalid page offset (%i)" % offset) 

2981 self.complete = True 

2982 return 

2983 

2984 # always read and cache first page 

2985 fh.seek(offset) 

2986 page = TiffPage(parent, index=0) 

2987 self.pages.append(page) 

2988 self._keyframe = page 

2989 

2990 @property 

2991 def cache(self): 

2992 """Return if pages/frames are currently being cached.""" 

2993 return self._cache 

2994 

2995 @cache.setter 

2996 def cache(self, value): 

2997 """Enable or disable caching of pages/frames. Clear cache if False.""" 

2998 value = bool(value) 

2999 if self._cache and not value: 

3000 self.clear() 

3001 self._cache = value 

3002 

3003 @property 

3004 def useframes(self): 

3005 """Return if currently using TiffFrame (True) or TiffPage (False).""" 

3006 return self._tiffpage == TiffFrame and TiffFrame is not TiffPage 

3007 

3008 @useframes.setter 

3009 def useframes(self, value): 

3010 """Set to use TiffFrame (True) or TiffPage (False).""" 

3011 self._tiffpage = TiffFrame if value else TiffPage 

3012 

3013 @property 

3014 def keyframe(self): 

3015 """Return index of current keyframe.""" 

3016 return self._keyframe.index 

3017 

3018 @keyframe.setter 

3019 def keyframe(self, index): 

3020 """Set current keyframe. Load TiffPage from file if necessary.""" 

3021 if self._keyframe.index == index: 

3022 return 

3023 if self.complete or 0 <= index < len(self.pages): 

3024 page = self.pages[index] 

3025 if isinstance(page, TiffPage): 

3026 self._keyframe = page 

3027 return 

3028 elif isinstance(page, TiffFrame): 

3029 # remove existing frame 

3030 self.pages[index] = page.offset 

3031 # load TiffPage from file 

3032 useframes = self.useframes 

3033 self._tiffpage = TiffPage 

3034 self._keyframe = self[index] 

3035 self.useframes = useframes 

3036 

3037 @property 

3038 def next_page_offset(self): 

3039 """Return offset where offset to a new page can be stored.""" 

3040 if not self.complete: 

3041 self._seek(-1) 

3042 return self._nextpageoffset 

3043 

3044 def load(self): 

3045 """Read all remaining pages from file.""" 

3046 fh = self.parent.filehandle 

3047 keyframe = self._keyframe 

3048 pages = self.pages 

3049 if not self.complete: 

3050 self._seek(-1) 

3051 for i, page in enumerate(pages): 

3052 if isinstance(page, inttypes): 

3053 fh.seek(page) 

3054 page = self._tiffpage(self.parent, index=i, keyframe=keyframe) 

3055 pages[i] = page 

3056 

3057 def clear(self, fully=True): 

3058 """Delete all but first page from cache. Set keyframe to first page.""" 

3059 pages = self.pages 

3060 if not self._cache or len(pages) < 1: 

3061 return 

3062 self._keyframe = pages[0] 

3063 if fully: 

3064 # delete all but first TiffPage/TiffFrame 

3065 for i, page in enumerate(pages[1:]): 

3066 if not isinstance(page, inttypes): 

3067 pages[i + 1] = page.offset 

3068 elif TiffFrame is not TiffPage: 

3069 # delete only TiffFrames 

3070 for i, page in enumerate(pages): 

3071 if isinstance(page, TiffFrame): 

3072 pages[i] = page.offset 

3073 

3074 def _seek(self, index, maxpages=2**22): 

3075 """Seek file to offset of specified page.""" 

3076 pages = self.pages 

3077 if not pages: 

3078 return 

3079 

3080 fh = self.parent.filehandle 

3081 if fh.closed: 

3082 raise RuntimeError("FileHandle is closed") 

3083 

3084 if self.complete or 0 <= index < len(pages): 

3085 page = pages[index] 

3086 offset = page if isinstance(page, inttypes) else page.offset 

3087 fh.seek(offset) 

3088 return 

3089 

3090 offsetformat = self.parent.offsetformat 

3091 offsetsize = self.parent.offsetsize 

3092 tagnoformat = self.parent.tagnoformat 

3093 tagnosize = self.parent.tagnosize 

3094 tagsize = self.parent.tagsize 

3095 unpack = struct.unpack 

3096 

3097 page = pages[-1] 

3098 offset = page if isinstance(page, inttypes) else page.offset 

3099 

3100 while len(pages) < maxpages: 

3101 # read offsets to pages from file until index is reached 

3102 fh.seek(offset) 

3103 # skip tags 

3104 try: 

3105 tagno = unpack(tagnoformat, fh.read(tagnosize))[0] 

3106 if tagno > 4096: 

3107 raise ValueError("suspicious number of tags") 

3108 except Exception: 

3109 warnings.warn("corrupted tag list at offset %i" % offset) 

3110 del pages[-1] 

3111 self.complete = True 

3112 break 

3113 self._nextpageoffset = offset + tagnosize + tagno * tagsize 

3114 fh.seek(self._nextpageoffset) 

3115 

3116 # read offset to next page 

3117 offset = unpack(offsetformat, fh.read(offsetsize))[0] 

3118 if offset == 0: 

3119 self.complete = True 

3120 break 

3121 if offset >= fh.size: 

3122 warnings.warn("invalid page offset (%i)" % offset) 

3123 self.complete = True 

3124 break 

3125 

3126 pages.append(offset) 

3127 if 0 <= index < len(pages): 

3128 break 

3129 

3130 if index >= len(pages): 

3131 raise IndexError("list index out of range") 

3132 

3133 page = pages[index] 

3134 fh.seek(page if isinstance(page, inttypes) else page.offset) 

3135 

3136 def __bool__(self): 

3137 """Return True if file contains any pages.""" 

3138 return len(self.pages) > 0 

3139 

3140 def __len__(self): 

3141 """Return number of pages in file.""" 

3142 if not self.complete: 

3143 self._seek(-1) 

3144 return len(self.pages) 

3145 

3146 def __getitem__(self, key): 

3147 """Return specified page(s) from cache or file.""" 

3148 pages = self.pages 

3149 if not pages: 

3150 raise IndexError("list index out of range") 

3151 if key == 0: 

3152 return pages[key] 

3153 

3154 if isinstance(key, slice): 

3155 start, stop, _ = key.indices(2**31 - 1) 

3156 if not self.complete and max(stop, start) > len(pages): 

3157 self._seek(-1) 

3158 return [self[i] for i in range(*key.indices(len(pages)))] 

3159 

3160 if self.complete and key >= len(pages): 

3161 raise IndexError("list index out of range") 

3162 

3163 try: 

3164 page = pages[key] 

3165 except IndexError: 

3166 page = 0 

3167 if not isinstance(page, inttypes): 

3168 return page 

3169 

3170 self._seek(key) 

3171 page = self._tiffpage(self.parent, index=key, keyframe=self._keyframe) 

3172 if self._cache: 

3173 pages[key] = page 

3174 return page 

3175 

3176 def __iter__(self): 

3177 """Return iterator over all pages.""" 

3178 i = 0 

3179 while True: 

3180 try: 

3181 yield self[i] 

3182 i += 1 

3183 except IndexError: 

3184 break 

3185 

3186 
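TiffPages behaves like a lazy sequence: len(), indexing, slicing, and iteration read IFDs from the file on demand and cache them if caching is enabled. A sketch:

with TiffFile('temp.tif') as tif:
    print(len(tif.pages))
    for page in tif.pages:
        print(page.index, page.shape, page.dtype)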

3187class TiffPage(object): 

3188 """TIFF image file directory (IFD). 

3189 

3190 Attributes 

3191 ---------- 

3192 index : int 

3193 Index of page in file. 

3194 dtype : numpy.dtype or None 

3195 Data type (native byte order) of the image in IFD. 

3196 shape : tuple 

3197 Dimensions of the image in IFD. 

3198 axes : str 

3199 Axes label codes: 

3200 'X' width, 'Y' height, 'S' sample, 'I' image series|page|plane, 

3201 'Z' depth, 'C' color|em-wavelength|channel, 'E' ex-wavelength|lambda, 

3202 'T' time, 'R' region|tile, 'A' angle, 'P' phase, 'H' lifetime, 

3203 'L' exposure, 'V' event, 'Q' unknown, '_' missing 

3204 tags : dict 

3205 Dictionary of tags in IFD. {tag.name: TiffTag} 

3206 colormap : numpy.ndarray 

3207 Color look up table, if exists. 

3208 

3209 All attributes are read-only. 

3210 

3211 Notes 

3212 ----- 

3213 The internal, normalized '_shape' attribute is 6-dimensional: 

3214 

3215 0 : number of planes/images (stk, ij). 

3216 1 : planar samplesperpixel. 

3217 2 : imagedepth Z (sgi). 

3218 3 : imagelength Y. 

3219 4 : imagewidth X. 

3220 5 : contig samplesperpixel. 

3221 

3222 """ 

3223 

3224 # default properties; will be updated from tags 

3225 imagewidth = 0 

3226 imagelength = 0 

3227 imagedepth = 1 

3228 tilewidth = 0 

3229 tilelength = 0 

3230 tiledepth = 1 

3231 bitspersample = 1 

3232 samplesperpixel = 1 

3233 sampleformat = 1 

3234 rowsperstrip = 2**32 - 1 

3235 compression = 1 

3236 planarconfig = 1 

3237 fillorder = 1 

3238 photometric = 0 

3239 predictor = 1 

3240 extrasamples = 1 

3241 colormap = None 

3242 software = "" 

3243 description = "" 

3244 description1 = "" 

3245 

3246 def __init__(self, parent, index, keyframe=None): 

3247 """Initialize instance from file. 

3248 

3249 The file handle position must be at offset to a valid IFD. 

3250 

3251 """ 

3252 self.parent = parent 

3253 self.index = index 

3254 self.shape = () 

3255 self._shape = () 

3256 self.dtype = None 

3257 self._dtype = None 

3258 self.axes = "" 

3259 self.tags = {} 

3260 

3261 self.dataoffsets = () 

3262 self.databytecounts = () 

3263 

3264 # read TIFF IFD structure and its tags from file 

3265 fh = parent.filehandle 

3266 self.offset = fh.tell() # offset to this IFD 

3267 try: 

3268 tagno = struct.unpack(parent.tagnoformat, fh.read(parent.tagnosize))[0] 

3269 if tagno > 4096: 

3270 raise ValueError("suspicious number of tags") 

3271 except Exception: 

3272 raise ValueError("corrupted tag list at offset %i" % self.offset) 

3273 

3274 tagsize = parent.tagsize 

3275 data = fh.read(tagsize * tagno) 

3276 tags = self.tags 

3277 index = -tagsize 

3278 for _ in range(tagno): 

3279 index += tagsize 

3280 try: 

3281 tag = TiffTag(self.parent, data[index : index + tagsize]) 

3282 except TiffTag.Error as e: 

3283 warnings.warn(str(e)) 

3284 continue 

3285 tagname = tag.name 

3286 if tagname not in tags: 

3287 name = tagname 

3288 tags[name] = tag 

3289 else: 

3290 # some files contain multiple tags with the same code 

3291 # e.g. MicroManager files contain two ImageDescription tags 

3292 i = 1 

3293 while True: 

3294 name = "%s%i" % (tagname, i) 

3295 if name not in tags: 

3296 tags[name] = tag 

3297 break 
 i += 1  # advance the suffix; without this a third duplicate tag would loop forever 

3298 name = TIFF.TAG_ATTRIBUTES.get(name, "") 

3299 if name: 

3300 if name[:3] in "sof des" and not isinstance(tag.value, str): 

3301 pass # wrong string type for software, description 

3302 else: 

3303 setattr(self, name, tag.value) 

3304 
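# Editor's note (illustration, not in the original source): duplicate tags are
# stored under numbered keys, e.g. a MicroManager file carrying two
# ImageDescription tags ends up with tags['ImageDescription'] and
# tags['ImageDescription1'].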

3305 if not tags: 

3306 return # found in FIBICS 

3307 

3308 # consolidate private tags; remove them from self.tags 

3309 if self.is_andor: 

3310 self.andor_tags 

3311 elif self.is_epics: 

3312 self.epics_tags 

3313 

3314 if self.is_lsm or (self.index and self.parent.is_lsm): 

3315 # correct non-standard LSM bitspersample tags 

3316 self.tags["BitsPerSample"]._fix_lsm_bitspersample(self) 

3317 

3318 if self.is_vista or (self.index and self.parent.is_vista): 

3319 # ISS Vista writes wrong ImageDepth tag 

3320 self.imagedepth = 1 

3321 

3322 if self.is_stk and "UIC1tag" in tags and not tags["UIC1tag"].value: 

3323 # read UIC1tag now that plane count is known 

3324 uic1tag = tags["UIC1tag"] 

3325 fh.seek(uic1tag.valueoffset) 

3326 tags["UIC1tag"].value = read_uic1tag( 

3327 fh, 

3328 self.parent.byteorder, 

3329 uic1tag.dtype, 

3330 uic1tag.count, 

3331 None, 

3332 tags["UIC2tag"].count, 

3333 ) 

3334 

3335 if "IJMetadata" in tags: 

3336 # decode IJMetadata tag 

3337 try: 

3338 tags["IJMetadata"].value = imagej_metadata( 

3339 tags["IJMetadata"].value, 

3340 tags["IJMetadataByteCounts"].value, 

3341 self.parent.byteorder, 

3342 ) 

3343 except Exception as e: 

3344 warnings.warn(str(e)) 

3345 

3346 if "BitsPerSample" in tags: 

3347 tag = tags["BitsPerSample"] 

3348 if tag.count == 1: 

3349 self.bitspersample = tag.value 

3350 else: 

3351 # LSM might list more items than samplesperpixel 

3352 value = tag.value[: self.samplesperpixel] 

3353 if any((v - value[0] for v in value)): 

3354 self.bitspersample = value 

3355 else: 

3356 self.bitspersample = value[0] 

3357 

3358 if "SampleFormat" in tags: 

3359 tag = tags["SampleFormat"] 

3360 if tag.count == 1: 

3361 self.sampleformat = tag.value 

3362 else: 

3363 value = tag.value[: self.samplesperpixel] 

3364 if any((v - value[0] for v in value)): 

3365 self.sampleformat = value 

3366 else: 

3367 self.sampleformat = value[0] 

3368 

3369 if "ImageLength" in tags: 

3370 if "RowsPerStrip" not in tags or tags["RowsPerStrip"].count > 1: 

3371 self.rowsperstrip = self.imagelength 

3372 # self.stripsperimage = int(math.floor( 

3373 # float(self.imagelength + self.rowsperstrip - 1) / 

3374 # self.rowsperstrip)) 

3375 

3376 # determine dtype 

3377 dtype = self.sampleformat, self.bitspersample 

3378 dtype = TIFF.SAMPLE_DTYPES.get(dtype, None) 

3379 if dtype is not None: 

3380 dtype = numpy.dtype(dtype) 

3381 self.dtype = self._dtype = dtype 

3382 
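# Editor's note (illustration, not in the original source): for example,
# (sampleformat=1, bitspersample=16) resolves through TIFF.SAMPLE_DTYPES to
# numpy.dtype('uint16'), and (3, 32) to numpy.dtype('float32'); pairs missing
# from that table leave self.dtype as None.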

3383 # determine shape of data 

3384 imagelength = self.imagelength 

3385 imagewidth = self.imagewidth 

3386 imagedepth = self.imagedepth 

3387 samplesperpixel = self.samplesperpixel 

3388 

3389 if self.is_stk: 

3390 assert self.imagedepth == 1 

3391 uictag = tags["UIC2tag"].value 

3392 planes = tags["UIC2tag"].count 

3393 if self.planarconfig == 1: 

3394 self._shape = (planes, 1, 1, imagelength, imagewidth, samplesperpixel) 

3395 if samplesperpixel == 1: 

3396 self.shape = (planes, imagelength, imagewidth) 

3397 self.axes = "YX" 

3398 else: 

3399 self.shape = (planes, imagelength, imagewidth, samplesperpixel) 

3400 self.axes = "YXS" 

3401 else: 

3402 self._shape = (planes, samplesperpixel, 1, imagelength, imagewidth, 1) 

3403 if samplesperpixel == 1: 

3404 self.shape = (planes, imagelength, imagewidth) 

3405 self.axes = "YX" 

3406 else: 

3407 self.shape = (planes, samplesperpixel, imagelength, imagewidth) 

3408 self.axes = "SYX" 

3409 # detect type of series 

3410 if planes == 1: 

3411 self.shape = self.shape[1:] 

3412 elif numpy.all(uictag["ZDistance"] != 0): 

3413 self.axes = "Z" + self.axes 

3414 elif numpy.all(numpy.diff(uictag["TimeCreated"]) != 0): 

3415 self.axes = "T" + self.axes 

3416 else: 

3417 self.axes = "I" + self.axes 

3418 elif self.photometric == 2 or samplesperpixel > 1: # PHOTOMETRIC.RGB 

3419 if self.planarconfig == 1: 

3420 self._shape = ( 

3421 1, 

3422 1, 

3423 imagedepth, 

3424 imagelength, 

3425 imagewidth, 

3426 samplesperpixel, 

3427 ) 

3428 if imagedepth == 1: 

3429 self.shape = (imagelength, imagewidth, samplesperpixel) 

3430 self.axes = "YXS" 

3431 else: 

3432 self.shape = (imagedepth, imagelength, imagewidth, samplesperpixel) 

3433 self.axes = "ZYXS" 

3434 else: 

3435 self._shape = ( 

3436 1, 

3437 samplesperpixel, 

3438 imagedepth, 

3439 imagelength, 

3440 imagewidth, 

3441 1, 

3442 ) 

3443 if imagedepth == 1: 

3444 self.shape = (samplesperpixel, imagelength, imagewidth) 

3445 self.axes = "SYX" 

3446 else: 

3447 self.shape = (samplesperpixel, imagedepth, imagelength, imagewidth) 

3448 self.axes = "SZYX" 

3449 else: 

3450 self._shape = (1, 1, imagedepth, imagelength, imagewidth, 1) 

3451 if imagedepth == 1: 

3452 self.shape = (imagelength, imagewidth) 

3453 self.axes = "YX" 

3454 else: 

3455 self.shape = (imagedepth, imagelength, imagewidth) 

3456 self.axes = "ZYX" 

3457 

3458 # dataoffsets and databytecounts 

3459 if "TileOffsets" in tags: 

3460 self.dataoffsets = tags["TileOffsets"].value 

3461 elif "StripOffsets" in tags: 

3462 self.dataoffsets = tags["StripOffsets"].value 

3463 else: 

3464 self.dataoffsets = (0,) 

3465 

3466 if "TileByteCounts" in tags: 

3467 self.databytecounts = tags["TileByteCounts"].value 

3468 elif "StripByteCounts" in tags: 

3469 self.databytecounts = tags["StripByteCounts"].value 

3470 else: 

3471 self.databytecounts = (product(self.shape) * (self.bitspersample // 8),) 

3472 if self.compression != 1: 

3473 warnings.warn("required ByteCounts tag is missing") 

3474 
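# Editor's worked example (not in the original source): for an uncompressed
# 480 x 640 RGB page with 16-bit samples the fallback byte count is
#     product((480, 640, 3)) * (16 // 8) == 921600 * 2 == 1843200 bytes.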

3475 assert len(self.shape) == len(self.axes) 

3476 

3477 def asarray( 

3478 self, 

3479 out=None, 

3480 squeeze=True, 

3481 lock=None, 

3482 reopen=True, 

3483 maxsize=2**44, 

3484 validate=True, 

3485 ): 

3486 """Read image data from file and return as numpy array. 

3487 

3488 Raise ValueError if format is unsupported. 

3489 

3490 Parameters 

3491 ---------- 

3492 out : numpy.ndarray, str, or file-like object; optional 

3493 Buffer where image data will be saved. 

3494 If None (default), a new array will be created. 

3495 If numpy.ndarray, a writable array of compatible dtype and shape. 

3496 If 'memmap', directly memory-map the image data in the TIFF file 

3497 if possible; else create a memory-mapped array in a temporary file. 

3498 If str or open file, the file name or file object used to 

3499 create a memory-map to an array stored in a binary file on disk. 

3500 squeeze : bool 

3501 If True, all length-1 dimensions (except X and Y) are 

3502 squeezed out from the array. 

3503 If False, the shape of the returned array might be different from 

3504 page.shape. 

3505 lock : {RLock, NullContext} 

3506 A reentrant lock used to synchronize reads from file. 

3507 If None (default), the lock of the parent's filehandle is used. 

3508 reopen : bool 

3509 If True (default) and the parent file handle is closed, the file 

3510 is temporarily re-opened and closed if no exception occurs. 

3511 maxsize : int or None 

3512 Maximum number of array elements before a ValueError is raised. 

3513 Can be used to guard against decompression bombs (DOS). Default: 2**44. 

3514 validate : bool 

3515 If True (default), validate various parameters. 

3516 If None, only validate parameters and return None. 

3517 

3518 """ 

3519 self_ = self 

3520 self = self.keyframe # self or keyframe 

3521 

3522 if not self._shape or product(self._shape) == 0: 

3523 return 

3524 

3525 tags = self.tags 

3526 

3527 if validate or validate is None: 

3528 if maxsize and product(self._shape) > maxsize: 

3529 raise ValueError("data are too large %s" % str(self._shape)) 

3530 if self.dtype is None: 

3531 raise ValueError( 

3532 "data type not supported: %s%i" 

3533 % (self.sampleformat, self.bitspersample) 

3534 ) 

3535 if self.compression not in TIFF.DECOMPESSORS: 

3536 raise ValueError("cannot decompress %s" % self.compression.name) 

3537 if "SampleFormat" in tags: 

3538 tag = tags["SampleFormat"] 

3539 if tag.count != 1 and any((i - tag.value[0] for i in tag.value)): 

3540 raise ValueError("sample formats do not match %s" % tag.value) 

3541 if self.is_chroma_subsampled and ( 

3542 self.compression != 7 or self.planarconfig == 2 

3543 ): 

3544 raise NotImplementedError("chroma subsampling not supported") 

3545 if validate is None: 

3546 return 

3547 

3548 fh = self_.parent.filehandle 

3549 lock = fh.lock if lock is None else lock 

3550 with lock: 

3551 closed = fh.closed 

3552 if closed: 

3553 if reopen: 

3554 fh.open() 

3555 else: 

3556 raise IOError("file handle is closed") 

3557 

3558 dtype = self._dtype 

3559 shape = self._shape 

3560 imagewidth = self.imagewidth 

3561 imagelength = self.imagelength 

3562 imagedepth = self.imagedepth 

3563 bitspersample = self.bitspersample 

3564 typecode = self.parent.byteorder + dtype.char 

3565 lsb2msb = self.fillorder == 2 

3566 offsets, bytecounts = self_.offsets_bytecounts 

3567 istiled = self.is_tiled 

3568 

3569 if istiled: 

3570 tilewidth = self.tilewidth 

3571 tilelength = self.tilelength 

3572 tiledepth = self.tiledepth 

3573 tw = (imagewidth + tilewidth - 1) // tilewidth 

3574 tl = (imagelength + tilelength - 1) // tilelength 

3575 td = (imagedepth + tiledepth - 1) // tiledepth 

3576 shape = ( 

3577 shape[0], 

3578 shape[1], 

3579 td * tiledepth, 

3580 tl * tilelength, 

3581 tw * tilewidth, 

3582 shape[-1], 

3583 ) 

3584 tileshape = (tiledepth, tilelength, tilewidth, shape[-1]) 

3585 runlen = tilewidth 

3586 else: 

3587 runlen = imagewidth 

3588 
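# Editor's worked example (not in the original source): for a 1000 x 1000
# pixel image stored as 256 x 256 tiles,
#     tw = tl = (1000 + 256 - 1) // 256 == 4,
# so tiles are decoded into a padded 1024 x 1024 buffer; later in asarray the
# result is cut back to the true 1000 x 1000 extent.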

3589 if self.planarconfig == 1: 

3590 runlen *= self.samplesperpixel 

3591 

3592 if out == "memmap" and self.is_memmappable: 

3593 with lock: 

3594 result = fh.memmap_array(typecode, shape, offset=offsets[0]) 

3595 elif self.is_contiguous: 

3596 if out is not None: 

3597 out = create_output(out, shape, dtype) 

3598 with lock: 

3599 fh.seek(offsets[0]) 

3600 result = fh.read_array(typecode, product(shape), out=out) 

3601 if out is None and not result.dtype.isnative: 

3602 # swap byte order and dtype without copy 

3603 result.byteswap(True) 

3604 result = result.newbyteorder() 

3605 if lsb2msb: 

3606 reverse_bitorder(result) 

3607 else: 

3608 result = create_output(out, shape, dtype) 

3609 
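# Editor's note (summary, not in the original source): the three branches above
# choose between memory-mapping the file region directly, reading one
# contiguous block straight into the output array, and allocating an output
# buffer for the per-strip/per-tile decode loop that follows.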

3610 decompress = TIFF.DECOMPESSORS[self.compression] 

3611 

3612 if self.compression == 7: # COMPRESSION.JPEG 

3613 if bitspersample not in (8, 12): 

3614 raise ValueError("unsupported JPEG precision %i" % bitspersample) 

3615 if "JPEGTables" in tags: 

3616