Input options

Input fields

Number of input channels

This field is used to declare how many input channels (the section called “Input channels”) should be made available. New input channel configuration tabs are added or removed as needed when this field is changed.

Input type

This field is used to select the general type of input that should be used. If input should be read from some kind of file, select File here and the specific type of file in the File type field.

See Also File.

Input file

The location of the input data is selected in this field. You can put any existing file here. Its interpretation (the file type) and the file-type-specific input options are set in the File type field. This field is only available when you have selected File as Input type.

See Also File, File type.

File type

This field is used to select the type of data present in the input file. Usually, the file type is auto-detected: when the Input file location is changed, this field is updated with the auto-determined file type for the selected input file. You can enforce a file type instead of relying on auto-detection by explicitly selecting a choice in this field after entering a file name.

See Also File, Input file.

Input method

This field is used to select the general method by which input data are obtained: read from a file, read directly from a camera, or generated stochastically. When File is selected, the specific file is chosen in the Input file field.

See Also File, Input file.

First image

This option allows skipping the initial frames of the input. Frames are indexed sequentially from 0, and the frame with the given number is the first one to be processed. Consequently, the default value of 0 causes all images to be loaded, while a value of 5 would skip the first 5 images (indices 0 through 4).

See Also Last image.

Last image

If this option is enabled, it limits the length of the computation. All images past the given frame number, counted from 0, are skipped; the image with the given number itself is still processed. For example, a value of 3 would cause the four images with indices 0, 1, 2 and 3 to be computed.

See Also First image.
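
As a sketch of how the two fields interact (the helper below is purely illustrative, not part of rapidSTORM), the processed frame range can be expressed as follows:

    # Select the frames that would be processed for given values of
    # "First image" and "Last image" (both bounds inclusive).
    def select_frames(frames, first_image=0, last_image=None):
        if last_image is None:               # option disabled: no upper limit
            return frames[first_image:]
        return frames[first_image:last_image + 1]

    frames = list(range(10))
    assert select_frames(frames, first_image=5) == [5, 6, 7, 8, 9]
    assert select_frames(frames, last_image=3) == [0, 1, 2, 3]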

Mirror input data along Y axis

If this option is enabled, each input image is flipped such that the topmost row in each image becomes the bottommost row.
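
In array terms, the flip simply reverses the row order of each frame; a minimal numpy sketch:

    import numpy as np

    image = np.arange(6).reshape(2, 3)   # toy frame with 2 rows
    mirrored = image[::-1, :]            # same as np.flipud(image)
    # The former topmost row is now the bottommost one.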

Dual view

This option allows splitting an image that contains two logical layers. For example, if the reflected beam of a beam splitter is projected onto the left side of the camera and the transmitted beam onto the right side, this option allows separating them into two different layers that can be used for 3D estimation.

The options are None, which indicates no splitting at all, Left and right, which splits the image at X = Width/2, and Top and bottom, which splits the image at Y = Height/2.

The dual view output always splits images exactly in the middle. Use plane alignment to configure other shifts.
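
A minimal numpy sketch of the two splitting modes, assuming even image dimensions:

    import numpy as np

    image = np.zeros((64, 128))   # height 64 (Y), width 128 (X)
    h, w = image.shape

    # Left and right: split at X = Width/2.
    left, right = image[:, :w // 2], image[:, w // 2:]

    # Top and bottom: split at Y = Height/2.
    top, bottom = image[:h // 2, :], image[h // 2:, :]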

Processed planes

This option allows selecting a single layer from a multi-layer acquisition. The default option, All planes, has no effect. All other options select a single layer from the available input layers, which will be the only layer that is processed.

Camera response to photon

This option gives the mean intensity that each photon generates on the camera. In other words, a value of 5 in this field assumes that the value of a pixel goes up by 5 (on average) every time the pixel is hit by a photon.

Dark intensity

This option gives the dark intensity of the camera, i.e. the mean value of a pixel that is hit by no photons during the integration time.
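
Together, the two fields describe a linear camera model. The sketch below shows how a photon count could be estimated from a pixel value under this model; the numbers are example values, not rapidSTORM defaults:

    # Linear camera model implied by the two fields:
    #   pixel value ≈ dark intensity + camera response * photon count
    def photons_from_counts(pixel_value, camera_response=5.0, dark_intensity=100.0):
        return (pixel_value - dark_intensity) / camera_response

    assert photons_from_counts(150.0) == 10.0   # 150 counts -> 10 photons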

Plane alignment

This option describes the mapping from input pixels to sample space coordinates. For the majority of single-layer applications, the default choice of No alignment is appropriate.

See Also No alignment, Linear alignment, Support point alignment.

Z calibration file

The filename of a Z calibration file (the section called “Z calibration file”) should be given in this option. The calibration file will be used to determine PSF widths as a function of the Z coordinate during fitting.

Join inputs on

When multiple channels are selected, this field is used to declare how these multiple channels are combined into a single image or dataset. The available options are:

  1. Spatial joining in the X or Y dimension, which means that images or datasets are pasted next to each other. For image data, the size of the non-joined dimension (e.g. the Y dimension when joining in X) must match.
  2. Spatial joining in the Z dimension. When processing images, each input channel is considered a separate layer, with the first channel forming the first layer. For localization data, this is identical to joining in X or Y (see the sketch after this list).
  3. Joining in time. Channels are processed after each other in the sequence of their declaration. For images, the dimensions of the images must match.
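
A minimal numpy sketch of the three joining modes for two toy image channels (illustrative only; rapidSTORM performs the joining internally):

    import numpy as np

    a = np.zeros((32, 32))   # channel 1
    b = np.ones((32, 32))    # channel 2, same size

    joined_x = np.concatenate([a, b], axis=1)   # spatial joining in X: pasted side by side
    joined_z = np.stack([a, b], axis=0)         # joining in Z: one layer per channel
    joined_t = [a, b]                           # joining in time: processed in sequence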

Output file basename

This is the location and filename prefix for rapidSTORM output files. More precisely, the filenames for rapidSTORM's output files are produced by adding a file-specific suffix to the value of this field.

This field is automatically set when a new input file is selected.

Fluorophore types

The number of fluorophore types present in the input should be given here. When multiple fluorophores are selected, Transmission of fluorophore N fields can be used to characterize the spectra.

Size of one input pixel

This field gives the size of the sample part that is imaged in a single camera pixel. Typically, this value should be on the order of 100 nm. See [Thompson2002] for a discussion about ideal values.

PSF FWHM

The full width at half maximum of the optical point spread function. More precisely, the typical width of an emitter's image should be entered here, including fluorophore size and camera pixelation effects. rapidSTORM will fit spots in the images with a Gaussian with the same FWHM as given here. If the PSF is unknown, it can be determined semi-automatically by using the Estimate PSF form output.
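
Because the spots are fitted with a Gaussian, the FWHM entered here relates to the Gaussian standard deviation by the standard factor 2·sqrt(2·ln 2) ≈ 2.355; a small conversion sketch:

    import math

    def fwhm_to_sigma(fwhm):
        # FWHM = 2 * sqrt(2 * ln 2) * sigma for a Gaussian profile
        return fwhm / (2.0 * math.sqrt(2.0 * math.log(2.0)))

    # A 360 nm FWHM corresponds to a sigma of roughly 153 nm.
    print(round(fwhm_to_sigma(360.0)))   # -> 153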

3D PSF model

The 3D PSF model denotes the functional form of the PSF's change with respect to the emitter's Z position.

See Also No 3D, Polynomial 3D, Interpolated 3D.

No 3D

The PSF width is constant, and no Z coordinate is considered.

Polynomial 3D

The polynomial 3D model (see the section called “Polynomial model”) determines PSF widths from the Z coordinate.

Interpolated 3D

The piecewise cubic 3D model (see the section called “Piecewise cubic model”) determines PSF widths from the Z coordinate.

PSF FWHM at sharpest Z

The width of the PSF at the Z position indicated by Point of sharpest Z. This is typically the same value as PSF FWHM.

Point of sharpest Z

The Z coordinate of the focal plane (the plane where the PSF has the lowest width). You can indicate astigmatism by giving different positions for the two dimensions of one layer, or biplane by giving different positions for two layers.

Maximum Z range

The PSF model will be considered valid up to this distance from the point of sharpest Z. Any considered Z coordinate further away than this value will be immediately discarded.

Widening slopes

These entries give the speed of PSF growth. They have to be determined experimentally, and we know of no reliable method to do so. For more information, see the section called “Polynomial model”.
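
The exact functional form is documented in the section called “Polynomial model”. Purely to illustrate how widening slopes enter such a model, here is a simplified polynomial widening sketch; the parameterization is a stand-in, not rapidSTORM's exact formula:

    def psf_width(z, w0, z_sharpest, slopes):
        # w0: width at the sharpest Z; slopes: one widening coefficient
        # per polynomial order (simplified, illustrative form only).
        dz = z - z_sharpest
        return w0 * (1.0 + sum(c * dz ** k for k, c in enumerate(slopes, start=1)))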

Localizations file

This input driver can be used to read the files written by the Localizations file output module. The file format is documented with the output module.

Andor SIF file

This input driver can read the Andor SIF file format produced by Andor Technology software such as SOLIS.

SIF files are stored in an uncompressed binary format with a simple text header. Because reading SIF files cannot be implemented in a forward-compatible way (reading new SIF files with old software), this driver might be unable to open the file; in this case, an error message is shown indicating the maximum known and the encountered version of the Andor SIF structure. Please obtain a newer version of this software in this case.

TIFF file

This input driver reads a multipage TIFF stack. All images in the TIFF stack must have the same size, and be greyscale images of up to 64 bit depth. Both integer and floating point data are allowed, even though all data are converted internally to 16 bit unsigned format.

No alignment

The upper left pixel is assumed to be at (0,0) nanometers. The Size of one input pixel field gives the offset of each pixel to the next. For example, if the pixel sizes are 100 nm in X and 110 nm in Y, the pixel at (10,15) is at (1,1.65) μm.
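
The example above in code form:

    # Pixel sizes from the example: 100 nm in X, 110 nm in Y.
    PIXEL_SIZE_NM = (100.0, 110.0)

    def pixel_to_sample_nm(x_px, y_px):
        # The upper left pixel (0, 0) sits at (0, 0) nanometers.
        return x_px * PIXEL_SIZE_NM[0], y_px * PIXEL_SIZE_NM[1]

    assert pixel_to_sample_nm(10, 15) == (1000.0, 1650.0)   # = (1, 1.65) μm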

Linear alignment

Naive pixel coordinates are computed identically to No alignment and then transformed by an affine transformation matrix.

Support point alignment

Naive pixel coordinates are computed identically to No alignment. A nonlinear transformation image is read from a file. The naive coordinates are projected into the source image of the nonlinear transformation, and the transformation is applied with linear interpolation.

bUnwarpJ transformation

A bUnwarpJ raw transformation file is given in this field to characterize the channel alignment.

Transformation resolution

The resolution (pixel size) of the file given in bUnwarpJ transformation. This resolution is not necessarily the same as the value of Size of one input pixel. For example, you can super-resolve two images with an easily aligned structure such as the nuclear pore complex ([Loeschberger2012]), which will result in a raw transformation with a 10 nm resolution.

Plane alignment file

A plain text file containing a 3x3 affine matrix for linear alignment, with the translation given in meters. The matrix is assumed to yield the aligned coordinate positions if multiplied with a vector (x, y, 1), where x and y are the unaligned coordinates. The upper left 2x2 part of the matrix is a classic rotation/scaling matrix, the elements (0,2) and (1,2) give a translation. For now, the bottom row must be (0,0,1), i.e. not a projective transform.
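
A sketch of how such a matrix could be applied, following the convention described above (coordinates and translation in meters, bottom row (0, 0, 1)); the matrix values are made up for illustration:

    import numpy as np

    # Example matrix: identity rotation/scaling plus a 50 nm shift in X.
    A = np.array([[1.0, 0.0, 50e-9],
                  [0.0, 1.0, 0.0],
                  [0.0, 0.0, 1.0]])

    def align(x, y):
        # Multiply the matrix with (x, y, 1) to obtain aligned coordinates.
        ax, ay, _ = A @ np.array([x, y, 1.0])
        return ax, ay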

File

This choice for Input method indicates that a file is used for input instead of generating the input stochastically or reading directly from a camera.

Transmission of fluorophore N

There is one transmission coefficient field for each layer and fluorophore. The fields give the relative intensities of each fluorophore in each layer, and will be used as scaling factors for the intensity for multi-colour inference or for biplane imaging.

As an example, values of 0.1 in layer 1, fluorophore 0 and 0.9 in layer 2, fluorophore 0 would indicate that 90% of the photons arrive on the second camera and 10% on the first.
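
In code form, the coefficients act as per-layer scaling factors for each fluorophore's intensity; a small sketch using the example values:

    # Transmission coefficients of fluorophore 0 from the example above.
    transmission = {"layer 1": 0.1, "layer 2": 0.9}

    total_photons = 1000
    expected = {layer: t * total_photons for layer, t in transmission.items()}
    # -> {"layer 1": 100.0, "layer 2": 900.0}: 90% of the photons arrive
    #    on the second camera and 10% on the first.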

Background filtering

This field is used to choose one type of background filter. Background filtering provides information about the intensity of uninteresting background fluorescence to the engine, which ignores this background during spot finding and fitting. You can use background filtering to enhance imaging precision on samples with inhomogeneous background or to suppress regions with an excessively high fluorophore density.