Stitching existing data
If your data have already been acquired but not pre-processed via syncAndCrunch, you should handle them as follows. Each acquisition session should have its own directory set up like this, i.e. it needs:
- A meta-data file (the "recipe" file).
- A sub-directory called rawData that houses the raw data directories. Move them there if they're not already in place.
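For instance, a session directory might look roughly like this (the names below are illustrative; what matters is the recipe file plus the rawData sub-directory containing one raw data directory per physical section):

```
sample01/
    recipe_sample01.yml     <- the meta-data ("recipe") file
    rawData/
        sample01-0001/      <- one raw data directory per physical section
        sample01-0002/
        ...
```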
With the above in place you can use stitchAllChannels (see help stitchAllChannels) to automate the process of conducting all the pre-stitching analyses and then stitching.
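For example, a minimal call run from within the acquisition directory (a sketch; stitchAllChannels may also accept optional arguments, such as which channels to stitch, so consult its help text for your version):

```matlab
% Run from the acquisition directory: the one containing
% the recipe file and the rawData sub-directory.
cd('/path/to/acquisition')   % illustrative path
stitchAllChannels            % pre-stitching analyses followed by stitching of every channel
```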
Note that once data have been stitched you should use stitchit.sampleSplitter to crop the sample out of the larger acquisition area. In higher resolution acquisitions this can save many tens of GB.
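That step might look like this (a sketch: sampleSplitter is assumed here to be run from the acquisition directory once stitching has finished; see its help text for how it actually behaves in your version):

```matlab
% After stitching has completed:
stitchit.sampleSplitter   % assumed to provide an interactive tool for choosing the crop region
```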
If you look inside stitchAllChannels you'll see it really doesn't do very much. Instead of using stitchAllChannels you can run the required functions yourself, one at a time, like this:
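The listing below is a sketch rather than a verbatim recipe: the positional arguments to preProcessTiles follow the description in the next paragraph, and the final per-channel stitching call (shown here as stitchSection) is an assumption that may differ between StitchIt versions, so check each function's help text before running it.

```matlab
generateTileIndex                  % index the raw tiles
preProcessTiles(0, [1,2], [1,2])   % 0 = only unprocessed directories; comb correction and
                                   % illumination correction coefficients for channels 1 and 2
collateAverageImages               % build the grand average images used for illumination correction
stitchSection([], 2)               % assumed call: stitch all sections of channel 2
```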
That's it: those are the core commands in StitchIt. So what was going on there? generateTileIndex indexes the tiles to map their names onto their positions in the final volume. The preProcessTiles command then calculates coefficients for the comb correction and average tiles for illumination correction. The first input argument tells it to work only on directories that haven't already been processed; the second and third arguments tell it to calculate bidirectional scanning coefficients and illumination correction on channels 1 and 2. Average image data for illumination correction are stored for each physical section within that section's raw data directory. If interrupted, preProcessTiles can resume where it left off. collateAverageImages produces the grand average images used for illumination correction (data from the whole specimen contribute to the illumination correction). The above steps are most of what syncAndCrunch does too.
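Because the first argument restricts preProcessTiles to directories that haven't yet been processed, resuming after an interruption (under the same assumptions as the sketch above) is just a matter of re-issuing the same call:

```matlab
preProcessTiles(0, [1,2], [1,2])   % section directories already processed are skipped
```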
In case you're wondering what these various corrections are good for, here's an example of the same section before and after illumination correction by dividing out the average tile.
As you can see, illumination correction is a big improvement but is not perfect. Want it perfect(ish)? Post-processing steps to the rescue!