Introduction to VivoScript – Part III: Automated Segmentation

Histogram-based auto-thresholding (using Otsu’s algorithm) is a great way to automatically separate two Gaussian-like distributions of values, e.g. background vs. animal. In this post we automate this process and place the segmentation function into a library file so it can be reused in other scripts.

In computer vision and image processing, Otsu’s method is used to automatically perform histogram shape-based image thresholding, i.e. the reduction of a gray-level image to a binary image. The algorithm assumes that the image to be thresholded contains two classes of pixels (e.g. foreground and background), i.e. has a bi-modal histogram, and then calculates the optimum threshold separating those two classes so that their combined spread (intra-class variance) is minimal. The extension of the original method to multi-level thresholding is referred to as the multi-Otsu method. Otsu’s method is named after Nobuyuki Otsu (大津展之, Ōtsu Nobuyuki).
http://en.wikipedia.org/wiki/Otsu%27s_method
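The optimum threshold can be computed directly from the histogram. The following stand-alone sketch (plain JavaScript, not the VivoQuant API; the sample histogram is made up) maximizes the between-class variance, which is equivalent to minimizing the combined intra-class variance described above:

```javascript
// Minimal sketch of Otsu's method on a gray-level histogram.
// Returns the bin index t that maximizes the between-class variance
// when the values are split into classes [0..t] and [t+1..end].
function otsuThreshold(hist) {
  var total = 0, sumAll = 0;
  for (var i = 0; i < hist.length; i++) {
    total += hist[i];
    sumAll += i * hist[i];
  }
  var sumB = 0, wB = 0, bestT = 0, bestVar = -1;
  for (var t = 0; t < hist.length - 1; t++) {
    wB += hist[t];                    // weight of the background class
    if (wB === 0) continue;
    var wF = total - wB;              // weight of the foreground class
    if (wF === 0) break;
    sumB += t * hist[t];
    var mB = sumB / wB;               // background mean
    var mF = (sumAll - sumB) / wF;    // foreground mean
    var between = wB * wF * (mB - mF) * (mB - mF);
    if (between > bestVar) { bestVar = between; bestT = t; }
  }
  return bestT;
}

// A bi-modal histogram with peaks around bins 2 and 7:
var hist = [5, 40, 80, 40, 5, 5, 40, 80, 40, 5];
console.log(otsuThreshold(hist)); // → 4 (the valley between the peaks)
```

VivoQuant performs the equivalent computation internally when the Otsu Thresholding algorithm is applied; the sketch is only meant to illustrate the criterion.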


VivoScript Libraries

In this example we use a function doubleOtsu() from a VivoScript library we have added to our script with

#include "DoubleOtsu.vqs"

This pulls in the text of the file ‘DoubleOtsu.vqs’, which must be located either in VivoQuant’s own VivoScript directory or in your personal one. Please create the respective directory for your operating system and copy the file ‘DoubleOtsu.vqs’ there:

[accordion]
[acc_item title="Windows"]C:\Users\USERNAME\Documents\VivoScript[/acc_item]
[acc_item title="MacOSX"]/Users/USERNAME/Library/VivoQuant/VivoScript[/acc_item]
[acc_item title="Linux"]/home/USERNAME/VivoScript[/acc_item]
[/accordion]

function doubleOtsu() {
    // go into 3D ROI mode:
    VQ.mainWin().setViewMode("Slice View", "3D ROI Tool");
    VQ.vtkController().setAutoUpdate(false);

    // switch page to Segmentation algorithms,
    // not really required, but easier to debug
    VQ.getWidget("tabWidget").setCurrentIndex(3);  

    // shortcut:
    var op = VQ.currentOp();

    // create ROIs:
    op.addROI("Otsu 1", "red");
    op.addROI("Bone", "peachpuff");

    // select Input/Output ROI:
    op.setInputROI(0);
    op.setCurrentROI(1);

    // Selecting algorithm:
    VQ.getWidget("MagicSegmentationSelector").setCurrentIndex(3);
    // Would be nice to have: widget.setEditText("Otsu Thresholding");

    // Apply algorithm:
    VQ.debug("Setup first otsu");
    VQ.getWidget("MagicApply").click();

    // update Input/Output ROI:
    op.setInputROI(1);
    op.setCurrentROI(2);

    // Apply again:
    VQ.debug("Setup 2nd otsu");
    VQ.getWidget("MagicApply").click();

    // Hide temporary ROI (name, color, immutable, hidden)
    VQ.debug("Double otsu done");
    VQ.vtkController().setAutoUpdate(true);
    op.editROI(1, "Temp", "red", false, true);
}

This function contains the actual double Otsu algorithm, which can be reused in different scripts.

The Script Details

#include "DoubleOtsu.vqs"

var rep = "ipacss://vqintro:blog42@training.ipacs.invicro.com";
var prj = "/examples";

// created using Loader.vqs:
var dm = VQ.dataManager();
{
  var dcmRep = VQ.dcmRep(rep);
  dcmRep.setProject(prj);
  var files = VQ.downloadImages(dcmRep,
    "1.3.6.1.4.1.12842.1.1.14.3.20100526.145653.234.3400598378",
    "1.3.6.1.4.1.33793.1.4.0.58478.1299815611.1");
  dm.openDat(0, files);
  dm.setDesc(0, "__repository_url", rep);
  dm.setDesc(0, "__project", prj);
}

doubleOtsu(); // imported from "Training/DoubleOtsu.vqs"

VQ.getWidget("buttonRenderROI").click(); // init VTK viewer

VQ.showMessage("Segmentation done.");

After pulling in the algorithm above, we load an example mouse CT dataset. This code shows again how to use VQ.downloadImages(…), as discussed in Part I. This snippet was actually generated by another VivoScript that ships with VivoQuant, named ‘Loader.vqs’. That script emits a small piece of code that re-loads the currently loaded data, allowing you to quickly paste it into further scripts:

// created using Loader.vqs:
var dm = VQ.dataManager();
{
  var dcmRep = VQ.dcmRep(rep);
  dcmRep.setProject(prj);
  var files = VQ.downloadImages(dcmRep,
    "1.3.6.1.4.1.12842.1.1.14.3.20100526.145653.234.3400598378",
    "1.3.6.1.4.1.33793.1.4.0.58478.1299815611.1");
  dm.openDat(0, files);
  dm.setDesc(0, "__repository_url", rep);
  dm.setDesc(0, "__project", prj);
}

Finally we then call the double Otsu algorithm, and trigger rendering:

doubleOtsu(); // imported from "Training/DoubleOtsu.vqs"

VQ.getWidget("buttonRenderROI").click(); // init VTK viewer

The Algorithm Details

You can simply use the double Otsu algorithm in your scripts without caring about its internal workings, or you can dive into the code. First we define a function, in this case without parameters; however, you can easily add some:

function functionname() { ... }
function func_with_param(p1, p2, p3) {
  var res = p1 + p2 + p3;
  return res;
}

In the next step we change into VQ’s 3D ROI tool:

    VQ.mainWin().setViewMode("Slice View", "3D ROI Tool");

The parameters used here match the names shown in VQ’s toolbar for the view and the operator.

In most cases it is a good idea to turn off the automatic re-rendering after each change while a script is running, since this can improve performance significantly:

    VQ.vtkController().setAutoUpdate(false);

Then we ask VQ for the current operator (the 3D ROI tool set above) and use it to create two ROIs: a red one for the first intermediary result and a peachpuff one for the actual bone:

    // create ROIs:
    op.addROI("Otsu 1", "red");
    op.addROI("Bone", "peachpuff");

    // select Input/Output ROI:
    op.setInputROI(0);
    op.setCurrentROI(1);

In the first step, we use the background and the red ROI for the two distributions found by the Otsu algorithm. This means VQ looks at all voxels classified as background (and only those) and divides them into background and foreground such that their combined intra-class variance is minimized (see above for details). These lines select the algorithm and trigger its application:

    // Selecting algorithm:
    VQ.getWidget("MagicSegmentationSelector").setCurrentIndex(3);
    // Would be nice to have: widget.setEditText("Otsu Thresholding");

    // Apply algorithm:
    VQ.debug("Setup first otsu");
    VQ.getWidget("MagicApply").click();

As you can see in the gallery above (click on an image to see the full size), this does a nice job of separating the background from the mouse body. A single Otsu segmentation is thus a great first step that can be followed up by further segmentations limited to just the body/red voxels. In this case we would like to apply another Otsu, but this time we set the input ROI to the body/red voxels, the output to the bone ROI, and again find two distributions:

    // update Input/Output ROI:
    op.setInputROI(1);
    op.setCurrentROI(2);

    // Apply again:
    VQ.debug("Setup 2nd otsu");
    VQ.getWidget("MagicApply").click();
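The two-pass idea can be sketched on plain numbers (stand-alone JavaScript with made-up intensities, not the VivoQuant API): the first cut separates air from the animal, and a second cut restricted to the body values separates soft tissue from bone:

```javascript
// Mean of an array of numbers.
function mean(a) {
  var s = 0;
  for (var i = 0; i < a.length; i++) s += a[i];
  return s / a.length;
}

// Return the cut value that maximizes the between-class variance
// when the samples are split into { x < cut } and { x >= cut }.
function otsuCut(values) {
  var v = values.slice().sort(function (x, y) { return x - y; });
  var bestCut = v[0], bestScore = -1;
  for (var i = 1; i < v.length; i++) {
    var lo = v.slice(0, i), hi = v.slice(i);
    var d = mean(lo) - mean(hi);
    var score = lo.length * hi.length * d * d;
    if (score > bestScore) { bestScore = score; bestCut = v[i]; }
  }
  return bestCut;
}

// Synthetic CT-like intensities: air ~0, soft tissue ~500, bone ~900.
var voxels = [0, 10, 20, 5, 480, 510, 500, 495, 880, 920];
var t1 = otsuCut(voxels); // first pass: air vs. body → 480
var body = voxels.filter(function (x) { return x >= t1; });
var t2 = otsuCut(body);   // second pass on body only: tissue vs. bone → 880
```

This mirrors what the script does with ROIs: the first Apply writes the “body” class into the red ROI, and the second Apply, with the red ROI as input, writes the “bone” class into the peachpuff ROI.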

Finally, we turn the automatic update back on, hide the temporary body/red ROI, and end up with a nice bone segmentation:

[Video: double Otsu segmentation]

In many cases a third Otsu pass, again on the body/red voxels, leads to a reasonable estimate of the air inside the body (lungs, airways, gas in the intestines); this is left as an exercise for the reader.

Again, please feel free to leave comments and questions below.