MTracker is a tool for automatically splining tongue shapes in ultrasound images by harnessing the power of deep convolutional neural networks. It was developed at the Department of Linguistics, University of Michigan, to address the splining needs of a large-scale ultrasound project. MTracker also allows for human correction by interfacing with the GetContours Suite developed by Mark Tiede at Haskins Laboratories.
MTracker was developed by Jian Zhu, Will Styler, and Ian Calloway at the University of Michigan, on the basis of data collected with Patrice Beddor and Andries Coetzee.
It was first described in a poster at the 175th Meeting of the Acoustical Society of America in Minneapolis. The tools and trained model will be made available below.
Practically speaking, we implemented the U-Net architecture (Ronneberger et al. 2015¹) in Python 3.5 with Keras and TensorFlow. The network learns from human-annotated splines, using repeated convolution and max-pooling layers for feature extraction (which simplify the image in feature-identifying ways), as well as skip connections, which reuse low-level features to generate more spatially precise predictions of the tongue contours.
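To make the architecture concrete, here is a minimal U-Net-style model in Keras. This is an illustrative sketch of the general pattern rather than MTracker's exact network: the input size, depth, and filter counts are assumptions for the example.

```python
# Illustrative U-Net-style model in Keras (functional API).
# NOTE: the input size, depth, and filter counts are assumptions
# for this example, not MTracker's exact configuration.
from keras.models import Model
from keras.layers import Input, Conv2D, MaxPooling2D, UpSampling2D, concatenate

def build_unet(input_shape=(128, 128, 1)):
    inputs = Input(shape=input_shape)

    # Contracting path: repeated convolution + max-pooling for feature extraction
    c1 = Conv2D(16, 3, activation='relu', padding='same')(inputs)
    p1 = MaxPooling2D(pool_size=2)(c1)
    c2 = Conv2D(32, 3, activation='relu', padding='same')(p1)
    p2 = MaxPooling2D(pool_size=2)(c2)

    # Bottleneck
    b = Conv2D(64, 3, activation='relu', padding='same')(p2)

    # Expanding path: upsampling plus skip connections, which reuse
    # low-level features for spatially precise contour predictions
    u2 = concatenate([UpSampling2D(size=2)(b), c2])
    c3 = Conv2D(32, 3, activation='relu', padding='same')(u2)
    u1 = concatenate([UpSampling2D(size=2)(c3), c1])
    c4 = Conv2D(16, 3, activation='relu', padding='same')(u1)

    # A 1x1 convolution with a sigmoid yields a per-pixel probability map
    # of the tongue contour, trained against human-annotated splines
    outputs = Conv2D(1, 1, activation='sigmoid')(c4)

    model = Model(inputs=inputs, outputs=outputs)
    model.compile(optimizer='adam', loss='binary_crossentropy')
    return model
```

Training then amounts to minimizing the per-pixel loss between the predicted probability map and binary masks derived from the human-annotated splines.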
For now, until a peer-reviewed publication is available, you can cite this software as:
Zhu, J., Styler, W., and Calloway, I. C. (2018). Automatic tongue contour extraction in ultrasound images with convolutional neural networks. The Journal of the Acoustical Society of America, 143(3):1966–1966. https://doi.org/10.1121/1.5036466
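For BibTeX users, the citation above can be expressed along these lines (the entry key and field layout are our own):

```bibtex
@article{zhu2018automatic,
  author  = {Zhu, Jian and Styler, Will and Calloway, Ian C.},
  title   = {Automatic tongue contour extraction in ultrasound images with convolutional neural networks},
  journal = {The Journal of the Acoustical Society of America},
  year    = {2018},
  volume  = {143},
  number  = {3},
  pages   = {1966--1966},
  doi     = {10.1121/1.5036466}
}
```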
You can also send people to this website, https://github.com/lingjzhu/mtracker.github.io/.
To install MTracker for your own use, you will need the following dependencies:
- Python 3.5
- Keras
- TensorFlow 1.4.1
- scikit-image 0.13.0
- CUDA and cuDNN (if you need GPU support)
Installing TensorFlow can be a painful process. Please refer to the official documentation of TensorFlow and Keras for installation guides.
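As one possible route, the CPU-only stack can be installed with pip. The Keras and scikit-image pins below are our suggestions for releases contemporary with TensorFlow 1.4.1, not tested requirements:

```bash
# Suggested pins contemporary with TensorFlow 1.4.1 (assumptions, not tested requirements)
pip install tensorflow==1.4.1 keras==2.1.2 scikit-image==0.13.0

# For GPU support, install the GPU build instead (requires a matching CUDA/cuDNN setup)
pip install tensorflow-gpu==1.4.1
```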
(This content coming soon!)
This software, although in production use by the authors and others at the University of Michigan, may still have bugs and quirks, alongside the difficulties and provisos which are described throughout the documentation.
By using this software, you acknowledge:
- That you understand that this software does not produce perfect, camera-ready data, and that all results should be hand-checked for sanity's sake or, at the very least, interpreted with noise taken into account.
- That you understand that this software is a work in progress which may contain bugs. Future versions will be released, and bug fixes (and additions) will not necessarily be advertised.
- That this software may break with future updates of its various dependencies, and that the authors are not required to repair the package when that happens.
- That you understand that the authors are not required, or necessarily available, to fix bugs which are encountered (although you're welcome to submit bug reports to Jian Zhu ([email protected]) if needed), nor to modify the software to your needs.
- That you will acknowledge the authors of the software if you use, modify, fork, or re-use the code in your future work.
- That rather than re-distributing this software to other researchers, you will instead advise them to download the latest version from this website.
... and, most importantly:
- That neither the authors, our collaborators, nor the University of Michigan as a whole are responsible for the results obtained from proper or improper use of this software, and that the software is provided as-is, as a service to our fellow linguists.
All that said, thanks for using our software, and we hope it works wonderfully for you!
Please contact Jian Zhu ([email protected]), Will Styler ([email protected]), or Ian Calloway ([email protected]) for support.
Footnotes
1. Ronneberger, O., Fischer, P., and Brox, T. (2015). U-Net: Convolutional Networks for Biomedical Image Segmentation. In Medical Image Computing and Computer-Assisted Intervention (MICCAI 2015). DOI: 10.1007/978-3-319-24574-4_28