The Med*A-Eye platform is a device-agnostic decision-support system for medical images. Its objective is to help clinicians use their time efficiently by prioritizing the image regions most likely to contain abnormalities.
Med*A-Eye Mammo uses a stack of neural networks, each optimized for a different type of feature. Object detectors identify calcifications (green boxes) and masses (red regions) with high precision; a classifier identifies mass regions with lower precision but high recall (yellow regions); and an instance-segmentation algorithm identifies architectural distortions (blue regions). Peer-reviewed reference: A deep learning architecture with an object-detection algorithm and a convolutional neural network for breast mass detection and visualization (Healthcare Analytics).
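The result can be pictured as four color-coded layers drawn onto one mammogram. The following is a minimal sketch of that compositing step only, assuming NumPy arrays and hypothetical model outputs (calc_boxes, mass_masks, recall_masks, distortion_masks); the actual networks and their interfaces are described in the cited paper, not reproduced here.

```python
# Illustrative compositing of the four Mammo finding types into one overlay.
# The model outputs passed in are hypothetical stand-ins for the real stack.
import numpy as np

GREEN, RED, YELLOW, BLUE = (0, 255, 0), (255, 0, 0), (255, 255, 0), (0, 0, 255)

def render_overlay(image: np.ndarray,
                   calc_boxes: list[tuple[int, int, int, int]],
                   mass_masks: list[np.ndarray],
                   recall_masks: list[np.ndarray],
                   distortion_masks: list[np.ndarray]) -> np.ndarray:
    """Burn the four finding types into an RGB copy of the mammogram.

    calc_boxes       -- (x0, y0, x1, y1) detector boxes, drawn in green
    mass_masks       -- high-precision detector masks, filled in red
    recall_masks     -- high-recall classifier regions, filled in yellow
    distortion_masks -- instance-segmentation masks, filled in blue
    """
    overlay = image.copy()
    for x0, y0, x1, y1 in calc_boxes:            # green boxes: calcifications
        overlay[y0:y1, [x0, x1 - 1]] = GREEN     # vertical box edges
        overlay[[y0, y1 - 1], x0:x1] = GREEN     # horizontal box edges
    for masks, color in ((mass_masks, RED),
                         (recall_masks, YELLOW),
                         (distortion_masks, BLUE)):
        for m in masks:                          # alpha-blend each region
            overlay[m] = (0.5 * overlay[m]
                          + 0.5 * np.array(color)).astype(overlay.dtype)
    return overlay
```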
Med*A-Eye StainSync is a web-based app that lets you upload H&E- and IHC-stained slides for registration and cell counting. Our registration module aligns the slides and makes them available for display and download. You can zoom into and pan across any slide, then see the exact same view in every other slide, and you can define regions of interest (ROIs) for analysis. We assess stain strength for each IHC slide and report the staining fraction in accordance with the currently applicable guidelines for the relevant IHC clone.
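For illustration only, here is one way a DAB staining fraction could be computed for an ROI, using standard color deconvolution in scikit-image. The threshold, the crude background mask, and the function name dab_staining_fraction are assumptions for this sketch; StainSync's clone-specific guideline logic is not reproduced.

```python
# A minimal sketch: DAB-positive fraction of tissue pixels in an ROI,
# via Ruifrok-Johnson color deconvolution (scikit-image). Parameters are
# illustrative assumptions, not StainSync's published settings.
import numpy as np
from skimage.color import rgb2hed

def dab_staining_fraction(roi_rgb: np.ndarray,
                          dab_threshold: float = 0.03) -> float:
    """Return the fraction of tissue pixels in the ROI that are DAB-positive."""
    hed = rgb2hed(roi_rgb)                  # separate Hematoxylin / Eosin / DAB
    dab = hed[..., 2]                       # DAB channel
    tissue = roi_rgb.mean(axis=-1) < 230    # crude background exclusion (assumed)
    positive = (dab > dab_threshold) & tissue
    return positive.sum() / max(tissue.sum(), 1)
```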
You can download the registered slides at a desired resolution along with our stain analysis. Even before you define ROIs, you can drag any of the IHC tabs onto the H&E tab to see the selected IHC staining pattern overlaid on the H&E slide, and you can control the intensity of the overlay with a slider.
Med*A-Eye Patho uses two parallel branches, one for mapping and one for subtyping. First, the whole-slide image is rescaled. In the mapping or "segmentation" branch, tiles are generated from an image optimally sized for mapping, sifted to discard low-information tiles, and analyzed by differently trained neural networks to produce a consensus segmentation mask. In the subtyping branch, the mask is resized to match the image rescaled for subtyping and used to exclude regions unlikely to be diseased. Tiles are generated from the unmasked regions of the subtype image, sifted, and analyzed by neural networks trained for subtyping to produce a subtype prediction. Peer-reviewed references: Accurate diagnostic tissue segmentation and concurrent disease subtyping with small datasets (Journal of Pathology Informatics); Resource-frugal classification and analysis of pathology slides using image entropy (Biomedical Signal Processing and Control).
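The two-branch flow can be sketched as below. This is a simplified illustration under stated assumptions, not the published implementation: the tile size, rescale factors, entropy cutoff, and the model wrappers seg_models and subtype_model are hypothetical, and "sifting" is approximated with an image-entropy filter in the spirit of the second cited reference.

```python
# Illustrative two-branch pipeline: mapping (consensus segmentation mask)
# feeding subtyping (analysis restricted to likely-diseased regions).
import numpy as np
from skimage.measure import shannon_entropy
from skimage.transform import resize

def tiles(img: np.ndarray, size: int):
    """Yield (row, col, tile) over a non-overlapping grid."""
    for r in range(0, img.shape[0] - size + 1, size):
        for c in range(0, img.shape[1] - size + 1, size):
            yield r, c, img[r:r + size, c:c + size]

def sift(tile: np.ndarray, min_entropy: float = 4.0) -> bool:
    """Keep only information-rich tiles (entropy cutoff is an assumption)."""
    return shannon_entropy(tile) >= min_entropy

def run_patho(wsi, seg_models, subtype_model,
              seg_scale=0.25, sub_scale=0.5, tile_px=256):
    # Mapping branch: rescale for segmentation, sift, build a consensus mask
    # by averaging the per-network masks and thresholding the vote.
    seg_img = resize(wsi, (int(wsi.shape[0] * seg_scale),
                           int(wsi.shape[1] * seg_scale), wsi.shape[2]))
    mask = np.zeros(seg_img.shape[:2], dtype=float)
    for r, c, t in tiles(seg_img, tile_px):
        if sift(t):
            votes = [m(t) for m in seg_models]   # one tile mask per network
            mask[r:r + tile_px, c:c + tile_px] = np.mean(votes, axis=0) > 0.5

    # Subtyping branch: resize the mask to the subtype scale, then analyze
    # only tiles that fall inside the predicted-disease region.
    sub_img = resize(wsi, (int(wsi.shape[0] * sub_scale),
                           int(wsi.shape[1] * sub_scale), wsi.shape[2]))
    sub_mask = resize(mask, sub_img.shape[:2]) > 0.5
    preds = [subtype_model(t) for r, c, t in tiles(sub_img, tile_px)
             if sub_mask[r:r + tile_px, c:c + tile_px].mean() > 0.5 and sift(t)]
    return np.mean(preds, axis=0) if preds else None   # consensus subtype
```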