B. Gecer, S. Aksoy, E. Mercan, L.G. Shapiro, D.L. Weaver, and J.G. Elmore, “Detection and classification of cancer in whole slide breast histopathology images using deep convolutional networks,” Pattern Recognition, 2018.
In the past few years, a lot of work has been done towards reconstructing the 3D facial structure from single images by capitalizing on the power of Deep Convolutional Neural Networks (DCNNs). In the most recent works, differentiable renderers were employed in order to learn the relationship between the facial identity features and the parameters of a 3D morphable model for shape and texture. The texture features either correspond to components of a linear texture space or are learned by auto-encoders directly from in-the-wild images. In all cases, the state-of-the-art methods are still not capable of reconstructing facial textures in high fidelity. In this paper, we take a radically different approach and harness the power of Generative Adversarial Networks (GANs) and DCNNs in order to reconstruct the facial texture and shape from single images. That is, we utilize GANs to train a very powerful generator of facial texture in UV space. Then, we revisit the original 3D Morphable Model (3DMM) fitting approaches, which use non-linear optimization to find the latent parameters that best reconstruct the test image, but under a new perspective: we optimize the parameters with the supervision of pretrained deep identity features through our end-to-end differentiable framework. We demonstrate excellent results in photorealistic and identity-preserving 3D face reconstructions and, to the best of our knowledge, achieve for the first time facial texture reconstruction with high-frequency details.
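The fitting strategy above, a gradient-based search for the latent code whose rendering best reproduces the identity features of the test image, can be sketched as follows. This is a minimal illustration only: random linear maps stand in for the pretrained GAN generator and the deep identity network, which are assumptions, not the paper's actual models.

```python
import numpy as np

# Toy sketch of the fitting loop: random linear maps stand in for the
# pretrained GAN texture generator and the deep identity network
# (hypothetical placeholders, not the paper's actual networks).
rng = np.random.default_rng(0)
G = rng.normal(size=(64, 16))   # "generator": latent (16-d) -> texture (64-d)
F = rng.normal(size=(8, 64))    # "identity network": texture -> features (8-d)
A = F @ G                       # end-to-end differentiable map

target_feat = A @ rng.normal(size=16)   # identity features of the test image

z = np.zeros(16)                         # latent code to optimize
lr = 1.0 / np.linalg.norm(A, 2) ** 2     # step size from the spectral norm
for _ in range(3000):
    z -= lr * A.T @ (A @ z - target_feat)   # gradient of the feature loss

final_loss = np.linalg.norm(A @ z - target_feat)
```

In the real framework the gradient flows through a renderer and deep networks via backpropagation; the linear version only shows the shape of the optimization loop.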
We propose a novel end-to-end semi-supervised adversarial framework to generate photorealistic face images of new identities with a wide range of expressions, poses, and illuminations, conditioned on synthetic images sampled from a 3D morphable model. Previous adversarial style-transfer methods either supervise their networks with a large volume of paired data or train highly under-constrained two-way generative networks in an unsupervised fashion. We propose a semi-supervised adversarial learning framework that constrains the two-way networks with a small number of paired real and synthetic images, along with a large volume of unpaired data. A set-based loss is also proposed to preserve the identity coherence of the generated images. Qualitative results show that the generated face images of new identities exhibit pose, lighting, and expression diversity. They are also highly constrained by the synthetic input images while adding photorealism and retaining identity information. We combine face images generated by the proposed method with a real data set to train face recognition algorithms and evaluate the model quantitatively on two challenging data sets: LFW and IJB-A. The images generated by our framework consistently improve the performance of deep face recognition networks trained with the Oxford VGG Face dataset, and achieve results comparable to the state-of-the-art.
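The set-based identity-coherence idea can be illustrated with a minimal sketch. One simple choice, an illustrative assumption rather than the paper's exact formulation, penalizes the spread of embeddings of images that share a generated identity:

```python
import numpy as np

def set_identity_loss(embeddings):
    """Mean squared distance of each embedding to its set centroid.

    `embeddings` is an (n, d) array of face embeddings rendered from ONE
    generated identity under varying pose, lighting, and expression; a low
    value means the generator preserved identity across those conditions.
    (A simplified stand-in for the paper's set-based loss.)
    """
    centroid = embeddings.mean(axis=0)
    return float(np.mean(np.sum((embeddings - centroid) ** 2, axis=1)))
```

Minimizing such a term during adversarial training encourages all renderings of one synthetic identity to map to nearby points in embedding space.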
Commonly studied scenarios consider only binary cancer vs. no cancer classification.
Our system classifies whole slide breast biopsies into five diagnostic categories.
Pipeline of fully convolutional networks localizes diagnostically relevant regions.
Convolutional neural network classifies detected regions of interest in whole slides.
Experiments show that our method is consistent with the predictions of 45 pathologists.
The generalizability of algorithms for binary cancer vs. no cancer classification is unknown for clinically more significant multi-class scenarios, where intermediate categories have different risk factors and treatment strategies. We present a system that classifies whole slide images (WSI) of breast biopsies into five diagnostic categories. First, a saliency detector that uses a pipeline of four fully convolutional networks, trained with samples from records of pathologists' screenings, performs multi-scale localization of diagnostically relevant regions of interest in WSI. Then, a convolutional network, trained on consensus-derived reference samples, classifies image patches as non-proliferative or proliferative changes, atypical ductal hyperplasia, ductal carcinoma in situ, or invasive carcinoma. Finally, the saliency and classification maps are fused for pixel-wise labeling and slide-level categorization. Experiments using 240 WSI showed that both the saliency detector and the classifier networks performed better than competing algorithms, and the five-class slide-level accuracy of 55% was not statistically different from the predictions of 45 pathologists. We also present example visualizations of the learned representations for breast cancer diagnosis.
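The final fusion step can be sketched as below. The abstract does not spell out the fusion rule, so the saliency thresholding, the argmax over class maps, and the "most severe salient class" slide rule are illustrative assumptions:

```python
import numpy as np

def fuse_maps(saliency, class_probs, threshold=0.5):
    """Fuse a saliency map with per-class probability maps.

    saliency:    (H, W) relevance scores in [0, 1]
    class_probs: (C, H, W) class probability maps, classes ordered by
                 increasing severity (e.g. benign ... invasive carcinoma)
    Returns a pixel-wise label map (0 = background, 1..C = classes) and a
    slide-level label taken as the most severe class among salient pixels.
    """
    labels = np.argmax(class_probs, axis=0) + 1
    labels[saliency < threshold] = 0          # ignore non-salient tissue
    salient = labels[labels > 0]
    slide_label = int(salient.max()) if salient.size else 0
    return labels, slide_label
```

Taking the most severe salient class mirrors clinical practice, where a slide's diagnosis is driven by its worst region, but other aggregation rules (e.g. learned slide-level classifiers) are possible.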
In this work, we investigate several methods and strategies to learn deep embeddings for face recognition, using joint sample- and set-based optimization. We explain our framework that expands traditional learning with set-based supervision, together with the strategies used to maintain set characteristics. We then briefly review the related set-based loss functions, and subsequently propose a novel Max-Margin Loss which maximizes the maximum possible inter-class margin with the assistance of Support Vector Machines (SVMs). It implicitly pushes all the samples towards the correct side of the margin with a vector perpendicular to the hyperplane and a strength inversely proportional to the distance to it. We show that the introduced loss outperforms the previous sample-based and set-based ones in terms of face verification on two commonly used benchmarks.
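The geometric intuition, pushing each embedding along the SVM hyperplane normal with strength inversely proportional to its distance, can be sketched as follows. Here the hyperplane is supplied by hand rather than fit by an SVM, and the exact scaling is an assumption for illustration:

```python
import numpy as np

def max_margin_push(embeddings, labels, w, b, eps=1e-6):
    """Per-sample update directions for a Max-Margin-style loss.

    (w, b) define a separating hyperplane (obtained from an SVM between
    two identity sets in the paper; supplied directly here). Each sample
    is pushed along the unit normal toward its correct side (labels are
    +1 / -1), with strength inversely proportional to its distance to
    the hyperplane, so samples near the boundary are pushed hardest.
    """
    n = w / np.linalg.norm(w)                     # unit normal
    dist = np.abs(embeddings @ n + b / np.linalg.norm(w))
    strength = 1.0 / (dist + eps)                 # near plane -> strong push
    return labels[:, None] * strength[:, None] * n[None, :]
```

In training, such directions would act as gradients on the embedding network, widening the inter-class margin over time.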
We propose novel color-blob-based COSFIRE filters.
They are also effective for recognizing objects with diffuse region boundaries.
Such a filter models (a part of) an object by a specific arrangement of color blobs.
The blobs contain information about the sizes and colors of the interior of regions.
We achieve high recognition rates: GTSRB (98.94%) and Butterfly (89.02%) data sets.
Most object recognition methods rely on contour-defined features obtained by edge detection or region segmentation. They are not robust to diffuse region boundaries. Furthermore, such methods do not exploit region color information. We propose color-blob-based COSFIRE (Combination of Shifted Filter Responses) filters that are selective for combinations of diffuse circular regions (blobs) in specific mutual spatial arrangements. Such a filter combines the responses of a certain selection of Difference-of-Gaussians filters, essentially blob detectors, of different scales, in certain channels of a color space, and at certain relative positions to each other. Its parameters are learned in an automatic configuration process that analyzes the properties of a given prototype object of interest. We use these filters to compute features that are effective for the recognition of the prototype objects. We form feature vectors that we use with an SVM classifier. We evaluate the proposed method on a traffic sign (GTSRB) data set and a butterfly data set. For the GTSRB data set we achieve a recognition rate of 98.94%, which is slightly higher than human performance, and for the butterfly data set we achieve 89.02%. The proposed color-blob-based COSFIRE filters are very effective and outperform the contour-based COSFIRE filters. A COSFIRE filter is trainable: it can be configured with a single prototype pattern and it does not require domain knowledge.
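A single-channel, grayscale simplification of such a filter can be sketched as below. The (sigma, dx, dy) tuples and the geometric-mean combination follow the COSFIRE idea, but the sketch omits the color channels and the automatic configuration step, and the specific parameter values are illustrative:

```python
import numpy as np

def gaussian_blur(img, sigma):
    # Separable Gaussian blur with reflective padding (pure numpy).
    r = int(3 * sigma)
    x = np.arange(-r, r + 1)
    k = np.exp(-x**2 / (2 * sigma**2))
    k /= k.sum()
    pad = np.pad(img, r, mode="reflect")
    tmp = np.apply_along_axis(lambda v: np.convolve(v, k, "valid"), 1, pad)
    return np.apply_along_axis(lambda v: np.convolve(v, k, "valid"), 0, tmp)

def dog_response(img, sigma, ratio=1.6):
    # Difference-of-Gaussians: a simple blob detector.
    return gaussian_blur(img, sigma) - gaussian_blur(img, ratio * sigma)

def cosfire_response(img, tuples):
    """Geometric-mean combination of shifted DoG blob responses.

    tuples: list of (sigma, dx, dy) -- blob scale and position relative to
    the filter center (a simplified, single-channel version of the
    color-blob COSFIRE configuration).
    """
    h, w = img.shape
    cy, cx = h // 2, w // 2
    responses = []
    for sigma, dx, dy in tuples:
        r = np.maximum(dog_response(img, sigma), 0)   # half-wave rectify
        responses.append(r[cy + dy, cx + dx])
    return float(np.prod(responses) ** (1.0 / len(responses)))
```

The geometric mean makes the combined response AND-like: the filter fires only when every configured blob is present at its expected relative position.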
Breast cancer is the most frequent non-skin cancer type and one of the deadliest diseases, for which early and accurate diagnosis is critical for recovery. Recent medical image processing research has demonstrated promising results that may contribute to the analysis of biopsy images by enhancing the understanding of the tissue or by revealing possibly unhealthy tissue during diagnosis. However, these studies focused on well-annotated and well-cropped patches, whereas a fully automated computer-aided diagnosis (CAD) system requires processing of whole slide histopathology images (WSI), which are enormous in size and therefore difficult to process with reasonable computational power and time. Moreover, such whole slide biopsies consist of healthy, benign, and cancerous tissues at various stages, and thus simultaneous detection and classification of diagnostically relevant regions is challenging.
We propose a complete CAD system for efficient localization and classification of regions of interest (ROI) in WSI by employing state-of-the-art deep learning techniques. The system is designed to resemble the organized workflow of expert pathologists by progressively zooming into details, and it consists of two separate sequential steps: (1) detection of ROIs in WSI, and (2) classification of the detected ROIs into five diagnostic classes. The novel saliency detection approach aims to mimic the efficient search patterns of experts at multiple resolutions by training four separate deep networks with samples extracted from the tracking records of pathologists' viewings of WSIs. The detected relevant regions are fed to the classification step, which includes a deeper network that produces probability maps for the classes, followed by a post-processing step for the final diagnosis.
In the experiments with 240 WSI, the proposed saliency detection approach outperforms a state-of-the-art method in terms of both efficiency and effectiveness, and the final classification of our complete system obtains slightly lower accuracy than the mean performance of 45 pathologists. According to McNemar's statistical tests, we cannot reject the hypothesis that the accuracies of 32 out of 45 pathologists are the same as that of the proposed system. Finally, we also provide visualizations of our deep model with several advanced techniques for a better understanding of the learned features and the overall information captured by the network.
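For reference, the McNemar comparison used above reduces to a simple statistic over the paired disagreements. This is the standard continuity-corrected form of the test, not code from the paper:

```python
def mcnemar_chi2(b, c):
    """Continuity-corrected McNemar chi-squared statistic.

    b: slides only the pathologist classified correctly
    c: slides only the automated system classified correctly
    Values above ~3.841 (chi-squared, 1 dof, alpha = 0.05) reject the
    hypothesis that the two raters have the same accuracy; smaller
    values mean the difference is not statistically significant.
    """
    if b + c == 0:
        return 0.0
    return (abs(b - c) - 1) ** 2 / (b + c)
```

Because the test only uses the discordant pairs (slides where exactly one rater is correct), it is well suited to comparing two classifiers evaluated on the same 240 slides.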