Krishna Chaitanya



Search Results

Publications 1 - 10 of 16
  • Baumgartner, Christian; Tezcan, Kerem C.; Chaitanya, Krishna; et al. (2019)
    Lecture Notes in Computer Science ~ Medical Image Computing and Computer Assisted Intervention – MICCAI 2019
  • Erdil, Ertunc; Chaitanya, Krishna; Karani, Neerav; et al. (2021)
    Lecture Notes in Computer Science ~ Uncertainty for Safe Utilization of Machine Learning in Medical Imaging, and Perinatal Imaging, Placental and Preterm Image Analysis
    In recent years, researchers have proposed a number of successful methods for out-of-distribution (OOD) detection in deep neural networks (DNNs). So far, the scope of the highly accurate methods has been limited to image-level classification tasks, while attempts at generally applicable methods beyond classification have not attained similar performance. In this paper, we address this limitation by proposing a simple yet effective task-agnostic OOD detection method. We estimate the probability density functions (pdfs) of intermediate features of a pre-trained DNN by performing kernel density estimation (KDE) on the training dataset. As direct application of KDE to feature maps is hindered by their high dimensionality, we use a set of lower-dimensional marginalized KDE models instead of a single high-dimensional one. At test time, we evaluate the pdfs on a test sample and produce a confidence score that indicates whether the sample is OOD. The use of KDE eliminates the need for simplifying assumptions about the underlying feature pdfs and makes the proposed method task-agnostic. We perform experiments on classification tasks using computer vision benchmark datasets and on a medical image segmentation task using brain MRI datasets. The results demonstrate that the proposed method consistently achieves high OOD detection performance in both classification and segmentation tasks and improves on the state of the art in almost all cases. Our code is available at https://github.com/eerdil/task_agnostic_ood. A longer version of the paper with supplementary materials is available as a preprint in [8].
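The marginalized-KDE scoring described in the abstract can be sketched in a few lines. This is an illustrative reconstruction, not the authors' released implementation (see their repository for that): `fit_marginal_kdes` and `ood_confidence` are hypothetical names, and the per-channel features are assumed to be already extracted as 1-D values.

```python
import numpy as np
from scipy.stats import gaussian_kde

def fit_marginal_kdes(train_feats):
    """Fit one 1-D KDE per feature channel: a set of lower-dimensional
    marginalized models instead of a single high-dimensional KDE."""
    # train_feats: (n_samples, n_channels), e.g. channel-averaged feature maps
    return [gaussian_kde(train_feats[:, c]) for c in range(train_feats.shape[1])]

def ood_confidence(kdes, feat):
    """Average log-density across channels; lower means more likely OOD."""
    logps = [np.log(k(feat[c])[0] + 1e-12) for c, k in enumerate(kdes)]
    return float(np.mean(logps))

rng = np.random.default_rng(0)
train = rng.normal(0.0, 1.0, size=(500, 4))        # in-distribution features
kdes = fit_marginal_kdes(train)

in_score = ood_confidence(kdes, rng.normal(0.0, 1.0, size=4))
ood_score = ood_confidence(kdes, np.full(4, 8.0))  # far from the training density
```

In practice the score would be thresholded (e.g. on a validation set) to flag OOD inputs.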
  • Chaitanya, Krishna (2022)
    Accurate image segmentation is important for many downstream clinical applications such as diagnosis and surgery planning. In recent years, deep neural networks have been quite successful in providing high segmentation performance with fully supervised learning approaches. The bottleneck for such approaches is the requirement for a large amount of annotated training data. Acquiring such large labeled datasets is difficult for medical images, as the annotations need clinical experts, which is expensive and time-consuming; hence, this is not a preferred solution in clinical settings. In this thesis, we focus on developing approaches that achieve high segmentation performance with limited annotations while also leveraging unlabeled data in training. The aim is to close the gap with fully supervised approaches trained on large labeled datasets. To achieve this, we propose the following three approaches. In the first work, we propose a task-driven data augmentation method for learning with limited labeled data, where the synthetic data generator is optimized for the segmentation task. The generator of the proposed method models intensity and shape variations using two sets of transformations: additive intensity transformations and deformation fields. Both transformations are optimized using labeled as well as unlabeled examples in a semi-supervised framework. In the second work, we propose pre-training a neural network with unlabeled data using a self-supervised learning approach, to obtain a good network initialization that can later be fine-tuned to a target task. Here, we propose strategies for extending the contrastive learning framework, a self-supervised learning method, to the segmentation of volumetric medical images in the semi-supervised setting with limited annotations, by leveraging domain-specific and problem-specific cues.
Specifically, we propose (1) novel contrasting strategies that leverage structural similarity across volumetric medical images (domain-specific cue) and (2) a local version of the contrastive loss to learn distinctive representations of local regions that are useful for per-pixel segmentation (problem-specific cue). In the final work, we propose an end-to-end joint training framework for high segmentation performance, in which we devise a contrastive loss that learns good local-level features by exploiting semantic label information obtained from pseudo-labels of unlabeled images alongside limited annotated images. The pseudo-labels are rough segmentation-mask estimates computed for the unlabeled images by a network initially trained with the limited labeled images. In particular, we define the proposed loss to encourage similar representations for pixels that have the same label or pseudo-label while being dissimilar to the representations of pixels from different classes in the dataset. We perform pseudo-label-based self-training and train the network by jointly optimizing the proposed contrastive loss on both labeled and unlabeled sets and the segmentation loss on only the limited labeled set. All the above proposed approaches have been evaluated extensively on three MRI/CT datasets and yielded better results than the compared data augmentation, semi-supervised, and self-supervised learning approaches.
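The domain-specific cue (1) can be illustrated with a toy positive-pair sampler: if roughly aligned volumes are split into the same number of partitions, slices from the same partition of two different volumes tend to show corresponding anatomy. This is a hedged sketch of the idea, not the thesis code; `partition_indices` and `sample_positive_pair` are hypothetical helpers.

```python
import random

def partition_indices(num_slices, num_partitions):
    """Split a volume's slice indices into contiguous partitions."""
    bounds = [round(i * num_slices / num_partitions) for i in range(num_partitions + 1)]
    return [list(range(bounds[i], bounds[i + 1])) for i in range(num_partitions)]

def sample_positive_pair(volumes, num_partitions=4, rng=random.Random(0)):
    """Pick the same partition from two different volumes: under the
    structural-similarity assumption these slices show corresponding
    anatomy, so they can serve as a positive pair for the contrastive loss."""
    v1, v2 = rng.sample(range(len(volumes)), 2)
    p = rng.randrange(num_partitions)
    s1 = rng.choice(partition_indices(len(volumes[v1]), num_partitions)[p])
    s2 = rng.choice(partition_indices(len(volumes[v2]), num_partitions)[p])
    return (v1, s1), (v2, s2)
```

The sampled slice pairs would then be fed to a contrastive loss in place of purely augmentation-based positives.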
  • Bredell, Gustav; Flouris, Kyriakos; Chaitanya, Krishna; et al. (2023)
    The Eleventh International Conference on Learning Representations
    Variational autoencoders (VAEs) are powerful generative modelling methods; however, they suffer from blurry generated samples and reconstructions compared to the images they have been trained on. Significant research effort has been spent on increasing the generative capabilities by creating more flexible models, but flexibility often comes at the cost of higher complexity and computational cost. Several works have focused on altering the reconstruction term of the evidence lower bound (ELBO), often at the expense of losing the mathematical link to maximizing the likelihood of the samples under the modeled distribution. Here we propose a new formulation of the reconstruction term for the VAE that specifically penalizes the generation of blurry images while at the same time still maximizing the ELBO under the modeled distribution. We show the potential of the proposed loss on three different datasets, where it outperforms several recently proposed reconstruction losses for VAEs.
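For context, the two ELBO terms that such works start from can be written down directly. This is the textbook baseline with a unit-variance Gaussian decoder, not the blur-penalizing reconstruction term proposed in the paper, and `elbo_terms` is a hypothetical helper name.

```python
import numpy as np

def elbo_terms(x, x_recon, mu, logvar):
    """Standard VAE ELBO pieces per batch element: a Gaussian reconstruction
    term (proportional to negative MSE, up to constants) and the analytic KL
    divergence to the N(0, I) prior. The paper replaces the first term."""
    recon = -0.5 * np.sum((x - x_recon) ** 2, axis=-1)
    kl = -0.5 * np.sum(1 + logvar - mu**2 - np.exp(logvar), axis=-1)
    return recon, kl
```

The ELBO to maximize is then `recon - kl` (averaged over the batch), which is where the paper's modified reconstruction term slots in.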
  • Chaitanya, Krishna; Karani, Neerav; Baumgartner, Christian F.; et al. (2021)
    Medical Image Analysis
    Supervised learning-based segmentation methods typically require a large amount of annotated training data to generalize well at test time. In medical applications, curating such datasets is not a favourable option because acquiring a large number of annotated samples from experts is time-consuming and expensive. Consequently, numerous methods have been proposed in the literature for learning with limited annotated examples. Unfortunately, the approaches proposed in the literature have not yet yielded significant gains over random data augmentation for image segmentation, where random augmentations themselves do not yield high accuracy. In this work, we propose a novel task-driven data augmentation method for learning with limited labeled data, where the synthetic data generator is optimized for the segmentation task. The generator of the proposed method models intensity and shape variations using two sets of transformations: additive intensity transformations and deformation fields. Both transformations are optimized using labeled as well as unlabeled examples in a semi-supervised framework. Our experiments on three medical datasets, namely cardiac, prostate, and pancreas, show that the proposed approach significantly outperforms standard augmentation and semi-supervised approaches for image segmentation in the limited-annotation setting. The code is made publicly available at https://github.com/krishnabits001/task_driven_data_augmentation.
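The two transformation families named in the abstract can be mimicked with fixed (non-learned) fields. In the paper both fields are optimized for the segmentation task, so this is only an illustrative sketch with an assumed `augment` helper.

```python
import numpy as np
from scipy.ndimage import map_coordinates, gaussian_filter

def augment(image, intensity_field, deformation, order=1):
    """Apply the two transformation families from the abstract: an additive
    intensity transformation, then a dense deformation field (here both are
    given arrays; in the paper they are learned)."""
    out = image + intensity_field                    # additive intensity change
    h, w = out.shape
    ys, xs = np.meshgrid(np.arange(h), np.arange(w), indexing="ij")
    coords = np.stack([ys + deformation[0], xs + deformation[1]])
    return map_coordinates(out, coords, order=order, mode="nearest")

rng = np.random.default_rng(0)
img = rng.random((32, 32))
# smooth random displacement field, one (dy, dx) pair per pixel
dv = gaussian_filter(rng.normal(0, 2, size=(2, 32, 32)), sigma=(0, 4, 4))
aug = augment(img, 0.1 * rng.standard_normal((32, 32)), dv)
```

With zero fields the helper reduces to the identity, which is a convenient sanity check.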
  • Can, Yigit B.; Chaitanya, Krishna; Mustafa, Basil; et al. (2018)
    Lecture Notes in Computer Science ~ Deep Learning in Medical Image Analysis and Multimodal Learning for Clinical Decision Support
  • Perakis, Alexis; Gorji, Ali; Jain, Samriddhi; et al. (2021)
    Lecture Notes in Computer Science ~ Machine Learning in Medical Imaging
    Learning robust representations to discriminate cell phenotypes based on microscopy images is important for drug discovery. Drug development efforts typically analyse thousands of cell images to screen for potential treatments. Early works focused on creating hand-engineered features from these images or learning such features with deep neural networks in a fully or weakly supervised framework; both require prior knowledge or labelled datasets. Therefore, subsequent works proposed unsupervised approaches based on generative models to learn these representations. Recently, representations learned with self-supervised contrastive-loss-based methods have yielded state-of-the-art results on various imaging tasks compared to earlier unsupervised approaches. In this work, we leverage a contrastive learning framework to learn appropriate representations from single-cell fluorescent microscopy images for the task of Mechanism-of-Action classification. The proposed work is evaluated on the annotated BBBC021 dataset, and we obtain state-of-the-art results in the NSC, NSCB, and drop metrics for an unsupervised approach. We observe an improvement of 10% in NSCB accuracy and 11% in NSC-NSCB drop over the previously best unsupervised method. Moreover, the performance of our unsupervised approach ties with the best supervised approach. Additionally, we observe that our framework performs well even without post-processing, unlike earlier methods. We conclude that one can learn robust cell representations with contrastive learning. We make the code available on GitHub (https://github.com/SamriddhiJain/SimCLR-for-cell-profiling).
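The loss underlying SimCLR-style contrastive frameworks such as the one above is the NT-Xent (normalized temperature-scaled cross-entropy) loss. A minimal NumPy version, illustrative rather than the authors' implementation, looks like this:

```python
import numpy as np

def nt_xent(z1, z2, temperature=0.5):
    """NT-Xent loss: z1[i] and z2[i] are embeddings of two augmented views
    of sample i; each view's positive is its counterpart, all other 2N-2
    embeddings in the batch act as negatives."""
    z = np.concatenate([z1, z2])
    z = z / np.linalg.norm(z, axis=1, keepdims=True)   # cosine similarity space
    sim = z @ z.T / temperature
    np.fill_diagonal(sim, -np.inf)                     # exclude self-similarity
    n = len(z1)
    targets = np.concatenate([np.arange(n, 2 * n), np.arange(n)])
    logprob = sim - np.log(np.exp(sim).sum(axis=1, keepdims=True))
    return float(-logprob[np.arange(2 * n), targets].mean())
```

Perfectly aligned view pairs yield a lower loss than unrelated pairs, which is the behaviour the training objective exploits.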
  • Chaitanya, Krishna; Erdil, Ertunç; Karani, Neerav; et al. (2023)
    Medical Image Analysis
    Supervised deep learning-based methods yield accurate results for medical image segmentation. However, they require large labeled datasets, and obtaining them is a laborious task that requires clinical expertise. Semi/self-supervised learning-based approaches address this limitation by exploiting unlabeled data along with limited annotated data. Recent self-supervised learning methods use contrastive loss to learn good global-level representations from unlabeled images and achieve high performance in classification tasks on popular natural image datasets like ImageNet. In pixel-level prediction tasks such as segmentation, it is crucial to also learn good local-level representations along with global representations to achieve better accuracy. However, the impact of existing local contrastive loss-based methods remains limited for learning good local representations, because similar and dissimilar local regions are defined based on random augmentations and spatial proximity rather than on the semantic labels of local regions, owing to the lack of large-scale expert annotations in the semi/self-supervised setting. In this paper, we propose a local contrastive loss to learn good pixel-level features useful for segmentation by exploiting semantic label information obtained from pseudo-labels of unlabeled images alongside limited annotated images with ground-truth (GT) labels. In particular, we define the proposed contrastive loss to encourage similar representations for pixels that have the same pseudo-label/GT label while being dissimilar to the representations of pixels with a different pseudo-label/GT label in the dataset. We perform pseudo-label-based self-training and train the network by jointly optimizing the proposed contrastive loss on both labeled and unlabeled sets and the segmentation loss on only the limited labeled set.
We evaluate the proposed approach on three public medical datasets of cardiac and prostate anatomies, and obtain high segmentation performance with a limited labeled set of one or two 3D volumes. Extensive comparisons with state-of-the-art semi-supervised and data augmentation methods and concurrent contrastive learning methods demonstrate the substantial improvement achieved by the proposed method. The code is made publicly available at https://github.com/krishnabits001/pseudo_label_contrastive_training.
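The label-guided local contrastive idea can be sketched for a flat batch of pixel embeddings: pixels sharing a (pseudo-)label are pulled together, pixels with different labels pushed apart. This is a simplified illustration of the loss described in the abstract, with a hypothetical `pixel_contrastive_loss` name, not the released code.

```python
import numpy as np

def pixel_contrastive_loss(feats, labels, temperature=0.1):
    """Supervised-style contrastive loss over pixel embeddings.
    feats: (n_pixels, d) embeddings; labels: (n_pixels,) GT or pseudo-labels.
    For each anchor, positives are all other pixels with the same label."""
    f = feats / np.linalg.norm(feats, axis=1, keepdims=True)
    sim = f @ f.T / temperature
    np.fill_diagonal(sim, -np.inf)                     # exclude self-pairs
    logprob = sim - np.log(np.exp(sim).sum(axis=1, keepdims=True))
    pos = labels[:, None] == labels[None, :]
    np.fill_diagonal(pos, False)
    # mean log-probability over each anchor's positives
    per_anchor = np.where(pos, logprob, 0.0).sum(1) / np.maximum(pos.sum(1), 1)
    return float(-per_anchor[pos.any(1)].mean())
```

In the paper this term is jointly optimized with the segmentation loss; here it is shown in isolation.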
  • Li, Hongwei; Xue, Fei-Fei; Chaitanya, Krishna; et al. (2021)
    Lecture Notes in Computer Science ~ Medical Image Computing and Computer Assisted Intervention – MICCAI 2021
    Radiomics can quantify the properties of regions of interest in medical image data. Classically, radiomics features account for pre-defined statistics of shape, texture, and other low-level image features. Alternatively, deep learning-based representations are derived from supervised learning but require expensive annotations and often suffer from overfitting and data-imbalance issues. In this work, we address the challenge of learning the representation of a 3D medical image for effective quantification under data imbalance. We propose a self-supervised representation learning framework to learn high-level features of 3D volumes as a complement to existing radiomics features. Specifically, we demonstrate how to learn image representations in a self-supervised fashion using a 3D Siamese network. More importantly, we deal with data imbalance by exploiting two unsupervised strategies: a) sample re-weighting, and b) balancing the composition of training batches. When combining the learned self-supervised features with traditional radiomics, we show significant improvement in brain tumor classification and lung cancer staging tasks covering MRI and CT imaging modalities. Code is available at https://github.com/hongweilibran/imbalanced-SSL.
  • Huber, Florian A.; Chaitanya, Krishna; Gross, Nico; et al. (2022)
    Investigative Radiology
    Objectives: To develop, test, and validate a body composition profiling algorithm for automated segmentation of body compartments in whole-body magnetic resonance imaging (wbMRI) and to investigate the influence of different acquisition parameters on performance and robustness. Materials and Methods: A segmentation algorithm for subcutaneous and visceral adipose tissue (SCAT and VAT) and total muscle mass (TMM) was designed using a deep learning convolutional neural network with a U-net architecture. Twenty clinical wbMRI scans were manually segmented and used as training, validation, and test datasets. Segmentation performance was then tested on different data, including different magnetic resonance imaging protocols and scanners with and without the use of contrast media. Test-retest reliability on 2 consecutive scans of 16 healthy volunteers each, as well as the impact of the parameters slice thickness, matrix resolution, and different coil settings, was investigated. The Sørensen-Dice coefficient (DSC) was used to measure the algorithm's performance with manual segmentations as reference standards. Test-retest reliability and parameter effects were investigated by comparing the respective compartment volumes. Abdominal volumes were compared with published normative values. Results: Algorithm performance measured by DSC was 0.93 (SCAT) to 0.77 (VAT) on the test dataset. Depending on the respective compartment, similar or slightly reduced performance was seen for other scanners and scan protocols (DSC ranging from 0.69–0.72 for VAT to 0.83–0.91 for SCAT). No significant differences in body composition profiling were seen on repetitive volunteer scans (P = 0.88–1) or after variation of protocol parameters (P = 0.07–1). Conclusions: Body composition profiling from wbMRI using a deep learning-based convolutional neural network algorithm for automated segmentation of body compartments is generally possible.
First results indicate that robust and reproducible segmentations, equally accurate to a manual expert, may be expected for a range of different acquisition parameters.
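The Sørensen-Dice coefficient (DSC) used throughout the evaluation above is straightforward to compute for binary masks; a minimal sketch with a hypothetical `dice` helper:

```python
import numpy as np

def dice(pred, ref, eps=1e-7):
    """Sørensen-Dice coefficient between two binary masks: twice the overlap
    divided by the total foreground area; 1.0 means perfect agreement."""
    pred, ref = pred.astype(bool), ref.astype(bool)
    inter = np.logical_and(pred, ref).sum()
    return (2.0 * inter + eps) / (pred.sum() + ref.sum() + eps)

a = np.zeros((4, 4), dtype=int); a[:2] = 1   # top two rows
b = np.zeros((4, 4), dtype=int); b[1:3] = 1  # middle two rows
# overlap is one row (4 px), |a| = |b| = 8, so DSC = 2*4/16 = 0.5
```

The small `eps` keeps the ratio defined when both masks are empty.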