Firat Ozdemir



Last Name

Ozdemir

First Name

Firat

Organisational unit

02286 - Swiss Data Science Center (SDSC) / Swiss Data Science Center (SDSC)

Search Results

Publications 1–10 of 22
  • Ciganovic, Matija; Ozdemir, Firat; Péan, Fabien; et al. (2018)
    International Journal of Computer Assisted Radiology and Surgery
    Purpose: For guidance of orthopedic surgery, the registration of preoperative images and corresponding surgical plans with the surgical setting can be of great value. Ultrasound (US) is an ideal modality for surgical guidance, as it is non-ionizing, real-time, easy to use, and imposes minimal (magnetic/radiation) safety restrictions. By extracting bone surfaces from 3D freehand US and registering these to preoperative bone models, complementary information from these modalities can be fused and presented in the surgical realm. Methods: A partial bone surface is extracted from US using phase symmetry and a factor graph-based approach. This is registered to the detailed 3D bone model, conventionally generated for preoperative planning, using a proposed multi-initialization, surface-based scheme robust to partial surfaces. Results: 36 forearm US volumes acquired using a tracked US probe were independently registered to a 3D model of the radius, manually extracted from MRI. Given intraoperative time restrictions, a computationally efficient algorithm was determined based on a comparison of different approaches. For all 36 registrations, a mean (±SD) point-to-point surface distance of 0.57 (±0.08) mm was obtained from manual gold-standard US bone annotations (not used during the registration) to the 3D bone model. Conclusions: A registration framework based on bone surface extraction from 3D freehand US and a subsequent fast, automatic surface alignment robust to single-sided views and large false-positive rates from US was shown to achieve registration accuracy feasible for practical orthopedic scenarios, with a qualitative outcome indicating good visual image alignment.
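The multi-initialization, surface-based registration described in this abstract can be illustrated with a generic sketch. This is not the paper's implementation: it uses a standard Kabsch least-squares rigid solver inside an ICP loop, restarted from several random rotations and keeping the lowest-residual result; all function names are hypothetical.

```python
import numpy as np

def closest_points(src, tgt):
    # brute-force nearest neighbour: for each src point, its closest tgt point
    d = np.linalg.norm(src[:, None, :] - tgt[None, :, :], axis=2)
    return tgt[np.argmin(d, axis=1)], d.min(axis=1)

def best_rigid(src, tgt):
    # least-squares rigid transform (Kabsch) mapping src onto matched tgt
    cs, ct = src.mean(0), tgt.mean(0)
    H = (src - cs).T @ (tgt - ct)
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ D @ U.T           # proper rotation (no reflection)
    return R, ct - R @ cs

def icp(src, tgt, iters=30):
    # iterate: match closest points, solve rigid transform, apply, repeat
    R_tot, t_tot = np.eye(3), np.zeros(3)
    cur = src.copy()
    for _ in range(iters):
        matched, _ = closest_points(cur, tgt)
        R, t = best_rigid(cur, matched)
        cur = cur @ R.T + t
        R_tot, t_tot = R @ R_tot, R @ t_tot + t
    _, dist = closest_points(cur, tgt)
    return R_tot, t_tot, dist.mean()

def multi_init_icp(src, tgt, n_init=8, seed=0):
    # try several random initial rotations, keep the lowest-residual alignment
    rng = np.random.default_rng(seed)
    best = None
    for _ in range(n_init):
        Q, _ = np.linalg.qr(rng.normal(size=(3, 3)))  # random orthogonal init
        Q *= np.sign(np.linalg.det(Q))                # ensure a proper rotation
        R, t, err = icp(src @ Q.T, tgt)
        if best is None or err < best[2]:
            best = (R @ Q, t, err)
    return best
```

In practice the partial US-extracted surface would be the `src` cloud and the preoperative bone model the `tgt` cloud; a k-d tree would replace the brute-force matching for realistic point counts.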
  • Bertoli, Guillaume; Mohebi Ganjabadi, Salman; Ozdemir, Firat; et al. (2025)
    Journal of Advances in Modeling Earth Systems
    This paper explores Machine Learning (ML) parameterizations for radiative transfer in the ICOsahedral Nonhydrostatic weather and climate model (ICON) and investigates the achieved ML model speed‐up with ICON running on graphics processing units (GPUs). Five ML models, with varying complexity and size, are coupled to ICON: a multilayer perceptron (MLP), a U-Net model, a bidirectional recurrent neural network with long short‐term memory (BiLSTM), a vision transformer (ViT), and a random forest (RF) as a baseline. The ML parameterizations are coupled to the ICON code, which includes OpenACC compiler directives to enable GPU support. The coupling is done with the PyTorch‐Fortran coupler developed at NVIDIA. The most accurate model is the BiLSTM with a physics‐informed normalization strategy, a penalty on the heating rates during training, Gaussian smoothing as postprocessing, and a simplified computation of the fluxes at the upper levels to ensure stability of the ICON model top. The presented setup enables stable aquaplanet simulations with ICON for several weeks at a resolution of about 80 km and compares well with the physics‐based default radiative transfer parameterization, ecRad. Our results indicate that the compute requirements of the ML models that can ensure the stability of ICON are comparable to GPU-optimized classical physics parameterizations in terms of memory consumption and computational speed.
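The physics-informed pieces mentioned in this abstract (heating rates derived from vertical flux divergence, plus a heating-rate penalty during training) can be sketched in a simplified form. This is an assumed toy formulation, not the paper's actual training code: the `heating_rate` and `loss` helpers and the weighting are illustrative, using the standard relation dT/dt = (g/c_p)·dF/dp.

```python
import numpy as np

G, CP = 9.81, 1004.0  # gravity [m s^-2], dry-air heat capacity [J kg^-1 K^-1]

def heating_rate(net_flux, p_half):
    # heating rate from the vertical divergence of the net radiative flux,
    # evaluated between half-level pressures: dT/dt = (g / c_p) * dF / dp
    return (G / CP) * np.diff(net_flux, axis=-1) / np.diff(p_half, axis=-1)

def loss(flux_pred, flux_true, p_half, lam=1.0):
    # flux MSE plus an extra penalty on the derived heating rates, mimicking
    # the heating-rate term used during training of the emulator
    l_flux = np.mean((flux_pred - flux_true) ** 2)
    l_heat = np.mean((heating_rate(flux_pred, p_half)
                      - heating_rate(flux_true, p_half)) ** 2)
    return l_flux + lam * l_heat
```

A constant flux bias leaves the heating rates untouched, so only the flux term penalizes it; vertical wiggles in the predicted flux are penalized twice, which is the point of the extra term.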
  • Ciganovic, Matija; Ozdemir, Firat; Farshad, Mazda; et al. (2019)
    Proceedings of SPIE ~ Medical Imaging 2019: Ultrasonic Imaging and Tomography
    For computer-assisted interventions in orthopedic surgery, automatic bone surface delineation can be of great value. For instance, given such a method, an automatically extracted bone surface from intraoperative imaging modalities can be registered to the bone surfaces from preoperative images, allowing for enhanced visualization and/or surgical guidance. Ultrasound (US) is ideal for imaging bone surfaces intraoperatively, being real-time, non-ionizing, and cost-effective. However, due to its low signal-to-noise ratio and imaging artifacts, extracting bone surfaces automatically from such images remains challenging. In this work, we examine the suitability of deep learning for automatic bone surface extraction from US. Given 1800 manually annotated US frames, we examine the performance of two popular neural networks used for segmentation. Furthermore, we investigate the effect of different preprocessing methods used for manual annotations in training on the final segmentation quality, and demonstrate excellent qualitative and quantitative segmentation results.
  • Thoma, Janine; Ozdemir, Firat; Goksel, Orcun (2016)
    Lecture Notes in Computer Science ~ Medical Computer Vision and Bayesian and Graphical Models for Biomedical Imaging
  • Ozdemir, Firat; Fuernstahl, Philipp; Goksel, Orcun (2018)
    Lecture Notes in Computer Science ~ Medical Image Computing and Computer Assisted Intervention – MICCAI 2018
  • Ozdemir, Firat; Goksel, Orcun (2019)
    International Journal of Computer Assisted Radiology and Surgery
    Purpose: For comprehensive surgical planning with sophisticated patient-specific models, all relevant anatomical structures need to be segmented. This could be achieved using deep neural networks given sufficiently many annotated samples; however, datasets with multiple annotated structures are often unavailable in practice and costly to procure. Therefore, being able to build segmentation models incrementally from datasets of different studies and centers is highly desirable. Methods: We propose a class-incremental framework for extending a deep segmentation network to new anatomical structures using a minimal incremental annotation set. By distilling knowledge from the current network state, we overcome the need for full retraining. Results: We evaluate our methods on 100 MR volumes from the SKI10 challenge with varying incremental annotation ratios. For 50% incremental annotations, our proposed method suffers less than 1% Dice score loss in retaining old-class performance, as opposed to the 25% loss of conventional fine-tuning. Our framework inherently exploits transferable knowledge from previously trained structures for incremental tasks, demonstrated by results superior even to non-incremental training: in a single-volume one-shot incremental learning setting, our method outperforms vanilla network performance by over 11% in Dice. Conclusions: With the presented method, new anatomical structures can be learned while retaining performance for older structures, without a major increase in complexity and memory footprint, hence suitable for lifelong class-incremental learning. By leveraging information from older examples, a fraction of annotations can be sufficient for incrementally building comprehensive segmentation models. With our meta-method, a deep segmentation network is extended with only a minor addition per structure, and can thus also be applicable to future network architectures.
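The knowledge-distillation idea — supervising new classes with the incremental annotations while matching the frozen network's old-class posteriors — can be sketched as a per-pixel loss. This is a generic illustration under standard distillation assumptions (temperature-softened cross-entropy), not the paper's exact objective; all names and the weighting are hypothetical.

```python
import numpy as np

def softmax(z, axis=-1):
    z = z - z.max(axis=axis, keepdims=True)  # numerically stable softmax
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def distill_loss(student_logits, teacher_logits, labels, n_old, T=2.0, lam=1.0):
    """Per-pixel class-incremental loss (simplified sketch).

    student_logits: (N, n_old + n_new) logits of the extended network
    teacher_logits: (N, n_old) logits of the frozen pre-extension network
    labels:         (N,) integer labels from the incremental annotation set
    """
    # cross-entropy on the incremental annotations (new data)
    p = softmax(student_logits)
    ce = -np.mean(np.log(p[np.arange(len(labels)), labels] + 1e-12))
    # distillation: match the teacher's temperature-softened old-class posteriors
    ps = softmax(student_logits[:, :n_old] / T)
    pt = softmax(teacher_logits / T)
    kd = -np.mean(np.sum(pt * np.log(ps + 1e-12), axis=1)) * T * T
    return ce + lam * kd
```

The `T * T` factor is the usual rescaling that keeps the distillation gradient magnitude comparable across temperatures.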
  • Ozdemir, Firat; Tanner, Christine; Goksel, Orcun (2020)
    arXiv
    Bone surface delineation in ultrasound is of interest due to its potential in diagnosis, surgical planning, and post-operative follow-up in orthopedics, as well as the potential of using bones as anatomical landmarks in surgical navigation. We herein propose a method to encode the physics of ultrasound propagation into a factor graph formulation for the purpose of bone surface delineation. In this graph structure, unary node potentials encode the local likelihood of being a soft tissue or acoustic-shadow (behind bone surface) region, both learned through image descriptors. Pairwise edge potentials encode ultrasound propagation constraints of bone surfaces given their large acoustic-impedance difference. We evaluate the proposed method in comparison with four earlier approaches, on in-vivo ultrasound images collected from dorsal and volar views of the forearm. The proposed method achieves an average root-mean-square error and symmetric Hausdorff distance of 0.28 mm and 1.78 mm, respectively. It detects 99.9% of the annotated bone surfaces with a mean scanline error (distance to annotations) of 0.39 mm.
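Combining per-scanline unary likelihoods with neighbour-smoothness constraints can be illustrated with a simple chain-structured model solved exactly by dynamic programming. This is a simplified stand-in for the paper's factor graph, not its actual inference; the cost definitions below are assumptions.

```python
import numpy as np

def delineate(unary, smooth_w=1.0):
    """Pick one interface depth per scanline by dynamic programming.

    unary:    (n_scanlines, n_depths) cost of placing the bone interface at
              each depth (low cost = likely tissue-to-shadow transition)
    smooth_w: weight of the pairwise cost penalizing depth jumps between
              neighbouring scanlines
    Returns the globally cost-minimizing depth index per scanline.
    """
    n, d = unary.shape
    depths = np.arange(d)
    pair = smooth_w * np.abs(depths[:, None] - depths[None, :])  # (prev, cur)
    cost = unary[0].copy()
    back = np.zeros((n, d), dtype=int)
    for i in range(1, n):  # forward pass (Viterbi-style recursion)
        total = cost[:, None] + pair
        back[i] = np.argmin(total, axis=0)
        cost = total.min(axis=0) + unary[i]
    path = np.empty(n, dtype=int)  # backtrack the optimal surface
    path[-1] = int(np.argmin(cost))
    for i in range(n - 1, 0, -1):
        path[i - 1] = back[i][path[i]]
    return path
```

With a large `smooth_w`, isolated scanlines whose unary term favours an outlier depth are pulled back toward their neighbours, which is the qualitative role of the propagation constraints in the abstract.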
  • So, Kwok Kai; Salamanca Miño, Luis; Ozdemir, Firat; et al. (2024)
    Proceedings of the ASME Turbo Expo 2024: Turbomachinery Technical Conference and Exposition. Volume 12D: Turbomachinery — Multidisciplinary Design Approaches, Optimization, and Uncertainty Quantification; Radial Turbomachinery Aerodynamics; Unsteady Flows in Turbomachinery
    A data-driven inverse design method based on neural networks is proposed for turbomachinery. In the devised methodology, design parameters are provided as input to the neural network, and performance attributes, e.g. efficiency, as output. Once trained, the network is used for inverse design, i.e. target performance values are prescribed to generate various design parameter sets. Through empirical experiments, it is observed that the true efficiencies of the network-generated radial turbine and thrust bearing designs are in good agreement with the prescribed target efficiencies. Furthermore, the proposed model can be used to accurately generate designs beyond the range of the training data set, showing strong generalization properties. The proposed approach offers the additional benefit of easy implementation, being fully data-driven and not requiring any modifications to the data generation process. As long as data is available, the method can readily be extended to account for multi-component and multi-physics aspects. After a model is trained using a set of design parameters, the proposed method can also be used to generate a subset of design parameters for prescribed target attributes with ease. The data-driven inverse design method represents a novel design approach in the age of big data, and is highly relevant and applicable for turbomachinery designs where an abundance of design data, pairing design parameters and target attributes, is available. Notably, by generating a large variety of accurate designs with improved performance, the proposed method effectively indicates trust regions in the design space where further design explorations could be carried out, showing a significant impact on optimisation strategies.
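One generic way to realize such an inverse-design loop — prescribing a target performance and generating multiple design-parameter sets from a trained forward surrogate — is gradient-based inversion from many random starts. The sketch below is illustrative only: the toy `forward` function stands in for the trained network, finite differences stand in for autodiff, and none of it is the authors' implementation.

```python
import numpy as np

def forward(x):
    # stand-in for a trained surrogate mapping design parameters -> efficiency
    # (a smooth toy function; in the paper this is a neural network)
    return 0.9 - 0.1 * np.sum((x - 0.5) ** 2, axis=-1)

def grad_forward(x, eps=1e-6):
    # finite-difference gradient of the surrogate (autodiff in practice)
    g = np.zeros_like(x)
    for i in range(x.size):
        d = np.zeros_like(x)
        d[i] = eps
        g[i] = (forward(x + d) - forward(x - d)) / (2 * eps)
    return g

def inverse_design(target, n_candidates=5, steps=1500, lr=1.0, dim=3, seed=0):
    # prescribe a target efficiency, then descend |f(x) - target|^2 from many
    # random starts to obtain a diverse set of candidate designs
    rng = np.random.default_rng(seed)
    designs = []
    for _ in range(n_candidates):
        x = rng.uniform(0.0, 1.0, dim)
        for _ in range(steps):
            r = forward(x) - target          # residual to the prescribed target
            x = x - lr * 2.0 * r * grad_forward(x)
        designs.append(x)
    return np.array(designs)
```

Because the target level set is a whole surface in design space, different random starts land on different designs with the same predicted performance — the "various design parameter sets" the abstract refers to.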
  • Ozdemir, Firat; Karani, Neerav; Fürnstahl, Philipp; et al. (2017)
    International Journal of Computer Assisted Radiology and Surgery
    Purpose: Planning orthopedic surgeries is commonly performed in computed tomography (CT) images due to the higher contrast of bony structures. However, soft tissues such as muscles and ligaments that may determine the functional outcome of a procedure are not easy to identify in CT, for which fast and accurate segmentation in MRI would be desirable. To be usable in daily practice, such a method should provide convenient means of interaction for modifications and corrections, e.g., during perusal by the surgeon or the planning physician for quality control. Methods: We propose an interactive segmentation framework for MR images and evaluate the outcome for segmentation of bones. We use a random forest classification and a random walker-based spatial regularization. The latter enables the incorporation of user input as well as enforcing a single connected anatomical structure, thanks to which a selective sampling strategy is proposed to substantially improve the supervised learning performance. Results: We evaluated our segmentation framework on 10 patient humerus MRIs as well as 4 high-resolution MRIs from volunteers. Interactive humerus segmentations for patients took on average 150 s, an over 3.5-fold time gain compared to manual segmentation, with accuracies comparable (converging) to those of much longer interactions. For high-resolution data, a novel multi-resolution random walker strategy further reduced the run time to over 20 times shorter than manual segmentation, allowing for a feasible interactive segmentation framework. Conclusions: We present a segmentation framework that allows iterative corrections, leading to substantial speed gains in bone annotation in MRI. This will allow us to pursue semi-automatic segmentation of other musculoskeletal anatomy, first in a user-in-the-loop manner, where later fewer user interactions, or perhaps only a few for quality control, will be necessary as our annotation suggestions improve.
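The random walker regularization mentioned here can be illustrated on a chain graph: user-seeded labels propagate by solving the combinatorial Dirichlet problem on the graph Laplacian. This is a toy one-dimensional sketch, not the paper's multi-resolution implementation; in 2D/3D the same linear system is built over the image lattice with intensity-dependent edge weights.

```python
import numpy as np

def random_walker_1d(weights, seeds):
    """Random-walker label probabilities on a chain graph (toy sketch).

    weights: (n-1,) edge weights between neighbouring nodes (high = similar)
    seeds:   dict {node_index: label in {0, 1}} from user interaction
    Returns P(label == 1) per node by solving L_u x = -B x_seeded on the
    unseeded block of the graph Laplacian.
    """
    n = len(weights) + 1
    L = np.zeros((n, n))  # weighted graph Laplacian of the chain
    for i, w in enumerate(weights):
        L[i, i] += w
        L[i + 1, i + 1] += w
        L[i, i + 1] -= w
        L[i + 1, i] -= w
    seeded = np.array(sorted(seeds))
    free = np.array([i for i in range(n) if i not in seeds])
    xs = np.array([float(seeds[i]) for i in seeded])  # 1.0 = foreground seed
    A = L[np.ix_(free, free)]
    b = -L[np.ix_(free, seeded)] @ xs
    prob = np.empty(n)
    prob[seeded] = xs
    prob[free] = np.linalg.solve(A, b)  # harmonic interpolation of the seeds
    return prob
```

With uniform weights the probabilities interpolate linearly between seeds; a weak edge (low weight, e.g. a strong intensity gradient) concentrates the label transition there, which is what makes the segmentation follow anatomical boundaries while staying connected.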
  • Susmelj, Anna Klimovskaia; Lafci, Berkan; Ozdemir, Firat; et al. (2024)
    Medical Image Analysis
    Optoacoustic (OA) imaging is based on optical excitation of biological tissues with nanosecond-duration laser pulses and detection of ultrasound (US) waves generated by thermoelastic expansion following light absorption. The image quality and fidelity of OA images critically depend on the extent of tomographic coverage provided by the US detector arrays. However, full tomographic coverage is not always possible due to experimental constraints. One major challenge concerns an efficient integration between OA and pulse-echo US measurements using the same transducer array. A common approach toward this hybridization consists of using standard linear transducer arrays, which readily results in arc-type artifacts and distorted shapes in OA images due to the limited angular coverage. Deep learning methods have been proposed to mitigate limited-view artifacts in OA reconstructions by mapping artifactual to artifact-free (ground truth) images. However, acquisition of ground truth data with full angular coverage is not always possible, particularly when using handheld probes in a clinical setting. Deep learning methods operating in the image domain are then commonly based on networks trained on simulated data. This approach, however, cannot transfer the learned features between the two domains, which results in poor performance on experimental data. Here, we propose a signal domain adaptation network (SDAN) consisting of i) a domain adaptation network to reduce the domain gap between simulated and experimental signals and ii) a sides prediction network to complement the missing signals in limited-view OA datasets acquired from a human forearm by means of a handheld linear transducer array. The proposed method showed improved performance in reducing limited-view artifacts without the need for ground truth signals from full tomographic acquisitions.