Search results: 2 found
Search keywords: połączenie obrazu (image fusion)

Abstract 1 (EN):
For brain tumour treatment planning, the diagnoses and predictions made by medical doctors and radiologists depend on medical imaging. Extracting clinically meaningful information from imaging modalities such as computed tomography (CT), positron emission tomography (PET) and magnetic resonance (MR) scans is at the core of the software and advanced screening tools used by radiologists. In this paper, a universal and comprehensive framework for two parts of the dose control process, tumour detection and tumour area segmentation from medical images, is introduced. The framework was used to implement methods for detecting glioma tumours in CT and PET scans. Two pre-trained deep learning models, VGG19 and VGG19-BN, were investigated and used to fuse the CT and PET examination results. Mask R-CNN (region-based convolutional neural network) was used for tumour detection; the model outputs bounding box coordinates for each tumour in the image. U-Net was used to perform semantic segmentation, i.e. to segment malignant cells and the tumour area. Transfer learning was applied to increase the accuracy of the models despite the limited size of the dataset, and data augmentation methods were used to generate additional training samples. The implemented framework can be applied to other use cases that combine object detection and area segmentation of grayscale and RGB images, in particular to build computer-aided diagnosis (CADx) and computer-aided detection (CADe) systems in healthcare that assist doctors and medical care providers.
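
The abstract does not include implementation details; purely as an illustration, the following PyTorch sketch shows how a pre-trained VGG19-BN backbone could be reused with transfer learning for a CT/PET decision step. The channel-stacking fusion of the CT and PET slices, the two-class head, the layer-freezing policy and the tensor sizes are assumptions made for this example, not the authors' published method.

```python
# Minimal sketch (not the authors' code): transfer learning with a pre-trained
# VGG19-BN backbone on CT/PET slices fused as image channels.
import torch
import torch.nn as nn
from torchvision import models

def build_fusion_classifier(num_classes: int = 2) -> nn.Module:
    # Load VGG19 with batch normalisation, pre-trained on ImageNet.
    model = models.vgg19_bn(weights=models.VGG19_BN_Weights.IMAGENET1K_V1)

    # Freeze the convolutional feature extractor so that the small medical
    # dataset only has to train the new classification head (transfer learning).
    for param in model.features.parameters():
        param.requires_grad = False

    # Replace the final fully connected layer with a task-specific head.
    in_features = model.classifier[-1].in_features
    model.classifier[-1] = nn.Linear(in_features, num_classes)
    return model

# Hypothetical fused input: a CT slice, a PET slice and their average stacked
# into the three channels expected by an ImageNet-pre-trained backbone.
ct = torch.rand(1, 1, 224, 224)
pet = torch.rand(1, 1, 224, 224)
fused = torch.cat([ct, pet, (ct + pet) / 2], dim=1)   # shape (1, 3, 224, 224)

model = build_fusion_classifier()
logits = model(fused)   # shape (1, 2): tumour vs. no-tumour scores
print(logits.shape)
```

Because only the replaced classifier head has trainable parameters, the network can be fine-tuned on a small annotated dataset, which is the point of combining transfer learning with limited medical data.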

Abstract 2 (EN):
Computed tomography (CT) provides visualization of anatomical structures and abnormalities, but it lacks functional information. Single photon emission computed tomography (SPECT), on the other hand, provides the missing information about tumour function, but it has relatively low resolution, and localizing the visible focus may be difficult, especially when iodine-131 (¹³¹I) is used. For this reason, several methods of image fusion are applied. We present an image fusion algorithm based on an affine transformation. On the basis of a phantom study, we showed that the created program can be a useful tool for fusing CT and SPECT images and can then be applied to patients' datasets. An external marker method was used to align the patient's functional and anatomical data. Image alignment quality depends on appropriate marker placement and on the acquisition protocol. The program estimates the maximal misalignment in the volume between the markers. The created acquisition protocol minimizes misalignment of patient positioning on both the CT scanner and the gamma camera; however, misalignment caused by respiratory movements cannot be avoided. The proposed technique is simple, low-cost and can easily be adopted in any hospital or diagnostic centre equipped with a gamma camera and a CT scanner. Fusion of morphology and function can improve diagnostic accuracy in many clinical circumstances.
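
The published program is not reproduced in the abstract; the sketch below, using NumPy and SciPy, illustrates the general idea of marker-based affine fusion: an affine transform is fitted to paired external-marker coordinates by least squares, and the SPECT volume is then resampled onto the CT grid. The marker positions, volume shapes and residual check are invented example values, not the authors' data or code.

```python
# Minimal sketch (not the published program): least-squares estimation of an
# affine transform from paired external-marker coordinates, then resampling of
# the SPECT volume into the CT frame.
import numpy as np
from scipy.ndimage import affine_transform

def fit_affine(src_pts: np.ndarray, dst_pts: np.ndarray) -> np.ndarray:
    """Return a 4x4 matrix mapping src (e.g. CT) points to dst (e.g. SPECT) points."""
    n = src_pts.shape[0]                      # at least 4 non-coplanar markers
    src_h = np.hstack([src_pts, np.ones((n, 1))])
    # Solve src_h @ M = dst_pts in the least-squares sense.
    M, *_ = np.linalg.lstsq(src_h, dst_pts, rcond=None)
    matrix = np.eye(4)
    matrix[:3, :] = M.T
    return matrix

# Hypothetical marker positions (voxel coordinates) seen on CT and on SPECT.
ct_markers = np.array([[10., 20., 30.], [80., 25., 32.], [15., 90., 35.], [70., 85., 60.]])
spect_markers = np.array([[5., 11., 14.], [40., 13., 15.], [8., 46., 17.], [36., 44., 30.]])

# Transform that tells us, for each CT voxel, where to sample the SPECT volume.
ct_to_spect = fit_affine(ct_markers, spect_markers)

spect = np.random.rand(64, 64, 32)            # placeholder SPECT volume
spect_on_ct_grid = affine_transform(
    spect,
    matrix=ct_to_spect[:3, :3],
    offset=ct_to_spect[:3, 3],
    output_shape=(128, 128, 64),              # CT grid size (assumed)
    order=1,                                  # trilinear interpolation
)

# Residuals at the marker positions indicate how well the fusion aligns them.
pred = np.hstack([ct_markers, np.ones((4, 1))]) @ ct_to_spect[:3, :].T
print("marker residuals (voxels):", np.linalg.norm(pred - spect_markers, axis=1))
```

The residuals at the marker positions give a rough indication of the alignment error in the volume between the markers, which is the kind of misalignment estimate the abstract describes.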