This study examines the effect of incorporating single-walled carbon nanotubes (SWCNTs) and multi-walled carbon nanotubes (MWCNTs) into carbon fiber reinforced polymers (CFRPs) based on Elium® thermoplastic acrylic resin and investigates the relationships between the studied properties. SWCNTs exhibited better dispersion in the matrix, which led to higher electrical conductivity (2.72 ± 0.34 S/m) and impact resistance (154 ± 14.6 kJ/m²) than MWCNTs. Microstructural analysis revealed a defect-free architecture in the SWCNT-modified laminates, while the MWCNT laminates showed small voids and agglomerates. The improved dispersion and interconnectivity of the SWCNTs contribute to an EMI shielding efficiency of 24.6 dB, a 30% improvement over the unmodified samples. These findings highlight the potential of SWCNTs to enhance the multifunctional properties of thermoplastic CFRPs, including mechanical strength, electrical performance and EMI shielding capability, making them well suited to advanced aerospace, electronics and power applications. Moreover, the recyclability and lightweight nature of the Elium® resin matrix make these composites an environmentally friendly alternative to traditional materials in a variety of industrial contexts.
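To put the reported shielding figure in perspective, shielding effectiveness in decibels relates incident to transmitted power as SE(dB) = 10·log10(P_in/P_out). A minimal back-of-envelope check (the formula is standard; the script itself is not from the paper) shows what 24.6 dB implies:

```python
# Shielding effectiveness: SE(dB) = 10 * log10(P_in / P_out),
# so the transmitted power fraction is 10 ** (-SE / 10).
se_db = 24.6  # reported EMI shielding efficiency of the SWCNT laminates
transmitted_fraction = 10 ** (-se_db / 10)
print(f"{transmitted_fraction:.4%} of incident power transmitted")
# i.e. roughly 0.35% transmitted, ~99.65% of the incident power blocked
```

In other words, a 24.6 dB shield attenuates all but about a third of a percent of the incident electromagnetic power.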
This paper focuses on the automatic segmentation of bone structures from CT data series of the pelvic region. The authors trained and compared four deep neural network models (FCN, PSPNet, U-net and Segnet) on a segmentation task with the following three classes: background, patient outline and bones. Mean and class-wise Intersection over Union (IoU), Dice coefficient and pixel accuracy were evaluated for each network's output. In the initial phase, all networks were trained for 10 epochs. The most accurate segmentation was obtained with the U-net model, with a mean IoU of 93.2%. These results were further improved by a modified U-net with ResNet50 as the encoder, trained for 30 epochs, which achieved the following: mIoU – 96.92%, "bone" class IoU – 92.87%, mDice coefficient – 98.41%, "bone" class Dice coefficient – 96.31%, mAccuracy – 99.85% and "bone" class accuracy – 99.92%.
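The evaluation metrics above are simple set overlaps between predicted and reference label maps. A minimal sketch of per-class IoU and Dice (a hypothetical helper, not the paper's evaluation code):

```python
import numpy as np

def iou_and_dice(pred, target, cls):
    """Per-class IoU and Dice coefficient for integer label maps."""
    p = (pred == cls)
    t = (target == cls)
    inter = np.logical_and(p, t).sum()
    union = np.logical_or(p, t).sum()
    iou = inter / union if union else 1.0                        # |P∩T| / |P∪T|
    denom = p.sum() + t.sum()
    dice = 2 * inter / denom if denom else 1.0                   # 2|P∩T| / (|P|+|T|)
    return iou, dice

# Toy 2x2 example with class 1 standing in for "bone":
pred = np.array([[1, 1], [0, 0]])
target = np.array([[1, 0], [0, 0]])
print(iou_and_dice(pred, target, 1))  # (0.5, 0.666...)
```

Class-wise scores are computed this way per class, and the "m" (mean) variants average them over all classes.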
For brain tumour treatment planning, the diagnoses and predictions made by medical doctors and radiologists depend on medical imaging. Extracting clinically meaningful information from imaging modalities such as computerized tomography (CT), positron emission tomography (PET) and magnetic resonance (MR) scans is central to the software and advanced screening used by radiologists. This paper introduces a universal framework for two stages of the dose control process: tumour detection and tumour area segmentation from medical images. The framework implements methods to detect glioma tumours from CT and PET scans. Two pre-trained deep learning models, VGG19 and VGG19-BN, were investigated and used to fuse CT and PET examination results. Mask R-CNN (region-based convolutional neural network) was used for tumour detection; the model outputs bounding box coordinates for each tumour in the image. U-Net was used for semantic segmentation, delineating malignant cells and the tumour area. Transfer learning was applied to increase model accuracy given the limited dataset, and data augmentation methods were applied to generate additional training samples. The implemented framework can also be utilized for other use cases that combine object detection and area segmentation from grayscale and RGB images, especially in shaping computer-aided diagnosis (CADx) and computer-aided detection (CADe) systems in the healthcare industry to facilitate and assist doctors and medical care providers.
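Data augmentation of this kind typically derives geometric variants of each scan. A minimal sketch of the idea using flips and 90-degree rotations (the paper's exact augmentation pipeline is not specified, so this is only an illustration):

```python
import numpy as np

def augment(image):
    """Generate simple geometric variants of a grayscale scan:
    horizontal/vertical flips and 90/180/270-degree rotations."""
    variants = [np.fliplr(image), np.flipud(image)]
    variants += [np.rot90(image, k) for k in (1, 2, 3)]
    return variants

scan = np.arange(16, dtype=np.float32).reshape(4, 4)  # toy 4x4 "scan"
print(len(augment(scan)))  # 5 extra samples from one original
```

Applied across a small medical dataset, such label-preserving transforms multiply the number of training samples without new annotation effort.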