Results found: 7

Search results
Searched in keywords: convolutional network
1.
The segmentation of the liver and liver tumors is critical in the diagnosis of liver cancer, and the high mortality rate of liver cancer has made it one of the most popular areas for segmentation research. Some deep learning segmentation methods have outperformed traditional methods in terms of segmentation results. However, they are unable to obtain satisfactory segmentation results due to blurred original image boundaries, the presence of noise, very small lesion sites, and other factors. In this paper, we propose MDCF_Net, which has dual encoding branches composed of CNN and CnnFormer and can fully utilize multidimensional image features. First, it extracts both intra-slice and inter-slice information and improves the accuracy of the network output by symmetrically using multidimensional fusion layers. In addition, we propose a novel feature map stacking approach that focuses on the correlation of adjacent channels of two feature maps, improving the network’s ability to perceive 3D features. Furthermore, the two encoding branches collaborate to obtain both texture and edge features, and the network segmentation performance is further improved. Extensive experiments were carried out on the public LiTS dataset to determine the optimal slice thickness for this task. The superiority of the segmentation performance of our proposed MDCF_Net was confirmed by comparison with other leading methods on two public datasets, LiTS and 3DIRCADb.
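The abstract gives no implementation details, so the snippet below is only an illustrative PyTorch sketch of the kind of channel-adjacent feature-map stacking it describes; the layer sizes and the fusion convolution are assumptions, not details taken from MDCF_Net.

```python
import torch
import torch.nn as nn

class InterleavedStack(nn.Module):
    """Illustrative sketch (not the paper's code): stack two feature maps so that
    corresponding channels from the two branches become adjacent, then fuse them
    with a 3x3 convolution. Channel counts and the fusion layer are assumptions."""

    def __init__(self, channels: int):
        super().__init__()
        self.fuse = nn.Conv2d(2 * channels, channels, kernel_size=3, padding=1)

    def forward(self, a: torch.Tensor, b: torch.Tensor) -> torch.Tensor:
        # a, b: (N, C, H, W) feature maps, e.g. from the CNN and CnnFormer branches
        n, c, h, w = a.shape
        # Interleave along the channel axis: [a0, b0, a1, b1, ...]
        stacked = torch.stack((a, b), dim=2).reshape(n, 2 * c, h, w)
        return self.fuse(stacked)

# Usage: fuse two 64-channel feature maps of size 128x128
out = InterleavedStack(64)(torch.randn(1, 64, 128, 128), torch.randn(1, 64, 128, 128))
```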
2.
Rail corrugation is a significant problem not only in heavy-haul freight but also in light rail systems. In recent years, considerable progress has been made in understanding, measuring and treating corrugation problems, which are also considered a matter of safety. In the presented research, convolutional neural networks (CNNs) are used to identify the occurrence of rail corrugation in light rail systems. The paper shows that by simultaneously measuring the vibration and the sound pressure, it is possible to identify rail corrugation with a very small error.
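As an illustration only (the abstract reports no architecture), a minimal 1-D CNN that classifies windows of synchronously sampled vibration and sound-pressure signals could look like the following; the window length, layer sizes and two-class output are assumptions.

```python
import torch
import torch.nn as nn

# Hypothetical two-channel (vibration + sound pressure) corrugation classifier.
model = nn.Sequential(
    nn.Conv1d(2, 16, kernel_size=64, stride=4), nn.ReLU(),
    nn.Conv1d(16, 32, kernel_size=16, stride=2), nn.ReLU(),
    nn.AdaptiveAvgPool1d(1), nn.Flatten(),
    nn.Linear(32, 2),                  # "corrugated" vs "not corrugated"
)

window = torch.randn(8, 2, 8192)       # batch of 8 two-channel signal windows
logits = model(window)                 # shape (8, 2)
```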
3.
Contemporary operational requirements for combustion engines (e.g. marine engines) force the necessity of ongoing assessment of their technical condition during operation. The engine efficiency and durability depend on a variety of parameters. One of them is valve clearance. As has been proven in the paper, the assessment of the valve clearance can be based on vibration signals, which is not a problem in terms of signal measurement and processing and is not invasive to the engine structure. The authors described the experimental research aimed at providing the information necessary to develop and validate the proposed method. Active experiments were used, in which the valve clearance was set and vibrations were registered with a three-axis transducer placed on the engine cylinder head. The tests were carried out under various operating conditions of the engine, defined by 5 rotational speeds and 5 load conditions. In order to extract the training examples, fragments of the signal related to the closing of individual valves were divided into 11 shorter portions, and from each of them the effective (RMS) value of the signal was determined. In total, 32054 training vectors were obtained for each valve, related to 4 classes of valve clearance, including the very sensitive clearance above 0.8 mm associated with high dynamic interactions in the cylinder head. In the paper, the authors propose to use a convolutional neural network (CNN) to assess the correct engine valve clearance. The obtained results were compared with other machine learning methods (pattern recognition network, random forest). Finally, using the CNN, the valve clearance class identification error was less than 1% for the intake valve and less than 3.5% for the exhaust valve. The developed method replaces the existing standard methods based on FFT and STFT combined with regression, where the approximation error is up to 10%. Such results are more useful for further studies related not only to classification, but also to the prediction of the valve clearance condition in real engine operation.
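A minimal sketch of the feature extraction described above: a valve-closing vibration fragment is split into 11 portions and the RMS (effective) value of each is computed. How the three transducer axes are combined into one training vector is an assumption, since the abstract does not specify it.

```python
import numpy as np

def rms_features(valve_segment: np.ndarray, n_portions: int = 11) -> np.ndarray:
    """Illustrative sketch: split one valve-closing fragment into 11 portions and
    return the RMS value of each portion per axis (assumed 3-axis input)."""
    # valve_segment: shape (3, N) -- x, y, z vibration samples for one valve closing
    portions = np.array_split(valve_segment, n_portions, axis=1)
    # One RMS value per axis per portion -> feature vector of length 3 * 11 = 33
    return np.concatenate([np.sqrt(np.mean(p ** 2, axis=1)) for p in portions])

features = rms_features(np.random.randn(3, 4400))   # shape (33,)
```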
4.
Voice acoustic analysis can be a valuable and objective tool supporting the diagnosis of many neurodegenerative diseases, especially in times of remote medical examination during the pandemic. The article compares the application of selected signal processing methods and machine learning algorithms for the taxonomy of acquired speech signals representing the vowel a with prolonged phonation in patients with Parkinson’s disease and healthy subjects. The study was conducted using three different feature engineering techniques for the generation of speech signal features, as well as a deep learning approach based on the processing of images involving spectrograms of different time and frequency resolutions. The research utilized real recordings acquired in the Department of Neurology at the Medical University of Warsaw, Poland. The discriminatory ability of the feature vectors was evaluated using the SVM technique. The spectrograms were processed by the popular AlexNet convolutional neural network adapted to the binary classification task according to the strategy of transfer learning. The results of numerical experiments have shown different efficiencies of the examined approaches; however, the sensitivity of the best test, based on features selected with respect to the biological grounds of voice articulation, reached 97%, with a specificity no worse than 93%. The results could be further slightly improved by combining the selected deep learning and feature engineering algorithms in one stacked ensemble model.
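The transfer-learning setup mentioned above can be sketched in PyTorch/torchvision as reusing a pretrained AlexNet and replacing only its final layer for two classes; freezing the earlier layers is an assumption about the training strategy, not something stated in the abstract.

```python
import torch.nn as nn
from torchvision import models

# Pretrained AlexNet reused as a feature extractor for binary spectrogram
# classification (Parkinson's vs healthy). Freezing all earlier layers is an
# assumption; only the final classifier layer is replaced and trained.
model = models.alexnet(weights=models.AlexNet_Weights.IMAGENET1K_V1)
for p in model.parameters():
    p.requires_grad = False
# AlexNet's last classifier layer is a 4096 -> 1000 Linear; swap it for 2 classes.
model.classifier[6] = nn.Linear(4096, 2)
```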
5.
The paper presents the results of original research on the application of a neural network using deep learning techniques in the task of identity recognition on the basis of facial images acquired in both the visible and thermal radiation ranges. In the research, a database containing images acquired under varying but controlled conditions was used. On the basis of the obtained results, it can be concluded that both investigated spectral ranges provide distinctive and complementary details about the identity of an examined person.
6.
Artificial intelligence has made big steps forward with reinforcement learning (RL) in the last century and with the advent of deep learning (DL) in the 90s, especially the breakthrough of convolutional networks in the computer vision field. The adoption of DL neural networks in RL, in the first decade of the 21st century, led to an end-to-end framework allowing a great advance in human-level agents and autonomous systems, called deep reinforcement learning (DRL). In this paper, we will go through the development timeline of RL and DL technologies, describing the main improvements made in both fields. Then, we will dive into DRL and give an overview of the state of the art of this new and promising field by browsing a set of algorithms (value optimization, policy optimization and actor-critic), then giving an outline of current challenges and real-world applications, along with the hardware and frameworks used. In the end, we will discuss some potential research directions in the field of deep RL, for which we have great expectations that they will lead to a real human level of intelligence.
7. Fast multispectral deep fusion networks
Most current state-of-the-art computer vision algorithms use, as input data, images captured by cameras operating in the visible spectral range. Thus, image recognition systems that build on top of those algorithms cannot provide acceptable recognition quality in poor lighting conditions, e.g. during nighttime. Another significant limitation of such systems is the high demand for computational resources, which makes them impossible to use on low-powered embedded systems without GPU support. This work attempts to create an algorithm for pattern recognition that consolidates data from the visible and infrared spectral ranges and allows near real-time performance on embedded systems with infrared and visible sensors. First, we analyze existing methods of combining data from different spectral ranges for the object detection task. Based on the analysis, an architecture of a deep convolutional neural network is proposed for the fusion of multi-spectral data. This architecture is based on the single shot multi-box detection algorithm. A comparative analysis of the proposed architecture with previously proposed solutions for the multi-spectral object detection task shows detection accuracy comparable to or better than previous algorithms and a significant improvement in running time on embedded systems. This study was conducted in collaboration with Philips Lighting Research Lab, and solutions based on the proposed architecture will be used in image recognition systems for the next generation of intelligent lighting systems. Thus, the main scientific outcomes of this work include an algorithm for multi-spectral pattern recognition based on convolutional neural networks, as well as a modification of detection algorithms for working on embedded systems.
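The abstract does not say where in the network the spectral fusion happens, so the sketch below is only one plausible illustration: an early-fusion stem in PyTorch that merges visible and infrared inputs before an SSD-style detection backbone. All layer sizes and the fusion point are assumptions.

```python
import torch
import torch.nn as nn

class EarlyFusionStem(nn.Module):
    """Illustrative sketch only: fuse a visible (3-channel) and an infrared
    (1-channel) image into one feature map that an SSD-style backbone could
    consume. Not the architecture proposed in the paper."""

    def __init__(self, out_channels: int = 64):
        super().__init__()
        self.rgb = nn.Conv2d(3, out_channels // 2, kernel_size=3, stride=2, padding=1)
        self.ir = nn.Conv2d(1, out_channels // 2, kernel_size=3, stride=2, padding=1)
        self.mix = nn.Conv2d(out_channels, out_channels, kernel_size=1)

    def forward(self, rgb: torch.Tensor, ir: torch.Tensor) -> torch.Tensor:
        # Concatenate per-modality features along the channel axis, then mix them.
        fused = torch.cat((self.rgb(rgb), self.ir(ir)), dim=1)
        return self.mix(torch.relu(fused))

stem = EarlyFusionStem()
feat = stem(torch.randn(1, 3, 300, 300), torch.randn(1, 1, 300, 300))  # (1, 64, 150, 150)
```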