Search results
Searched in keywords: convolution neural network
Results found: 16
EN
A digital twin is a digital replica of a physical object, used to observe its real-time performance, gather data, and recommend corrective actions when required to enhance that performance. This fascinating technological idea is now reaching agricultural fields to transform farming by creating digital twins of entire farms. This initiative presents an innovative strategy to enhance crop health and yield by creating a digital twin for paddy fields. The aim is to enable early detection of nutrient deficiencies and leaf blast disease, leading to a transformation in agriculture. By creating virtual replicas of plants and fields, the digital twin harnesses real-time data and advanced analytics to transform the way agricultural systems are managed. By integrating remote sensing, data analytics, and various Internet of Things devices such as pH, nitrogen, potassium, and phosphorus sensors, coupled with a gateway system, the digital twin provides real-time monitoring and analysis of crop health and nutrient levels. Employing advanced machine learning algorithms, notably Convolutional Neural Networks, ensures precise and early detection of nutrient deficiencies and crop diseases. This ground-breaking technology provides timely alerts and actionable insights to farmers, enabling proactive decision-making for optimal crop management. This farmland digital twin represents a transformative approach towards agricultural sustainability and enhanced productivity.
EN
The chapter discusses the foundations of a system to verify and recognize art styles. Such a system is of interest as a first step in painting-fraud identification and in tracing how the influence of different styles shaped the final form of a masterpiece. The approach uses image recognition with convolutional neural networks. These networks, owing to a structure resembling the visual system and to their efficiency on two-dimensional data, are very often used for image recognition. Style recognition in art is currently a hot topic in machine learning circles. Courtesy of museums and galleries, there are now many databases available on the web that can be used in scientific work. The classes on which the network was tested are the historical styles that a layman associates with art: Renaissance, Baroque, Romanticism, Neoclassicism, Surrealism, Cubism, Art Nouveau, Abstract Expressionism, Pop Art, and Impressionism. These classes were described in the work in terms of parameters that could affect the learning of the neural network. The networks were tested to determine the best parameters for identifying artistic styles, varying filter values, stride, and pooling parameters, and selecting various additional layers. The most important concern was overfitting, which had to be prevented. As a result, the networks peaked at 40% Top-1 and 80% Top-3 accuracy. Given the small dataset, this result is encouraging for further research into recognizing other parameters, as well as for using networks pre-trained on specific characteristics of styles, such as frequently used motifs or colors.
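The reported 40% and 80% figures are standard top-k accuracy: a prediction counts as correct if the true style is among the k highest-scored classes. A minimal sketch (the scores, labels, and three-class setup below are illustrative toys, not from the chapter):

```python
import numpy as np

def top_k_accuracy(scores, labels, k):
    """Fraction of samples whose true label is among the k highest-scored classes."""
    topk = np.argsort(-scores, axis=1)[:, :k]      # indices of the k best classes per row
    hits = (topk == labels[:, None]).any(axis=1)   # was the true label among them?
    return hits.mean()

# Toy scores for 4 samples over 3 hypothetical style classes.
scores = np.array([[0.6, 0.3, 0.1],
                   [0.2, 0.5, 0.3],
                   [0.1, 0.2, 0.7],
                   [0.4, 0.4, 0.2]])
labels = np.array([0, 2, 2, 1])
print(top_k_accuracy(scores, labels, 1))  # Top-1
print(top_k_accuracy(scores, labels, 3))  # Top-3 (always 1.0 with only 3 classes)
```

With 10 style classes, Top-3 is a genuinely weaker criterion than Top-1, which is why the two numbers can differ so widely.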
EN
The electrocardiogram (ECG) is a common test that measures the electrical activity of the heart. Several cardiac abnormalities can be seen on the ECG, including arrhythmias, which are among the major causes of cardiac mortality worldwide. The objective for the research community is accurate and automated cardiovascular analysis, especially given the maturity of artificial intelligence technology and its contribution to the health area. The goal of this effort is to create an acquisition system and use artificial intelligence to classify ECG readings. The system is designed in two parts: the first is signal acquisition using the ECG module AD8232; the obtained signal is a single lead that has been amplified and filtered. The second part is classification for heart-disease identification; the proposed model is a 12-layer deep convolutional neural network that was able to categorize five types of heartbeat from the MIT-BIH arrhythmia database. The results were encouraging, and the embedded system was built.
EN
Chronic obstructive pulmonary disease (COPD) is a complex and multi-component respiratory disease. Computed tomography (CT) images can characterize lesions in COPD patients, but the image intensity and morphology of lung components have not been fully exploited. Two datasets (Dataset 1 and Dataset 2) comprising a total of 561 subjects were obtained from two centers. A multiple instance learning (MIL) method is proposed for COPD identification. First, randomly selected slices (instances) from CT scans and multi-view 2D snapshots of the 3D airway tree and lung field extracted from the CT images are acquired. Then, three attention-guided MIL models (slice-CT, snapshot-airway, and snapshot-lung-field models) are trained. In these models, a deep convolutional neural network (CNN) is utilized for feature extraction. Finally, the outputs of the three MIL models are combined using logistic regression to produce the final prediction. For Dataset 1, the accuracy of the slice-CT MIL model with 20 instances was 88.1%. A VGG-16 backbone outperformed AlexNet, ResNet-18, ResNet-26, and MobileNetV2 in feature extraction. The snapshot-airway and snapshot-lung-field MIL models achieved accuracies of 89.4% and 90.0%, respectively. After the three models were combined, the accuracy reached 95.8%. The proposed model outperformed several state-of-the-art methods and afforded an accuracy of 83.1% on the external dataset (Dataset 2). The proposed weakly supervised MIL method is feasible for COPD identification. The effective CNN module and attention-guided MIL pooling module contribute to the performance enhancement. The morphology information of the airway and lung field is beneficial for identifying COPD.
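Attention-guided MIL pooling of the kind these models use can be sketched as a learned weighted average over instance features. The parametrization below (a tanh-scored attention over per-slice CNN features, in the style of Ilse et al.'s attention MIL) and all shapes are assumptions for illustration, not the paper's exact architecture:

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def attention_mil_pool(H, V, w):
    """Attention pooling: a_i ∝ exp(w · tanh(V h_i)); bag embedding z = Σ a_i h_i."""
    scores = np.array([w @ np.tanh(V @ h) for h in H])  # one scalar score per instance
    a = softmax(scores)                                 # attention weights, sum to 1
    z = (a[:, None] * H).sum(axis=0)                    # weighted bag-level embedding
    return z, a

rng = np.random.default_rng(0)
H = rng.normal(size=(20, 64))          # 20 instances (e.g. CT slices), 64-d CNN features
V = rng.normal(size=(32, 64)) * 0.1    # attention projection (hypothetical sizes)
w = rng.normal(size=32)
z, a = attention_mil_pool(H, V, w)
print(z.shape, a.sum())
```

The bag embedding `z` then feeds a classifier head; the three bag-level outputs can finally be stacked with logistic regression, as the abstract describes.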
EN
The paper aims to improve person re-identification accuracy in distributed video-surveillance systems by constructing a large joint image dataset of people for training convolutional neural networks (CNNs). To this end, an analysis of existing datasets is provided. Then, a new large joint dataset for the person re-identification task is constructed that includes the existing public datasets CUHK02, CUHK03, Market, Duke, MSMT17, and PolReID. Re-identification testing is performed for such frequently cited CNNs as ResNet-50, DenseNet121, and PCB. Re-identification accuracy is evaluated using the main metrics: Rank, mAP, and mINP. The use of the new large joint dataset makes it possible to improve Rank-1, mAP, and mINP on all test sets. Re-ranking is used to further increase re-identification accuracy. The presented results confirm the effectiveness of the proposed approach.
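Rank-1 and mAP can be computed from query and gallery features roughly as below; the cosine distance and the toy orthogonal identities are illustrative assumptions, not the paper's exact evaluation protocol (which, like most re-ID benchmarks, also excludes same-camera matches):

```python
import numpy as np

def evaluate(qf, qids, gf, gids):
    """Rank-1 and mAP for re-ID: rank gallery by cosine distance per query."""
    qn = qf / np.linalg.norm(qf, axis=1, keepdims=True)
    gn = gf / np.linalg.norm(gf, axis=1, keepdims=True)
    dist = 1 - qn @ gn.T
    rank1_hits, aps = [], []
    for i in range(len(qf)):
        order = np.argsort(dist[i])                      # gallery sorted near-to-far
        matches = (gids[order] == qids[i]).astype(float)
        rank1_hits.append(matches[0])                    # correct at first position?
        hit_pos = np.where(matches == 1)[0]
        # average precision over the positions of the true matches
        aps.append(np.mean([(k + 1) / (p + 1) for k, p in enumerate(hit_pos)]))
    return np.mean(rank1_hits), np.mean(aps)

gf = np.eye(3)                  # three gallery identities with orthogonal features
gids = np.array([0, 1, 2])
qf = gf + 0.01                  # queries: same identities, slightly perturbed
qids = gids
r1, mAP = evaluate(qf, qids, gf, gids)
print(r1, mAP)
```

mINP additionally looks at the position of the hardest (last) true match, penalizing methods that retrieve some matches late.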
EN
Medical history highlights that myocardial infarction (MI) is one of the leading causes of death in human beings. Angina pectoris is a prominent vital sign of myocardial infarction. Medical reports suggest that experiencing chest pain during a heart attack causes changes in the facial muscles, resulting in variations in facial-expression patterns. This work develops automatic facial-expression detection to identify the severity of chest pain as a vital sign of MI, using an algorithmic approach implemented with state-of-the-art convolutional neural networks (CNNs). Two advanced lightweight object-detection CNN models, Single Shot Detector MobileNet V2 and Single Shot Detector Inception V2, were utilized to design the vital-signs MI model from a private dataset of 500 RGB color images. The authors developed cardiac emergency health-monitoring care using Edge Artificial Intelligence ("Edge AI") on NVIDIA's Jetson Nano embedded GPU platform. The proposed model focuses on low cost and low power consumption for on-board real-time detection of vital signs of myocardial infarction. The evaluated metrics achieve a mean Average Precision of 85.18%, an Average Recall of 88.32%, and 6.85 frames per second for the generated detections.
EN
We present vehicle detection and classification using a Convolutional Neural Network (CNN), a deep learning approach. Automatic vehicle classification in traffic-surveillance video systems is a challenge for the Intelligent Transportation System (ITS) in building a smart city. In this article, classification of three vehicle types (bike, car, and truck) is considered, using around 3,000 bike, 6,000 car, and 2,000 truck images. The CNN can automatically learn and extract the distinguishing features of the vehicle dataset without manual feature selection. The accuracy of the CNN is measured in terms of the confidence values of the detected objects. The highest confidence value is about 0.99, for the bike category. Automatic vehicle classification supports building an electronic toll-collection system and identifying emergency vehicles in traffic.
EN
In the domain of affective computing, different emotional expressions play an important role. To convey the emotional state of humans, facial expressions or visual cues are used as an important and primary cue. Facial expressions convey a human's affective state more convincingly than any other cue. With the advancement of deep learning techniques, convolutional neural networks (CNNs) can be used to automatically extract features from visual cues; however, variable-sized and biased datasets are a vital challenge as far as the implementation of deep models is concerned. The dataset used for training the model also plays a significant role in the retrieved results. In this paper, we propose a multi-model hybrid ensemble weighted adaptive approach with decision-level fusion for personalized affect recognition based on visual cues. We use a CNN and a pre-trained ResNet-50 model for transfer learning. The VGGFace model's weights are used to initialize the ResNet-50 weights for fine-tuning. The proposed system shows a significant improvement in test accuracy for affective-state recognition compared to a singleton CNN model developed from scratch or a transfer-learned model. The proposed methodology is validated on the Karolinska Directed Emotional Faces (KDEF) dataset with 77.85% accuracy. The obtained results are promising compared to existing state-of-the-art methods.
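Decision-level weighted fusion of the kind described can be sketched as a convex combination of each model's class-probability outputs. The weights and toy probabilities below are hypothetical; the paper's adaptive weighting scheme is not reproduced here:

```python
import numpy as np

def weighted_fusion(prob_list, weights):
    """Decision-level fusion: convex combination of per-model class probabilities."""
    weights = np.asarray(weights, dtype=float)
    weights = weights / weights.sum()           # normalize so fused rows still sum to 1
    fused = sum(w * p for w, p in zip(weights, prob_list))
    return fused, fused.argmax(axis=1)          # fused probabilities and hard decisions

# Two hypothetical models disagreeing on sample 1; weights favor model A.
probs_a = np.array([[0.8, 0.2], [0.6, 0.4]])
probs_b = np.array([[0.7, 0.3], [0.3, 0.7]])
fused, pred = weighted_fusion([probs_a, probs_b], weights=[0.7, 0.3])
print(fused)
print(pred)
```

Because fusion happens on decisions rather than features, each member model (scratch CNN, fine-tuned ResNet-50) can be trained independently and swapped without retraining the others.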
EN
Objective: The purpose of the present review is to introduce the reader to key directions of manual, semi-automatic, and automatic knee osteoarthritis (OA) severity classification from plain radiographs. This is a narrative review article describing recent developments in severity evaluation of knee OA from X-ray images. We have primarily focused on automatic analysis and have reviewed articles in which machine learning, transfer learning, active learning, etc. have been employed on X-ray images to assess and classify the severity of knee OA. Methods: All original research articles on OA detection and classification using X-ray images published in English were searched in the PubMed, Google Scholar, and RSNA radiology databases in 2019. The search term "knee Osteoarthritis" was combined with the terms "Machine Learning", "severity", and "X-ray". Results: The initial search of the publication databases revealed a total of 743 results, of which only 26 articles were considered relevant to radiographic knee OA severity analysis. The majority of the articles were based on automatic analysis; articles based on manual segmentation were fewest in number. Conclusion: Computer-aided methods to diagnose knee OA are great tools for detecting OA at early stages. Advances in Human-Computer Interface systems have led researchers to bridge the gap between machine learning algorithms and expert healthcare professionals, providing better and timelier treatment options to patients affected by knee OA.
Endoscopy image retrieval by Mixer Multi-Layer Perceptron
EN
In computer vision, image retrieval is one of the tasks of interest to researchers, particularly medical image retrieval and endoscopy images. With the development of convolutional neural networks and Vision Transformer techniques, there have been many proposals to apply these techniques to the image retrieval task, achieving competitive results. In this paper, we propose a method that uses the Mixer Multi-Layer Perceptron architecture (Mixer-MLP) to build an image retrieval system for medical images, particularly endoscopic images. The system is based on the classification process of the Mixer-MLP architecture to generate vector representations for similarity calculation. The research achieves competitive results with efficient training time.
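A single Mixer block alternates a token-mixing MLP (across image patches) with a channel-mixing MLP (across features), each wrapped in a residual connection. The NumPy sketch below, with ReLU standing in for GELU, random weights, and assumed shapes, illustrates the data flow only, not the paper's trained model:

```python
import numpy as np

def layer_norm(x):
    return (x - x.mean(-1, keepdims=True)) / (x.std(-1, keepdims=True) + 1e-6)

def mlp(x, w1, w2):
    return np.maximum(x @ w1, 0) @ w2   # two-layer MLP; ReLU replaces GELU for brevity

def mixer_block(x, tw1, tw2, cw1, cw2):
    """One Mixer layer on a (patches, channels) matrix."""
    y = x + mlp(layer_norm(x).T, tw1, tw2).T   # token mixing: MLP across the patch axis
    return y + mlp(layer_norm(y), cw1, cw2)    # channel mixing: MLP across the feature axis

rng = np.random.default_rng(0)
patches, channels, hidden = 16, 32, 64
x = rng.normal(size=(patches, channels))
tw1 = rng.normal(size=(patches, hidden)) * 0.1
tw2 = rng.normal(size=(hidden, patches)) * 0.1
cw1 = rng.normal(size=(channels, hidden)) * 0.1
cw2 = rng.normal(size=(hidden, channels)) * 0.1
out = mixer_block(x, tw1, tw2, cw1, cw2)
print(out.shape)   # same (patches, channels) shape as the input
```

For retrieval, the penultimate representation of such a network (rather than the class logits) would serve as the embedding compared by cosine or Euclidean similarity.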
EN
For automatic sleep stage classification, existing methods mostly rely on hand-crafted features selected from polysomnographic records. In this paper, the goal is to develop a deep learning-based method using a single-channel electroencephalogram (EEG) that automatically exploits the time–frequency spectrum of the EEG signal, removing the need for manual feature extraction. Time–frequency RGB color images of the EEG signal are extracted using the continuous wavelet transform (CWT). Transfer learning of a pre-trained convolutional neural network, SqueezeNet, is employed to classify these CWT images into sleep stages. The proposed method is evaluated on the publicly available PhysioNet Sleep-EDFx dataset using the single EEG channel Fpz-Cz. Evaluation results show that this method can achieve near state-of-the-art accuracy even using a single-channel EEG signal.
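Turning an EEG segment into a CWT image can be sketched with a hand-rolled real Morlet wavelet convolved at several scales; the scales, sampling rate, and toy sine signal below are illustrative assumptions, not the paper's settings, and normalization is only approximate:

```python
import numpy as np

def morlet(t, scale, w0=6.0):
    """Real part of a Morlet wavelet stretched to the given scale."""
    x = t / scale
    return np.exp(-x**2 / 2) * np.cos(w0 * x) / np.sqrt(scale)

def scalogram(signal, scales, fs):
    """|CWT| magnitudes: convolve the signal with each scaled wavelet, one row per scale."""
    n = len(signal)
    t = np.arange(-n // 2, n // 2) / fs
    rows = [np.abs(np.convolve(signal, morlet(t, s), mode='same')) for s in scales]
    return np.array(rows)

fs = 100.0
t = np.arange(0, 2, 1 / fs)
sig = np.sin(2 * np.pi * 5 * t)                 # toy 5 Hz tone standing in for EEG
img = scalogram(sig, scales=np.geomspace(0.01, 0.5, 32), fs=fs)
print(img.shape)   # (32 scales, 200 samples); resize + colormap yields the RGB CNN input
```

Mapping the magnitude matrix through a colormap and resizing to the network's input resolution produces the RGB images that the pre-trained CNN then classifies.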
EN
Printed character recognition is an efficient, automatic method of inputting information into a computer, used to translate printed or handwritten images into an editable and readable text file. This paper aims to recognize multi-font, multi-size English printed words for a smart-pharmacy purpose. The recognition system is based on a convolutional neural network (CNN) approach in which lines, words, and characters are separately corrected, and then each separated character is fed into the CNN algorithm for recognition. The OpenCV open-source library has been used for preprocessing, which can segment English characters accurately and efficiently, and the Keras library with a TensorFlow backend has been used for recognition. The training and testing datasets were designed to include 23 different fonts in six different sizes. The CNN algorithm achieves the highest accuracy, 96.6%, compared to other state-of-the-art machine learning methods. The higher classification accuracy of the CNN approach shows that this type of algorithm is ideal for English printed word recognition. The highest error rate after testing the system on English electronic prescriptions written in all proposed font types is 0.23%, in the Georgia font.
EN
The existence and distribution pattern of cerebral microbleeds (CMBs) are associated with some underlying aetiologies caused by intracerebral hemorrhage (ICH). CMBs, as a kind of subclinical sign, can be recognized via magnetic resonance (MR) imaging a few years before the onset of the disease. Hence, detecting CMBs accurately is important for treating and preventing related cerebral disease. In this study, we employed a convolutional neural network (CNN) for CMB detection because of its powerful ability in image recognition. Whereas much effort has gone into optimizing the structure of CNNs for better performance, we introduced center loss, which can greatly enhance the discriminative power of the deeply learned features, to CMB detection for the first time. We found that the performance of a CNN trained under the joint supervision of softmax loss and center loss was significantly better than that under the supervision of softmax loss alone, even when there were a few mislabelled samples in the training data. With this trick, we achieved a high performance with a sensitivity of 98.869 ± 1.026%, a specificity of 96.491 ± 0.367%, and an accuracy of 97.681 ± 0.497%, which is better than four state-of-the-art methods.
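Center loss penalizes the distance between each deep feature and its class center, while the centers themselves drift toward the batch means of their classes. A minimal sketch with toy 2-D features and an assumed update rate (the real loss is added to softmax loss with a balancing weight):

```python
import numpy as np

def center_loss(features, labels, centers):
    """L_C = ½ Σ_i ||f_i − c_{y_i}||² — pulls features toward their class center."""
    diffs = features - centers[labels]
    return 0.5 * np.sum(diffs ** 2)

def update_centers(features, labels, centers, alpha=0.5):
    """Move each class center a step toward the mean of its features in the batch."""
    new = centers.copy()
    for c in np.unique(labels):
        new[c] += alpha * (features[labels == c].mean(axis=0) - centers[c])
    return new

feats = np.array([[1.0, 0.0], [0.9, 0.1], [0.0, 1.0]])   # toy 2-D deep features
labels = np.array([0, 0, 1])
centers = np.zeros((2, 2))                               # one center per class
loss = center_loss(feats, labels, centers)
centers = update_centers(feats, labels, centers)
print(loss, centers)
```

Softmax loss alone separates classes; adding the center term also compacts each class, which is what makes the learned features more discriminative for detection.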
EN
Among the predominant cancers, breast cancer is one of the main causes of cancer deaths impacting women worldwide. However, breast cancer classification is challenging due to the numerous morphological and textural variations that appear in intra-class images. Also, direct processing of high-resolution histological images is uneconomical in terms of GPU memory. In the present study, we propose a new approach for breast histopathological image classification that uses a deep convolutional neural network (CNN) with wavelet-decomposed images. The original microscopic image patches of 2048 × 1536 × 3 pixels are decomposed into 512 × 384 × 3 using a 2-level Haar wavelet and subsequently used in the proposed CNN model. The image-decomposition step considerably reduces convolution time and computational resources in deep CNNs, without any performance downgrade. The CNN model extracts deep features from the Haar wavelet-decomposed images and incorporates multi-scale discriminant features for precise prognostication of class labels. This paper also addresses the demand for massive histopathology datasets by means of transfer learning and data-augmentation techniques. The efficacy of the proposed approach is corroborated on two publicly available breast histology datasets: (a) one provided as part of the International Conference on Image Analysis and Recognition (ICIAR 2018) grand challenge and (b) the BreakHis data. On the ICIAR 2018 validation data, our model showed an accuracy of 98.2% for both 4-class and 2-class recognition. Further, on the hidden test data of ICIAR 2018, we achieved an accuracy of 91%, significantly outperforming existing state-of-the-art results. Besides, on the BreakHis dataset, the model achieved competitive performance with 96.85% multi-class accuracy.
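The 2-level Haar decomposition that shrinks 2048 × 1536 patches to 512 × 384 keeps only the LL (approximation) band at each level; up to a normalization constant, that band is the mean of each 2×2 block. A sketch with random data standing in for a histology patch:

```python
import numpy as np

def haar_level(img):
    """One Haar level: the LL band is the mean of each 2x2 block (up to a constant)."""
    h, w, c = img.shape
    return img.reshape(h // 2, 2, w // 2, 2, c).mean(axis=(1, 3))

img = np.random.rand(2048, 1536, 3)      # stand-in for a microscopic image patch
ll2 = haar_level(haar_level(img))        # two levels: 2048x1536 -> 1024x768 -> 512x384
print(ll2.shape)
```

Each level quarters the pixel count, so two levels cut the CNN's input area by a factor of 16 while preserving the coarse tissue structure.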
EN
Manual analysis of brain tumor magnetic resonance images is usually accompanied by problems, and several techniques have been proposed for brain tumor segmentation. This study focuses on searching popular databases for related studies and surveys the theoretical and practical aspects of Convolutional Neural Networks in brain tumor segmentation. Based on our findings, details of the related studies, including the datasets used, evaluation parameters, preferred architectures, and complementary steps, are analyzed. Deep learning, a revolutionary idea in image processing, has achieved brilliant results in brain tumor segmentation as well. This trend can continue until the next revolutionary idea emerges.
EN
This paper presents a structural design of a hardware-efficient module for implementing the basic convolutional neural network (CNN) operation with reduced implementation complexity. For this purpose we utilize a modification of Winograd's minimal filtering method as well as computation-vectorization principles. The module calculates the inner products of two consecutive segments of the original data sequence, formed by a sliding window of length 3, with the elements of a filter impulse response. A fully parallel structure for calculating these two inner products based on the naïve method requires 6 binary multipliers and 4 binary adders. Winograd's minimal filtering method allows constructing a module structure that requires only 4 binary multipliers and 8 binary adders. Since a high-performance convolutional neural network can contain tens or even hundreds of such modules, this reduction can have a significant effect.
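The 4-multiplier count comes from Winograd's F(2,3) minimal filtering algorithm, which produces two outputs of a 3-tap filter from a length-4 window using elementwise products of transformed data and filter. A NumPy sketch with the standard F(2,3) transform matrices (toy data; the paper's hardware modification is not reproduced):

```python
import numpy as np

# Winograd F(2,3): two outputs of a 3-tap sliding dot product with 4 multiplies.
BT = np.array([[1, 0, -1, 0], [0, 1, 1, 0], [0, -1, 1, 0], [0, 1, 0, -1]], float)
G  = np.array([[1, 0, 0], [.5, .5, .5], [.5, -.5, .5], [0, 0, 1]])
AT = np.array([[1, 1, 1, 0], [0, 1, -1, -1]], float)

def winograd_f23(d, g):
    """y[i] = Σ_j d[i+j]·g[j] for i = 0, 1 — only the 4 products in U*V count."""
    U = G @ g              # filter transform (precomputable once per filter)
    V = BT @ d             # data transform (additions only)
    return AT @ (U * V)    # 4 elementwise multiplies instead of 6

d = np.array([1.0, 2.0, 3.0, 4.0])   # sliding window of 4 input samples
g = np.array([1.0, 0.5, 0.25])       # 3-tap impulse response
y = winograd_f23(d, g)
direct = np.array([d[i:i+3] @ g for i in range(2)])
print(y, direct)
```

The transforms `BT` and `AT` contain only 0 and ±1, so in hardware they cost adders rather than multipliers, which is exactly the 6-to-4 multiplier trade the abstract describes.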