Search results
Searched in keywords: ML
Results found: 10
PL
Enterprises and organizations process enormous volumes of paper documentation, which ties up employees in tedious and error-prone work. The article describes techniques that can be applied to automate this process in order to extract essential information from documents, such as the subject and object of a contract, deadlines and dates, locations, technical data of objects, and other information specific to a given document type. The iDoc system applies elements of artificial intelligence to recognize document contents, achieves a tenfold speed-up in processing while maintaining high accuracy, and also allows manual verification of the data.
EN
In today's business landscape, companies and organizations grapple with processing extensive volumes of paper documents, burdening their employees with tedious and error-prone tasks. This article presents innovative techniques for automating this process by efficiently extracting critical information from various documents, including contract subjects and objects, dates, deadlines, locations, technical data about devices, and other specific contents pertaining to distinct document types. Leveraging artificial intelligence, the iDoc system identifies document contents, enabling users to process data ten times faster while maintaining a high level of accuracy. By adopting iDoc, organizations can largely eliminate manual data processing while still allowing users to validate the extracted information.
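The abstract does not disclose iDoc's internals. Purely as an illustration of the kind of field extraction it describes, the sketch below pulls dates and simple labelled fields from contract text with regular expressions; all field names and patterns are hypothetical, and a production system would rely on OCR plus trained extraction models rather than hand-written rules.

```python
import re

# Hypothetical patterns for a few contract fields; real systems would use
# OCR plus trained NER/layout models rather than hand-written regexes.
FIELD_PATTERNS = {
    "contract_date": re.compile(r"\b(\d{1,2}[./-]\d{1,2}[./-]\d{4})\b"),
    "deadline":      re.compile(r"(?:deadline|due by)[:\s]+([^\n.]+)", re.IGNORECASE),
    "location":      re.compile(r"(?:location|place of performance)[:\s]+([^\n.]+)", re.IGNORECASE),
}

def extract_fields(text):
    """Return the first match for each known field, or None if absent."""
    results = {}
    for name, pattern in FIELD_PATTERNS.items():
        match = pattern.search(text)
        results[name] = match.group(1).strip() if match else None
    return results

if __name__ == "__main__":
    sample = "Location: Warsaw. Deadline: 30 days after signing. Signed 12.05.2023."
    print(extract_fields(sample))
```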
EN
This article aims to introduce the terms NI (Natural Intelligence), AI (Artificial Intelligence), ML (Machine Learning), DL (Deep Learning), ES (Expert Systems), and related concepts used in the modern digital world to mining and mineral processing, and to show the main differences between them. As is well known, each scientific and technological step in the mineral industry creates a huge amount of raw data, and there is a serious need to classify it first. Afterwards, experts should find alternative solutions in order to obtain optimal results by using those parameters and the relations between them on dedicated simulation software platforms. Developing such simulation models for complex operations is not only time consuming and limited in real-time applicability, but also requires the integration of multiple software platforms, intensive process knowledge, and extensive model validation. An example case study is also demonstrated, and its results are discussed in the article, covering the main inferences, comments, and decisions made while using NI on the experimental parameters of a flotation-related postgraduate study, and comparing them with a possible use of AI.
EN
Remote sensing satellite images are affected by different types of degradation, which poses an obstacle for remote sensing researchers seeking to ensure continuous and trouble-free observation of our space. This degradation can reduce the quality of the information and thereby affect the reliability of remote sensing research. To overcome this phenomenon, methods for detecting and eliminating the degradation are used; these methods are the subject of our study. The aim of this paper is to provide a state of the art of the recent decade (2012-2022) of advances in remote sensing image restoration using machine and deep learning, as identified by this survey, including the databases used, the different categories of degradation, and the corresponding methods. Machine learning and deep learning based strategies for remote sensing satellite image restoration are recommended to achieve satisfactory improvements.
EN
Technology is advancing daily with progress in the web, artificial intelligence (AI), and the big data generated by machines across industries. All of these open a gateway for cybercrime, which makes network security a challenging task, and the development of network intrusion detection (NID) systems faces many challenges. Computer systems are becoming increasingly vulnerable to attack as a result of the rise in cybercrime, the availability of vast amounts of data on the internet, and increased network connectivity, since creating a system with no vulnerabilities is not theoretically possible. Previous studies have developed various approaches to this problem, each with its strengths and weaknesses, but there is still a need for lower variance and improved accuracy. To this end, this study proposes an ensemble model based on bagging with the J48 decision tree. The proposed model outperforms the other models employed in terms of accuracy. The outcomes are assessed via accuracy, recall, precision, and F-measure; the overall average accuracy achieved by the proposed model is 83.73%.
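The paper reportedly combines bagging with the J48 (C4.5) decision tree, a Weka learner. A rough scikit-learn approximation is sketched below; it substitutes the library's CART-style DecisionTreeClassifier inside a BaggingClassifier, so it illustrates the general technique rather than the authors' exact setup, and the dataset (X, y) is a placeholder.

```python
# Minimal sketch of a bagged decision-tree ensemble for intrusion detection.
# Note: scikit-learn's DecisionTreeClassifier is CART-based, not Weka's J48/C4.5.
from sklearn.ensemble import BaggingClassifier
from sklearn.tree import DecisionTreeClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score, precision_score, recall_score, f1_score

def evaluate(X, y):
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=42)
    model = BaggingClassifier(
        estimator=DecisionTreeClassifier(),  # stand-in for J48; older scikit-learn
        n_estimators=50,                     # versions call this base_estimator
        random_state=42,
    )
    model.fit(X_tr, y_tr)
    y_pred = model.predict(X_te)
    return {
        "accuracy": accuracy_score(y_te, y_pred),
        "precision": precision_score(y_te, y_pred, average="weighted"),
        "recall": recall_score(y_te, y_pred, average="weighted"),
        "f_measure": f1_score(y_te, y_pred, average="weighted"),
    }
```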
PL
The paper presents an algorithm for predicting free resources in 5G radio networks. The 5G signal transmitted by the primary user (PU) is subject to fading in the channel, which prevents correct detection and thus proper protection of the PU's transmission. The proposed algorithm uses deep learning to recognize the time-frequency dependencies present in the received signal and to estimate the degree of fading. With this information, the algorithm detects free resources more accurately while protecting the PU's transmission.
EN
In this paper, we present a 5G spectrum resources prediction algorithm. The 5G signal transmitted by the primary user (PU) passes through a fading channel, which negatively affects prediction performance and proper protection of the PU's transmission. The proposed algorithm applies deep learning to estimate the fading level and recognize time-frequency patterns in the received signal. With this information, the algorithm can perform better signal prediction and better protect the PU's transmission.
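The abstract does not specify the network architecture. Purely as an illustration of learning time-frequency patterns for occupancy prediction, the sketch below defines a small convolutional network over a spectrogram-like input in Keras; the input shape, layer sizes, and the binary "resource free/occupied" output are assumptions, not the authors' design.

```python
# Illustrative CNN over time-frequency (spectrogram) blocks for predicting
# whether a resource block is free; sizes are arbitrary assumptions.
import tensorflow as tf
from tensorflow.keras import layers

def build_model(time_steps=64, freq_bins=64):
    model = tf.keras.Sequential([
        layers.Input(shape=(time_steps, freq_bins, 1)),   # spectrogram magnitude
        layers.Conv2D(16, 3, activation="relu", padding="same"),
        layers.MaxPooling2D(2),
        layers.Conv2D(32, 3, activation="relu", padding="same"),
        layers.MaxPooling2D(2),
        layers.Flatten(),
        layers.Dense(64, activation="relu"),
        layers.Dense(1, activation="sigmoid"),            # P(resource is free)
    ])
    model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
    return model
```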
EN
Urban land-cover change is increasing dramatically in most emerging countries, including Iraq and its capital city, Baghdad. Active socioeconomic progress and political stability have pushed the urban border into the countryside at the cost of natural ecosystems at ever-growing rates. The widely used Maximum Likelihood classifier was applied to classify Landsat images from 2003 and 2021, achieving overall accuracies of 83.20% and 99.58% for the 2003 and 2021 scenes, respectively. This study found that over this period the urban area decreased by 16.4% and the agricultural area decreased by 5.4%. On the other hand, barren land expanded by more than 7%, and water-covered land also increased (almost 15% more than in 2003), probably due to flooding. To reduce the undesirable effects of land-cover change on urban ecosystems in Baghdad, and in the municipality in particular, it is suggested that Baghdad develop an urban development policy. The emphasis of the policy must be on maintaining an acceptable balance among urban infrastructure development, ecological sustainability, and agricultural production.
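For reference, the Maximum Likelihood classifier mentioned above assigns each pixel to the class whose per-class multivariate Gaussian (fitted from training pixels) gives the highest log-likelihood. A minimal numpy sketch of that rule is shown below; band counts and training data are placeholders, and a real workflow would run inside a remote sensing package.

```python
# Gaussian maximum likelihood classification of multispectral pixels.
# X_train: (n_pixels, n_bands) training spectra, y_train: class labels.
import numpy as np

def fit_ml_classifier(X_train, y_train):
    """Fit per-class mean vectors and covariance matrices."""
    params = {}
    for c in np.unique(y_train):
        Xc = X_train[y_train == c]
        params[c] = (Xc.mean(axis=0), np.cov(Xc, rowvar=False))
    return params

def classify(X, params):
    """Assign each pixel to the class with the highest Gaussian log-likelihood."""
    classes = sorted(params)
    scores = []
    for c in classes:
        mean, cov = params[c]
        inv = np.linalg.inv(cov)
        _, logdet = np.linalg.slogdet(cov)
        diff = X - mean
        mahal = np.einsum("ij,jk,ik->i", diff, inv, diff)
        scores.append(-0.5 * (logdet + mahal))   # log-likelihood up to a constant
    return np.array(classes)[np.argmax(np.vstack(scores), axis=0)]
```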
7
Content available Analiza wydajności bibliotek uczenia maszynowego (Performance analysis of machine learning libraries)
PL
The article presents the results of a performance analysis of machine learning libraries. The study was based on the ML.NET and TensorFlow tools. The analysis compared the running time of the libraries during object detection on sets of images, using hardware with different parameters. TensorFlow turned out to be the library that consumes fewer hardware resources. The choice of hardware platform and the possibility of using GPU cores, which increase computational performance, also proved significant.
EN
The paper presents the results of a performance analysis of machine learning libraries. The research was based on the ML.NET and TensorFlow tools. The analysis compared the running time of the libraries during object detection on sets of images, using hardware with different parameters. TensorFlow turned out to be the library that consumed fewer hardware resources. The choice of hardware platform and the possibility of using graphics cores, which increase computational performance, also proved significant.
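The benchmark procedure is not spelled out in the abstract. A minimal way to time repeated inference of a model in Python is sketched below; the model and its loading are placeholders (the Keras example in the comment is only an assumption), and an analogous harness would be written in C# for the ML.NET side.

```python
# Rough harness for timing repeated inference of any model callable.
# "infer_fn" could wrap a TensorFlow/Keras model's predict(); the ML.NET side
# would need an equivalent C# harness and is not shown here.
import time
import numpy as np

def time_inference(infer_fn, images, runs=10):
    infer_fn(images)                                   # warm-up, excluded from timing
    start = time.perf_counter()
    for _ in range(runs):
        infer_fn(images)
    return (time.perf_counter() - start) / runs        # mean seconds per run

# Example usage with a hypothetical Keras model:
#   model = tf.keras.models.load_model("detector.keras")
#   mean_s = time_inference(model.predict, np.zeros((8, 320, 320, 3), np.float32))
```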
EN
Context: Predicting the priority of bug reports is an important activity in software maintenance. Bug priority refers to the order in which a bug or defect should be resolved. A huge number of bug reports are submitted every day. Manually filtering bug reports and assigning a priority to each report is a heavy process that requires time, resources, and expertise, and mistakes made when priority is assigned manually prevent developers from finishing their tasks, fixing bugs, and improving quality. Objective: Bugs are widespread, and the number of bug reports submitted by users and team members keeps growing while resources remain limited, which highlights the need for a model that detects the priority of bug reports and allows developers to find the highest-priority ones. This paper presents a model that predicts and assigns a priority level (high or low) to each bug report. Method: The model considers a set of factors (indicators), such as component name, summary, assignee, and reporter, that possibly affect the priority level of a bug report. These factors are extracted as features from a dataset built from bug reports of closed-source projects stored in the JIRA bug tracking system, which are then used to train and test the framework. This work also presents a tool that helps developers assign a priority level to a bug report automatically, based on the LSTM model's prediction. Results: Our experiments applied a 5-layer deep learning RNN-LSTM neural network and compared the results with Support Vector Machine (SVM) and K-nearest neighbors (KNN) for predicting the priority of bug reports. The performance of the proposed RNN-LSTM model was analyzed on a JIRA dataset with more than 2000 bug reports. The proposed model was found to be 90% accurate, compared with KNN (74%) and SVM (87%). On average, RNN-LSTM improves the F-measure by 3% compared to SVM and by 15.2% compared to KNN. Conclusion: We conclude that LSTM predicts and assigns bug priority more accurately and effectively than the other ML algorithms (KNN and SVM) and significantly improves the average F-measure compared to the other classifiers. The study showed that LSTM reported the best results on all performance measures (Accuracy = 0.908, AUC = 0.95, F-measure = 0.892).
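The exact 5-layer architecture is not given in the abstract. The sketch below shows one plausible Keras shape for a binary (high/low) priority classifier over tokenized bug report text; the vocabulary size, sequence length, and layer widths are assumptions rather than the paper's reported configuration.

```python
# Illustrative LSTM classifier for binary bug-report priority (high vs. low).
# Hyperparameters below are assumptions, not the paper's reported configuration.
import tensorflow as tf
from tensorflow.keras import layers

VOCAB_SIZE = 10000   # assumed vocabulary size
MAX_LEN = 200        # assumed padded sequence length (summary + other fields)

def build_priority_model():
    model = tf.keras.Sequential([
        layers.Input(shape=(MAX_LEN,), dtype="int32"),
        layers.Embedding(VOCAB_SIZE, 64),
        layers.LSTM(64, return_sequences=True),
        layers.LSTM(32),
        layers.Dense(16, activation="relu"),
        layers.Dense(1, activation="sigmoid"),   # P(priority = high)
    ])
    model.compile(optimizer="adam",
                  loss="binary_crossentropy",
                  metrics=["accuracy", tf.keras.metrics.AUC(name="auc")])
    return model
```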
EN
The breadth-first signal decoder (BSIDE) is well known for achieving optimal maximum likelihood (ML) performance with lower complexity. In this paper, we analyze a multiple-input multiple-output (MIMO) detection scheme that combines column-norm-ordered minimum mean square error (MMSE) detection with BSIDE detection. The investigation uses a breadth-first tree traversal technique, in which the computational complexity encountered at the lower layers of the tree is high. This cost can be reduced by carrying out detection in the lower half of the tree structure using MMSE and in the upper half using BSIDE, after reordering the columns of the channel matrix by their norms. The simulation results show that this approach achieves a 22% complexity reduction for 2x2 and 50% for 4x4 MIMO systems without any degradation in performance.
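For reference, the MMSE stage mentioned above computes the linear estimate s_hat = (H^H H + sigma^2 I)^-1 H^H y. A minimal numpy sketch of that step, preceded by the column-norm ordering of the channel matrix described in the abstract, is shown below; the BSIDE tree search applied to the remaining layers is omitted.

```python
# Column-norm ordering followed by linear MMSE equalization for a MIMO system.
# The BSIDE tree search applied to the upper layers is not shown here.
import numpy as np

def mmse_detect(H, y, noise_var):
    """Return s_hat = (H^H H + noise_var*I)^-1 H^H y, with the columns of H
    first sorted by increasing norm and the permutation undone afterwards."""
    order = np.argsort(np.linalg.norm(H, axis=0))   # column-norm based ordering
    Hp = H[:, order]
    n_tx = Hp.shape[1]
    G = np.linalg.inv(Hp.conj().T @ Hp + noise_var * np.eye(n_tx)) @ Hp.conj().T
    s_perm = G @ y
    s_hat = np.empty_like(s_perm)
    s_hat[order] = s_perm                            # undo the column permutation
    return s_hat
```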
10
Content available remote Can Machine Learning Learn a Decision Oracle for NP Problems? A Test on SAT
EN
This note describes our experiments aiming to empirically test the ability of machine learning models to act as decision oracles for NP problems. Focusing on satisfiability testing, we generated random 3-SAT instances and found that the correct-branch prediction accuracy reached levels in excess of 99%. The branching in a simple backtracking-based SAT solver was reduced in more than 90% of the tested cases, and the average number of branching steps dropped to between 1/5 and 1/3 of that without the machine learning model. The percentage of SAT instances where the algorithm enhanced with the machine-learned heuristic solved SAT in a single pass reached 80-90%, depending on the set of features used.
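The feature set and model are not described in this abstract. A minimal sketch of just the random 3-SAT instance generation step, with clauses represented as tuples of signed (DIMACS-style) literals, is given below; the clause-to-variable ratio in the example is only an assumption based on the well-known satisfiability threshold of roughly 4.27.

```python
# Generate a random 3-SAT instance: each clause picks 3 distinct variables
# and negates each with probability 1/2. Literals are signed integers.
import random

def random_3sat(num_vars, num_clauses, seed=None):
    rng = random.Random(seed)
    clauses = []
    for _ in range(num_clauses):
        variables = rng.sample(range(1, num_vars + 1), 3)
        clauses.append(tuple(v if rng.random() < 0.5 else -v for v in variables))
    return clauses

if __name__ == "__main__":
    # Near the classic satisfiability threshold ratio of ~4.27 clauses per variable.
    print(random_3sat(num_vars=20, num_clauses=85, seed=0)[:5])
```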