Search results
Searched in keywords: classification
Results found: 639
Page 1 of 32
PL
The article analyses the factors aggressive to concrete in enclosed wastewater-management structures, using the example of four different groups of facilities located in different regions of Poland. On the basis of chemical analyses of the sewage, its sediments and the condensate on walls and ceilings, and of the atmosphere above the sewage, as well as an assessment of the condition of the surface layers of the concrete and the reinforcement, conclusions were formulated concerning the aggressiveness of the environment in the chambers and guidelines for designing corrosion protection of such structures.
EN
The paper analyses the agents aggressive to concrete in covered reinforced-concrete chambers and channels for sewage disposal. Examples of four different facilities located in different parts of Poland are presented. On the basis of chemical tests of the sewage, its sludge and the condensate on the walls and covers, and of the air above the sewage, as well as an assessment of the concrete surface, conclusions are formulated concerning the aggressiveness of the environment in such structures, together with recommendations for their anticorrosive protection.
EN
This paper presents the general principle of operation of delayed blowback small arms, their classification by the applied blowback delay, and a discussion of the existing designs. An analysis was carried out to rate the specific design solutions. The results of this work will be used in further investigations into and testing of delayed blowback firearms.
PL
The article presents the general principle of operation of small arms using delayed blowback, classifies such designs according to the type of delay mechanism applied, and discusses the existing constructions. The analysis carried out served to evaluate the individual design solutions. The results will be used in further considerations and tests of delayed blowback firearms.
PL
The aim of this work is model quality assessment based on a Support Vector Machine (SVM), with a view to its usefulness in virtual commissioning, i.e. for digital twin applications. The presented research is closely related to Industry 4.0, whose main idea is the integration of intelligent machines, systems and IT. One of its goals is to enable flexible changes of the product range and of production systems. Virtual commissioning can be used to create a simulation model of a plant, for example for operator training. One branch of virtual commissioning is the digital twin, a virtual representation of an installation, device or machine. Using a digital twin, various processes can be reproduced in order to reduce process costs and speed up testing. The paper proposes an SVM-based coefficient for assessing model quality. The coefficient takes into account expert knowledge and the methods used for model quality assessment: the Normalized Root Mean Square Error (NRMSE) and the Maximum Error (ME). These methods are commonly used for model quality assessment, but so far they have not been used simultaneously; each of them captures a different aspect of the model. The proposed coefficient makes it possible to decide whether a given model can be used to create a digital twin. Such an approach allows models to be tested automatically or semi-automatically.
EN
This paper proposes a model quality assessment method based on a Support Vector Machine, which can be used to develop a digital twin. This work is strongly connected with Industry 4.0, whose main idea is to integrate machines, devices, systems, and IT. One of the goals of Industry 4.0 is to introduce flexible assortment changes. Virtual commissioning can be used to create a simulation model of a plant or to conduct training for maintenance engineers. One branch of virtual commissioning is the digital twin, a virtual representation of a plant or a device. Thanks to the digital twin, different scenarios can be analyzed to make the testing process less complicated and less time-consuming. The goal of this work is to propose a coefficient that takes into account expert knowledge and the methods used for model quality assessment, namely the Normalized Root Mean Square Error (NRMSE) and the Maximum Error (ME). The NRMSE and ME methods are commonly used for this purpose, but they have not been used simultaneously so far, and each of them captures a different aspect of a model. The coefficient allows deciding whether the model can be used for digital twin applications. Such an approach makes it possible to test models automatically or semi-automatically.
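For illustration only, the two error measures named above can be combined into a single acceptance score; the weighted sum, the weights and the threshold below are hypothetical placeholders standing in for the expert knowledge mentioned in the abstract, not the authors' actual coefficient.

```python
import numpy as np

def nrmse(y_true, y_pred):
    """Normalized Root Mean Square Error: RMSE scaled by the range of the reference signal."""
    rmse = np.sqrt(np.mean((y_true - y_pred) ** 2))
    return rmse / (y_true.max() - y_true.min())

def normalized_max_error(y_true, y_pred):
    """Maximum absolute error scaled by the range of the reference signal."""
    return np.max(np.abs(y_true - y_pred)) / (y_true.max() - y_true.min())

def model_quality(y_true, y_pred, w_nrmse=0.5, w_me=0.5, threshold=0.1):
    """Combine NRMSE and ME into one score and accept the model if the score
    does not exceed an expert-chosen threshold (weights and threshold are hypothetical)."""
    score = w_nrmse * nrmse(y_true, y_pred) + w_me * normalized_max_error(y_true, y_pred)
    return score, score <= threshold

# Example: compare a measured plant signal with the simulation model's output.
reference = np.sin(np.linspace(0, 10, 200))
simulated = reference + np.random.normal(0, 0.02, size=reference.shape)
score, acceptable = model_quality(reference, simulated)
print(f"quality score = {score:.4f}, usable for a digital twin: {acceptable}")
```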
EN
The leaf is a plant organ that contains chlorophyll and captures the energy from sunlight used for photosynthesis. A complete leaf is composed of three parts: the midrib, the stalk, and the leaf blade. One way to identify the type of plant is to look at the shape of the leaf edges. The shape, color, and texture of a plant's leaf margins are related to its leaf veins, and this vein morphology carries information useful for plant classification when shape, color, and texture alone are not distinctive. Humans, on the other hand, may fail to use this feature because they tend to recognize plants solely by leaf shape rather than by leaf margins and veins. This research uses the wavelet transform to denoise the images in the dataset and a Convolutional Neural Network to classify them. The accuracy obtained with the Wavelet Convolutional Neural Network method is 97.13%.
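A minimal sketch of such a pipeline, assuming PyWavelets for the denoising step and Keras for the classifier; the wavelet, the thresholding rule, the layer sizes and the number of classes are illustrative assumptions, not the authors' configuration.

```python
import numpy as np
import pywt
from tensorflow import keras

def wavelet_denoise(img, wavelet="db2", level=2, thresh=0.04):
    """Soft-threshold the detail coefficients of a 2-D wavelet decomposition."""
    coeffs = pywt.wavedec2(img, wavelet, level=level)
    denoised = [coeffs[0]]
    for details in coeffs[1:]:
        denoised.append(tuple(pywt.threshold(d, thresh * np.max(np.abs(d)), mode="soft")
                              for d in details))
    return pywt.waverec2(denoised, wavelet)

def build_cnn(input_shape=(128, 128, 1), num_classes=10):
    """A small illustrative CNN; the paper's actual architecture is not given here."""
    return keras.Sequential([
        keras.layers.Conv2D(32, 3, activation="relu", input_shape=input_shape),
        keras.layers.MaxPooling2D(),
        keras.layers.Conv2D(64, 3, activation="relu"),
        keras.layers.MaxPooling2D(),
        keras.layers.Flatten(),
        keras.layers.Dense(128, activation="relu"),
        keras.layers.Dense(num_classes, activation="softmax"),
    ])

# Denoise each leaf image before feeding it to the network (training loop omitted).
model = build_cnn()
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy", metrics=["accuracy"])
```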
EN
The aim of the paper is to present how some of the data mining tasks can be solved using the R programming language. The full R scripts are provided for preparing data sets, solving the tasks and analyzing the results.
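The paper's scripts are in R and are not reproduced in this listing; purely for orientation, the same prepare-solve-analyze loop for one classification task might look as follows in Python with scikit-learn. This is an analogue under stated assumptions, not the paper's code.

```python
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import classification_report

# Prepare the data set: load it and split into training and test parts.
X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)

# Solve the task: fit a simple classifier.
clf = DecisionTreeClassifier(random_state=0).fit(X_train, y_train)

# Analyze the results.
print(classification_report(y_test, clf.predict(X_test)))
```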
EN
Purpose: The objective of the study is to use selected data mining techniques to discover patterns of certain recurring mechanisms related to the occurrence of occupational accidents in production processes.
Design/methodology/approach: The latent class analysis (LCA) method was employed in the investigation. This statistical modeling technique makes it possible to discover mutually exclusive, homogeneous classes of objects in a multivariate data set on the basis of observable qualitative variables, defining class homogeneity in terms of probabilities. Under a bilateral agreement, Statistics Poland provided individual record-level real data for the research. The data were then preprocessed to enable identification of the LCA model. Pilot studies were conducted on occupational accidents registered in production plants in the Wielkopolskie voivodeship in 2008-2017.
Findings: Three severe accident patterns and two light accident patterns, represented by latent classes, were obtained. The classes were characterized descriptively and labeled using interpretable results presented as probabilities of the categories of observable variables symptomatic for a given latent class.
Research limitations/implications: The results of the pilot studies indicate the need to continue the research on a larger data set and to develop the analysis further, particularly as regards selecting indicators for characterizing the latent class model.
Practical implications: The identification of occupational accident patterns related to the production process can play a vital role in developing efficient safety countermeasures that help to improve the prevention and outcome mitigation of such accidents among workers.
Social implications: Creating a safe work environment improves the quality of life of workers and their families, thus affirming the enterprise's principles and values in the area of corporate social responsibility.
Originality/value: The investigation showed that latent class analysis is a promising tool for discovering patterns of occupational accidents. The proposed approach highlights the importance both of access to non-aggregated occupational accident data and of the way the values of the variables taken for the analysis are aggregated.
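As background for how a latent class model assigns records to probability-defined classes, here is a compact EM sketch for binary indicator variables. The dedicated LCA software, the real Statistics Poland records and the actual number of classes are not shown; the data and settings below are purely hypothetical.

```python
import numpy as np

def lca_em(X, n_classes=3, n_iter=200, seed=0):
    """Fit a latent class model to binary indicators X (n_samples x n_items) with EM.
    Returns class proportions pi and per-class item-response probabilities theta."""
    rng = np.random.default_rng(seed)
    n, m = X.shape
    pi = np.full(n_classes, 1.0 / n_classes)
    theta = rng.uniform(0.25, 0.75, size=(n_classes, m))
    for _ in range(n_iter):
        # E-step: posterior probability of each latent class for each accident record.
        log_lik = X @ np.log(theta).T + (1 - X) @ np.log(1 - theta).T + np.log(pi)
        log_lik -= log_lik.max(axis=1, keepdims=True)
        resp = np.exp(log_lik)
        resp /= resp.sum(axis=1, keepdims=True)
        # M-step: update class sizes and item-response probabilities.
        pi = resp.mean(axis=0)
        theta = np.clip((resp.T @ X) / resp.sum(axis=0)[:, None], 1e-6, 1 - 1e-6)
    return pi, theta

# Toy example with binary accident descriptors (hypothetical data, not the real records).
X = (np.random.default_rng(1).random((500, 6)) > 0.5).astype(float)
pi, theta = lca_em(X, n_classes=3)
print("class proportions:", np.round(pi, 3))
```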
PL
The article outlines the problem of anthropogenic gravity anomalies. Unlike other works, which focus mainly on anomalies of mining origin, it attempts a comprehensive treatment of the subject. The initial part of the article draws attention to the problems of determining the acceleration due to gravity and to the importance of this quantity for the construction of the geoid and for the accuracy of geodetic measurements. An anthropogenic gravity anomaly is defined and a systematic classification of its sources, developed by the author, is given. Simplified computational models are formulated for the characteristic types of sources. Using these models, the values of the gravity anomalies caused by representative examples of sources listed in the classification are estimated and their significance is discussed.
EN
This article presents an outline of the problem of anthropogenic gravity anomalies. In contrast to other papers, which concern mainly mining-related anomalies, a holistic approach to the subject is attempted here. In the introductory part of the article, attention is given to the problems of determining the acceleration due to gravity and to the importance of this quantity for the construction of the geoid and the accuracy of land-surveying measurements. An anthropogenic gravity anomaly is defined and a systematic classification of its sources, developed by the author, is given. Simplified computational models were formulated for the characteristic source types. Using those models, the gravity anomalies caused by representative examples of sources listed in the classification were estimated and their significance was discussed.
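For orientation, one standard simplified model for such estimates is the point-mass (buried sphere) approximation of a mass excess or deficit. The sketch below uses that textbook formula with illustrative source parameters; it is not necessarily one of the models formulated in the article.

```python
import numpy as np

G = 6.674e-11  # gravitational constant, m^3 kg^-1 s^-2

def point_mass_anomaly(delta_mass, depth, x):
    """Vertical gravity anomaly (m/s^2) of a buried point mass (sphere approximation)
    at horizontal distance x from the point directly above its centre."""
    return G * delta_mass * depth / (x**2 + depth**2) ** 1.5

# Illustrative example: a 1000 m^3 cavity (mass deficit ~2.7e6 kg for rock of 2700 kg/m^3)
# with its centre 20 m below the surface; all values are hypothetical.
delta_mass = -2700.0 * 1000.0
x = np.linspace(0.0, 100.0, 5)
anomaly_ugal = point_mass_anomaly(delta_mass, 20.0, x) / 1e-8  # 1 µGal = 1e-8 m/s^2
print(np.round(anomaly_ugal, 2))
```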
CNN application in face recognition
EN
The paper presents an application of the convolutional neural network (CNN) to face recognition. The CNN is nowadays regarded as the most efficient tool in image analysis. The technique was applied to the recognition of two databases of faces: our own database containing 68 classes of widely varying face compositions (greyscale images) and the MUCT database containing 244 classes of color face images represented as RGB. The paper compares different classifiers applied on top of the CNN, an autoencoder-based approach, and the traditional approach relying on classical feature generation methods combined with a support vector machine classifier. The numerical results of experiments performed on the face image databases are presented and discussed.
PL
The paper presents the application of a CNN to the recognition of face images. The faces used in the experiments come from two databases: our own database containing 68 classes represented as greyscale images, and the MUCT database containing 244 classes of color RGB images. Different image recognition methods were examined and compared. One of them applies a convolutional neural network (CNN) with two different final classifiers (softmax and SVM). Another deep approach uses an autoencoder for feature generation and an SVM as the classifier. The results are compared with the classical approach using the PCA transformation combined with an SVM classifier.
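As a rough illustration of the classical baseline mentioned above (PCA feature generation followed by an SVM classifier), a scikit-learn pipeline could look like this. It is a generic sketch, not the authors' experimental setup, and neither of the two face databases is included.

```python
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

def pca_svm_baseline(X, y, n_components=100):
    """X: flattened face images (n_samples x n_pixels), y: person identities.
    Standardize, project onto principal components, classify with an RBF SVM."""
    pipeline = make_pipeline(StandardScaler(), PCA(n_components=n_components), SVC(kernel="rbf"))
    return cross_val_score(pipeline, X, y, cv=5).mean()
```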
EN
In the article, an approach based on clustering is proposed, according to which the influence of an individual model is inversely proportional to the size of the aggregated group it belongs to. With this approach, the influence of an individual model solution that differs from the others is significantly increased. Groups are aggregated in direct proportion to the correlation of their decisions, and the aggregation of groups of models follows the hierarchical structure of the ensemble. The solutions of strongly correlated groups of models are replaced by a single cluster solution, which at the next level can be grouped with the other closest groups of models. Owing to this architecture, the level of influence of a single model solution is increased. The main advantage of the proposed approach is that the structure of the ensemble is determined by the correlation of model decisions. Clustering decisions by similarity enhances the role of diversity and makes it possible to level out the error of an individual decision at the local level while providing acceptable global indicators of cluster efficiency. A further advantage of the proposed approach is the possibility of building an ensemble based on the properties of the correlation parameters.
PL
The article proposes a clustering-based approach in which the influence of an individual model is inversely proportional to the size of the aggregated groups. With this approach, the influence of an individual model solution that differs from the others is significantly increased. Groups are aggregated in direct proportion to the correlation of decisions, and the aggregation of groups of models follows the hierarchical structure of the ensemble. The solutions of strongly correlated groups of models are replaced by a single cluster solution, which at the next level can be grouped with the other closest groups of models. Owing to this architecture, the level of influence of a single model solution increases. The main advantage of the proposed approach is that the structure of the ensemble is determined by the correlation of the model decisions. Clustering decisions by similarity increases the role of diversity and makes it possible to level out the error of an individual decision at the local level and to ensure acceptable global indicators of cluster efficiency. An advantage of the proposed approach is the possibility of building an ensemble based on the properties of the correlation parameters.
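A minimal sketch of the idea under stated assumptions: model predictions are grouped by hierarchical clustering of their correlation matrix, and each model's weight is made inversely proportional to the size of its cluster. The correlation threshold and the averaging scheme are illustrative choices, not the exact construction from the paper.

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import squareform

def cluster_weighted_ensemble(predictions, corr_threshold=0.8):
    """predictions: array (n_models, n_samples) of individual model outputs.
    Models whose outputs correlate strongly are merged into one cluster;
    each model's weight is inversely proportional to its cluster size."""
    corr = np.corrcoef(predictions)
    dist = 1.0 - corr                      # correlation distance between models
    np.fill_diagonal(dist, 0.0)
    links = linkage(squareform(dist, checks=False), method="average")
    labels = fcluster(links, t=1.0 - corr_threshold, criterion="distance")
    weights = np.array([1.0 / np.sum(labels == lab) for lab in labels])
    weights /= weights.sum()
    return weights @ predictions           # weighted ensemble output

# Toy example: three nearly identical models plus one diverse model.
preds = np.vstack([np.random.default_rng(i).normal(size=100) for i in range(2)])
preds = np.vstack([preds, preds[0] + 0.01, preds[0] - 0.01])
print(cluster_weighted_ensemble(preds).shape)
```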
EN
A ship moving over the surface of water generates disturbances that are perceived as noise, both in the air and under water. Due to its density, water is an excellent medium for transmitting acoustic waves over long distances. This article describes the impact of the settings of a ship’s machinery on the nature of the generated noise. Our analysis includes the frequency characteristics of the noise generated by the moving ship. Data were obtained using an underwater measurement system, and the measured objects were two ships moving on specific trajectories with certain machinery settings. The acquired data were analysed in the frequency domain to explore the possibilities of the acoustic classification of ships and diagnostics of source mechanisms.
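For a sense of the frequency-domain step, a hydrophone recording can be turned into an averaged spectrum and a spectrogram with SciPy. The sampling rate and the synthetic signal below are placeholders, not the measurement data described in the article.

```python
import numpy as np
from scipy.signal import welch, spectrogram

fs = 48_000                              # hypothetical hydrophone sampling rate, Hz
t = np.arange(0, 10, 1 / fs)
# Placeholder "ship noise": a few machinery tones buried in broadband noise.
signal = (0.5 * np.sin(2 * np.pi * 60 * t)
          + 0.2 * np.sin(2 * np.pi * 240 * t)
          + 0.1 * np.random.default_rng(0).normal(size=t.size))

# Averaged power spectral density (Welch) highlights stable machinery tones.
freqs, psd = welch(signal, fs=fs, nperseg=8192)
# The time-frequency picture shows how the spectrum evolves as the ship passes.
f_spec, t_spec, Sxx = spectrogram(signal, fs=fs, nperseg=4096)
print(freqs[np.argmax(psd)], "Hz is the strongest tone in this synthetic example")
```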
EN
The microalga Dunaliella has been the focus of attention over recent decades owing to its high biotechnological potential for the production of β-carotene, biofuels and even as a good expression system for the production of recombinant proteins. Different species of this genus have unique features, biological characteristics and biotechnological potential. Therefore, it is necessary to have a clear and reliable taxonomic method to identify different species of Dunaliella. Although several taxonomic systems are available for Dunaliella based on morphological, physiological and molecular features, none of these methods are reliable enough and some controversies exist over different classification systems. In the current study, molecular techniques and bioinformatics tools have been used to re-assess the phylogenetic position of Dunaliella species based on 18S ribosomal DNA (18S rDNA), ITS and rbcL regions. The overall findings based on these markers provide a new and more reliable tool for phylogenetic analysis of Dunaliella species/strains.
EN
In addition to rock waste, post-mining waste dumps also contain coal grains, which justifies treating the dumps as secondary mineral deposits. The article presents the results of laboratory tests aimed at determining the possibility of using suspending-bed technology to separate a combustible substance from post-mining waste of 4-0 mm grain size. The test results showed that good-quality coal concentrates can be obtained from coal waste of 4-1 mm grain size. The need for desludging and densifying the feed for the classifier with an autogenous suspending bed, in the case of beneficiation of coal waste over the wide 4-0 mm grain size range, justifies the use of a two-chamber device or two separate classifiers for narrower grain size classes. Concepts of systems for the recovery of fine coal grains, providing for the use of the classifier with an autogenous suspending bed for the density-based separation of feeds with high ash content, are presented. The concepts were developed for beneficiation of the material in the 4-0 mm grain class.
EN
Mammography-based breast cancer screening is very popular because of its lower cost and ready availability. Machine learning techniques are used for the automated classification of mammogram images as benign or malignant. In this paper, a novel image descriptor based on the Radon and wavelet transforms is proposed. The method is quite efficient, as it performs well without any clinical information. The performance of the method is evaluated using six different classifiers, namely Bayesian network (BN), linear discriminant analysis (LDA), logistic regression, support vector machine (SVM), multilayer perceptron (MLP) and random forest (RF), to choose the best performer. Within the present experimental framework, we found that, in terms of the area under the ROC curve (AUC), the proposed image descriptor outperforms, to some extent, previously reported experiments using histogram-based hand-crafted methods, namely the Histogram of Oriented Gradients (HOG) and the Histogram of Gradient Divergence (HGD), as well as a convolutional neural network (CNN). Our experimental results show the highest AUC value of 0.986 when using only the craniocaudal (CC) view, compared to using only the mediolateral oblique (MLO) view (0.738) or combining both views (0.838). These results thus demonstrate the effectiveness of the CC view over the MLO view for mammogram mass classification.
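A hedged sketch of a descriptor in this spirit: the Radon transform of a mammogram patch followed by a 1-D wavelet decomposition of each projection, with simple statistics of the coefficients used as features for an SVM. The projection angles, wavelet, statistics and classifier settings are illustrative assumptions; the authors' exact descriptor is not reproduced here.

```python
import numpy as np
import pywt
from skimage.transform import radon
from sklearn.svm import SVC

def radon_wavelet_features(image, angles=None, wavelet="db4", level=3):
    """Radon-transform the image, wavelet-decompose each projection, and keep
    the mean and standard deviation of the coefficients as a compact feature vector."""
    if angles is None:
        angles = np.arange(0, 180, 10)
    sinogram = radon(image, theta=angles, circle=False)   # columns are projections
    feats = []
    for col in sinogram.T:
        for c in pywt.wavedec(col, wavelet, level=level):
            feats.extend([np.mean(c), np.std(c)])
    return np.asarray(feats)

# features = np.array([radon_wavelet_features(img) for img in patches])
# clf = SVC(probability=True).fit(features, labels)   # then evaluate AUC on held-out views
```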
EN
The use of multi-agent systems (MAS), cloud computing (CC) and fuzzy inference systems (FIS) in e-commerce has increased in recent years. The purpose of these systems is to enable users of electronic markets to make transactions in the best conditions and to support them in their decisions. Their design and implementation are often characterized by the constant manipulation of information, much of which is imperfect. Using the multi-agent paradigm for building such systems implies the need to integrate mechanisms that handle fuzzy information, which makes it necessary to design multi-agent systems (MAS) with fuzzy characteristics. For the modeling and realization of this system, we chose to use the FMAS model. This paper presents the use of the fuzzy MAS model for the development of a management and decision-support application in a highly available virtual market. After presenting the system to be realized in the first section, we describe in the second section the application of the FMAS model to the design and realization of this system. We then specify the JADE implementation platform and how the fuzzy agents of our model (Expert, Choice and Query) can be implemented using this platform.
EN
In this work, results of an investigation of the microstructure evolution in Haynes® 230® alloy are presented. The morphological and chemical compositions of the chosen microstructure’s constituents, such as the primary and secondary carbides, were analyzed based on tests in the temperature range 700–800 °C for 1000–3000 h. The prediction of phase evolution within the microstructure was proposed based on the analysis of mutual replacement of carbide-forming elements at the carbide/matrix interface. Based on the results, some complementary markers were considered to describe Haynes® 230® microstructure evolution. Qualitative markers, i.e., defined morphological features, were related to the shape and distribution of microstructure constituents. The study also used quantitative markers related to the local chemical compositions of carbide particles, determined as the ratio of the concentrations of carbide-forming elements Crc/Wc, Crc/CrM and Wc/WM. Microstructure maps created on the basis of these complementary markers for the successive annealing stages reflected the course of its morphological evolution.
Gel adhesives – specific features
PL
Referring to the PN-EN 12004-1 standard, the author presents the classification of adhesives for ceramic tiles and stone cladding, explains the specific characteristics of gel adhesives, and draws attention to the need to follow good construction practice when fixing tiles.
EN
Referring to the PN-EN 12004-1 standard, the author presents the classification of adhesives for ceramic tiles and stone cladding. He explains the specifics of gel adhesives, and he also draws attention to the need to follow the best construction practices when laying tiles.
EN
As the timely delivery of good-quality software is a very important part of the software development process, this process must be organized very carefully. For this purpose, a new method of searching for association rules is proposed. It is based on classifying all tasks into three groups depending on their difficulty and then searching for association rules among them, which helps to estimate the time a specific developer needs to perform a specific task.
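To make the rule-search step concrete, here is a small brute-force sketch over hypothetical task records of the form (difficulty group, developer, observed duration bucket); the item names, thresholds and toy data are assumptions for illustration, not the paper's method or data.

```python
from itertools import combinations
from collections import Counter

def association_rules(transactions, min_support=0.2, min_confidence=0.7):
    """Brute-force search for pairwise rules A -> B with given support and confidence.
    Each transaction is a set of items such as {'difficulty=hard', 'dev=anna', 'time=long'}."""
    n = len(transactions)
    item_counts = Counter(item for t in transactions for item in t)
    pair_counts = Counter(frozenset(p) for t in transactions for p in combinations(sorted(t), 2))
    rules = []
    for pair, count in pair_counts.items():
        support = count / n
        if support < min_support:
            continue
        a, b = tuple(pair)
        for lhs, rhs in ((a, b), (b, a)):
            confidence = count / item_counts[lhs]
            if confidence >= min_confidence:
                rules.append((lhs, rhs, support, confidence))
    return rules

# Hypothetical history of finished tasks.
history = [
    {"difficulty=hard", "dev=anna", "time=long"},
    {"difficulty=hard", "dev=anna", "time=long"},
    {"difficulty=easy", "dev=bob", "time=short"},
    {"difficulty=easy", "dev=anna", "time=short"},
]
for rule in association_rules(history):
    print(rule)
```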
EN
The aim of this article was to determine the effect of principal component analysis on the results of the classification of spongy tissue images. Four hundred computed tomography images of the spine (the L1 vertebra) were used for the analyses. The images came from fifty healthy patients and fifty patients diagnosed with osteoporosis. The obtained tissue image samples, 50x50 pixels in size, were subjected to texture analysis. As a result, feature descriptors based on the grey-level histogram, gradient matrix, run-length (RL) matrix, event matrix, autoregressive model and wavelet transform were obtained. The features were ranked from the most important to the least important, and the first fifty features from the ranking were used in further experiments. The data were subjected to principal component analysis, which produced a set of six new features. Subsequently, both sets (50 and 6 features) were classified using five different methods: naive Bayes classifier, multilayer perceptrons, Hoeffding Tree, 1-Nearest Neighbour and Random Forest. The best results were obtained for the data on which principal component analysis had been performed and which were classified using 1-Nearest Neighbour. This procedure yielded high values of the TPR and PPV parameters, equal to 97.5%. In the case of the other classifiers, the use of principal component analysis worsened the results by an average of 2%.
PL
The aim of this article was to determine the effect of principal component analysis on the results of the classification of spongy tissue images. Four hundred computed tomography images of the spine (the L1 vertebra) were used for the analyses. The images came from fifty healthy patients and fifty patients diagnosed with osteoporosis. The obtained tissue image samples, 50x50 pixels in size, were subjected to texture analysis, yielding feature descriptors based on the grey-level histogram, the gradient matrix, the run-length (RL) matrix, the event matrix, an autoregressive model and the wavelet transform. The features obtained were ranked from the most to the least important, and the first fifty features from the ranking were used in further experiments. The data were subjected to principal component analysis, which produced a set of six new features. Subsequently, both sets (50 and 6 features) were classified using five different methods: the naive Bayes classifier, multilayer perceptrons, Hoeffding Tree, 1-Nearest Neighbour and Random Forest. The best results were obtained for the data on which principal component analysis had been performed and which were classified with 1-Nearest Neighbour. This procedure yielded high values of the TPR and PPV parameters, equal to 97.5%. For the other classifiers, the use of principal component analysis worsened the results by an average of 2%.
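A compact scikit-learn sketch of the winning configuration described above, i.e. texture features reduced to six principal components and classified with 1-Nearest Neighbour. The texture feature extraction itself is assumed to have been done already, and the added standardization step and cross-validation scheme are illustrative assumptions rather than the authors' protocol.

```python
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA
from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import cross_val_predict
from sklearn.metrics import recall_score, precision_score

def evaluate_pca_knn(X_features, y, n_components=6):
    """X_features: ranked texture features per sample; y: binary labels (1 = osteoporotic).
    Returns TPR (recall) and PPV (precision) of PCA + 1-NN under cross-validation."""
    model = make_pipeline(StandardScaler(),
                          PCA(n_components=n_components),
                          KNeighborsClassifier(n_neighbors=1))
    y_pred = cross_val_predict(model, X_features, y, cv=10)
    return recall_score(y, y_pred), precision_score(y, y_pred)
```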