Search results for the keyword "accuracy": 389 results found.

EN
This paper presents a modified algorithm for determining the positioning accuracy of a UAV (Unmanned Aerial Vehicle) based on a joint GPS/EGNOS+GPS/SDCM (Global Positioning System/European Geostationary Navigation Overlay Service + Global Positioning System/System for Differential Corrections and Monitoring) solution. Firstly, a weighted average model for determining the position of the UAV was developed. The algorithm takes into account the coordinates from the individual GPS/EGNOS and GPS/SDCM solutions, as well as correction coefficients that are a function of the inverse of the ionospheric VTEC (Vertical TEC) delay. Next, the accuracy was estimated in the form of position errors and RMS (Root Mean Square) errors. Finally, a Kalman filter was applied to reduce the position errors and RMS errors. The developed algorithm determines the positioning accuracy of the UAV in BLh (B - latitude, L - longitude, h - ellipsoidal height) ellipsoidal coordinates. The algorithm was tested on kinematic GPS/SBAS (Satellite Based Augmentation System) data recorded by a GNSS (Global Navigation Satellite System) receiver mounted on a DJI Matrice 300 RTK unmanned platform. As part of the research, two UAV flights were performed on 16 March 2022 in Olsztyn. In the first flight, the proposed algorithm improved UAV positioning accuracy by 4% to 57% after the Kalman filtering process; in the second flight, by 6% to 42%. The developed algorithm thus increased UAV positioning accuracy and was successfully tested in two independent flight experiments. Further research is planned to extend the algorithm with other correction coefficients.
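A minimal sketch of the weighted-mean step described above, in Python: two hypothetical BLh solutions are fused with weights proportional to the inverse of the ionospheric VTEC delay. All coordinate and VTEC values are invented for illustration; the paper's actual correction coefficients and Kalman filtering stage are not reproduced here.

import numpy as np

# Hypothetical BLh solutions (B, L in degrees, h in metres)
blh_egnos = np.array([53.778, 20.490, 153.42])
blh_sdcm  = np.array([53.778, 20.490, 153.95])

# Hypothetical ionospheric VTEC delays for each solution; the weights
# are taken proportional to the inverse of the VTEC delay.
vtec_egnos, vtec_sdcm = 2.1, 2.6
w_egnos, w_sdcm = 1.0 / vtec_egnos, 1.0 / vtec_sdcm

# Weighted mean of the two positions
blh_fused = (w_egnos * blh_egnos + w_sdcm * blh_sdcm) / (w_egnos + w_sdcm)
print(blh_fused)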
EN
SBAS systems are applied in the precise positioning of UAVs. The paper presents the results of studies on the improvement of UAV positioning with the use of combined EGNOS+SDCM solutions. In particular, the article focuses on applying a model that sums the SBAS positioning accuracy to improve the accuracy of the UAV coordinates determined during a test flight. The developed algorithm takes into account the position errors determined from the EGNOS and SDCM solutions, as well as the linear coefficients used in the linear combination model. The research was based on GPS observations and SBAS corrections from an AsteRx-m2 UAS receiver installed on a Tailsitter platform. The tests were conducted in September 2020 in northern Poland. The proposed algorithm, which sums the positioning accuracy of EGNOS and SDCM, improved the accuracy of the UAV position by 82-87% compared with using either EGNOS or SDCM alone. Another important result was the reduction of outlier position errors, which degraded the positioning accuracy of the UAV when a single SBAS solution (EGNOS or SDCM) was used. The study also demonstrates the effectiveness of the proposed algorithm in calculating the accuracy of EGNOS+SDCM positioning for the weighted average model. The developed algorithm may be used in research on other SBAS augmentation systems.
EN
The field of satellite navigation has seen significant advances due to the rapid development of multi-constellation Global Navigation Satellite Systems (GNSS). Around 150 satellites will be in service when all six systems (GPS, GLONASS, Galileo, BeiDou, QZSS, and NAVIC) are fully deployed by 2030, offering enormous potential and advantages for research and engineering applications. This study examined the accuracy of the BeiDou, QZSS, and combined QZSS/BeiDou solutions, particularly for short, medium, and long baselines (Wide-Lane ambiguity solution). It showed that, with the integration of BeiDou/QZSS static measurements in the study region, millimetre-to-centimetre accuracy can be attained for short, medium, and long baselines. Based on the results of this study, it can be concluded that the first (QZSS/BeiDou), second (BeiDou), and third (QZSS) strategies yield different horizontal accuracies in all categories. The results obtained with different satellite configurations for the fixed Wide-Lane integer ambiguity solution were compared with each other. Accuracy at the short baseline (BeiDou, QZSS, and BeiDou/QZSS satellites) was in the range of 0.5-0.7 cm. For the medium baseline, it was around 1.8-8.2 cm, and for the long baseline, 5.6-13.3 cm.
EN
Inexpensive scanners with multiple laser beams, such as Velodyne, Ouster, and Hesai, are often used to build low-cost kinematic scanning systems, including backpack and unmanned systems. The low cost results in lower quality of the acquired data, and the accuracy parameters provided by manufacturers often differ from the actual ones. For this reason, the problem of assessing the accuracy of data obtained with such scanners is continually investigated. The methods used for this purpose aim at assessing the positional accuracy of scanning points and rely mainly on reference points and surfaces. However, the accuracy of the location of these points is influenced by various factors, including instrumental errors, the nature of the measured object, and data from other sensors (e.g. trajectory data used in mobile scanning). In this article, we propose a method for assessing the quality of the observations themselves (distances and angles), whose errors result mainly from the first of these factors, i.e. the instrument. The proposed method is based on a comparison of real observations with theoretical ones created through simulation. To simulate the real observations, a virtual Velodyne scanner is used, placed in the same position and orientation as the real one. Theoretical observations for the virtual scanner are created based on the known mechanism of scanner operation and an accurate, very dense terrestrial laser scanning point cloud. Experiments performed with the Velodyne HDL-32E scanner showed that the accuracy of distance measurement is comparable to that stated by the manufacturer, but differs between laser diodes, while the accuracy of horizontal angle measurement is about 0.04°. Moreover, it was shown that the scanner's rotation frequency, which determines the value of the horizontal angle, differs from the nominal value and is not constant during a full rotation. The developed observation simulation method can be used in the future to calibrate similar scanners.
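The core of the method is the comparison of real and simulated observations. A minimal sketch of that comparison step, assuming hypothetical paired range arrays for a single laser diode; the ray-casting of the virtual scanner against the TLS reference cloud is not reproduced.

import numpy as np

rng = np.random.default_rng(0)

# Hypothetical paired observations: ranges (m) measured by one laser diode
# of the real scanner and the corresponding ranges of the virtual scanner
# ray-cast against the dense TLS reference cloud.
simulated = rng.uniform(2.0, 30.0, 500)
measured = simulated + rng.normal(0.0, 0.02, 500)   # assumed 2 cm noise

residuals = measured - simulated
print(f"range bias: {residuals.mean()*100:.2f} cm, "
      f"std: {residuals.std(ddof=1)*100:.2f} cm")

Repeating this per diode exposes the diode-to-diode differences the abstract reports; the same residual analysis applied to horizontal angles yields the angular accuracy estimate.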
EN
Nowadays, Artificial Intelligence (AI) based models are extensively used in medical science for the early detection of chronic diseases. AI models play a vital role in detecting cervical cancer in women at an early stage. Cervical cancer is an abnormal growth of cells in the cervix, the part of the uterus that connects to the vagina. Infection of the cervix is mostly caused by various strains of the Human papillomavirus (HPV), and a prolonged viral infection of the cervix can cause some cervical cells to become cancerous. Early signs of cervical cancer are difficult to detect. The proposed method explores cervical cancer detection and provides information on the necessary tests to be taken. The initial level of testing is performed by collecting information directly from users and processing it with a Decision Tree based classifier model, which indicates the mandatory tests to be taken. The secondary level of testing is then carried out using a Deep Convolutional Neural Network model on a colposcopy image of the cervix to identify the tumour region. The model predicts the causes of cervical cancer based on the collected user information. The performance of the algorithm is evaluated using test accuracy, recall, and precision. The highest cervical cancer prediction accuracy is achieved by the AI model combining the Decision Tree and the Deep Convolutional Neural Network.
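A minimal sketch of the first-level triage stage, assuming synthetic stand-in questionnaire data and scikit-learn; the paper's actual features, dataset and CNN second stage are not reproduced.

import numpy as np
from sklearn.tree import DecisionTreeClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score, recall_score, precision_score

rng = np.random.default_rng(1)
# Synthetic stand-in for the user questionnaire data (age, HPV status, etc.)
X = rng.random((300, 6))
y = (X[:, 0] + X[:, 1] > 1.0).astype(int)   # placeholder risk label

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=1)
clf = DecisionTreeClassifier(max_depth=4, random_state=1).fit(X_tr, y_tr)
y_hat = clf.predict(X_te)

# The metrics named in the abstract: test accuracy, recall, precision
print(accuracy_score(y_te, y_hat), recall_score(y_te, y_hat),
      precision_score(y_te, y_hat))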
EN
This paper proposes a methodology for the numerical testing of a discrete, approximated Fractional Order PID (FOPID) controller. The fractional parts of the controller are approximated using the Fractional Order Backward Difference (FOBD) operator. The goal of the analysis is to find the memory length that is optimal from the point of view of both accuracy and duration of computations. To do this, new cost functions describing both accuracy and numerical complexity were proposed and applied. The test results indicate that the optimum memory length lies between 200 and 400. The proposed approach can also be useful for examining other discrete implementations of a fractional order operator using the FOBD.
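For illustration, a short sketch of a truncated FOBD with a finite memory length L, using the standard Grünwald-Letnikov coefficient recurrence; the signal, step size and order below are arbitrary placeholders, not the paper's test cases.

import numpy as np

def fobd_coeffs(alpha: float, L: int) -> np.ndarray:
    """Grünwald-Letnikov / FOBD coefficients c_j = (-1)^j C(alpha, j),
    via the recurrence c_j = c_{j-1} * (1 - (alpha + 1) / j)."""
    c = np.empty(L + 1)
    c[0] = 1.0
    for j in range(1, L + 1):
        c[j] = c[j - 1] * (1.0 - (alpha + 1.0) / j)
    return c

def fobd(x: np.ndarray, alpha: float, h: float, L: int) -> float:
    """Truncated FOBD of signal x at its last sample, with memory length L
    (the quantity whose optimum the paper locates between 200 and 400)."""
    c = fobd_coeffs(alpha, min(L, len(x) - 1))
    window = x[::-1][:len(c)]          # x_k, x_{k-1}, ..., x_{k-L}
    return (c * window).sum() / h**alpha

x = np.sin(np.linspace(0, 3, 1000))
print(fobd(x, alpha=0.5, h=0.003, L=400))

Sweeping L and timing the call against the accuracy of the result is the essence of the trade-off the proposed cost functions quantify.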
EN
Introduction: In some clinical cases, the full therapeutic dose needs to be delivered in the area close to the skin surface, where a high dose gradient exists and there is no charged-particle equilibrium (CPE). The accuracy of dose distribution calculations performed in this region by the treatment planning system is limited. In this work, we investigated the usefulness of small pieces of Gafchromic EBT3 film for measuring the absolute dose value in the area close to the skin surface. Material and methods: Gafchromic EBT3 film detectors of size 1.0 cm x 1.5 cm were prepared. The film samples were calibrated in a 6 MV photon beam (Elekta Versa HD) over a dose range of 0-250 cGy. The films were scanned using an EPSON EXPRESSION 10000 XL flatbed scanner in 48-bit RGB mode at a resolution of 72 dpi. ImageJ software was used to calculate the dose, applying triple-channel film dosimetry. The uncertainty of the dose measurement method was estimated. The film measurements were compared with dose measurements made using an ionization chamber, and the conformity of the measurements was assessed using a metrological compatibility test. Results: The relative differences between dose measurements using the Gafchromic EBT3 film detectors and the ionization chamber for a single square photon beam were -0.8% and 0.3% for depths of 0.5 cm and 5.0 cm (CPE), respectively. The values of the metrological compatibility test factor ζ were 0.3 and 0.1, respectively. The maximum relative differences for dynamic beams were 0.9% and -1.0% for depths of 0.5 cm and 5.0 cm, respectively; the metrological compatibility test again showed good agreement (ζ=0.3). Conclusions: Small film detectors made of Gafchromic EBT3 film allow for accurate dose measurements in regions with a high dose gradient and without CPE. They can be used to validate the calculations of treatment planning systems, also for VMAT techniques.
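For orientation, a compatibility factor of this kind is commonly computed as the dose difference divided by the combined standard uncertainty of the two measurements; the exact formulation used in the paper may differ, so treat this sketch as an assumption.

from math import sqrt

def zeta(d1, u1, d2, u2):
    # Compatibility factor: dose difference over combined standard uncertainty.
    return abs(d1 - d2) / sqrt(u1**2 + u2**2)

# Hypothetical doses (cGy) and standard uncertainties for film vs chamber
print(round(zeta(200.6, 4.0, 201.2, 2.0), 2))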
EN
Over the past decade, studies on the evaluation of intraoral scanners (IOSs) have mainly considered two parameters, precision and trueness, to determine accuracy. The third parameter, resolution, has received little study but seems essential for applications in dentistry. Objective: The objective of this preliminary study is to create an original method, a Resolution-Trueness-Precision (RTP) protocol, to evaluate these three main parameters at the same time. Material and Method: A ceramic tip with specific, calibrated dimensions serves as the reference object; its mesh is recorded with a scanning microtomograph and compared with the mesh acquired by the IOS. It is the particular geometric shape of the object that makes it possible to assess resolution, trueness and precision simultaneously. Results: The results showed a mean resolution of 79.2 μm, a mean trueness of 17.5 μm and a mean precision of 12.3 μm. These values are close to previously published results for this camera. The RTP protocol is thus the first to include all three parameters at the same time. Simple, fast and precise, it can be useful for comparisons between IOSs in research laboratories or test organisations. Finally, this study could be a first step towards a reference kit allowing practitioners to monitor the quality of their IOS over time.
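One common way to obtain such figures, sketched below under assumed ISO 5725-style definitions (trueness as the mean deviation from the reference mesh, precision as the agreement between repeated scans); all numbers are synthetic and the study's exact computation may differ.

import numpy as np

rng = np.random.default_rng(2)

# Hypothetical signed distances (µm) of 10 repeated IOS meshes to the
# microtomograph reference mesh: a common offset, a per-scan bias,
# and fine per-vertex mesh noise.
scan_bias = rng.normal(0.0, 12.0, (10, 1))
deviations = 17.5 + scan_bias + rng.normal(0.0, 5.0, (10, 5000))

scan_means = deviations.mean(axis=1)
trueness = abs(scan_means.mean())          # closeness to the reference
precision = scan_means.std(ddof=1)         # agreement between repeated scans
print(f"trueness ~ {trueness:.1f} µm, precision ~ {precision:.1f} µm")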
EN
The technology of terrestrial laser scanning and its possibilities have been the subject of scientific research in geodesy, construction, architecture and other fields over the last decades. The method provides point cloud data that contain a full and accurate representation of the geometrical parameters of the examined object. This publication briefly discusses the principles of and possibilities for creating a three-dimensional data model using the advantages of terrestrial laser scanning. The building of the University of Architecture, Civil Engineering and Geodesy, situated in the Semkovo resort, Blagoevgrad district, was selected for the purpose of the task. Classical land surveying measurements with a total station and terrestrial laser scanning were used to create the three-dimensional models. A comparison and evaluation of the obtained models was made. The result of this evaluation indicates that terrestrial laser scanning efficiently delivers high-quality data, with a wide range of advantages such as long range, fast data processing, high precision and accurate detail.
EN
The aim of this article is to illuminate some latent systematic faults in the mathematical treatment of precise levelling data. The first is associated with the use of the average of the two measurements of the height differences between the terminal benchmarks of levelling lines. Another weak point in the classical treatment of levelling data is the incomplete minimisation of the impact of the spatial network configuration on the resulting mean standard errors of the nodal benchmarks from the adjustment. Generating sixty random paired samples of size 1000, derived from three continuous distributions, namely Normal (0, 1), Uniform (-1.732, 1.732) and Gamma (1, 1), it was found that the average of two identically distributed, ordered observations is closer to the theoretical expectation than either observation in only approximately 27-30% of all cases. Conversely, in the other 70-73% of cases, either the "first" or the "second" observation is in closer proximity to the expectation. Overlooking this fact leads to a statistically significant deterioration in the final accuracy of levelling networks. The current study also shows that the minimisation of the standard errors of the adjusted normal heights of the nodal benchmarks in the Bulgarian Levelling Network 1980 cannot be achieved with the weights w = const·L⁻¹, which are the most popular type of weights in the adjustment of geometric levelling networks. Finally, it is illustrated that, taking the above remarks into account and applying an appropriate adjustment algorithm, the mean of the standard errors of the adjusted heights of the nodal benchmarks in the analysed network can be kept below 1 mm. The standard error of the adjusted height of the most remote benchmark, "Pushkarov", which is 598 km away from the datum point located in Varna, is equal to 1.40 mm. The mean standard error of unit weight obtained from the adjustment is estimated at 0.164 mm/√km. In comparison, the mean standard error of unit weight yielded by the classical adjustment of the analysed network is 1.289 mm/√km, almost 9 times higher. Although tedious and time-consuming, precise geometric levelling should therefore not be discarded as a main geodetic method for scientific and engineering tasks in which height differences have to be determined with the highest accuracy.
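The paired-sample experiment is easy to reproduce in outline. The sketch below, using the three distributions named above (sample sizes and seed are arbitrary), estimates how often the average of two observations is the closest of the three candidates to the expectation; per the abstract, roughly 27-30% is expected.

import numpy as np

rng = np.random.default_rng(3)
n = 1000

samples = {
    "Normal(0,1)":           rng.normal(0.0, 1.0, (n, 2)),
    "Uniform(-1.732,1.732)": rng.uniform(-1.732, 1.732, (n, 2)),
    "Gamma(1,1)":            rng.gamma(1.0, 1.0, (n, 2)),
}

for name, sample in samples.items():
    mean = 1.0 if name == "Gamma(1,1)" else 0.0   # theoretical expectation
    avg = sample.mean(axis=1)
    # Distance of first obs, second obs, and average to the expectation
    d = np.abs(np.column_stack([sample, avg]) - mean)
    frac_avg = (d.argmin(axis=1) == 2).mean()
    print(f"{name}: average is closest in {frac_avg:.0%} of cases")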
EN
The natural way to reduce the duration of the measurement of a levelling network is to cut down on the number of levelling lines without damaging the quality of the final results. The main objective of this study is to demonstrate that this is possible without any loss of accuracy if some mathematical facts regarding the average of the two measurements of the line elevations are taken into account. Based on 60 paired random samples of size 1000, derived from different continuous distributions, namely N (0, 1), U (-1.732, 1.732) and Gamma (1, 1), each with theoretical standard deviation σ=1, it was found that the averages of each pair form a new distribution with standard deviation σ≈0.707. However, the samples formed by selecting, from the two measurements and their average, the value nearest to the known theoretical expectation have distributions whose standard deviations tend to σ≈0.53, σ≈0.46 and σ≈0.43 for the U (-1.732, 1.732), N (0, 1) and Gamma (1, 1) distributions, respectively. Therefore, if we choose the most appropriate value from the "first" measurement, the "second" measurement and their average, we increase the accuracy of the network almost √2 times compared to the accuracy yielded by using only the averages. If the network contains n lines, the process of finding the elevation values that lead to the best fit of the network is based on 3ⁿ single adjustments of the network. In addition, we can minimise the impact of the shape of the network on the final standard errors of the adjusted heights or geopotential numbers of the nodal benchmarks by applying iterative procedures such as Inverse Distance Weighting (IDW) or Inverse Absolute Height Weighting (IAHW). In order to verify the algorithm explained above, the Second Levelling of Finland network was adjusted in three variants. In the first variant, the whole network was adjusted as a free network, using the classical weights w = L⁻¹. In the second variant, the network was separated into two parts; applying 3¹² and 3¹⁴ independent adjustments, the best-fitting values of the line elevations were selected and the network was adjusted using them, after which IDW and IAHW with power parameter p=5 were applied. In the third variant, the network was separated into four parts; applying 3¹³, 3¹², 3¹⁶ and 3¹² independent adjustments, a new selection of the line elevations was made and the network was adjusted with them, followed by IDW (p=6.5) and IAHW (p=6). A comparison of the standard errors of the adjusted geopotential numbers in the separate variants revealed no statistically significant difference between the results of the second and third variants. However, these variants produced a 3-5-fold increase in accuracy compared to the classical first variant. The best results were obtained in the second variant with IAHW, where the mean value of the standard errors of the adjusted geopotential numbers is below 1.4 mgpu.
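A small simulation of the selection rule described above, for the Normal case: pick, per pair, whichever of the two measurements or their average lies nearest the known expectation, and compare the spread of the selected values with that of the plain averages. Sample size and seed are arbitrary.

import numpy as np

rng = np.random.default_rng(4)
x1 = rng.normal(0.0, 1.0, 100_000)
x2 = rng.normal(0.0, 1.0, 100_000)
candidates = np.column_stack([x1, x2, (x1 + x2) / 2])

# Pick, for each pair, the candidate nearest the known expectation (0 here),
# mimicking the selection that drives the 3^n single adjustments.
best = candidates[np.arange(len(x1)), np.abs(candidates).argmin(axis=1)]

# Expect roughly 0.707 for the averages vs roughly 0.46 for the selection,
# consistent with the values quoted above.
print(candidates[:, 2].std(), best.std())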
EN
The Crown of Polish Mountains is a list of mountain peaks that has long attracted significant interest, with all included summits considered worth conquering. The proposal to expand this list with additional peaks, termed the "New Crown of Polish Mountains" by historian Krzysztof Bzowski, served as the impetus for a study examining the accuracy of LiDAR (Light Detection and Ranging) point clouds in the areas of the newly proposed peaks. The primary data source analyzed in this study is the LiDAR point cloud with a density of 4 points per square meter obtained from the ISOK project. As a secondary LiDAR data source, a self-generated point cloud was used, created with the LiDAR sensor integrated in the iPhone 13 Pro and the free 3dScannerApp mobile application in terrestrial scanning. These datasets were compared against RTK GNSS measurements obtained with a Leica GS16 receiver and mobile measurements conducted using Android smartphones. In addition to analyzing the raw point clouds, the study also involved visualizing the analyzed areas through Digital Terrain Models created in two software programs: ArcGIS Pro and QGIS Desktop. The research confirmed the known accuracy of ALS point clouds and revealed that the LiDAR sensor integrated in the iPhone 13 Pro demonstrates surprisingly good accuracy. The potential for laser scanning with a smartphone, combined with the capability of conducting mobile GNSS measurements, could revolutionize geodetic surveying and simplify the acquisition of point cloud data.
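A minimal sketch of the kind of check described above: comparing heights interpolated from the ALS data with RTK GNSS reference heights at a few checkpoints. All values are invented for illustration; the study's actual measurements are not reproduced.

import numpy as np

# Hypothetical heights (m) at checkpoints: RTK GNSS reference vs values
# interpolated from the ISOK ALS point cloud / derived DTM.
h_gnss = np.array([912.41, 930.07, 1013.55, 845.22])
h_als  = np.array([912.55, 929.91, 1013.72, 845.10])

dh = h_als - h_gnss
rmse = np.sqrt((dh**2).mean())
print(f"mean dH = {dh.mean():+.3f} m, RMSE = {rmse:.3f} m")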
EN
To address the issue of insufficient accuracy in consumer recommendation systems, a new biased network inference algorithm is proposed, based on traditional network inference algorithms. The new algorithm significantly improves the resource allocation ability of the original one, thereby improving recommendation performance. Its performance is verified through comparative experiments with network-based inference algorithms, network inference algorithms with initial resource optimisation, and heterogeneous network inference algorithms. The results showed that the accuracy of the new network inference algorithm was 24.5%, superior to the traditional one. In system performance testing, the recommendation hit rate of the new network inference algorithm increased by 13.97%, outperforming the other three comparative algorithms. The experimental results indicate that a novel network inference algorithm with bias can improve the performance of consumer recommendation systems and provides new ideas for doing so.
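For context, a minimal sketch of the classical (unbiased) network-based inference step on a toy user-item matrix; the paper's bias term modifies this resource allocation and is not reproduced here.

import numpy as np

# Binary user-item matrix: a[u, j] = 1 if user u collected item j.
a = np.array([[1, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 1, 1]], dtype=float)

def nbi_scores(A: np.ndarray, u: int) -> np.ndarray:
    """Classical network-based inference: resource placed on the items of
    user u spreads items -> users -> items via node degrees."""
    k_user, k_item = A.sum(axis=1), A.sum(axis=0)
    f = A[u]                                  # initial resource on items
    to_users = (A @ (f / k_item)) / k_user    # items -> users
    return A.T @ to_users                     # users -> items

scores = nbi_scores(a, u=0)
scores[a[0] > 0] = -np.inf                    # ignore items already collected
print(int(scores.argmax()))                   # top recommendation for user 0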
EN
Modeling of induction heating for the design of physical devices is a process that requires coupled analyses, at least electromagnetic and thermal, and can be implemented using field and circuit models. These are extensive issues, covering knowledge in the fields of electrical engineering, electronics and thermodynamics. The essence of these analyses is to obtain high-accuracy results despite the need to apply a number of simplifications. This paper characterizes several important factors affecting the accuracy of numerical analyses of the induction heating process, taking into account the impact of the adopted simplifications in the analysis of thermal issues and the type of coupling, which are basic factors rarely taken into account in calculations of this class. The aim of the work is to systematize the current state of knowledge in the field of engineering computational procedures for induction heating problems.
EN
The key issue in ensuring the economic efficiency and continuous operation of conveyor transport is recognising the condition of the belt core. Faults in the steel cords of the core are not visible during routine visual inspections, but they can be identified using magnetic diagnostic systems such as DiagBelt+. The article presents an analysis of the impact of the sensitivity threshold of the DiagBelt+ system, the diameter of the cords in the core, and the belt speed on the quality of signals representing known damage: cut cords, missing cords, and a reduced cord cross-section. The study focuses on damage to cords across the belt, as such damage can weaken the belt's strength and lead to complete belt failure. The presented results and analyses contribute to the improvement of the methodology for the magnetic examination of the core's condition and of the DiagBelt+ diagnostic system. Consequently, this enhances the reliability and safety of belt conveyors in various industries, including the brown coal mine where the system has been implemented (PGE GiEK SA KWB O/Bełchatów), as well as hard coal, limestone, and copper ore mines, where it is used to assess the condition of belts with steel cords.
EN
When dealing with a group of patients seeking treatment for heart-related diseases, doctors who specialize in the diagnosis and treatment of heart-related disorders have a difficult but critical task. It comes as no surprise that cardiovascular disease is a leading source of morbidity and death in contemporary society. Physicians often require an expert system with clear categorization that can assist medical professionals in identifying a heart disease condition based on the clinical data of a patient. The aim of this work is to provide a method for the prediction and classification of cardiac disease based on machine learning and feature selection. The correlation-based feature selection (CFS) method is applied to the input data set in order to extract relevant features for analysis. The support vector machine with radial basis function (SVM RBF) and random forest algorithms are used for data classification. The Cleveland heart disease dataset, with 303 instances and 14 attributes, is used in the experimental work. The accuracy, specificity and sensitivity of the SVM RBF are higher than those of the random forest algorithm.
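A minimal sketch of the described pipeline in scikit-learn, with a simple correlation ranking standing in for CFS (which scikit-learn does not provide) and synthetic stand-in data in place of the Cleveland set.

import numpy as np
from sklearn.svm import SVC
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(5)
# Synthetic stand-in for the Cleveland data: 303 instances, 13 features + label.
X = rng.random((303, 13))
y = (X[:, :3].sum(axis=1) > 1.5).astype(int)

# Correlation-based ranking as a simple stand-in for CFS: keep the features
# most correlated with the class label.
corr = np.abs([np.corrcoef(X[:, j], y)[0, 1] for j in range(X.shape[1])])
keep = np.argsort(corr)[-6:]

for name, model in [("SVM RBF", make_pipeline(StandardScaler(), SVC(kernel="rbf"))),
                    ("random forest", RandomForestClassifier(random_state=5))]:
    acc = cross_val_score(model, X[:, keep], y, cv=5).mean()
    print(name, round(acc, 3))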
EN
Parkinson’s disease is associated with memory loss, anxiety, and depression. Problems such as poor balance and difficulty walking can be observed in addition to symptoms of impaired posture and rigidity. The field dedicated to making computers capable of learning autonomously, without having to be explicitly programmed, is known as machine learning. An approach to the diagnosis of Parkinson’s disease based on artificial intelligence is discussed in this article. The input to this system is provided through photographic examples of the handwriting of Parkinson’s disease patients. The received photos are first preprocessed using the Relief feature-selection method, which helps select characteristics relevant to the identification of Parkinson’s disease. After that, the linear discriminant analysis (LDA) algorithm is employed to reduce the dimensionality of the input data. The photos are then classified using the radial basis function support vector machine (SVM-RBF), k-nearest neighbors (KNN), and naive Bayes algorithms.
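A minimal sketch of the dimensionality-reduction and classification stages in scikit-learn, assuming synthetic stand-in features in place of the extracted handwriting features; the Relief preprocessing step is omitted.

import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.svm import SVC
from sklearn.neighbors import KNeighborsClassifier
from sklearn.naive_bayes import GaussianNB
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(6)
# Synthetic stand-in for handwriting features after Relief-based selection.
X = rng.random((200, 20))
y = (X[:, 0] > 0.5).astype(int)

for name, clf in [("SVM-RBF", SVC(kernel="rbf")),
                  ("KNN", KNeighborsClassifier()),
                  ("naive Bayes", GaussianNB())]:
    # Binary problem: LDA can project onto at most n_classes - 1 = 1 axis.
    pipe = make_pipeline(LinearDiscriminantAnalysis(n_components=1), clf)
    print(name, cross_val_score(pipe, X, y, cv=5).mean().round(3))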
Analysis of influence of tensile strength
EN
This paper investigates the influence of the geometric parameters of specimens on the reliability of the obtained tensile strength test results. Based on ISO 527, the shape of the specimens (type 2) and their dimensions were chosen, as well as the test method. The extreme dimensions of the specimens are juxtaposed to illustrate the differences in the tensile strength results.
EN
This paper addresses a new fractional-order, discrete model of a two-dimensional temperature field. The proposed model uses the Grünwald-Letnikov definition of the fractional operator. Such a model has not been proposed before. Elementary properties of the model (practical stability, accuracy and convergence) are analysed. Analytical conditions of stability and convergence are proposed, and they allow the orders of the model to be estimated. The theoretical considerations are validated using experimental data obtained with a thermal imaging camera. The results of the analysis, supported by experiments, indicate that the proposed model assures good accuracy and convergence for a low order and a relatively short memory length.
EN
Nowadays, the progress of technology includes, among other things, the development of modern techniques and high technologies used in land surveys. Unmanned aerial vehicles (UAVs), as a good alternative to conventional land survey techniques, currently play an increasing role. The advantages of using unmanned aerial vehicles in photogrammetric measurements include a relatively short mission time for large-area surveys. In addition, photogrammetric products have a wider range of applications compared with conventional geodetic surveys. Many scientific publications delve into the quality of photogrammetric products, but the accuracy of UAVs in the context of geodetic standards has not been investigated in full. In this paper, we attempt to fill this research gap. Our research analysed the positions of objects recorded in geodetic databases against their counterparts identified on an accurate orthophotomap produced in a photogrammetry campaign employing an unmanned aerial vehicle. The outcomes were referenced against the land survey accuracy standards set out by the relevant legislation. To ensure a smooth assessment of the results' accuracy, we designed a computing algorithm with a module for selecting comparable points and verifying the results. Thanks to open-source GIS software, the tool can be applied to surveys carried out in any area. Our analysis showed that a detailed orthophotomap delivered using UAVs can be a valuable source of data on objects recorded in geodetic databases, covering selected cadastral and topographic objects and land development components. A general verification of the accuracy and validity of a geodetic numerical map, and preliminary detection of areas for potential updates, can be particularly useful applications of photogrammetry.
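A minimal sketch of the point-comparison step such an algorithm might perform: computing linear position differences between matched points and checking them against an assumed tolerance. The 0.10 m value and all coordinates are illustrative assumptions, not the paper's figures.

import numpy as np

# Hypothetical planar coordinates (m) of matched points: geodetic database
# vs the UAV orthophotomap.
db    = np.array([[5.00, 3.20], [12.40, 7.70], [20.10, 15.00]])
ortho = np.array([[5.04, 3.25], [12.49, 7.68], [20.15, 15.05]])

d = np.linalg.norm(db - ortho, axis=1)      # linear position differences
tol = 0.10                                  # assumed accuracy standard (m)
print(d, (d <= tol).mean())                 # share of points within tolerance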