Roundabouts are widely used worldwide because they offer several advantages over traditional intersections. The capacity a roundabout can handle is an important factor in ensuring smooth traffic flow at a particular location, and various models have therefore been developed to describe traffic conditions and driver behaviour at different sites and in different countries. However, existing models cannot be applied directly in other countries without proper calibration to ensure an accurate capacity estimate. In this study, five roundabouts in Hungary were selected to develop a general capacity model and compare it with international models. First, entry and circulating flow data were obtained from video recordings of each roundabout entry. These data were used to develop a model for each entry and then for each roundabout separately. Finally, the data sets from all sixteen entries were used to develop a general capacity model (GM), which was compared with the Highway Capacity Manual (HCM) 2016, Brilon-Bondzio, and Brilon-Wu models. The maximum capacity of the GM was 1390 pcu/h, slightly higher than the 1380 pcu/h of the HCM 2016 model. The percentage differences between the GM and the HCM 2016, Brilon-Bondzio, and Brilon-Wu models were +0.71%, +12.4%, and +10.7%, respectively.
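The model comparison above reduces to a signed percentage difference between maximum capacities. A minimal sketch of that calculation (the function name and the choice of reference are our illustration, not code from the paper):

```python
def pct_difference(model_capacity: float, reference_capacity: float) -> float:
    """Signed percentage difference of a model's maximum capacity vs. a reference."""
    return (model_capacity - reference_capacity) / reference_capacity * 100.0

# Maximum capacities reported in the abstract (pcu/h)
gm_max = 1390    # general capacity model (GM)
hcm_max = 1380   # HCM 2016
diff = pct_difference(gm_max, hcm_max)   # about +0.72, close to the reported +0.71 %
```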
This paper presents a novel method for collecting the data needed for evaporation estimation, a key input to the final decision on the form of reclamation in the area of the Most Basin. The area has been intensively mined for many decades, resulting in significant landscape devastation, loss of natural habitats, and negative environmental impact. It is currently assumed that three large-scale reclamation projects will be implemented in the area by 2050, and it is necessary to decide which form of reclamation to choose: to build lakes according to the currently valid rehabilitation and reclamation plan, or to leave the quarry areas to succession, supporting spontaneous inflow of water up to a naturally sustainable level. Whether the first option, the second, or a combination of both is approved, the prediction of evaporation from the free water surface will always be of great importance. To address this goal, the available meteorological data must be combined with a suitable calculation method. In our work, we propose a measuring network of meteorological devices that captures the character of the weather in the area of interest as a long-term time series. Together with state-of-the-art calibration of the models used to calculate evaporation, the measurement network helps provide more accurate evaporation data for the area. Based on the analysis of the research results, it will be possible to choose the right option and thus contribute to the long-term sustainability of these reclamations.
In the scope of this study, the fish fauna of the Küçük Menderes River and its tributaries was updated by comparison with recent ichthyofaunal studies, and the length-weight relations (LWR) and condition factors (CF) were estimated for 7 freshwater fish species belonging to six families from the river basin: the endemic Oxynoemacheilus eliasi and Cobitis fahireae; the invasive Carassius gibelio and Atherina boyeri; the translocated Perca fluviatilis; and the native Squalius fellowesii and Cyprinus carpio. The fish samples were collected with various fishnets and DC electro-fishing devices from six stations in 2018 and 2019. The LWR of the fishes was studied based on 379 specimens. The estimated values of parameter b ranged from 2.884 (A. boyeri) to 3.176 (C. fahireae). The coefficient of determination (R2) varied between 0.792 and 0.980 across the sampling localities. Fulton's condition factor ranged from 0.391 (S. fellowesii) to 3.080 (S. fellowesii), and the relative condition factor from 0.346 (O. eliasi) to 2.746 (S. fellowesii). This research is anticipated to contribute valuable insights for the conservation of the species, while also furnishing essential data to inform future fisheries management studies in the region.
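The length-weight relation above has the standard form W = a·L^b, typically estimated by least squares on log-transformed data, and Fulton's condition factor is K = 100·W/L³. A minimal sketch of both (the data below are synthetic, isometric-growth values for illustration, not the study's measurements):

```python
import math

def fit_lwr(lengths_cm, weights_g):
    """Fit W = a * L**b by least squares on log-transformed data."""
    x = [math.log(l) for l in lengths_cm]
    y = [math.log(w) for w in weights_g]
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    b = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y)) / \
        sum((xi - mx) ** 2 for xi in x)
    a = math.exp(my - b * mx)
    return a, b

def fulton_k(weight_g, length_cm):
    """Fulton's condition factor K = 100 * W / L**3."""
    return 100.0 * weight_g / length_cm ** 3

# Synthetic isometric-growth data (b = 3), for illustration only
lengths = [5.0, 10.0, 15.0, 20.0]
weights = [0.01 * l ** 3 for l in lengths]
a, b = fit_lwr(lengths, weights)   # recovers a close to 0.01, b close to 3
```

A value of b near 3 indicates isometric growth; the reported range 2.884-3.176 sits around that benchmark.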
Soil degradation occurs as a result of the ingress and accumulation of excessive amounts of pollutants in the soil. The article presents the results of theoretical and experimental studies of the complex effect of soil contamination (concentrations of petroleum products, toxic salts, dense residue, sodium ions, sulfate ions, magnesium ions, calcium ions, chloride ions, bicarbonate ions) on the content of nutrients (alkaline hydrolyzed nitrogen, phosphorus, potassium, humus). A detailed analysis of scientific papers was carried out, on the basis of which the main scientific tasks solved in the article were formulated. It was established that soil-salt processes are insufficiently studied and have become an object of scientific research only in recent years. At the first stage of the research, sampling was carried out and the content of nutrients and pollutants in the soil was determined; element concentrations were measured by collecting soil samples and subjecting them to laboratory testing. At the second stage, a correlation-regression analysis of the obtained data was performed and multiple linear regressions were established. The interaction of substances in the soil was determined by analyzing the obtained multiple linear regressions. Two types of soils were studied: with the chloride and with the sulfate type of salinization. For soils with the chloride type of salinity, dependences were established for the content of humus, alkaline nitrogen and potassium, while no multiple linear regression could be established for phosphorus. For soils with the sulfate type of salinization, multiple linear regression dependences were determined for the concentrations of alkaline nitrogen, phosphorus and potassium; no regression dependence was found for the humus content, which indicates that the concentrations of the studied elements have almost no effect on the humus content in the soil. It was established that the complex influence of the studied elements is decisive.
Comparison of the obtained multiple linear regressions with the results of the laboratory studies showed a good correlation between the data series. The obtained regularities of pollutant and nutrient interactions in soils are expected to enable, in the future, the creation of a scientific basis for developing new methods of desalinating soils polluted by formation waters, as well as for planning effective reclamation actions.
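The correlation-regression step fits a multiple linear regression of a nutrient concentration on several pollutant concentrations. A self-contained sketch via the normal equations, in pure Python (the data are synthetic; the paper's measurements are not reproduced here):

```python
def solve(A, b):
    """Solve A x = b by Gaussian elimination with partial pivoting."""
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for col in range(n):
        pivot = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[pivot] = M[pivot], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][c] * x[c] for c in range(r + 1, n))) / M[r][r]
    return x

def multiple_linear_regression(X, y):
    """OLS coefficients (intercept first) via the normal equations X'X b = X'y."""
    Z = [[1.0] + list(row) for row in X]   # prepend the intercept column
    k = len(Z[0])
    XtX = [[sum(z[i] * z[j] for z in Z) for j in range(k)] for i in range(k)]
    Xty = [sum(z[i] * yi for z, yi in zip(Z, y)) for i in range(k)]
    return solve(XtX, Xty)

# Synthetic check: nutrient = 2 + 0.5 * pollutant1 - 1.5 * pollutant2
X = [[1, 2], [2, 1], [3, 4], [4, 3], [5, 5], [0, 1]]
y = [2 + 0.5 * x1 - 1.5 * x2 for x1, x2 in X]
beta = multiple_linear_regression(X, y)   # close to [2.0, 0.5, -1.5]
```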
This study investigated the mechanical performance of short aramid fiber reinforced polypropylene, polyethylene, polyamide 6, and polyamide 12. Extrusion, press molding, and CNC cutting were used to produce the composite samples. Tensile, three-point bending, drop-weight and hardness tests of the composites were carried out. As the fiber volume fraction increased, the mechanical properties of the composites improved, but the most efficient fiber fraction differed for each matrix. To analyze the performance of the fibers in the matrix, scanning electron microscope (SEM) images of the surfaces fractured in the tensile and drop-weight tests were examined. At higher fiber volume fractions, fiber deformation increased and, as a result, the mechanical performance of the composites was adversely affected. Analysis of variance (ANOVA) and the F-test were performed using signal-to-noise values to analyze in detail the effect of the experimental parameters on the output values. Finally, the predictions of a regression equation model were compared with the experimental readings and found to be in good agreement.
Customer churn prediction is used to retain the customers at highest risk of churn by proactively engaging with them. Many machine learning-based data mining approaches have previously been used to predict client churn. However, single-model classifiers increase the scattering of predictions and offer low performance, which degrades the reliability of the model. Hence, a bag-of-learners classification is used, in which high-performing learners are selected to identify wrongly and correctly classified instances, thereby increasing the robustness of the model. Furthermore, the loss of interpretability in the model during prediction leads to insufficient prediction accuracy. Hence, an associative classifier with the Apriori algorithm is introduced as a booster that integrates classification and association rule mining to build a strong classification model, in which the frequent itemsets are obtained using the Apriori algorithm. Accurate prediction is then provided by testing the wrongly classified instances from the bagging phase against the rules generated by the associative classifier. The proposed models were simulated in the Python platform, and the results achieved high accuracy, ROC score, precision, specificity, F-measure, and recall.
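The booster stage rests on the Apriori principle: every subset of a frequent itemset is itself frequent, so candidates can be grown level by level. A toy pure-Python sketch of the frequent-itemset stage only (the transactions and support threshold are illustrative; this does not reproduce the paper's pipeline or its rule-generation step):

```python
from itertools import combinations

def apriori_frequent(transactions, min_support):
    """Return {itemset: support} for all itemsets meeting min_support (a fraction)."""
    n = len(transactions)
    sets = [frozenset(t) for t in transactions]
    items = sorted({i for t in sets for i in t})
    current = [frozenset([i]) for i in items]   # level 1 candidates
    frequent = {}
    while current:
        counts = {c: sum(1 for t in sets if c <= t) for c in current}
        level = {c: cnt / n for c, cnt in counts.items() if cnt / n >= min_support}
        frequent.update(level)
        # Candidate generation: join frequent k-itemsets into (k+1)-itemsets
        keys = list(level)
        current = list({a | b for a, b in combinations(keys, 2)
                        if len(a | b) == len(a) + 1})
    return frequent

tx = [["milk", "bread"], ["milk", "bread", "eggs"],
      ["bread", "eggs"], ["milk", "eggs"]]
freq = apriori_frequent(tx, min_support=0.5)
# {milk, bread} appears in 2 of 4 transactions, so its support is 0.5
```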
The characteristic features of engineering products are identified. Industry-average performance indicators of mechanical engineering enterprises in Ukraine were compiled, the competitiveness of mechanical engineering enterprises was studied, and an integral indicator of the competitiveness of mechanical engineering enterprises in Ukraine was evaluated. It has been established that, despite certain profits received by enterprises, the industry's competitiveness is in a systemic, predictable crisis, and only individual enterprises that maintain their own line of economic behavior are successful, increase their competitiveness, and have prospects for further economic growth.
Land surface temperature (LST) estimation is a crucial topic for many applications related to climate, land cover, and hydrology. In this research, LST estimation and monitoring of the main part of Al-Anbar Governorate in Iraq is presented using Landsat imagery from five years (2005, 2010, 2015, 2016 and 2020). The images from 2005 and 2010 were captured by Landsat 5 (TM) and the others by Landsat 8 (OLI/TIRS). The Single Channel Algorithm was applied to retrieve the LST from the Landsat 5 and Landsat 8 images, and land use/land cover (LULC) maps were developed for the five years using the maximum likelihood classifier. Differences in the LST and normalized difference vegetation index (NDVI) values over this period were observed due to the changes in LULC. Finally, a regression analysis was conducted to model the relationship between LST and NDVI. The results showed that the highest LST of the study area was recorded in 2016 (min = 21.1°C, max = 53.2°C, mean = 40.8°C). This was attributed to the fact that many people had been displaced and left their agricultural fields, so that thousands of hectares of previously green land became desertified. This conclusion was supported by comparing the agricultural land areas registered throughout the studied years. The polynomial regression analysis of LST and NDVI yielded a better coefficient of determination (R2) than the linear regression analysis, with an average R2 of 0.423.
Mercury and its compounds are among the most dangerous and toxic substances in the environment. As part of the study, several exploratory analyses and statistical tests were conducted to demonstrate how low and stable the mercury content in municipal waste is. A statistical analysis of the mercury content in waste (waste codes 19 12 12 and 20 03 01) was carried out using advanced IT tools. Based on 32 results for each waste, the maximum mercury concentration was 0.062 mg/kg dry weight (EWC code 19 12 12) and 0.052 mg/kg dry weight (EWC code 20 03 01). The analysis, data inference, and modeling were performed according to the CRISP-DM methodology. The results obtained were compared with the maximum allowable mercury concentrations for agricultural soils (2 mg/kg dry weight) and the provisions of the Minamata Convention (1 mg/kg). The average, median, and maximum observed mercury concentrations in the waste are significantly lower than both of these reference levels. The stability of the mercury content in the waste was examined using descriptive statistics, statistical tests, and regression modeling. The tests and analyses showed insignificant variation in the mercury content of the wastes with codes 19 12 12 and 20 03 01, with no trend or seasonality, confirming that the data are stable and the values are low.
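The threshold comparison above is a matter of descriptive statistics. A minimal sketch (the concentration values below are synthetic illustrations consistent with the reported maxima, not the study's raw data):

```python
import statistics

# Illustrative mercury concentrations (mg/kg dry weight); NOT the study's raw
# data, which only reports maxima of 0.062 and 0.052 mg/kg for the two codes.
hg = [0.012, 0.030, 0.045, 0.062, 0.021, 0.018, 0.040, 0.052]

summary = {
    "mean": statistics.mean(hg),
    "median": statistics.median(hg),
    "max": max(hg),
}
limits = {"soil II-1": 2.0, "Minamata": 1.0}   # mg/kg dry weight
below_all_limits = all(summary["max"] < v for v in limits.values())   # True here
```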
The prediction of strength properties is a topic of interest in many engineering fields. The common tests used to evaluate rock strength include the uniaxial compressive strength (UCS) test, the Brazilian tensile strength (BTS) test and the flexural strength (FS) test. These tests can only be carried out in the laboratory and involve difficulties such as preparing the samples according to standards, the number of samples required, and the long duration of the test phases. This article aims to suggest equations for predicting the mechanical properties of aggregates as a function of the P-wave velocity (Vp) and Schmidt hammer hardness (SHH) value of intact or in-situ rocks using regression analyses. Within the scope of the study, 90 samples were collected in the south of Türkiye. The mechanical properties of the specimens, such as uniaxial compressive strength, Brazilian tensile strength and flexural strength, were determined in the laboratory and investigated in relation to P-wave velocity and Schmidt hardness. Using regression techniques, various models were developed and compared to find the optimum models using the coefficient of determination (R2) and p-value (sig) performance indices. Simple and multiple regression analyses revealed strong correlations between the mechanical properties and the P-wave velocity and Schmidt hammer hardness. In addition, the prediction equations were compared with previous studies. The results indicate that simple test methods, such as Vp or SHH measurements on rock used for aggregate, can be used to predict some mechanical properties. Thus, information about the mechanical properties of aggregates in the study area can be obtained in a faster and more practical way by using the predictive models.
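A simple-regression predictor of this kind, e.g. UCS from P-wave velocity with its R², can be sketched as follows (the Vp/UCS pairs below are synthetic and the fitted coefficients are not the paper's equations):

```python
def linear_fit(x, y):
    """Least-squares line y = a + b*x and the coefficient of determination R^2."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    b = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y)) / \
        sum((xi - mx) ** 2 for xi in x)
    a = my - b * mx
    ss_res = sum((yi - (a + b * xi)) ** 2 for xi, yi in zip(x, y))
    ss_tot = sum((yi - my) ** 2 for yi in y)
    return a, b, 1.0 - ss_res / ss_tot

# Synthetic Vp (km/s) vs. UCS (MPa) pairs, for illustration only
vp  = [3.1, 3.8, 4.2, 4.9, 5.5]
ucs = [45.0, 62.0, 70.0, 88.0, 101.0]
a, b, r2 = linear_fit(vp, ucs)
ucs_predicted = a + b * 4.5   # predicted UCS for a rock with Vp = 4.5 km/s
```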
This study aims at developing machine learning-based classification and regression models for slope stability analysis. 1140 different cases of non-homogeneous cohesive slopes were analysed using the Morgenstern-Price method in GeoSlope to provide the input for the classification and regression models. Slope failures present a serious challenge in many countries of the world, and understanding the various factors responsible for them is crucial to mitigating the problem. Therefore, nine different parameters that may be responsible for slope failure were considered in this study (cohesion, specific gravity, slope angle, thickness of layers, internal angle of friction, saturation condition, wind and rain, blasting conditions and cloudburst conditions), covering internal factors, external factors and factors representing the geometry of the slope. Four classification algorithms, namely Random Forest, logistic regression, Support Vector Machine (SVM), and K-Nearest Neighbor (KNN), were modelled and their performance was evaluated on several metrics. A similar comparison based on performance indices was made among three regression models: decision tree, random forest, and XGBoost regression.
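The classifier comparison rests on standard metrics derived from a confusion matrix. A minimal sketch for the binary stable/failed case (the labels and counts below are illustrative, not results from the study):

```python
def classification_metrics(y_true, y_pred, positive="failed"):
    """Accuracy, precision, recall and F1 for a binary label set."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == positive and p == positive)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t != positive and p == positive)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == positive and p != positive)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t != positive and p != positive)
    accuracy = (tp + tn) / len(y_true)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return accuracy, precision, recall, f1

y_true = ["failed", "stable", "failed", "stable", "failed", "stable"]
y_pred = ["failed", "stable", "stable", "stable", "failed", "failed"]
acc, prec, rec, f1 = classification_metrics(y_true, y_pred)
```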
Under today's heavy traffic, road infrastructure is expected to be a sustainable construction. Depending on the geotechnical characteristics of the soil, adequate compaction techniques are chosen: the type of compaction, the equipment, the compaction parameters and, if possible, computer-aided acquisition and processing of data. This paper presents research results on the vibratory roller compaction of road soils, from the point of view of mathematical modeling of the process and statistical modeling of the interdependence of the process parameters. The obtained regression model is innovative and fit for further application in the optimization (by AI and IoT) of the compaction process. The good correlation of all the results (self-pulsation values) confirms that the assumptions made for both modeling and experimentation were adequate. A further development of this research is to create special software that directly correlates the geographical position of the road and the soil characteristics with the optimum values of the compaction process parameters.
Artificial intelligence is becoming commonplace in various research and industrial fields. In tribology, various statistical and predictive methods allow the analysis of numerical data in the form of tribological characteristics and surface structure geometry, to mention just two examples. With machine learning algorithms and neural network models, continuous values can be predicted (regression) and individual groups can be classified. In this article, we review the application of machine learning and neural networks to the analysis of research results in a broad context. Additionally, a case study is presented for selected machine learning tools based on tribological tests of padding welds, from which the tribological characteristics (friction coefficient, linear wear) and wear indicators (maximum wear depth, wear area) were determined. The study results were used in an exploratory data analysis to establish the correlation trends between selected parameters, and they can also serve as the basis for regression analysis using machine learning algorithms and neural networks. The article presents a case study using these approaches in the tribological context and shows their ability to predict selected tribological characteristics accurately and effectively.
This paper provides comprehensive research on the linear relationship between the efficiency and safety of inland waterway transport (IWT). For this purpose, methods of statistical analysis are used. The efficiency of IWT is expressed by transportation outputs, which today are strongly affected by the COVID-19 pandemic. The paper aims to determine whether the demand for IWT has an impact on the probability of accident occurrence. The results show a linear relationship between shipping accidents and the outputs of freight and passenger IWT. The COVID-19 crisis had a greater impact on the efficiency of passenger navigation than on freight navigation; with reduced demand, the level of safety is higher, but at the cost of declining efficiency in the transport sector.
The article studies topics related to measuring people's sadness, asking which factor (social, economic, or climatic) matters most. Using machine learning, the paper analyzed statistical data relating the number of suicides to the following factors: level of Internet access, average income, temperature in a country and, in addition, population density. The method used was correlational statistical analysis based on the K-nearest neighbor (KNN) method and Pearson's correlation. The results were visualized in the form of graphs, then subjected to a final analysis and summarized in the form of final conclusions.
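Both methods named above are small enough to sketch directly: Pearson's r between two series, and a k-nearest-neighbour regressor that averages the targets of the k closest points. A pure-Python illustration on synthetic one-dimensional data (not the article's data set):

```python
import math

def pearson_r(x, y):
    """Pearson correlation coefficient between two equal-length series."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

def knn_predict(train_x, train_y, query, k=3):
    """Average target of the k training points closest to `query` (1-D)."""
    ranked = sorted(zip(train_x, train_y), key=lambda p: abs(p[0] - query))
    return sum(t for _, t in ranked[:k]) / k

xs = [1.0, 2.0, 3.0, 4.0, 5.0]
ys = [2.1, 3.9, 6.2, 8.1, 9.8]
r = pearson_r(xs, ys)                   # strongly positive, close to 1
pred = knn_predict(xs, ys, 3.5, k=2)    # mean of the targets at x = 3 and x = 4
```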
In this study, panel regression models for 21 European countries and data covering the period between 2008 and 2014 were used to demonstrate that the distribution of the working population across different occupational groups explains cross-country differences in the average effective retirement age. While the great majority of previous studies verified the investigated causal trade-off on the basis of single-country micro data, this study takes the perspective of cross-country diversity in the investigated relationship. The confirmed link holds even when controlling, inter alia, for health status, education, unemployment, the old-age dependency ratio, the interest rate, GDP per capita, or the share of salaries and wages in GDP. An important practical implication for policy-makers is that decisions limited only to an increase in the universal pensionable age cannot be effective, since the occupational composition of an economy is highly relevant.
Machine Learning (ML) is a disruptive concept that has given rise to, and generated interest in, different applications in many fields of study. The purpose of machine learning is to solve real-life problems by automatically learning and improving from experience, without being explicitly programmed for a specific problem but rather for a generic type of problem. This article approaches the different applications of ML in a series of econometric methods. Objective: the objective of this research is to identify the latest applications of ML and to carry out a comparative study of the performance of econometric and ML models, looking for empirical evidence that ML algorithms outperform traditional econometric models. Methodology: systematic mapping of the literature was followed, according to the guidelines established by [39] and [58], which facilitate the identification of studies published on this subject. Results: in most cases ML outperforms econometric models, while in other cases the best performance has been achieved by combining traditional methods and ML applications. Conclusion: after applying the inclusion and exclusion criteria, 52 closely related articles were reviewed. The conclusion drawn from this research is that this is a growing field and that there is no certainty that the performance of ML is always superior to that of econometric models.
Deep Neural Networks (DNNs) have shown great success in many fields, and various network architectures have been developed for different applications. Regardless of their complexity, however, DNNs do not provide model uncertainty. Bayesian Neural Networks (BNNs), on the other hand, are able to make probabilistic inferences. Among the various types of BNNs, Dropout as a Bayesian Approximation converts a Neural Network (NN) to a BNN by adding a dropout layer after each weight layer, providing a simple transformation from an NN to a BNN. For DNNs, however, adding a dropout layer after each weight layer leads to strong regularization because of the deep architecture. Previous studies [1, 2, 3] have shown that adding a dropout layer after each weight layer in a DNN is unnecessary, but how to place dropout layers in a ResNet for regression tasks is less explored. In this work, we perform an empirical study of how different dropout placements affect the performance of a Bayesian DNN. We use a regression model modified from ResNet as the DNN and place the dropout layers at different positions in the regression ResNet. Our experimental results show that it is not necessary to add a dropout layer after every weight layer in the regression ResNet for it to make Bayesian inference. Placing dropout layers between the stacked blocks, i.e. Dense+Identity+Identity blocks, gives the best Predictive Interval Coverage Probability (PICP), while placing a dropout layer after each stacked block gives the best Root Mean Square Error (RMSE).
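The two metrics used to rank the dropout placements can be computed directly from the stochastic forward-pass samples that MC dropout produces. A framework-free sketch (the interval choice of mean ± 2 std of the samples is one common convention, not necessarily the paper's; the sample values are synthetic):

```python
import math
import statistics

def picp(y_true, predictions_per_input, z=2.0):
    """Predictive Interval Coverage Probability from MC-dropout samples.

    predictions_per_input[i] holds the stochastic forward-pass outputs for
    input i; the interval is mean +/- z * std of those samples.
    """
    covered = 0
    for truth, samples in zip(y_true, predictions_per_input):
        mu = statistics.mean(samples)
        sd = statistics.pstdev(samples)
        if mu - z * sd <= truth <= mu + z * sd:
            covered += 1
    return covered / len(y_true)

def rmse(y_true, predictions_per_input):
    """Root Mean Square Error of the per-input predictive means."""
    means = [statistics.mean(s) for s in predictions_per_input]
    return math.sqrt(sum((t - m) ** 2 for t, m in zip(y_true, means)) / len(y_true))

y = [1.0, 2.0, 3.0]
mc = [[0.9, 1.1, 1.0], [1.8, 2.2, 2.0], [4.0, 4.2, 4.1]]  # third point badly off
coverage = picp(y, mc)   # 2 of 3 true values fall inside their intervals
```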
In this study, a thermal conductivity ratio model for metallic oxide-based nanofluids is proposed. The model was developed by considering the thermal conductivity as a function of particle concentration (volume percentage), temperature, particle size and the thermal conductivities of the base fluid and the nanoparticles. The experimental results for Al2O3, CuO, ZnO, and TiO2 particles dispersed in ethylene glycol, water and a combination of both were adopted from the literature. Artificial neural network (ANN) and power-law models were developed and compared with the experimental data using statistical methods. ANOVA was used to determine the relative importance of the contributing factors, revealing that the concentration of nanoparticles in the fluid is the single most important factor contributing to the conductivity ratio.
Global solar radiation drives all environmental processes on the Earth, and the majority of energy sources are derived from it. Solar radiation data are required for the design and study of solar application systems. Even more important is the quality of the solar radiation, defined as the maximum work it can provide; this quality is measured by the exergy content of the radiation. In the present work, a universal model has been built to predict solar exergy as a function of geographic location. Fitting models for the exergy estimate have been developed based on linear, quadratic, cubic, logarithmic, exponential and power regression. The Petela model is adopted from the literature for calculating the exergetic efficiency of solar radiation, and the global solar radiation according to the ASHRAE model is expressed as a function of the cosine of the zenith angle. The developed model is applied to regions of Tunisia to predict the solar exergy potential, and the studied regions are classified into high, medium and low solar exergy locations. The results show that the exergy content of solar radiation generally differs from its energy content by only about 7%.
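The Petela model referenced above gives the exergy-to-energy ratio of solar radiation from the ambient temperature T0 and the Sun's apparent surface temperature Ts as ψ = 1 − (4/3)(T0/Ts) + (1/3)(T0/Ts)⁴. A direct sketch (the default Ts = 5777 K and the 300 K ambient are illustrative choices, not values from the paper):

```python
def petela_efficiency(t_ambient_k: float, t_sun_k: float = 5777.0) -> float:
    """Petela exergy-to-energy ratio of solar radiation (temperatures in kelvin)."""
    r = t_ambient_k / t_sun_k
    return 1.0 - (4.0 / 3.0) * r + (1.0 / 3.0) * r ** 4

# For a 300 K ambient, roughly 93% of the incident solar energy is exergy,
# i.e. the energy-exergy difference is on the order of the 7% cited above.
psi = petela_efficiency(300.0)
```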