Results found: 56
Search results
Searched in keywords: generalization
EN
In this work, a study was carried out proposing generalization metrics for Deep Reinforcement Learning (DRL) algorithms. The experiments were conducted in the DeepMind Control (DMC) benchmark suite with parameterized environments. The performance of three DRL algorithms on ten selected tasks from the DMC suite was analysed with the existing generalization-gap formalism and with the proposed ratio and decibel metrics. The results were presented with the proposed methods: an average transfer metric and a plot of the environment normal distribution. These efforts made it possible to highlight major changes in the models' performance and to add more insight into decisions regarding model requirements.
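The abstract names the metrics but not their formulas, so the following is only a minimal Python sketch, assuming the gap is the difference, the ratio the quotient, and the decibel value 10·log10 of that ratio between training and transfer returns:

    # Hedged sketch (not the paper's code): plausible forms of the metrics named
    # in the abstract, for average returns before and after an environment change.
    import math

    def generalization_gap(train_return, test_return):
        # Absolute drop in average return when the environment parameters change.
        return train_return - test_return

    def generalization_ratio(train_return, test_return):
        # Fraction of the training performance retained on the shifted environment.
        return test_return / train_return

    def generalization_db(train_return, test_return):
        # The same ratio expressed on a decibel scale.
        return 10.0 * math.log10(generalization_ratio(train_return, test_return))

    # Example: an agent scoring 900 on the training task and 450 after transfer
    # loses 450 points, keeps 50% of its score, i.e. about -3 dB.
    print(generalization_gap(900, 450), generalization_ratio(900, 450), generalization_db(900, 450))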
EN
The issue of building thematic maps of erosion dissection, despite its wide demand in various fields of human activity (construction of hydraulic structures, transport and housing construction, agriculture), still has no clear rules and instructions, which causes specialists to perceive the obtained mapping results differently. The purpose of the study is to experimentally identify the change in the index of erosive dissection depending on the scale of the initial data, the cell size, the method of constructing the thematic map, etc. The methods used in this research are mathematical statistics, GIS mapping and modelling, spatial analysis, and change detection. For each of the selected methods of thematic mapping, we compiled cartograms that allow visual tracking of changes in the elements of the erosion network depending on the geometric characteristics of the scale and cell size. The dimensions and characteristics giving optimal results were substantiated. The main feature of erosional dissection mapping of any territory is detecting the negative relief, i.e. concave upward forms. The result is a visual perception accompanied by numerical values. Estimation of erosion dissection by these methods was used in the construction of a thematic map of a foothill territory with a relatively homogeneous relief pattern. It should be noted that the change in the morphometric index happens simultaneously with the change in orographic features. Therefore, for areas with different forms of relief, combining or using only one of the above methods allows identifying the optimal and most accurate one among them. The use of well-established methods will facilitate the study of foothill plains or mountainous areas and will allow expanding the use of thematic maps for applied purposes and forecasting.
EN
Reduplicative linguistic patterns have been used as evidence for explicit algebraic variables in models of cognition. Here, we show that a variable-free neural network can model these patterns in a way that predicts observed human behavior. Specifically, we successfully simulate the three experiments presented by Marcus et al. (1999), as well as Endress et al.’s (2007) partial replication of one of those experiments. We then explore the model’s ability to generalize reduplicative mappings to different kinds of novel inputs. Using Berent’s (2013) scopes of generalization as a metric, we claim that the model matches the scope of generalization that has been observed in humans. We argue that these results challenge past claims about the necessity of symbolic variables in models of cognition.
4
Development of Ensemble Tree Models for Generalized Blood Glucose Level Prediction
EN
Type-1 diabetes (T1D) patients must carefully monitor their insulin doses to avoid serious health complications. An effective regimen can be designed by predicting accurate blood glucose levels (BGLs). Several physiological and data-driven models for BGL prediction have been designed. However, less is known about combining different traditional machine learning (ML) algorithms for BGL prediction. Furthermore, most of the available models are patient-specific. This research aims to evaluate several traditional ML algorithms and their novel combinations for generalized BGL prediction. The data of forty T1D patients were generated using the Automated Insulin Dosage Advisor (AIDA) simulator. The twenty-four-hour time series contained samples at fifteen-minute intervals. The training data was obtained by joining eighty percent of each patient's time series, and the remaining twenty percent of each time series was joined to obtain the testing data. The models were trained using multiple patients' data so that they could make predictions for multiple patients. The traditional non-ensemble algorithms, linear regression (LR), support vector regression (SVR), k-nearest neighbors (KNN), multi-layer perceptron (MLP), decision tree (DCT), and extra tree (EXT), were evaluated for forecasting BGLs of multiple patients. A new ensemble, called the Tree-SVR model, was developed. The BGL predictions from the DCT and the EXT models were fed as features into the SVR model to obtain the final outcome. The ensemble approach used in this research was based on the stacking technique. The Tree-SVR model outperformed the non-ensemble models (LR, SVR, KNN, MLP, DCT, and EXT) and other novel Tree variants (Tree-LR, Tree-MLP, and Tree-KNN). This research highlights the utility of designing ensembles using traditional ML algorithms for generalized BGL prediction.
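As an illustration of the stacking idea described above (not the authors' code or data), a minimal sketch in Python with scikit-learn, using a synthetic regression set in place of the AIDA time series:

    # Minimal stacking sketch: predictions from a decision tree and an extra tree
    # become features for an SVR, mirroring the "Tree-SVR" construction in spirit.
    import numpy as np
    from sklearn.tree import DecisionTreeRegressor, ExtraTreeRegressor
    from sklearn.svm import SVR
    from sklearn.model_selection import train_test_split

    rng = np.random.default_rng(0)
    X = rng.normal(size=(1000, 8))          # placeholder for the BGL feature vectors
    y = X[:, 0] * 2.0 + np.sin(X[:, 1]) + rng.normal(scale=0.1, size=1000)

    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)

    dct = DecisionTreeRegressor(random_state=0).fit(X_tr, y_tr)
    ext = ExtraTreeRegressor(random_state=0).fit(X_tr, y_tr)

    # Level-1 features: the two tree predictions stacked column-wise.
    meta_tr = np.column_stack([dct.predict(X_tr), ext.predict(X_tr)])
    meta_te = np.column_stack([dct.predict(X_te), ext.predict(X_te)])

    tree_svr = SVR().fit(meta_tr, y_tr)     # final estimator of the stacked ensemble
    print("R^2 on held-out data:", tree_svr.score(meta_te, y_te))

A production stacking setup would build the level-1 features from out-of-fold predictions to avoid leakage; the sketch omits that step for brevity.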
EN
The authors of the review aim to understand and assess cartographic Heat Maps' (HM) designs, tools, and applications. The paper consists of two parts. The first describes HM in the context of neocartography and map design by tackling such issues as definition, input data, methods of density determination and generalization, colour schemes, legend construction, and base maps. The second part assesses a range of 17 tools used for creating HM. The tools are divided into non-GIS tools (visualization tools and programming libraries) and GIS applications (desktop and webGIS). GIS desktop software has been selected due to its popularity and wide application. The paper presents an expert assessment of this software with the use of a research questionnaire. The analysis made it possible to develop a division of tools based on their embedding in computer programs and applications and taking into account the types of visualization. It also made it possible to indicate tools that can be used both by professional GIS users (e.g. analysts, cartographers) and by the general public, including teachers using HM to visualize geodata in geography lessons. A limitation of the review is that the analysis was carried out from the expert's point of view. It would be desirable to include novices' perspectives in future studies due to the wide demand for visualization.
6
Inequality for polynomials with prescribed zeros
EN
For a polynomial p(z) of degree n with a zero at β of order at least k (≥ 1), it is known that [formula]. By considering the polynomial p(z) of degree n in the form [formula], a polynomial of degree n − k, with [formula], we have obtained [formula], a generalization of the known result.
7
Action Rules of Lowest Cost and Action Set Correlations
EN
A knowledge discovery system is prone to yielding plenty of patterns, presented in the form of rules. Sifting through them to identify useful and interesting patterns is a tedious and time-consuming process. An important measure of interestingness is whether or not the pattern can be used in the decision-making process of a business to increase profit. Hence, actionable patterns, such as action rules, are desirable. Action rules may suggest actions to be taken based on the discovered knowledge, in this way contributing to business strategies and scientific research. The large amount of knowledge in the form of rules presents the challenge of identifying its essence, the most important and highly usable part. We focus on decreasing the space of action rules through generalization. In this work, we present a new method for computing the lowest cost of action rules and their generalizations. We discover action rules of lowest cost by taking into account the correlations between individual atomic action sets.
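The abstract does not define the cost model, so the following is only a hedged Python illustration, assuming an action rule is a list of atomic action sets (attribute, from-value, to-value) with expert-supplied per-change costs, and that a generalization drops atomic actions:

    # Hedged illustration (not the paper's algorithm): the rule cost is assumed
    # here to be the sum of the costs of its atomic action sets.
    from typing import NamedTuple

    class AtomicAction(NamedTuple):
        attribute: str
        from_value: str
        to_value: str

    # Hypothetical per-change costs supplied by a domain expert.
    cost = {
        ("interest_rate", "high", "low"): 3.0,
        ("fee", "standard", "waived"): 1.0,
        ("contact", "none", "monthly"): 0.5,
    }

    def rule_cost(rule):
        # Total cost of executing every atomic action in the rule.
        return sum(cost[(a.attribute, a.from_value, a.to_value)] for a in rule)

    rule = [AtomicAction("interest_rate", "high", "low"),
            AtomicAction("contact", "none", "monthly")]
    generalized = rule[1:]   # a generalization drops an atomic action, lowering the cost

    print(rule_cost(rule), rule_cost(generalized))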
EN
The present paper aims to propose a new type of information-theoretic method to maximize mutual information between inputs and outputs. The importance of mutual information in neural networks is well known, but the actual implementation of mutual information maximization has been quite difficult to undertake. In addition, mutual information has not been used extensively in neural networks, meaning that its applicability is very limited. To overcome the shortcoming of mutual information maximization, we present it here in a very simplified manner by supposing that mutual information is already maximized before learning, or at least at the beginning of learning. The method was applied to three data sets (crab data set, wholesale data set, and human resources data set) and examined in terms of generalization performance and connection weights. The results showed that, by disentangling connection weights, maximizing mutual information made it possible to explicitly interpret the relations between inputs and outputs.
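For reference, the quantity being maximized can be estimated from counts when inputs and outputs are discretized; the sketch below is standard textbook mutual information in Python, not the paper's simplified maximization scheme:

    # Mutual information between two discrete label sequences, estimated from
    # empirical joint and marginal frequencies (in bits).
    import numpy as np

    def mutual_information(x_labels, y_labels):
        x_labels = np.asarray(x_labels)
        y_labels = np.asarray(y_labels)
        mi = 0.0
        for xv in np.unique(x_labels):
            for yv in np.unique(y_labels):
                p_xy = np.mean((x_labels == xv) & (y_labels == yv))
                p_x = np.mean(x_labels == xv)
                p_y = np.mean(y_labels == yv)
                if p_xy > 0:
                    mi += p_xy * np.log2(p_xy / (p_x * p_y))
        return mi

    # Perfectly dependent variables give maximal MI; independent ones give ~0 bits.
    print(mutual_information([0, 0, 1, 1], [0, 0, 1, 1]),
          mutual_information([0, 1, 0, 1], [0, 0, 1, 1]))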
EN
Urbanization has a far-reaching impact on the environment, the economy, and political and social processes. Therefore, understanding the spatial distribution and evolution of human settlements is a key element in planning strategies that ensure the sustainable development of urban and rural settlements. Accordingly, it is very important to map human settlements and to monitor the development of cities and villages. The problem of settlements has therefore found its reflection in the creation of global databases of urban areas. Global settlement data have extraordinary value. These data allow us to carry out quantitative and qualitative analyses as well as to compare the settlement network at a regional, national and global scale. However, the possibility of conducting both spatial and attribute analyses of these data would be even more valuable. The article describes how to prepare raster data so that they can be implemented in a vector database. It answers the questions of whether it is possible to combine these data with databases available in Poland and what benefits this brings. It presents the methods of data generalization and the optimization of time and disk space. As a result of the study, two vector databases with GUF data were developed. The first database has a resolution similar to the original (~12 m resolution) database; the second database contains less detailed (~20 m resolution) data, generalized using mathematical morphology. Both databases have been enriched with descriptive data obtained from the National Geodetic and Cartographic Resource.
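The abstract names mathematical morphology but not the exact operators, so the following is only a hedged Python sketch (SciPy assumed) of generalizing a binary settlement mask with morphological opening and closing before vectorization:

    # Hedged sketch of morphological generalization of a binary settlement mask:
    # opening removes isolated pixels, closing fills small gaps.
    import numpy as np
    from scipy import ndimage

    settlement = np.zeros((100, 100), dtype=bool)
    settlement[20:60, 30:70] = True            # a compact built-up block
    settlement[5, 5] = True                    # an isolated pixel to be generalized away

    structure = np.ones((3, 3), dtype=bool)    # neighbourhood roughly matching the coarser cell
    generalized = ndimage.binary_closing(
        ndimage.binary_opening(settlement, structure=structure), structure=structure)

    print(settlement.sum(), generalized.sum())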
EN
Regularizing the gradient norm of the output of a neural network is a powerful technique, rediscovered several times. This paper presents evidence that gradient regularization can consistently improve classification accuracy on vision tasks, using modern deep neural networks, especially when the amount of training data is small. We introduce our regularizers as members of a broader class of Jacobian-based regularizers. We demonstrate empirically on real and synthetic data that the learning process leads to gradients controlled beyond the training points, and results in solutions that generalize well.
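A minimal sketch of the simplest member of this family, assuming PyTorch: penalize the squared norm of the loss gradient with respect to the inputs alongside the usual classification loss (the paper's full class of Jacobian-based regularizers is broader):

    # Gradient-norm regularization sketch: add the squared input-gradient norm
    # of the classification loss as a penalty term.
    import torch
    import torch.nn.functional as F

    model = torch.nn.Sequential(torch.nn.Linear(10, 32), torch.nn.ReLU(), torch.nn.Linear(32, 3))
    x = torch.randn(16, 10, requires_grad=True)
    y = torch.randint(0, 3, (16,))

    logits = model(x)
    ce = F.cross_entropy(logits, y)

    # Gradient of the loss w.r.t. the inputs, kept in the graph so the penalty is trainable.
    grad_x, = torch.autograd.grad(ce, x, create_graph=True)
    penalty = grad_x.pow(2).sum(dim=1).mean()

    lam = 0.1                                   # regularization strength (hyperparameter)
    loss = ce + lam * penalty
    loss.backward()                             # gradients now include the regularizer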
11
Content available remote State Complexity of Multiple Catenations
EN
We improve some results on the state complexity of the multiple catenations described by Gao and Yu. In particular, we nearly halve the size of the alphabet needed for witnesses. We also give some refinements of the algebraic expression of the state complexity, which is especially complex for this operation. We obtain these results by using peculiar DFAs defined by Brzozowski.
PL
The MOBINAV mobile navigation is an example of a spatial information system dedicated to recreational users of inland waterways, developed within the research project “Mobile Navigation for Inland Waters”. The main objectives of the project include developing a new model of mobile cartographic presentation. During work on the system model, the focus was on the needs of the end user and on the technical capabilities of mobile devices, whose use entails limitations in visualizing data on relatively small screens. The model defined in this way assumed the development of independent data sets used in the individual component geocompositions, created by generalizing the basic data set. For line and polygon features, classical simplification algorithms were applied at the individual display scales of the output maps. When displaying point features, especially depth points and navigation marks, which are of key importance during navigation, too much information was visible on the device screen and, above all, in some places the symbols overlapped. A correction is therefore necessary, consisting in moving the signatures apart and matching them to the display scale. The article presents a proposal for an algorithm for detecting and removing graphic conflicts for point features, dedicated to the mobile inland navigation system under development. Example results are included for the individual display scales of the output map, using real data imported from available sources. The tests carried out suggest that applying the algorithm presented in the article considerably improves the correct interpretation of the map on a mobile device.
EN
The mobile navigation MOBINAV is an example of a spatial information system dedicated to recreational users of inland waters. MOBINAV is implemented within the research project “Mobile Navigation for Inland Waters”. The main objectives of the project include developing a novel model of mobile cartographic presentation. During development of the system model the authors focused on the users' needs and the technical capabilities of mobile devices. Visualisation of spatial data on mobile devices is limited due to their small displays. The defined model assumed independent sets of data used in particular geocompositions, which resulted from generalization of the basic dataset. For polyline and polygon features, classical simplification algorithms were used. When point features were displayed, too much information was visible and, above all, in some places the symbols overlapped. This was particularly evident for the depth points and navigation marks, which are very important during navigation. It is therefore necessary to correct the locations of the signatures and match them to the display scale. The paper presents a proposed algorithm for detecting and removing graphic conflicts for point features in a mobile navigation system for inland waters. Exemplary results for individual map scales, using real data imported from available sources, are also included. The conducted tests suggest that the use of the algorithm presented in the paper greatly improves the correct interpretation of maps on mobile devices.
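The paper's algorithm itself is not reproduced here; the following is only a hedged Python sketch of the general idea, detecting point symbols that overlap at a given display scale and pushing one of them aside (the symbol size in screen millimetres and the example coordinates are hypothetical):

    # Greedy conflict-resolution sketch for point symbols at a given display scale.
    import math

    def to_screen_mm(points_m, scale_denominator):
        # Ground coordinates in metres -> screen millimetres at 1:scale_denominator.
        return [(x * 1000.0 / scale_denominator, y * 1000.0 / scale_denominator) for x, y in points_m]

    def resolve_conflicts(points_mm, symbol_mm=3.0):
        # If two symbols are closer than one symbol width, push the second one
        # away along the line joining them until they no longer overlap.
        pts = list(points_mm)
        for i in range(len(pts)):
            for j in range(i + 1, len(pts)):
                dx, dy = pts[j][0] - pts[i][0], pts[j][1] - pts[i][1]
                d = math.hypot(dx, dy)
                if d < symbol_mm:
                    push = (symbol_mm - d) + 1e-6
                    ux, uy = (dx / d, dy / d) if d > 0 else (1.0, 0.0)
                    pts[j] = (pts[j][0] + ux * push, pts[j][1] + uy * push)
        return pts

    depth_points = [(100.0, 200.0), (100.5, 200.2), (150.0, 180.0)]   # metres, hypothetical
    print(resolve_conflicts(to_screen_mm(depth_points, scale_denominator=10000)))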
EN
The ship stowage plan is the management link between quay crane scheduling and yard crane scheduling. The quality of the stowage plan greatly affects productivity. Previous studies mainly focus on solving the stowage planning problem with online search algorithms, whose efficiency is significantly affected by case size. In this study, a Deep Q-Learning Network (DQN) is proposed to solve the ship stowage planning problem. With the DQN, the massive calculation and training is done in a pre-training stage, while in the application stage a stowage plan can be produced in seconds. To formulate the network input, decision factors are analyzed to compose the feature vector of a stowage plan. States subject to constraints, the available actions, and the reward function for the Q-value are designed. With this information and design, an 8-layer DQN with a mean-square-error evaluation function is formulated to learn stowage planning. At the end of this study, several production cases are solved with the proposed DQN to validate its effectiveness and generalization ability. The results show that the DQN is well suited to solving the ship stowage planning problem.
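A hedged sketch of the learning step named above, assuming PyTorch; the layer sizes, feature dimension, and transition data are placeholders rather than the stowage-specific design from the paper:

    # One gradient step of a DQN-style update: a fully connected Q-network trained
    # with a mean-square-error loss toward the one-step Bellman target.
    import torch
    import torch.nn.functional as F

    n_features, n_actions, gamma = 32, 10, 0.99
    dims = [n_features, 128, 128, 128, 128, 128, 128, 128, n_actions]   # eight linear layers
    layers = []
    for i in range(len(dims) - 1):
        layers.append(torch.nn.Linear(dims[i], dims[i + 1]))
        if i < len(dims) - 2:
            layers.append(torch.nn.ReLU())
    q_net = torch.nn.Sequential(*layers)
    optimizer = torch.optim.Adam(q_net.parameters(), lr=1e-3)

    # A (state, action, reward, next_state) batch, e.g. drawn from a replay buffer.
    s = torch.randn(64, n_features)
    a = torch.randint(0, n_actions, (64,))
    r = torch.randn(64)
    s_next = torch.randn(64, n_features)

    with torch.no_grad():
        target = r + gamma * q_net(s_next).max(dim=1).values
    q_sa = q_net(s).gather(1, a.unsqueeze(1)).squeeze(1)
    loss = F.mse_loss(q_sa, target)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()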
14
EN
For a polynomial [formula] of degree n having all its zeros in |z| ≤ K, K ≥ 1, it is known that max [formula]. By assuming a possible zero of order m, 0 ≤ m ≤ n − 4, at z = 0, of p(z), for n ≥ k + m + 1 with integer k ≥ 3, we have obtained a new refinement of the known result.
EN
The paper undertakes the subject of spatial data pre-processing for marine mobile information systems. A short review of maritime information systems is given, with the focus on mobile systems. The need for spatial data generalization is underlined, and the concept of a technology for such generalization in a mobile system is presented. The research part of the paper presents the results of analyses of selected simplification parameters in the process of creating a mobile navigation system for inland waters. In the study, the authors focused on selected layers of the system. Simplification models for layers with line features and with polygons were tested, and the parameters of the tested models were modified for the purposes of the study. The article contains tabular results with statistics and spatial visualizations of selected layers for individual scales.
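The simplification models tested are not named in the abstract, so the sketch below only illustrates the effect of the tolerance parameter with a common line-simplification algorithm (Douglas-Peucker, as exposed by Shapely's simplify), on a hypothetical polyline:

    # Illustrative only: vary the simplification tolerance and count retained vertices.
    from shapely.geometry import LineString

    shoreline = LineString([(0, 0), (1, 0.1), (2, -0.1), (3, 0.05), (4, 0), (5, 0.2), (6, 0)])

    for tolerance in (0.05, 0.1, 0.3):       # tolerance in map units per display scale
        simplified = shoreline.simplify(tolerance, preserve_topology=False)
        print(tolerance, len(simplified.coords), "of", len(shoreline.coords), "vertices kept")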
PL
The aim of the presented research was to evaluate the influence of DTMs of different spatial resolutions on the extraction of drainage lines. It is assumed that the smaller the grid cell, the more faithfully the terrain model reflects reality, and thus the obtained data (in this case the flow lines) will be more accurate. However, with an enormous amount of data, which significantly reduces computational performance, the question should be asked whether equally good results can be obtained by changing the cell size and thus reducing the resolution of the DTM. The research assessing the usefulness of the DTM used materials collected at CODGiK, acquired within the ISOK project. The source data were text files in ASCII (XYZ) format containing the heights of points in a regular GRID with a 1 m cell, which were then generalized to GRIDs with cells of 2 m, 3 m, 4 m and 5 m. In the next step, drainage lines were generated for the DTMs of different resolutions and analysed. To determine the correctness of the results, the course of the flow lines was compared with the course of watercourses obtained from direct measurements. Three test fields with different characteristics were selected for the experimental work. The obtained results show that the greatest agreement between the delineated lines and the actual course of the watercourse was obtained for the area with a clearly marked river valley. In turn, the most difficult areas for hydrological analysis (and thus for delineating flow lines) are those with flat, wide valleys.
EN
The aim of the presented research was to evaluate the effect of DTMs of different resolutions on the extraction of drainage lines. It is assumed that the smaller the mesh opening is, the more accurately the terrain model reflects reality, and thus the obtained data (here: flow lines) will be more accurate. However, a huge amount of data significantly reduces computational performance, so it should be considered whether equally good results can be obtained by changing the size of the mesh and thereby reducing the resolution of the DTM. The research assessing the suitability of the DTM used materials collected at CODGiK (Geodetic and Cartographic Documentation Centre), obtained under the ISOK project (IT System of the Country's Protection Against Extreme Hazards). The source data were text files in ASCII format containing the heights of points on a regular grid with a resolution of 1 meter. Next, they were generalized to GRIDs with mesh apertures of 2, 3, 4 and 5 meters. In the next step, drainage lines were generated for the DTMs of different resolutions and analysed. In order to define the accuracy of the results, the course of the flow lines was compared with the drainage lines obtained as a result of direct measurement. Three test fields with different characteristics were used for the experimental work. The results show that the greatest similarity between the delineated flow lines and the real course of the drainage line was obtained for the area with a prominent river valley. However, areas with broad, flat valleys are the most difficult for hydrological analysis (and thereby for delineating flow lines).
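The exact resampling used for the ISOK data is not specified here; the following Python sketch only illustrates one plausible generalization step, block-averaging a 1 m GRID to a 5 m GRID before drainage lines are extracted:

    # Hedged sketch: aggregate factor x factor blocks of 1 m cells into one coarser cell.
    import numpy as np

    dem_1m = np.random.default_rng(0).normal(loc=200.0, scale=5.0, size=(1000, 1000))

    def block_average(dem, factor):
        rows, cols = (dem.shape[0] // factor) * factor, (dem.shape[1] // factor) * factor
        trimmed = dem[:rows, :cols]
        return trimmed.reshape(rows // factor, factor, cols // factor, factor).mean(axis=(1, 3))

    dem_5m = block_average(dem_1m, 5)
    print(dem_1m.shape, "->", dem_5m.shape)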
PL
This is an attempt to reflect on the basic cartographic materials on the basis of which detailed thematic studies concerning spatial development, landscape, infrastructure, land cover, etc. are prepared. Universal access to such materials via the Geoportal after the introduction of INSPIRE, together with the ever wider digitization of maps, raises certain expectations and requirements as to their currency. After a detailed analysis of the available cartographic data, including topographic maps of the Silesian Voivodeship, needed for various thematic studies, one can conclude that all the available maps can be regarded as "historical". The state of their content justifies such a reflection.
EN
It is an attempt to reflect upon the primary cartographic materials on the basis of which specific thematic studies associated with spatial development, landscape, infrastructure, land cover, etc. are developed. Universal access to such materials via the Geoportal after the introduction of INSPIRE and the ever increasing digitalization of maps raises certain expectations and requirements for the relevance of data. After a detailed analysis of the available cartographic data (topographic maps) for the Silesian Province area that is needed for various thematic studies, it can be concluded that all available maps can be considered as "historical". The status of their content justifies such reflection.
EN
The gradient descent method is one of the popular methods to train feedforward neural networks. Batch and incremental modes are the two most common ways to practically implement gradient-based training for such networks. Furthermore, since generalization is an important property and quality criterion of a trained network, pruning algorithms with the addition of regularization terms have been widely used as an efficient way to achieve good generalization. In this paper, we review the convergence property and other performance aspects of recently researched training approaches based on different penalization terms. In addition, we show the smoothing approximation tricks used when the penalty term is non-differentiable at the origin.
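As a small illustration of that smoothing trick (the reviewed papers may use other smoothing functions), the L1 penalty |w| is not differentiable at the origin, and sqrt(w² + ε) is a common differentiable stand-in:

    # The smoothed penalty is differentiable everywhere and approaches |w| as eps -> 0.
    import numpy as np

    def l1_penalty(w):
        return np.sum(np.abs(w))

    def smoothed_l1_penalty(w, eps=1e-4):
        return np.sum(np.sqrt(w ** 2 + eps))

    w = np.array([0.0, -0.3, 0.7])
    print(l1_penalty(w), smoothed_l1_penalty(w))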
EN
Parallel X-rays are functions that measure the intersection of a given set with lines parallel to a fixed direction in R². The reconstruction problem concerning parallel X-rays is to reconstruct the set if its parallel X-rays in some directions are given. There are several algorithms that give an approximate solution of this problem. In general, we need some additional knowledge about the object to obtain a unique solution. Assuming convexity, a suitable finite number of directions is enough for all convex planar bodies to be uniquely determined by their X-rays in these directions [13]. Gardner and Kiderlen [12] presented an algorithm for reconstructing convex planar bodies from noisy X-ray measurements belonging to four directions. For a reconstruction algorithm assuming convexity we can also refer to [17]. An algorithm for the reconstruction of hv-convex planar sets from their coordinate X-rays (two directions) can be found in [18]: given the coordinate X-rays of a compact connected hv-convex planar set K, the algorithm gives a sequence of polyominoes Ln all of whose accumulation points (with respect to the Hausdorff metric) have the given coordinate X-rays almost everywhere. If the set is uniquely determined by the coordinate X-rays, then Ln tends to the solution of the problem. This algorithm is based on generalized conic functions measuring the average taxicab distance by integration [21]. We would now like to give an extension of this algorithm that works in the case when only some measurements of the coordinate X-rays are given. Following the idea in [12], we extend the algorithm to noisy X-ray measurements too.
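For orientation only, the data these algorithms start from can be illustrated in a few lines of Python: the coordinate X-rays of a discrete planar set are the sums of its indicator function along horizontal and vertical lines, i.e. the row and column sums of a binary image (the reconstruction algorithms cited above are not reproduced here):

    # Coordinate X-rays of a discrete hv-convex set represented as a binary image.
    import numpy as np

    hv_convex_set = np.array([[0, 1, 1, 0],
                              [1, 1, 1, 1],
                              [0, 1, 1, 0]])

    x_ray_horizontal = hv_convex_set.sum(axis=1)   # one value per row
    x_ray_vertical = hv_convex_set.sum(axis=0)     # one value per column
    print(x_ray_horizontal, x_ray_vertical)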
20
On the zeros of an analytic function