Results found: 51

Search results
Searched in keywords: generalization
1
Inequality for Polynomials with Prescribed Zeros
EN
For a polynomial p(z) of degree n with a zero at β of order at least k (≥ 1), it is known that [formula]. By considering a polynomial p(z) of degree n in the form [formula], a polynomial of degree n−k, with [formula], we have obtained [formula], a generalization of the known result.
EN
A knowledge discovery system is prone to yielding plenty of patterns, presented in the form of rules. Sifting through them to identify useful and interesting patterns is a tedious and time-consuming process. An important measure of interestingness is whether or not the pattern can be used in the decision-making process of a business to increase profit. Hence, actionable patterns, such as action rules, are desirable. Action rules may suggest actions to be taken based on the discovered knowledge, in this way contributing to business strategies and scientific research. The large amount of knowledge in the form of rules presents the challenge of identifying its essence: the most important, highly usable part. We focus on decreasing the space of action rules through generalization. In this work, we present a new method for computing the lowest cost of action rules and their generalizations. We discover action rules of lowest cost by taking into account the correlations between individual atomic action sets.
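The cost notion above lends itself to a small illustration. The sketch below (Python; the attribute names, costs, and validity check are invented, and the paper's correlation handling is not reproduced) prices an action rule as the sum of its atomic action costs and searches its sub-rules for a cheaper valid generalization.

```python
# Hypothetical illustration: an action rule is a set of atomic actions
# (attribute, from_value, to_value), each with an expert-assigned cost;
# the rule's cost is the sum of its atomic costs, so a generalization
# that drops redundant atoms can only lower the total.
from itertools import combinations

atomic_cost = {
    ("interest_rate", "high", "low"): 9.0,   # assumed example values
    ("fee", "high", "low"): 4.0,
    ("service", "basic", "premium"): 2.0,
}

def rule_cost(rule):
    """Cost of an action rule = sum of its atomic action costs."""
    return sum(atomic_cost[a] for a in rule)

def cheapest_generalization(rule, still_valid):
    """Among all sub-rules that remain valid, return the cheapest one."""
    best = (rule, rule_cost(rule))
    for r in range(1, len(rule)):
        for sub in combinations(rule, r):
            if still_valid(sub) and rule_cost(sub) < best[1]:
                best = (sub, rule_cost(sub))
    return best

rule = tuple(atomic_cost)
# Toy validity check: assume any sub-rule containing the fee action works.
print(cheapest_generalization(rule, lambda s: any(a[0] == "fee" for a in s)))
```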
EN
The present paper aims to propose a new type of information-theoretic method to maximize the mutual information between inputs and outputs. The importance of mutual information in neural networks is well known, but its actual maximization has been quite difficult to implement. In addition, mutual information has not been used extensively in neural networks, meaning that its applicability is very limited. To overcome this shortcoming, we present mutual information maximization in a very simplified manner, by supposing that mutual information is already maximized before learning, or at least at the beginning of learning. The method was applied to three data sets (a crab data set, a wholesale data set, and a human resources data set) and examined in terms of generalization performance and connection weights. The results showed that, by disentangling connection weights, maximizing mutual information made it possible to explicitly interpret the relations between inputs and outputs.
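Since the method hinges on measuring mutual information between inputs and outputs, a minimal histogram-based estimator may help fix ideas. This is a generic textbook estimate, not the paper's implementation; the bin count is an assumed setting.

```python
import numpy as np

def mutual_information(x, y, bins=16):
    """Histogram estimate of I(X;Y) in nats for 1-D samples x, y."""
    joint, _, _ = np.histogram2d(x, y, bins=bins)
    pxy = joint / joint.sum()
    px = pxy.sum(axis=1, keepdims=True)   # marginal of X
    py = pxy.sum(axis=0, keepdims=True)   # marginal of Y
    nz = pxy > 0                          # avoid log(0) on empty cells
    return float((pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])).sum())

rng = np.random.default_rng(0)
x = rng.normal(size=5000)
print(mutual_information(x, x + 0.1 * rng.normal(size=5000)))  # high MI
print(mutual_information(x, rng.normal(size=5000)))            # near zero
```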
EN
Urbanization has a far-reaching impact on the environment, the economy, and political and social processes. Therefore, understanding the spatial distribution and evolution of human settlements is a key element of planning strategies that ensure the sustainable development of urban and rural settlements. Accordingly, it is very important to map human settlements and to monitor the development of cities and villages. The problem of settlements has therefore found its reflection in the creation of global databases of urban areas. Global settlement data have extraordinary value: they allow us to carry out quantitative and qualitative analyses and to compare the settlement network at regional, national and global scales. However, the possibility of conducting both spatial and attribute analyses of these data would be even more valuable. The article describes how to prepare raster data so that they can be implemented into a vector database. It answers the questions of whether it is possible to combine these data with databases available in Poland and what benefits this brings. It presents methods of data generalization and the optimization of time and disk space. As a result of the study, two vector databases with GUF data were developed. The first database has a resolution similar to the original (~12 m); the second contains less detailed data (~20 m resolution), generalized using mathematical morphology. Both databases have been enriched with descriptive data obtained from the National Geodetic and Cartographic Resource.
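As an illustration of the morphological generalization step mentioned above, the sketch below (Python with SciPy) applies an opening followed by a closing to a binary built-up mask before it would be vectorized. The random tile standing in for GUF data and the 3x3 structuring element are assumptions.

```python
import numpy as np
from scipy import ndimage

# Toy binary settlement mask standing in for a GUF tile (True = built-up).
rng = np.random.default_rng(42)
mask = rng.random((200, 200)) > 0.6

# Opening removes isolated built-up pixels, closing fills small holes;
# together they smooth the raster before vectorization, reducing the
# number of polygons the vector database has to store.
structure = np.ones((3, 3), dtype=bool)
generalized = ndimage.binary_closing(
    ndimage.binary_opening(mask, structure=structure), structure=structure)
print(mask.sum(), "->", generalized.sum(), "built-up cells")
```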
5
Gradient Regularization Improves Accuracy of Discriminative Models
EN
Regularizing the gradient norm of a neural network's output is a powerful technique, rediscovered several times. This paper presents evidence that gradient regularization can consistently improve classification accuracy on vision tasks with modern deep neural networks, especially when the amount of training data is small. We introduce our regularizers as members of a broader class of Jacobian-based regularizers. We demonstrate empirically on real and synthetic data that the learning process leads to gradients controlled beyond the training points, and results in solutions that generalize well.
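A minimal sketch of one Jacobian-based regularizer of this kind, written in PyTorch: the penalty is the squared norm of the loss gradient with respect to the input, obtained by double backpropagation. The weight lam is an assumed setting, and this is not necessarily the exact member of the family used in the paper.

```python
import torch

def gradient_regularized_loss(model, x, y, lam=0.01):
    """Cross-entropy plus a penalty on the input-gradient norm of the loss.

    create_graph=True lets the optimizer backpropagate through the
    penalty term itself (double backpropagation).
    """
    x = x.clone().requires_grad_(True)
    data_loss = torch.nn.functional.cross_entropy(model(x), y)
    (grad_x,) = torch.autograd.grad(data_loss, x, create_graph=True)
    return data_loss + lam * grad_x.pow(2).sum()
```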
EN
We improve some results concerning the state complexity of the multiple catenations described by Gao and Yu. In particular, we nearly halve the size of the alphabet needed for witnesses. We also give some refinements of the algebraic expression of the state complexity, which is especially complex for this operation. We obtain these results by using particular DFAs defined by Brzozowski.
EN
The mobile navigation system MOBINAV is an example of a spatial information system dedicated to recreational users of inland waters. MOBINAV is implemented within the research project "Mobile Navigation for Inland Waters". The main objectives of the project include developing a novel model of mobile cartographic presentation. During development of the system model the authors focused on users' needs and the technical capabilities of mobile devices; visualisation of spatial data on mobile devices is limited by their small displays. The defined model assumed independent sets of data used in particular geocompositions, which resulted from generalization of the basic dataset. For polyline and polygon features, classical simplification algorithms were used. When point features were displayed, too much information was visible and, above all, in some places the symbols overlapped. This was particularly evident for depth points and navigation marks, which are very important during navigation. It is therefore necessary to correct the locations of the symbols and match them to the display scale. The paper presents a proposed algorithm for detecting and removing graphic conflicts between point features in a mobile navigation system for inland waters. Exemplary results for individual map scales, using real data imported from available sources, are also included. The conducted tests suggest that the use of the algorithm presented in the paper greatly improves the correct interpretation of maps on mobile devices.
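A toy version of the conflict-detection idea may clarify the problem being solved. The sketch below (Python) flags point symbols that would overlap at a given display scale; the symbol size, priorities, and the greedy keep-or-drop policy are assumptions, since the paper's algorithm displaces symbols rather than only removing them.

```python
import math

def resolve_symbol_conflicts(points, scale, symbol_mm=3.0):
    """Greedy removal of overlapping point symbols (a simplified stand-in
    for the displacement step). points = (x, y, priority) in metres;
    symbol_mm is the assumed on-screen symbol diameter."""
    min_dist = symbol_mm / 1000.0 * scale          # ground distance covered by one symbol
    kept = []
    for p in sorted(points, key=lambda p: -p[2]):  # high priority first
        if all(math.dist(p[:2], q[:2]) >= min_dist for q in kept):
            kept.append(p)
    return kept

pts = [(0, 0, 9), (1.5, 0, 5), (40, 40, 7)]
print(resolve_symbol_conflicts(pts, scale=10_000))  # the low-priority point conflicts at 1:10 000
```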
EN
The ship stowage plan is the management link between quay crane scheduling and yard crane scheduling, and its quality greatly affects productivity. Previous studies mainly focus on solving the stowage planning problem with online search algorithms, whose efficiency is significantly affected by case size. In this study, a Deep Q-Learning Network (DQN) is proposed to solve the ship stowage planning problem. With a DQN, massive calculation and training are done in a pre-training stage, while in the application stage a stowage plan can be made in seconds. To formulate the network input, decision factors are analyzed to compose the feature vector of a stowage plan. States subject to constraints, available actions, and the reward function of the Q-value are designed. With this information and design, an 8-layer DQN with a mean-square-error evaluation function is formulated to learn stowage planning. At the end of this study, several production cases are solved with the proposed DQN to validate its effectiveness and generalization ability. The results show that the DQN is well suited to solving the ship stowage planning problem.
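A minimal sketch of the kind of network and update rule described above, in PyTorch: an 8-layer fully connected Q-network trained on the mean-square-error criterion. Layer width, hyper-parameters, and the batch format are assumptions, not the paper's configuration.

```python
import torch
import torch.nn as nn

def make_qnet(n_features, n_actions, width=64, depth=8):
    """Fully connected Q-network with `depth` linear layers."""
    layers, d = [], n_features
    for _ in range(depth - 1):
        layers += [nn.Linear(d, width), nn.ReLU()]
        d = width
    return nn.Sequential(*layers, nn.Linear(d, n_actions))

def dqn_step(qnet, target_net, optim, batch, gamma=0.99):
    """One TD-learning step on a batch (s, a, r, s_next, done)."""
    s, a, r, s_next, done = batch
    q = qnet(s).gather(1, a.unsqueeze(1)).squeeze(1)       # Q(s, a)
    with torch.no_grad():                                  # fixed target network
        target = r + gamma * target_net(s_next).max(1).values * (1 - done)
    loss = nn.functional.mse_loss(q, target)               # the MSE criterion
    optim.zero_grad(); loss.backward(); optim.step()
    return loss.item()
```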
9
EN
For a polynomial [formula] of degree n having all its zeros in |z| ≤ K, K ≥ 1, it is known that max [formula]. By assuming a possible zero of order m, 0 ≤ m ≤ n − 4, at z = 0 of p(z), for n ≥ k + m + 1 with integer k ≥ 3, we have obtained a new refinement of the known result.
EN
The paper undertakes the subject of spatial data pre-processing for marine mobile information systems. A short review of maritime information systems is given, with the focus laid on mobile systems. The need for spatial data generalization is underlined, and a concept of technology for such generalization in a mobile system is presented. The research part of the paper presents the results of analyses of selected simplification parameters in the process of creating a mobile navigation system for inland waters. In the study the authors focused on selected layers of the system. Models of simplification for layers with line features and with polygons were tested; the parameters of the tested models were modified for the purposes of the study. The article contains tabular results with statistics and spatial visualizations of selected layers for individual scales.
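As a concrete example of the classical simplification models tested for line layers, here is a compact Douglas-Peucker implementation. The paper does not name its algorithms; Douglas-Peucker is shown as a representative choice, and tol is an assumed tolerance in map units.

```python
import math

def douglas_peucker(line, tol):
    """Classical line simplification: keep a vertex only if it deviates
    from the anchor-floater chord by more than tol (map units)."""
    if len(line) < 3:
        return line
    (x1, y1), (x2, y2) = line[0], line[-1]
    chord = math.hypot(x2 - x1, y2 - y1) or 1e-12
    # Perpendicular distance of every interior vertex to the chord.
    dists = [abs((y2 - y1) * x - (x2 - x1) * y + x2 * y1 - y2 * x1) / chord
             for x, y in line[1:-1]]
    i, d = max(enumerate(dists), key=lambda t: t[1])
    if d <= tol:
        return [line[0], line[-1]]
    split = i + 1
    return douglas_peucker(line[:split + 1], tol)[:-1] + douglas_peucker(line[split:], tol)

print(douglas_peucker([(0, 0), (1, 0.1), (2, -0.1), (3, 5), (4, 6), (5, 7)], tol=1.0))
```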
EN
The aim of the presented research was to evaluate the effect of DTMs of different resolution on the extraction of drainage lines. It is assumed that the smaller the mesh size, the more accurately the terrain model reflects reality, and thus the obtained data (here: flow lines) are more accurate. However, huge amounts of data significantly reduce computational performance, so it should be asked whether equally good results can be obtained by increasing the mesh size and thereby reducing the resolution of the DTM. The research assessing the suitability of DTMs used materials collected in CODGiK (the Geodetic and Cartographic Documentation Centre), obtained under the ISOK project (IT System of the Country's Protection Against Extreme Hazards). The source data were text files in ASCII (XYZ) format containing the heights of points in a regular 1 m GRID, which were then generalized to GRIDs with mesh sizes of 2, 3, 4 and 5 metres. In the next step, drainage lines were generated for the DTMs of each resolution and analysed. In order to verify the results, the course of the flow lines was compared with the course of watercourses obtained from direct measurements. Three test fields with different characteristics were used for the experimental work. The results show that the best agreement between the delineated lines and the real course of the watercourse was obtained for the area with a clearly marked river valley. In turn, areas of broad, flat valleys are the most difficult for hydrological analysis (and thereby for delineating flow lines).
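The first step of drainage-line extraction from a GRID DTM is usually a flow-direction pass; the sketch below shows the common D8 variant, shown here as a representative method since the paper does not specify which algorithm its software used.

```python
import numpy as np

def d8_flow_direction(dem):
    """For each cell of a DTM grid, the index (0-7) of the steepest-descent
    neighbour - the first step of drainage-line extraction. Border cells
    and pits get -1. Cell size is assumed to be 1 unit."""
    offs = [(-1, -1), (-1, 0), (-1, 1), (0, -1), (0, 1), (1, -1), (1, 0), (1, 1)]
    direction = np.full(dem.shape, -1, dtype=int)
    for r in range(1, dem.shape[0] - 1):
        for c in range(1, dem.shape[1] - 1):
            # Drop per unit distance towards each of the 8 neighbours.
            drops = [(dem[r, c] - dem[r + dr, c + dc]) / np.hypot(dr, dc)
                     for dr, dc in offs]
            if max(drops) > 0:
                direction[r, c] = int(np.argmax(drops))
    return direction

dem = np.array([[5., 5, 5, 5], [5, 4, 3, 5], [5, 3, 2, 5], [5, 5, 1, 5]])
print(d8_flow_direction(dem))
```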
EN
This is an attempt to reflect upon the primary cartographic materials on the basis of which specific thematic studies associated with spatial development, landscape, infrastructure, land cover, etc. are developed. Universal access to such materials via the Geoportal after the introduction of INSPIRE, and the ever-increasing digitalization of maps, raises certain expectations and requirements regarding their currency. After a detailed analysis of the available cartographic data (topographic maps) of the Silesian Province area needed for various thematic studies, it can be concluded that all available maps can be considered "historical". The state of their content justifies such a reflection.
EN
The gradient descent method is one of the most popular methods of training feedforward neural networks. Batch and incremental modes are the two most common ways to practically implement gradient-based training for such networks. Furthermore, since generalization is an important property and quality criterion of a trained network, pruning algorithms with added regularization terms have been widely used as an efficient way to achieve good generalization. In this paper, we review the convergence properties and other performance aspects of recently researched training approaches based on different penalization terms. In addition, we show smoothing approximation tricks for the case when the penalty term is non-differentiable at the origin.
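The smoothing trick mentioned in the last sentence can be stated in a few lines: replace |w| by sqrt(w^2 + eps), which is differentiable at the origin and has a bounded gradient. A minimal sketch, with eps an assumed smoothing constant:

```python
import numpy as np

def smoothed_l1(w, eps=1e-3):
    """sqrt(w^2 + eps) ~ |w|: differentiable at the origin, so gradient
    training with an L1-type pruning penalty stays well defined."""
    return np.sqrt(w * w + eps)

def smoothed_l1_grad(w, eps=1e-3):
    # The derivative w / sqrt(w^2 + eps) is bounded in (-1, 1), unlike sign(w).
    return w / np.sqrt(w * w + eps)

w = np.linspace(-0.01, 0.01, 5)
print(smoothed_l1(w), smoothed_l1_grad(w))
```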
EN
Parallel X-rays are functions that measure the intersection of a given set with lines parallel to a fixed direction in R². The reconstruction problem concerning parallel X-rays is to reconstruct the set if the parallel X-rays in some directions are given. There are several algorithms that give an approximate solution to this problem. In general we need some additional knowledge of the object to obtain a unique solution. By assuming convexity, a suitable finite set of directions is enough for all convex planar bodies to be uniquely determined by their X-rays in these directions [13]. Gardner and Kiderlen [12] presented an algorithm for reconstructing convex planar bodies from noisy X-ray measurements belonging to four directions. For a reconstruction algorithm assuming convexity we can also refer to [17]. An algorithm for the reconstruction of hv-convex planar sets from their coordinate X-rays (two directions) can be found in [18]: given the coordinate X-rays of a compact connected hv-convex planar set K, the algorithm gives a sequence of polyominoes Ln all of whose accumulation points (with respect to the Hausdorff metric) have the given coordinate X-rays almost everywhere. If the set is uniquely determined by the coordinate X-rays, then Ln tends to the solution of the problem. This algorithm is based on generalized conic functions measuring the average taxicab distance by integration [21]. Here we give an extension of this algorithm that works in the case when only some measurements of the coordinate X-rays are given. Following the idea in [12], we extend the algorithm to noisy X-ray measurements too.
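For intuition, in the discrete setting the coordinate X-rays of a binary set are just its row and column sums; a short illustration on toy data (this is only the measurement model, not the reconstruction algorithm itself):

```python
import numpy as np

# Discrete coordinate X-rays of a binary set: the number of set pixels on
# each line parallel to the two axes (the two directions used above).
K = np.zeros((6, 6), dtype=int)
K[1:5, 2:5] = 1                      # an hv-convex "body"
horizontal_xray = K.sum(axis=1)      # one value per row
vertical_xray = K.sum(axis=0)        # one value per column
print(horizontal_xray, vertical_xray)
```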
15
On the zeros of an analytic function
EN
Generalization is one of the most important stages of work on cartographic data. It is of particular importance in the study of landscape structure, especially geodiversity. In raster images, it is based on modifying the structure of the image while maintaining its general characteristics. In ArcGIS software, the most important tools for the generalization of raster images are Boundary Clean and Majority Filter. Fragstat software was used to analyse the structural modifications of the output images and assess the effects of generalization. Depending on the options used, the two tools (Boundary Clean and Majority Filter) cause different types of modification in rasters. Eliminating the so-called noise with one of the variants of Majority Filter is most suitable if we wish to introduce only subtle modifications to the final image. If, however, we expect a greater level of interference in the structure of the source images, using Boundary Clean becomes necessary.
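A plain re-implementation of the majority-filter idea (not ArcGIS code; the 3x3 window is an assumed setting) shows the kind of noise removal discussed above:

```python
import numpy as np
from scipy import ndimage

def majority_filter(raster, size=3):
    """Replace each cell by the most frequent value in its size x size
    window - the noise-removing generalization discussed above."""
    def mode(window):
        values, counts = np.unique(window, return_counts=True)
        return values[np.argmax(counts)]
    return ndimage.generic_filter(raster, mode, size=size)

raster = np.array([[1, 1, 1, 2], [1, 3, 1, 2], [1, 1, 2, 2], [1, 1, 2, 2]])
print(majority_filter(raster))   # the isolated 3 is absorbed by its neighbours
```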
EN
A widely used class of approximate pattern matching algorithms works in two stages, the first being a filtering stage that uses spaced seeds to quickly discard regions where a match is not likely to occur. The design of effective spaced seeds is known to be a hard problem. In this setting, we propose a family of lossless spaced seeds for matching with up to two errors, based on mathematical objects known as perfect rulers. We analyze these seeds with respect to the tradeoff they offer between seed weight and the minimum length of the pattern to be matched. We identify a specific property of rulers, namely their skewness, which is closely related to the minimum pattern length of the derived seeds. In this context, we study in depth the specific case of Wichmann rulers and investigate the generalization of our approach to the larger class of unrestricted rulers. Although our analysis is mainly of theoretical interest, we show that for pattern lengths of practical relevance our seeds have a larger weight, and hence a better filtration efficiency, than the ones known in the literature.
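For intuition about the ruler-to-seed connection, the sketch below checks that a ruler is perfect (every distance 1..L is measurable as a difference of two marks) and renders a seed with match positions at its marks. The rendering is purely illustrative and not necessarily the paper's construction.

```python
def is_perfect_ruler(marks):
    """A ruler is perfect if every integer distance 1..max(marks) occurs
    as a difference of two marks."""
    diffs = {b - a for a in marks for b in marks if b > a}
    return diffs == set(range(1, max(marks) + 1))

def seed_from_ruler(marks):
    """Illustrative rendering only: '#' (must match) at mark positions,
    '-' (don't care) elsewhere."""
    return "".join("#" if i in set(marks) else "-" for i in range(max(marks) + 1))

ruler = (0, 1, 2, 6, 10, 13)          # a perfect ruler of length 13
print(is_perfect_ruler(ruler))        # True
print(seed_from_ruler(ruler))         # ###---#---#--#
```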
EN
We aim to establish the multi-modal logic CKn as a baseline for a constructive correspondence theory of constructive modal logics. Just as many classical multi-modal logics may be studied as theories of the basic system K obtained by model-theoretic specialisation, we envisage constructive modal logics to be derived as proof-theoretic enrichments of CKn. The system CKn would then act as a core system for constructive contextual reasoning with controlled information flow. In this paper, as a first step towards this goal, we study CKn as a type theory and introduce its computational λ-calculus, λCKn. Extending previous work on CKn, we present a cut-free contextual sequent system in the spirit of Masini's two-dimensional generalisation of natural deduction and Brünnler's nested sequents, and give a computational interpretation of CKn following the Curry–Howard correspondence. The associated modal type theory λCKn permits an interpretation of both the modalities □ and ⋄ of CKn as type operators with simple and independent constructors and destructors, which has been missing in the literature. It is shown that the calculus satisfies subject reduction, strong normalisation and confluence. Since normal forms can be characterised by way of a Gentzen-style typing system with the sub-formula property, λCKn is suitable for proof search in CKn. At the same time, λCKn enjoys natural deduction style typing, which is important for programming applications. In contrast to most existing modal type theories, which are obtained as theories of the constructive modal logic S4, λCKn is not bound to a particular contextual interpretation. Thus, λCKn constitutes the core of a functional language which provides static type checking of information processing to support safe contextual navigation in relational structures like those treated by description logics. We review some existing work on modal type theories and discuss their relation to λCKn.
EN
The presented research concerns methods for reducing the elevation data contained in a digital terrain model (DTM) from airborne laser scanning (ALS) for hydraulic modelling. Reduction is necessary when preparing large geospatial datasets describing terrain relief, and it should not amount to regular data filtering, which often occurs in practice: such an approach misses a number of terrain forms important for hydraulic modelling. One of the proposed solutions for reducing the elevation data contained in a DTM is to change the regular grid into a hybrid structure with regularly distributed points and irregularly located critical points. The purpose of this paper is to compare algorithms for extracting these critical points from a DTM. They are used in hybrid model generation as part of an elevation data reduction process that retains DTM accuracy while reducing the size of output files. In the experiments, the following algorithms were tested: Topographic Position Index (TPI), Very Important Points (VIP) and Z-tolerance. Their effectiveness in reduction (maintaining accuracy while reducing the datasets) was evaluated with respect to the input DTM from ALS. The best results were obtained for the Z-tolerance algorithm, but this does not diminish the capabilities of the other two algorithms, VIP and TPI, which can generalize a DTM quite well. The results confirm the possibility of a high degree of reduction, retaining only a few percent of the input data, with a relatively small decrease in the vertical accuracy of the DTM, of up to a few centimetres. The presented paper was financed by the Foundation for Polish Science - research grant no. VENTURES/2012-9/1 from the Innovative Economy program of the European Structural Funds.
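A simplified reading of the Z-tolerance idea may make the reduction concrete: keep only the grid posts whose elevation cannot be predicted from their neighbours to within a tolerance. The neighbour-average predictor below is an assumption standing in for the production algorithm.

```python
import numpy as np

def z_tolerance_points(dem, tol):
    """Keep the grid posts whose elevation differs from a neighbour-average
    prediction by more than tol - these become the irregular "critical
    points" of the hybrid model (a simplified sketch, not the exact
    Z-tolerance algorithm)."""
    pred = np.full(dem.shape, np.nan)
    pred[1:-1, 1:-1] = (dem[:-2, 1:-1] + dem[2:, 1:-1] +
                        dem[1:-1, :-2] + dem[1:-1, 2:]) / 4.0
    keep = np.abs(dem - pred) > tol      # NaN borders compare as False
    return np.argwhere(keep)

dem = np.array([[10., 10, 10, 10], [10, 10, 10, 10],
                [10, 10, 14, 10], [10, 10, 10, 10]])
print(z_tolerance_points(dem, tol=2.0))  # only the 14 m spike survives
```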
EN
Automation of the generalization of geographic information is known as one of the biggest challenges facing modern cartography. Realization of such a process demands a knowledge base which helps to decide which algorithms should be used, in which sequence, and how to parameterize them. The author proposes a knowledge base built on non-classical logics: rough and fuzzy. This article presents the results of first trials of fuzzy rules for the realization of the selection operator. The use of fuzzy rules and linguistic variables allows the subjective character of the generalization process to be mimicked more closely. Tests were carried out on road-segment data from the Topographic Database (TBD) for two test areas. The conducted experiment proved the possibility of using fuzzy rules in the generalization of geographic information. It may also be valuable to use rough sets and reducts to select the attributes that are the most significant for the decision being made; this will be the subject of the author's further research. The presented research is the initial step in the creation of a knowledge base based on non-classical logics (fuzzy and rough).
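The sketch below illustrates what a fuzzy selection rule for road segments might look like. The membership breakpoints, class ranks, and the 0.5 activation threshold are invented for illustration; they are not the rules derived from the TBD data.

```python
def trapezoid(x, a, b, c, d):
    """Trapezoidal membership function for a linguistic variable."""
    if x <= a or x >= d:
        return 0.0
    if b <= x <= c:
        return 1.0
    return (x - a) / (b - a) if x < b else (d - x) / (d - c)

def keep_road(length_m, class_rank):
    """Mamdani-style rule: IF road is long AND class is important THEN keep."""
    long_road = trapezoid(length_m, 50, 300, 1e9, 2e9)
    important = trapezoid(class_rank, 0, 0, 2, 4)   # ranks 1-2 fully important
    activation = min(long_road, important)          # fuzzy AND
    return activation >= 0.5, activation

print(keep_road(600, 1))   # long, major road  -> kept
print(keep_road(80, 5))    # short, minor road -> dropped
```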
PL
The problem of using computational intelligence algorithms to build knowledge bases for systems generalizing geographic information has been raised remarkably often in conceptual and research work over the last decade. The author of this paper has, however, also attempted to develop a prototype information tool that automates the selection of multi-attribute spatial objects as a data source for producing topographic maps at various scales. The developed system, which uses fuzzy inference as its computational engine, is highly efficient while allowing full parameterization of the generalization system.