
Results found: 4

Search results
EN
A web-based browser fingerprint (or device fingerprint) is a tool used to identify and track user activity in web traffic. It is also used to identify computers that abuse online advertising and to prevent credit card fraud. A device fingerprint is created by extracting multiple parameter values from a browser API (e.g. operating system type or browser version). The acquired parameter values are then combined into a hash using a hash function. The disadvantage of this method is its excessive sensitivity to small, naturally occurring changes (e.g. a change of the browser version number or screen resolution). Minor changes in the input values generate a completely different fingerprint hash, making it impossible to find similar fingerprints in the database. On the other hand, omitting these unstable values when creating the hash significantly limits the fingerprint's ability to distinguish between devices. This weak point is commonly exploited by fraudsters, who knowingly evade this form of protection by deliberately changing device parameter values. The paper presents methods that significantly limit this type of activity. New algorithms for coding and comparing fingerprints are presented, which take particular account of parameter values with low stability and low entropy. The fingerprint generation methods are based on the popular MinHash, LSH, and autoencoder methods. The coding and comparison effectiveness of each of the presented methods was also examined against the currently used hash generation method. Authentic data on the devices and browsers of users visiting 186 different websites were collected for the research.
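
To illustrate why MinHash-style signatures tolerate small parameter changes while a single cryptographic hash does not, here is a minimal Python sketch. The parameter names, the signature length of 64, and the SHA-1-based hash family are assumptions of this example, not the coding scheme from the paper:

import hashlib

NUM_HASHES = 64  # signature length; an assumption of this sketch

def _hash(token, seed):
    # One member of a seeded hash family derived from SHA-1.
    digest = hashlib.sha1(f"{seed}:{token}".encode()).digest()
    return int.from_bytes(digest[:8], "big")

def minhash_signature(params):
    # Tokenize the fingerprint as "key=value" strings.
    tokens = {f"{k}={v}" for k, v in params.items()}
    return [min(_hash(t, seed) for t in tokens) for seed in range(NUM_HASHES)]

def similarity(sig_a, sig_b):
    # The fraction of matching positions estimates the Jaccard similarity.
    return sum(a == b for a, b in zip(sig_a, sig_b)) / len(sig_a)

device = {"os": "Windows 10", "browser": "Firefox 102", "screen": "1920x1080",
          "timezone": "UTC+1", "language": "pl-PL", "platform": "Win32"}
updated = dict(device, browser="Firefox 103")  # only the version changed

print(similarity(minhash_signature(device), minhash_signature(updated)))
# Prints roughly 0.7: the signatures stay comparable, whereas a single
# hash over the concatenated parameters would differ completely.

LSH banding over such signatures would then allow similar fingerprints to be retrieved from a database without comparing every pair.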
EN
The paper presents the idea of combining Vapnik's support vector machine with Pawlak's rough sets in one classification scheme. The hybrid system is applied to classifying data in the form of intervals and with missing values [1]. Both situations are treated as a reason for dividing the input space into equivalence classes. The SVM procedure then classifies input data into rough sets of the desired classes, i.e. into their positive, boundary or negative regions. Such a form of answer is also called a three-way decision. The proposed solution is tested on several popular benchmarks.
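
The three-way output itself can be sketched with an ordinary SVM: samples whose decision value falls inside a margin band go to the boundary region. The band width eps and the toy data below are assumptions of this illustration; the paper's rough-set construction over interval and missing-value data is not reproduced here:

import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X = np.vstack([rng.normal(-1.0, 1.0, (50, 2)), rng.normal(1.0, 1.0, (50, 2))])
y = np.array([0] * 50 + [1] * 50)
clf = SVC(kernel="rbf").fit(X, y)

def three_way(clf, X, eps=0.5):
    # Positive region: clearly class 1; negative region: clearly class 0;
    # boundary region: decision value too close to the separating surface.
    f = clf.decision_function(X)
    return np.where(f > eps, "positive", np.where(f < -eps, "negative", "boundary"))

print(three_way(clf, X[:5]))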
EN
This paper presents a parallel approach to the Levenberg-Marquardt (LM) algorithm. Using the Levenberg-Marquardt algorithm to train neural networks involves significant computational complexity and, consequently, long computation times. As a result, when a neural network has a large number of weights, the algorithm becomes practically ineffective. This article presents a new parallel approach to the computations in the Levenberg-Marquardt neural network learning algorithm. The proposed solution is based on vector instructions, which effectively reduce the algorithm's high computation time. The new approach was tested on several classification and function approximation problems and then compared with the classical computational method. The article presents the idea of parallel neural network computations in detail and reports the acceleration obtained for different problems.
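
For reference, a single Levenberg-Marquardt weight update computes delta_w = (J^T J + mu*I)^(-1) J^T e. The NumPy sketch below shows the matrix products that dominate the cost; the paper's contribution is computing them with vector (SIMD) instructions, for which NumPy's BLAS-backed operations merely stand in here:

import numpy as np

def lm_step(J, e, mu):
    # J: (n_samples, n_weights) Jacobian of the network outputs,
    # e: (n_samples,) error vector (target - output), mu: damping factor.
    n_w = J.shape[1]
    H = J.T @ J + mu * np.eye(n_w)  # damped Gauss-Newton approximation of the Hessian
    g = J.T @ e
    return np.linalg.solve(H, g)    # delta_w; weights are updated as w += delta_w

rng = np.random.default_rng(1)
J = rng.normal(size=(1000, 50))   # toy sizes: 1000 samples, 50 weights
e = rng.normal(size=1000)
print(lm_step(J, e, mu=0.01).shape)  # (50,)

The O(n_samples * n_weights^2) product J^T J and the O(n_weights^3) solve explain why the algorithm becomes impractical for networks with many weights.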
EN
This paper presents a new image reconstruction method for spiral cone-beam tomography scanners in which an X-ray tube with a flying focal spot is used. The method is based on principles related to the statistical model-based iterative reconstruction (MBIR) methodology. The proposed approach uses a continuous-to-continuous data model, and the forward model is formulated as a shift-invariant system. This makes it possible to avoid a nutating reconstruction approach, e.g. the advanced single-slice rebinning (ASSR) methodology usually applied in computed tomography (CT) scanners whose X-ray tubes have a flying focal spot. In turn, the proposed approach significantly accelerates reconstruction and greatly simplifies the entire reconstruction procedure. Additionally, it improves the quality of the reconstructed images in comparison with traditional algorithms, as confirmed by extensive simulations. It is worth noting that the main purpose of introducing statistical reconstruction methods to medical CT scanners is to reduce the impact of measurement noise on the quality of tomography images and, consequently, to reduce the X-ray dose absorbed by the patient. A series of computer simulations, followed by physicians' assessments, was performed, indicating how large a reduction of the absorbed dose can be achieved with the reconstruction approach presented here.
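
As a schematic of what a statistical model-based objective looks like, the sketch below takes gradient steps on 0.5*(Ax - b)^T W (Ax - b) + beta*R(x), where W carries per-measurement noise weights. The dense toy system matrix and the simple Tikhonov regularizer are assumptions of this illustration; the paper's shift-invariant continuous-to-continuous forward model is not reproduced here:

import numpy as np

def mbir_gradient_step(x, A, b, w, beta, step):
    # x: current image estimate, A: forward projection, b: measured data,
    # w: statistical weights (in CT they reflect photon counts per ray).
    residual = A @ x - b
    grad = A.T @ (w * residual) + beta * x  # data term + Tikhonov prior
    return x - step * grad

rng = np.random.default_rng(2)
A = rng.normal(size=(200, 100))           # 200 measurements of a 100-pixel image
x_true = rng.normal(size=100)
b = A @ x_true + rng.normal(scale=0.1, size=200)
w = np.full(200, 1.0)                     # equal weights in this toy example

x = np.zeros(100)
for _ in range(200):
    x = mbir_gradient_step(x, A, b, w, beta=0.1, step=1e-4)
print(np.linalg.norm(x - x_true))         # reconstruction error shrinks over iterations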