Results found: 43

Search results
Searched for:
keywords: złożoność obliczeniowa (computational complexity)
EN
The article assesses the security of the algorithms that ensure the confidentiality and authentication of data exchanged in a 5G network on the link between a mobile device and a base station. The 128-NEA confidentiality algorithms and the 128-NIA data authentication algorithms are sufficient to protect unclassified data of varying sensitivity against both classical adversaries and adversaries using a quantum computer. However, they cannot be used to protect national classified information within the meaning of the Act on the Protection of Classified Information.
2
Content available Wprowadzenie do algorytmów rekurencyjnych (Introduction to recursive algorithms)
PL
The article presents what recursion is and what its strengths and weaknesses are. Several simple recursive algorithms are also presented and discussed. The article covers recursive and iterative versions of algorithms for exponentiation, computing the factorial, and computing Fibonacci numbers, as well as recursive solutions to the Towers of Hanoi problem and to representing a natural number as a sum of natural summands.
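As a minimal illustration of the contrast the abstract describes (an editorial sketch, not code from the article), the recursive and iterative versions of the factorial and Fibonacci algorithms can be written in Python as:

```python
def factorial_rec(n: int) -> int:
    """Recursive factorial: n! = n * (n-1)!, with base case 0! = 1."""
    return 1 if n == 0 else n * factorial_rec(n - 1)

def factorial_iter(n: int) -> int:
    """Iterative factorial: a simple loop avoids call-stack growth."""
    result = 1
    for k in range(2, n + 1):
        result *= k
    return result

def fib_rec(n: int) -> int:
    """Naive recursive Fibonacci: exponential time, shown for contrast."""
    return n if n < 2 else fib_rec(n - 1) + fib_rec(n - 2)

def fib_iter(n: int) -> int:
    """Iterative Fibonacci: linear time, constant extra space."""
    a, b = 0, 1
    for _ in range(n):
        a, b = b, a + b
    return a
```

Both pairs compute the same values; the iterative forms trade the elegance of the recursive definition for predictable time and stack usage.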
PL
Using the problem of finding the indices of the largest and smallest elements of an array as an example, the article shows how to analyse algorithms in terms of their time efficiency. It also explains what the computational complexity of an algorithm is and how to estimate it, taking into account the influence of the dominant operation on the running time. Six different versions of algorithms solving the stated task are proposed, and the complexity of each is calculated.
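A hedged sketch of the kind of analysis the abstract describes (illustrative, not two of the article's six versions): finding the indices of the smallest and largest elements, instrumented to count the dominant operation (element comparisons). The naive version performs 2(n-1) comparisons; processing elements in pairs reduces this to about 3n/2:

```python
def minmax_naive(a):
    """Find indices of min and max with 2(n-1) comparisons."""
    imin = imax = 0
    comparisons = 0
    for i in range(1, len(a)):
        comparisons += 1
        if a[i] < a[imin]:
            imin = i
        comparisons += 1
        if a[i] > a[imax]:
            imax = i
    return imin, imax, comparisons

def minmax_pairwise(a):
    """Find indices of min and max with about 3n/2 comparisons:
    compare each pair first, then only the smaller against the current
    min and the larger against the current max."""
    n = len(a)
    comparisons = 0
    if n % 2:                       # odd length: seed with the first element
        imin = imax = 0
        start = 1
    else:                           # even length: seed with the first pair
        comparisons += 1
        imin, imax = (0, 1) if a[0] < a[1] else (1, 0)
        start = 2
    for i in range(start, n - 1, 2):
        comparisons += 1
        lo, hi = (i, i + 1) if a[i] < a[i + 1] else (i + 1, i)
        comparisons += 1
        if a[lo] < a[imin]:
            imin = lo
        comparisons += 1
        if a[hi] > a[imax]:
            imax = hi
    return imin, imax, comparisons
```

Counting the dominant operation this way is exactly the style of estimate the abstract refers to: both versions are O(n), but the constant factor differs measurably.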
4
Content available remote Minimizing Tardiness in a Scheduling Environment with Jobs' Hierarchy
EN
In many scheduling environments, some jobs have higher priority than others. We consider a new model, motivated by real-life behavior, in which the priority among jobs is defined by a dominance hierarchy. Specifically, the jobs are arranged in hierarchy levels, and high ranking jobs are ready to accept only outcomes in which the service they receive is better than the service of subordinate jobs. We define the model and the set of feasible schedules formally, and provide algorithms and hardness proofs for two classical problems: minimizing the maximal tardiness and minimizing the number of tardy jobs. We provide optimal algorithms or hardness proofs for these problems, distinguishing between a global objective function and a multi-criteria objective.
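For background on the classical objective (this is textbook scheduling theory, not the paper's hierarchy-aware algorithm): on a single machine without priority constraints, sequencing jobs by Earliest Due Date minimizes the maximum tardiness:

```python
def edd_schedule(jobs):
    """Earliest Due Date rule: sorting jobs by due date minimizes the
    maximum tardiness on a single machine (no hierarchy constraints).
    jobs: list of (processing_time, due_date) pairs."""
    order = sorted(jobs, key=lambda j: j[1])
    t, max_tardiness = 0, 0
    for p, d in order:
        t += p                                   # completion time
        max_tardiness = max(max_tardiness, max(0, t - d))
    return order, max_tardiness
```

The dominance-hierarchy constraints studied in the paper restrict which such schedules are feasible, which is what makes the problems harder.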
5
Content available remote Combinatorial Etude
EN
The purpose of this article is to consider a special class of combinatorial problems, the so-called Prouhet-Tarry-Escott problem, whose solution is realized by constructing finite sequences of ±1. For example, for fixed p∈N, it is well known that there exists n_p∈N with the property that any set of n_p consecutive integers can be divided into 2 sets with equal sums of their p-th powers. The considered property remains valid also for sets of finite arithmetic progressions of complex numbers.
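Prouhet's classical construction makes the ±1 connection concrete: splitting 0..2^(p+1)-1 according to the Thue-Morse signs (the parity of the binary digit sum) yields two sets with equal sums of k-th powers for every k ≤ p. An illustrative sketch (not the article's own construction):

```python
def thue_morse_split(p):
    """Prouhet's construction: partition 0..2**(p+1)-1 by the parity of
    the binary digit sum (the +1/-1 Thue-Morse signs). The two parts
    have equal sums of k-th powers for every k = 1..p."""
    n = 2 ** (p + 1)
    part_a = [i for i in range(n) if bin(i).count("1") % 2 == 0]
    part_b = [i for i in range(n) if bin(i).count("1") % 2 == 1]
    return part_a, part_b
```

For p = 3 this splits 0..15 into {0,3,5,6,9,10,12,15} and {1,2,4,7,8,11,13,14}, whose sums, sums of squares, and sums of cubes all agree.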
EN
The log-rank test and Cox's proportional hazards model can be used to compare survival curves but are limited by strict statistical assumptions. In this study, we introduce a novel, assumption-free method, based on a random forest algorithm, that can compare two or more survival curves. The proportion of the random forest's trees with sufficient complexity is close to an estimate of the test's p-value. Pruning the trees in the model modifies their complexity and, thus, both the method's robustness and its statistical power. The discussed results are confirmed by a simulation study, varying the survival curves and the tree pruning level.
EN
RFC 6379 defines a collection of four cryptographic algorithm suites, with security parameters, for the cryptographic protection of corporate virtual private networks (VPNs). Due to the use of an asymmetric session-key agreement scheme vulnerable to an attack using Shor's algorithm, combined with the reduced security parameters of the symmetric algorithms, information secured by the cryptographic mechanisms defined there will not remain resistant to cryptanalysis as quantum computers develop.
8
Content available remote On Finding the Optimal Tree of a Complete Weighted Graph
EN
We want to find a tree where the path length between any two vertices on this tree is as close as possible to their corresponding distance in the complete weighted graph of vertices upon which the tree is built. We use the residual sum of squares as the optimality criterion to formulate this problem, and use the Cholesky decomposition to solve the system of linear equations to optimise weights of a given tree. We also use two metaheuristics, namely Simulated Annealing (SA) and Iterated Local Search (ILS) to optimise the tree structure. Our results suggest that SA and ILS both perform well at finding the optimal tree structure when the dispersion of distances in the complete graph is large. However, when the dispersion of distances is small, only ILS has a solid performance.
9
Content available remote Multiprocessor Scheduling Problem with Release and Delivery Times
EN
The multiprocessor scheduling problem is defined as follows: tasks have to be executed on several parallel identical processors. For each task we know its release time, processing time and delivery time. Each processor can process at most one job at a time, but all jobs may be delivered simultaneously. Preemption on processors is not allowed. The objective is to minimize the time by which all tasks are delivered. Scheduling tasks among parallel processors is an NP-hard problem in the strong sense. The best-known approximation algorithm is Jackson's algorithm, which generates a list schedule by giving priority to the ready job with the largest delivery time. This algorithm generates no-delay schedules. We define an IIT (inserted idle time) schedule as a feasible schedule in which a processor may be kept idle at a time when it could begin processing a task. The paper proposes an approximate inserted-idle-time algorithm for multiprocessor scheduling. It is proved that the deviation of this algorithm from the optimum is smaller than twice the largest processing time. To illustrate the efficiency of our approach, we compare the two algorithms on randomly generated sets of jobs.
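A hedged sketch of the no-delay list-scheduling rule the abstract attributes to Jackson's algorithm (largest delivery time first among released jobs), on m identical processors. This is an illustrative reconstruction, not the paper's IIT algorithm:

```python
import heapq

def ldt_list_schedule(jobs, m):
    """List schedule on m identical processors: whenever a processor
    frees up, start the released job with the largest delivery time.
    jobs: list of (release, processing, delivery) triples.
    Returns the time by which all jobs are delivered."""
    jobs = sorted(jobs)                  # by release time
    free = [0] * m                       # min-heap of processor free times
    heapq.heapify(free)
    ready = []                           # max-heap (negated) on delivery time
    i, makespan = 0, 0
    while i < len(jobs) or ready:
        t = heapq.heappop(free)
        while i < len(jobs) and jobs[i][0] <= t:
            _, p, q = jobs[i]
            heapq.heappush(ready, (-q, p))
            i += 1
        if not ready:                    # nothing released: jump to next release
            heapq.heappush(free, jobs[i][0])
            continue
        neg_q, p = heapq.heappop(ready)
        heapq.heappush(free, t + p)
        makespan = max(makespan, t + p - neg_q)   # completion + delivery
    return makespan
```

Because the rule never keeps a processor idle while a job is ready, it can miss schedules where a short wait for a high-delivery-time job pays off, which is exactly the gap the paper's inserted-idle-time approach targets.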
10
Content available remote Conceptual Optimization of a Generalized Net Model of a Queuing System
EN
The problem of conceptual optimization of Generalized Nets (GNs) models is discussed. An overview of some operators for complexity of GNs and relations with respect to them is presented. Some new operators and relations are defined. A GN model of a queuing system with finite capacity of the buffer and server, and FIFO discipline of service of the requests, is optimized with respect to some of the operators for complexity.
EN
Comparing two time-to-event survival curves, representing the evolution of two groups of individuals over time, is relatively common in applied biostatistics. Although the log-rank test is the standard tool for the above-mentioned problem, there is a rich statistical toolbox designed to overcome some of its shortcomings. However, all of these methods are limited by relatively rigorous statistical assumptions. In this study, we introduce a new robust method for comparing two time-to-event survival curves. We briefly discuss selected issues concerning the robustness of the log-rank test and analyse in somewhat more depth some properties, chiefly the asymptotic time complexity, of the proposed method. The new method models individual time-to-event survival curves in a discrete combinatorial way, as orthogonal monotonic paths, which enables direct estimation of the p-value as it was originally defined. We also investigate how the area bounded by two survival curves plotted on a plane chart is related to the test's p-value. Finally, using simulated time-to-event data, we check the robustness of the introduced method in comparison with the log-rank test. Based on the theoretical analysis and simulations, the introduced method appears to be a promising and valid alternative to the log-rank test, particularly for comparing two time-to-event curves free of any statistical assumptions.
EN
We consider the classical One-Dimensional Bin Packing Problem (1D-BPP), an NP-hard optimization problem in which a set of weighted items has to be packed into one or more identical capacitated bins. We give an experimental study on using symmetry-breaking constraints to strengthen the classical integer linear programming formulation proposed to solve this problem optimally. Our computational experiments are conducted on the data set found in BPPLib, and the results confirm the theoretical ones.
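As an illustration of 1D-BPP itself (the paper's contribution is the ILP symmetry-breaking study, not this heuristic), the classical First-Fit Decreasing heuristic can be sketched as:

```python
def first_fit_decreasing(weights, capacity):
    """First-Fit Decreasing heuristic for 1D bin packing: place each
    item (heaviest first) into the first bin with enough residual
    capacity, opening a new bin when none fits."""
    bins, loads = [], []            # bin contents and current bin loads
    for w in sorted(weights, reverse=True):
        for j, load in enumerate(loads):
            if load + w <= capacity:
                loads[j] += w
                bins[j].append(w)
                break
        else:                       # no existing bin fits: open a new one
            loads.append(w)
            bins.append([w])
    return bins
```

Symmetry breaking matters for the exact ILP because any permutation of identical bins encodes the same packing; constraints that fix an ordering among bins prune these equivalent branches.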
13
Content available remote An Exact Two-Phase Method For Optimal Camera Placement In Art Gallery Problem
EN
It is well known that determining the optimal number of guards that can cover the interior of a simple nonconvex polygon is an NP-hard problem. Optimal guard placement can be described as the problem of seeking the smallest number of guards required to cover every point in a complex environment. In this paper, we propose an exact two-phase method as well as an approximate method for tackling this issue. In its first phase, the proposed exact approach maps the camera placement problem to the set covering problem, while in the second phase it uses the state-of-the-art CPLEX solver to address the set covering problem. The performance of our combined exact algorithm was compared to that of the approximate one. According to the results presented in the experimental analysis, the exact approach outperforms the approximate method on all instances.
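The abstract's second phase hands the set covering problem to CPLEX; as an illustrative contrast (not the paper's approximate method), the textbook greedy heuristic for set cover, which gives a ln n approximation, looks like this:

```python
def greedy_set_cover(universe, subsets):
    """Greedy set-cover heuristic: repeatedly pick the subset covering
    the most still-uncovered points. subsets: list of sets; returns the
    indices of the chosen subsets."""
    uncovered = set(universe)
    chosen = []
    while uncovered:
        best = max(range(len(subsets)),
                   key=lambda i: len(subsets[i] & uncovered))
        if not subsets[best] & uncovered:
            break                   # remaining points cannot be covered
        chosen.append(best)
        uncovered -= subsets[best]
    return chosen
```

In the camera placement setting, each candidate camera position induces the set of polygon sample points it sees, and a cover corresponds to a guard placement.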
EN
A comparison of two heuristic algorithms solving a bi-criteria joint location and scheduling (ScheLoc) problem is considered. In this strongly NP-hard problem, the sum of job completion times and the location investment costs are used to evaluate a solution. The first solution algorithm (EV) uses an evolutionary approach, and the second, more time-efficient algorithm (SA) is based on Simulated Annealing.
EN
This article describes the architecture of the Hamming-Lippmann neural network and the mathematics of the modified learning-recognition algorithm, and presents some practical aspects of using it to solve an image recognition task. We created software in the C# programming language that uses this network as an additional error-correcting procedure, and solved the task of recognizing highly corrupted QR codes (with a connection to a database). Experimental results on finding the optimal parameters of this algorithm are presented. This neural network does not require time-consuming computational procedures or large amounts of memory, even for high-resolution, large images.
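The decision core of a Hamming-Lippmann network can be sketched as nearest-prototype classification by Hamming distance (an illustrative simplification in which the MAXNET competition layer is collapsed into an argmin; not the paper's C# implementation):

```python
def hamming_classify(prototypes, pattern):
    """Pick the stored prototype with the smallest Hamming distance to
    the input bit pattern - the winner the MAXNET layer of a
    Hamming-Lippmann network would converge to."""
    def dist(a, b):
        return sum(x != y for x, y in zip(a, b))
    return min(range(len(prototypes)),
               key=lambda i: dist(prototypes[i], pattern))
```

This is what makes the network usable as an error-correcting step: a corrupted bit pattern is mapped to the closest clean stored pattern.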
EN
The spreading of information within social media and techniques related to viral marketing attract more and more attention from companies focused on targeting audiences within electronic systems. Recent years have seen extensive research centered on spreading models, the selection of initial nodes within networks, and the identification of campaign characteristics affecting the assumed goals. Since social networks are usually based on complex structures and high numbers of users, the ability to perform a detailed analysis of the mechanics behind spreading processes is very limited. The presented study shows an approach to selecting campaign parameters with the use of network samples and theoretical models. Instead of running simulations on a large network, smaller samples and theoretical networks are used. The results show that knowledge derived from relatively small structures is helpful for initializing spreading processes within a larger target network. Apart from agent-based modeling, multi-criteria methods were used to evaluate the results from the perspective of costs and performance.
17
Content available remote Integration of polynomials over n-dimensional simplices
EN
Integrating an arbitrary polynomial function f of degree D over a general simplex in dimension n is well known to be NP-hard when D and n are allowed to vary, but it is polynomial-time when D or n is fixed. This paper presents an efficient algorithm to compute the exact value of this integral. The proposed algorithm has polynomial time complexity when D or n is fixed, and it requires reasonable time when the values of D and n are less than 10 on widely available standard computers such as desktops.
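For the special case of a monomial over the standard simplex, the integral has a well-known closed form, prod(k_i!) / (n + sum k_i)!, which a short exact-arithmetic sketch (illustrative background, not the paper's algorithm for general simplices) can verify:

```python
from fractions import Fraction
from math import factorial, prod

def monomial_integral_over_simplex(exponents):
    """Exact integral of x1**k1 * ... * xn**kn over the standard simplex
    {x_i >= 0, sum x_i <= 1} in R^n, via the classical formula
    prod(k_i!) / (n + sum k_i)!, returned as an exact fraction."""
    n = len(exponents)
    num = prod(factorial(k) for k in exponents)
    return Fraction(num, factorial(n + sum(exponents)))
```

Sanity checks: in one dimension this reproduces the integral of x^2 over [0, 1], namely 1/3, and with all exponents zero it returns the simplex volume 1/n!.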
18
Content available remote Customized genetic algorithm for facility allocation using p-median
EN
The p-median problem is an NP-hard problem that demands a long solution time. To broaden the use of the method in public management, commercial, military and industrial applications, several heuristic methods have been proposed in the literature. In this work, we propose a customized Genetic Algorithm for solving the p-median problem and present its evaluation using benchmark problems from the OR-Library. The customized method combines parameters used in previous studies and introduces the evolution of solutions in steady-state mode for solving PMP instances. The proposed Genetic Algorithm found the optimal solution in 37 of 40 instances of the p-median problem. The mean deviation from the optimal solution was 0.002%, and the mean processing time on a Core i7 CPU was 17.7 s.
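For intuition about the objective the genetic algorithm optimizes (an editorial sketch, not the paper's method), a brute-force p-median solver, feasible only for tiny instances:

```python
from itertools import combinations

def p_median_exact(dist, p):
    """Exact p-median by enumeration: choose p facility sites minimizing
    the total distance from every client to its nearest chosen site.
    dist[i][j] = distance from client i to candidate site j.
    Feasible only for tiny instances - the search space is C(n, p)."""
    n_sites = len(dist[0])
    best = None
    for sites in combinations(range(n_sites), p):
        cost = sum(min(row[j] for j in sites) for row in dist)
        if best is None or cost < best[0]:
            best = (cost, sites)
    return best
```

The combinatorial growth of C(n, p) is what motivates heuristics such as the paper's genetic algorithm for realistic instance sizes.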
19
Content available remote An approach to customer community discovery
EN
In the paper, a new multi-level hybrid method of community detection, combining density-based clustering with a label propagation method, is proposed. Many algorithms have been applied to preprocess, visualize, cluster, and interpret the data describing customer behavior, among others DBSCAN, RFM, k-NN, UMAP, and LPA. Two key algorithms are detailed in the paper: DBSCAN and LPA. DBSCAN is a density-based clustering algorithm; however, managers usually find its clustering results too difficult to interpret and apply. To enhance the business value of clustering and create customer communities, the label propagation algorithm (LPA) is proposed owing to its quality and low computational complexity. The approach is validated on a real-life marketing database using the advanced analytics platform Upsaily.
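A minimal sketch of the label propagation idea the abstract relies on (simplified, with deterministic update order and conservative tie handling; not the paper's implementation):

```python
from collections import Counter

def label_propagation(adj, labels, max_iter=100):
    """Minimal label propagation: each node repeatedly adopts the most
    frequent label among its neighbours until no label changes.
    adj: {node: [neighbours]}, labels: initial {node: label}."""
    labels = dict(labels)
    for _ in range(max_iter):
        changed = False
        for node in sorted(adj):        # fixed order keeps runs deterministic
            counts = Counter(labels[v] for v in adj[node])
            best = counts.most_common(1)[0][0]
            if counts[best] > counts[labels[node]]:
                labels[node] = best
                changed = True
        if not changed:
            break
    return labels
```

On a graph of two triangles joined by a single edge, the labels settle into one community per triangle, which is the behaviour that makes LPA output easy to read as customer communities.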
EN
In this paper, we analyze the performance of five popular programming languages. The efficiency analysis compares the elapsed time needed to execute the same computations as the degree of complexity rises. We compare the C#, Python, R, Wolfram and Julia programming languages.
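A hedged sketch of the measurement setup such comparisons typically use, shown in Python (an illustration of the methodology, not the paper's benchmark code): a fixed workload timed with timeit, taking the best of several repeats to damp scheduling noise:

```python
import timeit

def time_fn(fn, *args, repeat=5, number=3):
    """Elapsed-time measurement: best of several repeats of `number`
    calls, as recommended for damping OS scheduling noise."""
    return min(timeit.repeat(lambda: fn(*args), repeat=repeat, number=number))

def matmul(a, b):
    """Plain triple-loop matrix multiplication; cost grows as O(n^3),
    so it is a convenient workload whose complexity can be scaled."""
    n, m, k = len(a), len(b[0]), len(b)
    return [[sum(a[i][t] * b[t][j] for t in range(k)) for j in range(m)]
            for i in range(n)]
```

Running `time_fn(matmul, a, b)` for growing matrix sizes yields the elapsed-time-versus-complexity curves that cross-language comparisons are built on.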