Results found: 29

Search results
Searched for: in keywords: parallel computing
2007 | Vol. 12 | 149-168
EN
In this paper a two-dimensional conjugate heat transfer problem, involving both radiation and conduction in a lightweight thermal insulation layer, was investigated. It was assumed that radiation might be emitted, absorbed and isotropically scattered inside the gray medium, and that its walls were opaque, absorbing, emitting and diffusely reflecting. The Alternating Direction Implicit Method and the Finite Volume Method were used to solve the heat conduction equation and the radiative transfer equation, respectively. The problem was first solved sequentially and then by applying the Domain Decomposition Method, with parallel calculations carried out for two and four sub-domains. The influence of different factors on the differences between the parallel and sequential results, on parallel computing efficiency, and on the number of iterations required to solve the heat conduction equation effectively was studied.
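As a rough illustration of the domain decomposition idea described in this abstract (a minimal sketch, not the authors' code), the following Python fragment advances a 1D transient heat conduction problem on two sub-domains that exchange interface values after every explicit time step; the radiative transfer part and the ADI scheme of the paper are omitted.

import numpy as np

nx, nt, alpha = 100, 500, 1.0
dx = 1.0 / (nx - 1)
dt = 2e-5                                   # satisfies alpha*dt/dx**2 < 0.5

T = np.zeros(nx)
T[0] = 1.0                                  # fixed wall temperatures
half = nx // 2
left = T[:half + 1].copy()                  # one-cell overlap acts as a ghost layer
right = T[half - 1:].copy()

for _ in range(nt):
    for sub in (left, right):               # each sub-domain could run on its own process
        sub[1:-1] += alpha * dt / dx**2 * (sub[2:] - 2 * sub[1:-1] + sub[:-2])
    left[-1] = right[1]                     # communication step: exchange interface values
    right[0] = left[-2]

T = np.concatenate([left[:-1], right[1:]])  # stitch the global solution back together
print(T[::10].round(3))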
EN
Some materials-related microstructural problems calculated using the phase-field method are presented. It is well known that the phase-field method requires mesh resolution of a diffuse interface. This makes the use of mesh adaptivity essential, especially for fast evolving interfaces and other transient problems. Complex problems in 3D are also computationally challenging, so parallel computations are considered necessary. In this paper, a parallel adaptive finite element scheme is proposed. The scheme keeps the refinement level of each node and edge in 2D, and of each node and face in 3D, instead of the complete history of refinements, to facilitate derefinement. The information is local, the exchange of information is minimized, and less memory is used. The parallel adaptive algorithms, which run on distributed memory machines, are applied in the numerical simulation of dendritic growth and capillary-driven flows.
3. On Interacting Automata with Limited Nondeterminism
EN
One-way and two-way cellular language acceptors with restricted nondeterminism are investigated. The number of nondeterministic state transitions is regarded as a limited resource which depends on the length of the input. We center our attention on real-time, linear-time and unrestricted-time computations. A speed-up result that allows any linear-time computation to be sped up to real time is proved. The relationships to deterministic arrays are considered. For an important subclass, a characterization in terms of deterministic language families and ε-free homomorphisms is given. Finally, we prove strong closure properties of languages acceptable with a constant number of nondeterministic transitions.
EN
In this paper we investigate a parallel version of the hierarchical chromosome based genetic algorithm (HCBGA) for finding the optimal initial mesh for the self-adaptive hp Finite Element Method (hp-FEM). The HCBGA algorithm solves the global optimization problem of r refinements in order to provide an optimal starting mesh for the hp-FEM that will fit the material data and singularities, and will result in the exponential convergence of the hp-FEM. The parallel algorithm is tested on the damaged Step-and-Flash Imprint Lithography problem, modeled by linear elasticity with a thermal expansion coefficient.
PL
This article presents a parallel version of the hierarchical chromosome based genetic algorithm (HCBGA) used to find optimal initial meshes for the self-adaptive hp Finite Element Method (hp-FEM). The proposed HCBGA algorithm solves the r-adaptation problem: a global optimization problem consisting in finding the optimal initial mesh for the automatic hp-adaptation algorithm. The sought initial mesh should fit the assumed material constants and the singularities of the solution. As a result, the automatic hp-adaptation algorithm started on such an optimal initial mesh should deliver exponential convergence of the solution accuracy with respect to the size of the computational mesh. The parallel algorithm is tested on a problem of Step-and-Flash Imprint nanolithography, modeled by linear elasticity with a thermal expansion coefficient.
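A hedged sketch of the optimization idea may help: refinement decisions for the initial mesh are encoded as a chromosome and evolved by a genetic algorithm. Everything below, including the stand-in fitness function estimate_fem_error, is hypothetical; a real HCBGA run would evaluate fitness with the hp-FEM solver, and the parallel version would evaluate the population concurrently.

import random

N_ELEMENTS, POP, GENS = 32, 40, 60

def estimate_fem_error(chromosome):
    # Placeholder fitness: penalise coarse elements near a singularity at element 0
    # and needless refinement elsewhere (a real run would call the hp-FEM solver).
    err = sum((1.0 / (i + 1)) * (1 - g) for i, g in enumerate(chromosome))
    cost = 0.05 * sum(chromosome)
    return err + cost

def evolve():
    pop = [[random.randint(0, 1) for _ in range(N_ELEMENTS)] for _ in range(POP)]
    for _ in range(GENS):
        pop.sort(key=estimate_fem_error)
        survivors = pop[:POP // 2]
        children = []
        while len(survivors) + len(children) < POP:
            a, b = random.sample(survivors, 2)
            cut = random.randrange(1, N_ELEMENTS)
            child = a[:cut] + b[cut:]                  # one-point crossover
            if random.random() < 0.1:                  # mutation
                j = random.randrange(N_ELEMENTS)
                child[j] ^= 1
            children.append(child)
        pop = survivors + children
    return min(pop, key=estimate_fem_error)

print(evolve())   # 1s mark elements selected for refinement in the initial mesh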
2009 | Vol. 93, nr 4 | 411-434
EN
The paper presents a general methodology for an efficient parallelization of the fully automatic hp-adaptive Finite Element Method (hp-FEM). The self-adaptive hp-FEM algorithm, expressed in terms of graph grammar productions, is analyzed by utilizing the Partitioning Communication Agglomeration Mapping (PCAM) model. The computational tasks are defined over a graph model of the computational mesh. This is done for all parts of the algorithm: the generation of an initial mesh, the direct solver (including the integration and elimination of degrees of freedom), mesh transformations (including the h and p refinements), as well as the selection of the optimal refinements. The computation and communication complexities of the resulting parallel algorithms are analyzed. The paper concludes with a sequence of massively parallel computations. The performed tests imply that the code scales well up to 200 processors.
6. Parallel self-adaptive hp finite element method with shared data structure
EN
In this paper we present a new parallel algorithm of the self-adaptive hp Finite Element Method (hp-FEM) with shared data structures. The algorithm generates, in a fully automatic mode (without any user interaction), a sequence of meshes delivering exponential convergence of the prescribed quantity of interest with respect to the mesh size (number of degrees of freedom). The sequence of meshes is generated from the prescribed initial mesh by performing h (breaking elements into smaller elements), p (adjusting polynomial orders of approximation) or hp (both) refinements on selected finite elements. The new parallel implementation utilizes a computational mesh shared between multiple processors. All computational algorithms, including automatic hp adaptivity and the solver, work fully in parallel. We present details of the parallel self-adaptive hp-FEM algorithm with a shared computational domain, as well as its efficiency measurements. The presentation is enriched by numerical results of 3D DC borehole resistivity measurement simulations.
PL
This article presents a new parallel algorithm for the self-adaptive hp Finite Element Method (hp-FEM) featuring a shared data structure. The algorithm generates, in a fully automatic mode (without any user interaction), a sequence of computational meshes delivering exponential convergence of the prescribed quantity of interest with respect to the mesh size (number of degrees of freedom). The algorithm generates the sequence of meshes starting from a prescribed initial mesh; successive meshes are obtained by h-adaptation (breaking selected elements), p-adaptation (increasing the polynomial order of approximation) or hp-adaptation (both) on selected elements. The algorithm operates on a computational mesh shared among multiple processors. All computational algorithms, including the automatic hp adaptation and the solver, work fully in parallel. The article discusses the parallel algorithm and analyzes its efficiency. The presentation is enriched by numerical results concerning three-dimensional simulations of DC borehole resistivity measurements of rock formation layers.
EN
A new iterative non-overlapping domain decomposition method is proposed for solving the one- and two-dimensional Helmholtz equation on parallel computers. The spectral collocation method, based on the Chebyshev approximation, is applied to solve the Helmholtz equation in each subdomain, while the patching conditions are imposed at the interfaces between subdomains through a correction that is a linear function of the space coordinates. Convergence analysis is performed for two applications of the proposed method (the DDLC and DDNNLC algorithms; the meaning of these abbreviations is explained below) based on the works of Zanolli, Funaro et al. Numerical tests have been performed, and results obtained using the proposed method have been compared with other iterative algorithms. The parallel performance of the multi-domain algorithms has been analyzed by decomposing the two-dimensional domain into a number of subdomains in one spatial direction. For the one-dimensional problem, convergence of the iteration process was quickly obtained using the proposed method when a small value of the constant in the Helmholtz equation was set. Another application of the proposed method may be an alternative to other iterative schemes when solving the two-dimensional Helmholtz equation.
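To make the spectral collocation building block concrete, here is a minimal single-domain sketch in Python, assuming the standard Chebyshev differentiation matrix construction (Trefethen, Spectral Methods in MATLAB); the multi-domain interface-correction iteration (DDLC/DDNNLC) of the paper is not reproduced.

import numpy as np

def cheb(n):
    # Chebyshev differentiation matrix D and collocation points x.
    x = np.cos(np.pi * np.arange(n + 1) / n)
    c = np.hstack([2.0, np.ones(n - 1), 2.0]) * (-1.0) ** np.arange(n + 1)
    X = np.tile(x, (n + 1, 1)).T
    dX = X - X.T
    D = np.outer(c, 1.0 / c) / (dX + np.eye(n + 1))
    D -= np.diag(D.sum(axis=1))
    return D, x

lam = 4.0                          # sign convention here: u'' - lam*u = f on [-1, 1]
D, x = cheb(24)
L = D @ D - lam * np.eye(len(x))
f = np.sin(np.pi * x)              # sample right-hand side
L[0, :], L[-1, :] = 0, 0           # impose u(-1) = u(1) = 0 via boundary rows
L[0, 0], L[-1, -1] = 1, 1
f[0], f[-1] = 0, 0
u = np.linalg.solve(L, f)
# Exact solution for this f is u = -sin(pi x) / (pi^2 + lam); the error is tiny.
print(np.max(np.abs(u + np.sin(np.pi * x) / (np.pi**2 + lam))))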
8.
EN
Systems of consistent linear equations with symmetric positive semidefinite matrices arise naturally while solving many scientific and engineering problems. In the case of a "floating" static structure, the boundary conditions are not sufficient to prevent its rigid body motions. Traditional solvers based on Cholesky decomposition can be adapted to these systems by recognition of zero rows or columns, and by setting up a well conditioned regular submatrix of the problem that is used for the implementation of a generalised inverse. Conditioning such a submatrix seems to be related to the detection of so-called fixing nodes, such that the related boundary conditions make the structure as stiff as possible. We can consider the matrix of the problem as an unweighted non-oriented graph. We then search for nodes that stabilize the solution well: fixing nodes (such nodes are sufficiently far away from each other and are not placed near any straight line). The set of such nodes corresponds to one type of graph center.
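One plausible reading of the fixing-node search (a sketch under assumptions, not the authors' exact algorithm, which additionally avoids nearly collinear choices) is a greedy farthest-point selection over the matrix graph using BFS distances:

from collections import deque

def bfs_dist(adj, src):
    dist = {src: 0}
    q = deque([src])
    while q:
        u = q.popleft()
        for v in adj[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                q.append(v)
    return dist

def fixing_nodes(adj, k):
    nodes = list(adj)
    chosen = [nodes[0]]          # arbitrary seed; a real heuristic would pick better
    for _ in range(k - 1):
        dists = [bfs_dist(adj, c) for c in chosen]
        # maximise the distance to the nearest already-chosen node
        best = max(nodes, key=lambda v: min(d.get(v, 0) for d in dists))
        chosen.append(best)
    return chosen

# 6-node path graph: the two chosen nodes land at opposite ends.
adj = {i: [j for j in (i - 1, i + 1) if 0 <= j < 6] for i in range(6)}
print(fixing_nodes(adj, 2))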
9. The Parallel Image Processing on the Single-chip Multiprocessor System
2001 | Vol. 49, nr 1 | 81-99
EN
In the paper the usage of the Texas Instruments TMS320C80 multiprocessor chip for parallel image processing is described. In real-time implementations of image processing algorithms the execution time is a critical parameter, so multiprocessor solutions must very often be used. The TMS320C80 is composed of one master RISC processor and four parallel DSP processors specialised for efficient image processing. Because these processors are quite loosely coupled and communicate through common memory, many different types of multiprocessor architecture can be implemented on this system. The paper presents results obtained during the implementation of chosen image processing algorithms on different architectures such as SIMD, MIMD, MISD and a pipeline structure. Attention is paid to the problem of matching an image processing algorithm to the proper multiprocessor architecture in order to minimise the computation time.
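The data-parallel pattern exploited here can be sketched in Python rather than TMS320C80 code: the image is cut into row strips and each strip is processed independently, one strip per parallel processor; the smoothing filter is only a stand-in operator.

import numpy as np
from multiprocessing import Pool

def smooth_strip(strip):
    # 1D horizontal 3-point mean filter standing in for a real image operator.
    out = strip.copy()
    out[:, 1:-1] = (strip[:, :-2] + strip[:, 1:-1] + strip[:, 2:]) / 3.0
    return out

if __name__ == "__main__":
    image = np.random.rand(512, 512)
    strips = np.array_split(image, 4, axis=0)   # one strip per parallel processor
    with Pool(4) as pool:
        result = np.vstack(pool.map(smooth_strip, strips))
    print(result.shape)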
2010 | z. 63 | 29-30
EN
The article presents the advantages of using parallel computing in solving engineering problems, as well as the author's remarks on the legitimacy of using this technology.
11. Event-Based Proof of the Mutual Exclusion Property of Peterson's Algorithm
EN
Proving properties of distributed algorithms is still a highly challenging problem, and the various approaches that have been proposed to tackle it [1] can be roughly divided into state-based and event-based proofs. Informally speaking, state-based approaches define the behavior of a distributed algorithm as a set of sequences of memory states during its executions, while event-based approaches treat the behaviors by means of events which are produced by the executions of an algorithm. Of course, combined approaches are also possible. Analysis of the literature [1], [7], [12], [9], [13], [14], [15] shows that state-based approaches are more widely used than event-based approaches for proving properties of algorithms, and the difficulties in the event-based approach are often emphasized. We believe, however, that there is a certain naturalness and intuitive content in event-based proofs of correctness of distributed algorithms that makes this approach worthwhile. Besides, state-based proofs of correctness of distributed algorithms are usually applicable only to discrete-time models of distributed systems and cannot be easily adapted to the continuous-time case, which is important in the domain of cyber-physical systems. On the other hand, event-based proofs can be readily applied to continuous-time / hybrid models of distributed systems. In the paper [2] we presented a compositional approach to reasoning about the behavior of distributed systems in terms of events. Compositionality here means (informally) that the semantics and properties of a program are determined by the semantics of processes and process communication mechanisms. We demonstrated the proposed approach on a proof of the mutual exclusion property of Peterson's algorithm [11]. We have also demonstrated an application of this approach for proving the mutual exclusion property in the setting of continuous-time models of cyber-physical systems in [8]. In this paper, using Mizar [3], we give a formal proof of the mutual exclusion property of Peterson's algorithm on the basis of the event-based approach proposed in [2]. Firstly, we define an event-based model of a shared-memory distributed system as a multi-sorted algebraic structure in which the sorts are events, processes, locations (i.e. addresses in the shared memory), and traces (of the system). The operations of this structure include a binary precedence relation ⩽ on the set of events which turns it into a linear preorder (events are considered simultaneous if e1 ⩽ e2 and e2 ⩽ e1), special predicates which check if an event occurs in a given process or trace, predicates which check if an event causes the system to read from or write to a given memory location, and a special partial function "val of" on events which gives the value associated with a memory read or write event (i.e. a value which is written or read in this event) [2]. Then we define several natural consistency requirements (axioms) for this structure which must hold in every distributed system, e.g. that each event occurs in some process (details are given in [2]). After this we formulate and prove the main theorem about the mutual exclusion property of Peterson's algorithm in an arbitrary consistent algebraic structure of events.
Informally, the main theorem states that if a system consists of two processes, and in some trace two events e1 and e2 occur in different processes, and each of these events is preceded by a series of three special events (in the same process) guaranteed by the execution of Peterson's algorithm (setting the flag of the current process, writing the identifier of the opposite process to the "turn" shared variable, and reading zero from the flag of the opposite process or reading the identifier of the current process from the "turn" variable), and moreover, if neither process writes to the flag of the opposite process or writes its own identifier to the "turn" variable, then either the events e1 and e2 coincide, or they are not simultaneous (the mutual exclusion property).
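Peterson's algorithm itself, as described in the theorem above, is short enough to state directly. The sketch below runs it with two Python threads; the commented lines mark the three special events named in the theorem. CPython's interpreter lock effectively serializes these simple operations, so this illustrates the protocol rather than weak-memory behaviour.

import threading

flag = [False, False]
turn = 0
counter = 0

def worker(me):
    global turn, counter
    other = 1 - me
    for _ in range(10_000):
        flag[me] = True               # event 1: set the flag of the current process
        turn = other                  # event 2: write the opposite process id to "turn"
        while flag[other] and turn == other:
            pass                      # event 3: read flag[other] == False or turn == me
        counter += 1                  # critical section
        flag[me] = False              # exit protocol

threads = [threading.Thread(target=worker, args=(i,)) for i in (0, 1)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(counter)                        # 20000 when mutual exclusion holds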
EN
The paper deals with the problem of optimal path planning for a sensor network with multiple mobile nodes, whose measurements are supposed to be primarily used to estimate unknown parameters of a system modelled by a partial differential equation. The adopted framework permits the consideration of two- or three-dimensional spatial domains and correlated observations. Since the aim is to maximize the accuracy of the estimates, a general functional defined on the relevant Fisher information matrix is used as the design criterion. Central to the approach is the parameterization of the sensor trajectories based on cubic B-splines. The resulting finite-dimensional global optimization problem is then solved using a parallel version of the tunneling algorithm. A numerical example is included to clearly demonstrate the idea presented in the paper.
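As a small illustration of the trajectory parameterization step (a sketch assuming SciPy's BSpline; the control points here are arbitrary sample values, whereas in the paper their coordinates would be the decision variables of the optimization):

import numpy as np
from scipy.interpolate import BSpline

k = 3                                                # cubic
ctrl = np.array([[0.0, 0.0], [0.2, 0.8], [0.5, 0.3], [0.8, 0.9], [1.0, 0.2]])
n = len(ctrl)
# Clamped knot vector so the path starts and ends at the end control points.
t = np.concatenate([np.zeros(k), np.linspace(0, 1, n - k + 1), np.ones(k)])
path = BSpline(t, ctrl, k)

s = np.linspace(0, 1, 200)
xy = path(s)                                         # sampled trajectory, shape (200, 2)
print(xy[0], xy[-1])                                 # endpoints match the end control points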
2008 | Vol. 54, No 3 | 377-388
EN
This paper presents a parallel method for solving state equations. The general idea of this method is based on the division of the integration interval into sub-intervals, in which the values of the state variables are computed in parallel with the use of one of the well-known sequential numerical methods for solving state equations. Computations in particular sub-intervals require knowledge of the initial conditions at the beginning of each sub-interval. In the proposed method the initial conditions are determined on the basis of an approximation of the convergence graph by an exponential function.
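A structural sketch of interval splitting follows. Here a cheap coarse pass predicts the state at each sub-interval boundary, after which the sub-intervals can be integrated independently; the paper instead predicts these initial conditions from an exponential approximation of the convergence graph, which is not reproduced here.

import numpy as np

def f(x):            # state equation dx/dt = -2x, a simple stable test system
    return -2.0 * x

def euler(x0, t0, t1, steps):
    x, h = x0, (t1 - t0) / steps
    for _ in range(steps):
        x = x + h * f(x)
    return x

T, P = 2.0, 4                               # horizon and number of sub-intervals
bounds = np.linspace(0.0, T, P + 1)

# Coarse serial sweep: predicted initial condition for every sub-interval.
ic = [1.0]
for a, b in zip(bounds[:-1], bounds[1:]):
    ic.append(euler(ic[-1], a, b, 4))

# Fine integration of each sub-interval; these are independent, so they could
# run in parallel on separate processors.
fine = [euler(ic[i], bounds[i], bounds[i + 1], 1000) for i in range(P)]
# Compare with the exact e^{-2T}; the gap reflects the coarse prediction error.
print(fine[-1], np.exp(-2.0 * T))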
14. Cellular Devices and Unary Languages
2007 | Vol. 78, nr 3 | 343-368
EN
Devices of interconnected parallel acting sequential automata are investigated from a language theoretic point of view. Starting with the well-known result that each unary language accepted by a deterministic one-way cellular automaton (OCA) in real time has to be a regular language, we answer three natural questions: "How much time do we have to provide?", "How much power do we have to plug into the single cells (i.e., how complex does a single cell have to be)?" and "How can we modify the mode of operation (i.e., how much nondeterminism do we have to add)?" in order to accept non-regular unary languages. We show the surprising result that for classes of generalized interacting automata, parallelism does not yield more computational capacity than that obtained by a single sequential cell. Moreover, it is proved that there exists a unary complexity class in between the real-time and linear-time OCA languages, and that there is a gap between the unary real-time OCA languages and that class. Regarding nondeterminism as a limited resource, it is shown that a slight increase in the degree of nondeterminism, as well as adding two-way communication, reduces the time complexity from linear time to real time. Furthermore, by adding a wee bit of nondeterminism, an infinite hierarchy of unary language families dependent on the degree of nondeterminism is derived.
15.
EN
This work analyses the performance of Hadoop, an implementation of the MapReduce programming model for distributed parallel computing, executing on a virtualisation environment comprising 1 + 16 nodes running the VMware Workstation software. A set of experiments using the standard Hadoop benchmarks has been designed in order to determine whether or not significant reductions in the execution time of computations are experienced when using Hadoop on this virtualisation platform on a departmental cloud. Our findings indicate that a significant decrease in computing times is observed under these conditions. They also highlight how overheads and virtualisation in a distributed environment hinder the possibility of achieving the maximum (peak) performance.
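For readers unfamiliar with the model being benchmarked, here is a minimal sketch of MapReduce itself (the programming model Hadoop implements, not Hadoop's API): map emits key-value pairs, the pairs are grouped by key, and reduce folds each group.

from collections import defaultdict
from itertools import chain

def map_phase(line):
    return [(w, 1) for w in line.split()]

def reduce_phase(key, values):
    return key, sum(values)

lines = ["to be or not to be", "to parallelise or not"]
groups = defaultdict(list)
for k, v in chain.from_iterable(map_phase(l) for l in lines):  # shuffle/group step
    groups[k].append(v)
print(dict(reduce_phase(k, vs) for k, vs in groups.items()))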
2011 | Vol. 52, nr 8 | 114-118
EN
In the paper a new implementation of a 3D data registration algorithm, combining point-to-point and point-to-plane ICP (Iterative Closest Point) methods with an application of parallel computing, is shown. A modern graphics processing unit with NVIDIA CUDA technology is used for the k-nearest neighbour search routine, based on regular grid decomposition. The proposed method of 3D space decomposition guarantees a shorter execution time compared to the classical k-d tree approach, because no complex data structure has to be built, while offering comparable convergence. The paper presents an empirical evaluation of the proposed algorithm, based on a data set delivered by a mobile robot equipped with a commercially available 3D laser measurement system working in an indoor environment. The demonstrated experiments show a potential practical application of parallel computing dedicated to on-line computation.
PL
The article presents a new implementation of an algorithm for matching two 3D point clouds that combines two classical methods, point to point and point to plane, using parallel computing. A modern GPU (Graphics Processing Unit) with NVIDIA CUDA technology is used, among other things, to implement the k-nearest neighbours (k-NN) search procedure operating on a decomposition of 3D space into a regular grid. The proposed method of 3D space decomposition guarantees a shorter execution time compared to the classical k-d tree approach, since no complicated data structure has to be built, while providing comparable convergence of the algorithm. The article presents an empirical study of the algorithm on a data set delivered by a mobile robot equipped with a commercially available 3D laser measurement system working in an indoor environment. The presented experiments show a potential practical application of parallel computing in on-line applications.
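A hedged CPU sketch of the regular grid decomposition used for the nearest-neighbour search (the paper runs this on the GPU with CUDA): points are bucketed into cubic cells and a query inspects only its own cell and the 26 surrounding ones.

import numpy as np
from collections import defaultdict

CELL = 0.1

def build_grid(points):
    grid = defaultdict(list)
    for i, p in enumerate(points):
        grid[tuple((p // CELL).astype(int))].append(i)
    return grid

def nearest(grid, points, q):
    # Assumes the true nearest neighbour lies within the 3x3x3 cell neighbourhood.
    cx, cy, cz = (q // CELL).astype(int)
    best, best_d = -1, np.inf
    for dx in (-1, 0, 1):
        for dy in (-1, 0, 1):
            for dz in (-1, 0, 1):
                for i in grid.get((cx + dx, cy + dy, cz + dz), ()):
                    d = np.sum((points[i] - q) ** 2)
                    if d < best_d:
                        best, best_d = i, d
    return best

pts = np.random.rand(10_000, 3)
grid = build_grid(pts)
print(nearest(grid, pts, np.array([0.5, 0.5, 0.5])))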
17. An Approach to Robust Visual Knife Detection
Vol. 20, No. 2 | 215-227
EN
Computerised monitoring of CCTV images is attracting a lot of attention, both from potential end-users seeking to increase the effectiveness of their video surveillance systems and as a popular research topic, as new methods and algorithms are being developed. In this paper an approach to detecting knives in images is presented. It is based on the use of Histograms of Oriented Gradients (HOG), feature descriptors invariant to geometric and photometric transformations except for rotation. We introduce a dataset containing images of knives against different backgrounds and in varying lighting conditions, and evaluate the performance of an HOG-based SVM classifier. We study the question of creating a detector based on knife blade colour and discuss the use of GPU parallel computing as a method of speeding up the detection process.
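A sketch of the HOG plus SVM pipeline described above, assuming scikit-image and scikit-learn as stand-ins for the authors' implementation; the random arrays below replace the knife dataset introduced in the paper.

import numpy as np
from skimage.feature import hog
from sklearn.svm import LinearSVC

def describe(img):
    # 9-bin HOG over 8x8-pixel cells, blocks of 2x2 cells.
    return hog(img, orientations=9, pixels_per_cell=(8, 8), cells_per_block=(2, 2))

rng = np.random.default_rng(0)
images = rng.random((40, 64, 64))         # stand-ins for knife / background patches
labels = np.repeat([1, 0], 20)

X = np.array([describe(im) for im in images])
clf = LinearSVC(C=1.0, max_iter=10_000).fit(X, labels)  # train on labelled patches
print(clf.predict(X[:5]))                 # slide this window over an image to detect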
18. Randomized PRAM simulation
EN
The parallel random access machine (PRAM) is the most commonly used general-purpose machine model for describing parallel computations. Unfortunately, the PRAM model is not physically realizable, since on large machines parallel shared memory access can only be accomplished at the cost of a significant time delay. A number of PRAM simulation algorithms are known; they allow the execution of PRAM programs on more realistic parallel machines. We study the randomized simulation of an exclusive read, exclusive write (EREW) PRAM on a module parallel computer (MPC). The simulation is based on utilizing universal hashing. The optimally efficient simulation involving parallel slackness is also investigated. The results of our experiments, performed on an MPC built upon IMS T9000 transputers, throw some light on the question of whether using the PRAM model in parallel computations is practically viable given the present state of transputer technology.
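The core trick can be sketched briefly: a randomly drawn universal hash function spreads the PRAM's shared addresses across the memory modules of the MPC, so any fixed access pattern is balanced with high probability. The parameters below are illustrative.

import random
from collections import Counter

P = 2_147_483_647                 # a prime larger than the simulated address space

def draw_hash(modules):
    a = random.randrange(1, P)
    b = random.randrange(P)
    # The classic (a*x + b mod p) mod m universal family.
    return lambda addr: ((a * addr + b) % P) % modules

h = draw_hash(modules=8)
hits = Counter(h(addr) for addr in range(10_000))
print(hits)                       # each module receives roughly 10000 / 8 accesses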
PL
The finite element method is widely used for modelling physical phenomena, including metal deformation processes. Because of the complexity of the physical phenomena occurring during such processes, complicated mathematical models must be used to describe the behaviour of the metal with satisfactory accuracy. An important aspect of modelling is not only the simulation of the material behaviour during the process, but also the pursuit of controlling the process parameters in a range that makes it possible to obtain a product with the best possible functional properties. One such application is the control of the "soft reduction" process, i.e. squeezing of a strand with a semi-liquid (mushy) zone. Because of the characteristics of the continuous steel casting process, small changes of the parameters can have a fundamental effect on the whole process, up to and including a production stoppage. Using a mathematical model together with the finite element method provides information describing this process. For the results to reflect reality satisfactorily, sufficient computational accuracy is indispensable, which entails the need to generate a dense mesh of finite elements and leads to large systems of equations. Because of the limited operating memory of computers, computations with very high accuracy may prove impossible to perform, or the time needed to obtain the results may be too long. To eliminate this limitation, supercomputers or clusters with a large number of computers connected by a fast network are used. The latter solution is widely applied for economic reasons, including the availability of suitable hardware, as well as for the computing power it provides. One of the widely used technologies for clusters is the message passing interface. Using this technology, the simulation of rolling with a mushy zone can be accelerated by solving the systems of equations arising from the FEM discretization in parallel, as well as by decomposing the finite element mesh and distributing the components among the available processors. This work presents the problem of simulating the rolling of a strand with a mushy zone. Parallel computations were used because of the need to obtain highly accurate results in a short time. An optimal decomposition of the finite element mesh was performed in order to minimise the load on the computational nodes, together with a band decomposition of the resulting system of equations.
EN
The finite element method is very often applied to the modelling of physical phenomena, among them metal deformation. Taking into account the complexity of the physical phenomena accompanying steel deformation, the mathematical models should ensure acceptable precision. A very important aspect of the simulation is not only the metal behaviour but also the control of process parameters leading to the best product. One of the processes which strongly requires controlled deformation is the so-called "soft reduction" of plates with a mushy zone. In the case of the casting process, small changes of input parameters may have a large effect on the whole process. Application of an appropriate mathematical model and the right finite element discretization is very helpful and is a source of information important for both the casting and the integrated casting and rolling technologies. Acceptable computing precision is required to guarantee process reliability. To achieve high precision of calculations, a high density mesh must be generated; hence, the solution of a large set of equations is required. Because the operating memory of a single computer is limited, computing with high precision may be impossible or the computation time may be too long. To eliminate this restriction, a supercomputer or a cluster of computers connected by a fast network has to be used. The latter possibility is often preferred, as it is less expensive, easily accessible and provides high computing power. One of the widely used methods for cluster component cooperation is the application of a message passing interface. With this technology, the simulation of mushy steel rolling can be performed in a parallel way in both respects: parallel solution of the equation set and decomposition of the mesh between the available processors. The current paper is dedicated to the simulation of the rolling of steel plates with a mushy zone. Requirements concerning both high precision results and short computation time necessitate the application of parallel computing. For proper load balancing, an optimization of the decomposition has been done.
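A hedged sketch of the message-passing pattern mentioned in both abstracts, assuming mpi4py: each rank owns a strip of the unknowns and exchanges halo values with its neighbours (run with, e.g., mpiexec -n 4 python halo.py; the file name is illustrative).

import numpy as np
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank, size = comm.Get_rank(), comm.Get_size()

local = np.full(100, float(rank))            # this rank's share of the unknowns
left = rank - 1 if rank > 0 else MPI.PROC_NULL
right = rank + 1 if rank < size - 1 else MPI.PROC_NULL

# Exchange halo values with both neighbours (a no-op at the domain ends).
recv_l = comm.sendrecv(local[0], dest=left, source=left)
recv_r = comm.sendrecv(local[-1], dest=right, source=right)
print(rank, recv_l, recv_r)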
2006 | R. 6, nr 10 | 167-180
PL
In hard real-time systems, computation time is the most precious resource. In order to guarantee that the scheduled tasks meet their time constraints, formal methods known as task scheduling theory have been developed. One of the most popular scheduling algorithms is Rate Monotonic Scheduling, designed for scheduling uniprocessor tasks. The article proposes an adaptation of the Rate Monotonic Scheduling algorithm to the case of scheduling multiprocessor tasks. The essence of the method proposed by the author consists in the binarization of the periods of the scheduled tasks and their concatenation into larger units called supertasks. The proposed method makes it possible to use the basic theorems related to Rate Monotonic Scheduling to prove the schedulability of sets of multiprocessor tasks as well.
EN
In the case of hard real-time systems, computational time is the most precious resource and must be used very carefully. The theory of task scheduling delivers formal methods that can guarantee that time constraints are met for all the tasks being scheduled. One of the most popular scheduling algorithms is Rate Monotonic Scheduling, devoted to scheduling uniprocessor tasks. The paper proposes an adaptation of Rate Monotonic Scheduling to multiprocessor tasks. The core of the method proposed by the author is the binarization of task periods and the concatenation of tasks into units of higher rank, called supertasks. The method allows the application of Rate Monotonic Scheduling theory to prove the schedulability of sets of multiprocessor tasks.
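The classical schedulability check that the supertask construction is designed to remain compatible with is the Liu and Layland utilization bound; a small sketch follows (the supertask grouping itself is not shown).

def rms_schedulable(tasks):
    # tasks: list of (computation_time, period) pairs for one processor.
    n = len(tasks)
    u = sum(c / t for c, t in tasks)
    bound = n * (2 ** (1.0 / n) - 1)   # sufficient (not necessary) condition
    return u <= bound, u, bound

ok, u, bound = rms_schedulable([(1, 4), (1, 5), (2, 10)])
print(ok, round(u, 3), round(bound, 3))   # U = 0.65 <= 0.780, so schedulable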