Search results
Searched for keyword: data mining
Results found: 393
EN
Purpose: Diabetes is a chronic disease that accounts for a large proportion of national healthcare expenditure, as people with diabetes require continuous medical care. Several complications occur if the polygenic disorder remains untreated and unrecognized, and such a condition calls for a visit to a diagnostic centre and a consulting doctor. One of the essential real-world tasks is therefore to detect the disorder at its earliest phase. This work surveys studies on the diagnosis of the polygenic disorder with respect to several parameters, showing that classification algorithms play an important role in analysing the collected data and in automating polygenic disorder analysis, alongside other machine learning algorithms. Design/methodology/approach: This paper provides an extensive survey of different approaches that have been used for the analysis of medical data for the purpose of early detection of the polygenic disorder. It takes into consideration methods such as J48, CART, SVM and KNN, conducts a formal review of all the studies, and provides a conclusion at the end. Findings: The survey analysed several parameters of polygenic disorder diagnosis and showed that classification algorithms play an important role in processing the collected data and in automating polygenic disorder analysis, alongside other machine learning algorithms. Practical implications: This paper will help future researchers in the field of healthcare, specifically in the domain of diabetes, to understand the differences between classification algorithms. Originality/value: This paper will help in comparing machine learning algorithms by going through the results and selecting the appropriate approach based on requirements.
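The classifier families surveyed above can be compared empirically with standard tooling. Below is a minimal, hedged sketch (not taken from the surveyed papers) that cross-validates a CART-style decision tree, an SVM and a KNN classifier on a tabular diabetes dataset; the file name diabetes.csv and the Outcome column are assumptions made only for illustration (e.g. the Pima Indians Diabetes data has this layout).

```python
# Sketch: comparing the classifier families named in the survey on a diabetes dataset.
# Assumed input: "diabetes.csv" with numeric feature columns and a binary "Outcome" label.
import pandas as pd
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.tree import DecisionTreeClassifier
from sklearn.svm import SVC
from sklearn.neighbors import KNeighborsClassifier

df = pd.read_csv("diabetes.csv")                      # hypothetical input file
X, y = df.drop(columns="Outcome"), df["Outcome"]

models = {
    # scikit-learn trees implement CART; J48 (C4.5) is only approximated here.
    "Decision tree (CART)": DecisionTreeClassifier(max_depth=5, random_state=0),
    "SVM (RBF kernel)": make_pipeline(StandardScaler(), SVC()),
    "KNN (k=5)": make_pipeline(StandardScaler(), KNeighborsClassifier(n_neighbors=5)),
}
for name, model in models.items():
    scores = cross_val_score(model, X, y, cv=5, scoring="accuracy")
    print(f"{name}: mean accuracy {scores.mean():.3f}")
```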
EN
Context: Predicting the priority of bug reports is an important activity in software maintenance. Bug priority refers to the order in which a bug or defect should be resolved. A huge number of bug reports are submitted every day. Manual filtering of bug reports and assigning a priority to each report is a heavy process, which requires time, resources, and expertise. In many cases mistakes happen when priority is assigned manually, which prevents the developers from finishing their tasks, fixing bugs, and improving quality. Objective: Bugs are widespread and there is a noticeable increase in the number of bug reports submitted by users and team members while resources remain limited, which raises the need for a model that focuses on detecting the priority of bug reports and allows developers to find the highest-priority ones. This paper presents a model that focuses on predicting and assigning a priority level (high or low) to each bug report. Method: This model considers a set of factors (indicators) such as component name, summary, assignee, and reporter that possibly affect the priority level of a bug report. The factors are extracted as features from a dataset built using bug reports taken from closed-source projects stored in the JIRA bug tracking system, which are then used to train and test the framework. This work also presents a tool that helps developers assign a priority level to a bug report automatically, based on the LSTM model's prediction. Results: Our experiments consisted of applying a 5-layer deep learning RNN-LSTM neural network and comparing the results with Support Vector Machine (SVM) and K-nearest neighbors (KNN) to predict the priority of bug reports. The performance of the proposed RNN-LSTM model has been analyzed over the JIRA dataset with more than 2000 bug reports. The proposed model was found to be 90% accurate, in comparison with KNN (74%) and SVM (87%). On average, RNN-LSTM improves the F-measure by 3% compared to SVM and 15.2% compared to KNN. Conclusion: It is concluded that LSTM predicts and assigns the priority of a bug more accurately and effectively than the other ML algorithms (KNN and SVM). LSTM significantly improves the average F-measure in comparison to the other classifiers. The study showed that LSTM reported the best performance results on all performance measures (Accuracy = 0.908, AUC = 0.95, F-measure = 0.892).
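As a rough illustration of the kind of model described (not the authors' exact 5-layer architecture or feature set), the sketch below trains a small LSTM network that predicts a high/low priority label from the bug report summary text. The bug_reports.csv file and its summary/priority columns are hypothetical stand-ins for a JIRA export.

```python
# Hedged sketch: a small LSTM classifier for bug-report priority (high vs low).
# Assumed input: "bug_reports.csv" with "summary" (text) and "priority" columns.
import pandas as pd
import tensorflow as tf

df = pd.read_csv("bug_reports.csv")                          # hypothetical JIRA export
texts = df["summary"].astype(str).to_numpy()
labels = (df["priority"] == "high").astype(int).to_numpy()   # 1 = high, 0 = low

vectorizer = tf.keras.layers.TextVectorization(max_tokens=20_000,
                                               output_sequence_length=100)
vectorizer.adapt(texts)
X = vectorizer(texts)                                        # integer token sequences

model = tf.keras.Sequential([
    tf.keras.layers.Embedding(input_dim=20_000, output_dim=64),
    tf.keras.layers.LSTM(64),
    tf.keras.layers.Dense(32, activation="relu"),
    tf.keras.layers.Dense(1, activation="sigmoid"),          # P(high priority)
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.fit(X, labels, validation_split=0.2, epochs=5, batch_size=32)
```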
EN
In this study, Artificial Neural Network (ANN) models and multiple linear regression techniques were used to estimate the relation between the concentration of total coliform, E. coli and Pseudomonas in wastewater and the input variables. Two techniques were used to achieve this objective. The first is a classical technique with multiple linear regression models, while the second one is data mining with two types of ANN (Multilayer Perceptron (MLP) and Radial Basis Function (RBF)). The work was conducted using SPSS software. The obtained estimates were verified against the measured data, and it was found that data mining using the RBF model has a good ability to recognize the relation between the input and output variables, while the statistical error analysis showed that the accuracy of data mining using the RBF model is acceptable. On the other hand, the obtained results indicate that MLP and multiple linear regression have the least ability to estimate the concentration of total coliform, E. coli and Pseudomonas in wastewater.
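A hedged sketch of the two modelling routes compared above, multiple linear regression versus an MLP network, is given below; the wastewater.csv file and its column names are assumptions, and an RBF network is omitted because scikit-learn has no direct counterpart.

```python
# Sketch: multiple linear regression vs. an MLP regressor for estimating a
# microbiological concentration from wastewater quality variables.
# Assumed input: "wastewater.csv" with feature columns and a "total_coliform" target.
import pandas as pd
from sklearn.linear_model import LinearRegression
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import train_test_split
from sklearn.metrics import r2_score

df = pd.read_csv("wastewater.csv")                 # hypothetical measurement data
X = df.drop(columns="total_coliform")
y = df["total_coliform"]
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)

models = {
    "Multiple linear regression": LinearRegression(),
    "MLP": make_pipeline(StandardScaler(),
                         MLPRegressor(hidden_layer_sizes=(20,), max_iter=2000,
                                      random_state=0)),
}
for name, model in models.items():
    model.fit(X_train, y_train)
    print(f"{name}: R^2 = {r2_score(y_test, model.predict(X_test)):.3f}")
```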
EN
The aim of the paper is to present how some of the data mining tasks can be solved using the R programming language. The full R scripts are provided for preparing data sets, solving the tasks and analyzing the results.
EN
Purpose: The main purpose of the article was to present the results of an analysis of the after-sales service process using data mining, on the example of data gathered in an authorized car service station. As a result of the completed literature review and identification of cognitive gaps, two research questions (RQ) were formulated. RQ1: Does the after-sales service meet the parameters of the business process category? RQ2: Is the after-sales service characterized by trends or is it seasonal in nature? Design/methodology/approach: The following research methods were used in the study: quantitative bibliographic analysis, systematic literature review, participant observation and statistical methods. The theoretical and empirical study used the R programming language and Gretl software. Findings: Based on a relational database designed for the purpose of the research procedure, the presented results include an analysis of the service sales structure and sales dynamics, as well as trend and seasonality analyses. As a result of the research procedure, the effects of the after-sales service process were presented in terms of quantity and value (amount). In addition, it has been shown that after-sales service should be identified in the business process category. Originality/value: The article uses data mining and the R programming language to analyze the effects generated in after-sales service on the example of a complete sample of 13,418 completed repairs carried out in 2013-2018. On the basis of the empirical proceedings carried out, the structure of the customer-supplier relationship was recreated in external and internal terms for the examined organization. In addition, the possibilities of using data generated from the domain system were characterized, and further research directions as well as application recommendations in the area of after-sales services were presented.
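One simple way to carry out the trend and seasonality part of such an analysis (a sketch under assumptions, not the study's actual R/Gretl code) is a classical decomposition of monthly repair counts; the repairs.csv file and its date column are hypothetical.

```python
# Sketch: trend and seasonality analysis of monthly after-sales repair counts.
# Assumed input: "repairs.csv" with a "date" column (one row per completed repair).
import pandas as pd
from statsmodels.tsa.seasonal import seasonal_decompose

repairs = pd.read_csv("repairs.csv", parse_dates=["date"])
monthly = repairs.set_index("date").resample("M").size()   # repairs per month

# Classical additive decomposition into trend, seasonal and residual components.
result = seasonal_decompose(monthly, model="additive", period=12)
print(result.trend.dropna().tail())      # long-term trend
print(result.seasonal.head(12))          # repeating yearly (seasonal) pattern
```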
PL
The article discusses the possibilities of applying logic synthesis methods to data mining tasks. In particular, the method of attribute reduction and the method of decision rule induction are discussed. It is shown that logic synthesis methods effectively improve these procedures and can be successfully applied to solving more general data mining tasks. To justify the advisability of such an approach, the diagnosis of patients with the possibility of eliminating troublesome tests is discussed.
EN
The article discusses the possibilities of applying logic synthesis methods to data mining tasks. In particular, the method of attribute reduction and the method of decision rule induction are considered. It is shown that, by applying specialized logic synthesis methods, these procedures can be effectively improved and successfully used for solving more general data mining tasks. In justification of the advisability of such an approach, the diagnosis of patients with the possibility of eliminating troublesome tests is discussed.
EN
The high penetration rate that mobile devices enjoy in today's society has facilitated the creation of new digital services, with those offered by operators and content providers standing out. However, even this has failed to encourage consumers to express positive opinions on telecommunication services, especially when compared with other sectors. One of the main reasons for this mistrust is the low quality of the customer service provided, an area that generates high costs for the operators themselves due to the high number of people employed at call centers to handle the volume of calls received. To face these challenges, operators launched self-care applications in order to provide customers with a tool that would allow them to autonomously manage the services they have subscribed to. In this paper, we present an architecture that provides customized information to customers – a solution that is independent of mobile operating systems and communication technologies.
EN
Nuclear power plant process systems have developed greatly over the years. As a large amount of data is generated from Distributed Control Systems (DCS) with fast computational speed and large storage facilities, smart systems have taken over analysis of the process. These systems are built using data mining concepts to understand the various stable operating regimes of the processes, identify key performance factors, make estimates and suggest to operators how to optimize the process. Association rule mining is a data-mining concept frequently used in e-commerce for suggesting closely related and frequently bought products to customers. It also has very wide application in industries such as bioinformatics, nuclear science, trading and marketing. This paper deals with the application of these techniques for identification and estimation of key performance variables of a lubrication system designed for a 2.7 MW centrifugal pump used for reactor cooling in a typical 500 MWe nuclear power plant. The paper dwells in detail on predictive model building using three models based on association rules for steady-state estimation of key performance indicators (KPIs) of the process. It also covers the evaluation of the prediction models with various metrics and the selection of the best model.
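The association-rule step described above can be prototyped with standard tools. The sketch below (an illustration under assumptions, not the plant's actual models) mines rules from discretized lubrication-system sensor states using the mlxtend implementation of Apriori; the states.csv file and its one-hot state columns are hypothetical.

```python
# Sketch: Apriori-based association rules between discretized process states,
# e.g. "oil_temp=HIGH", "bearing_vib=NORMAL", "outlet_pressure=LOW".
# Assumed input: "states.csv", one row per time window, boolean one-hot columns.
import pandas as pd
from mlxtend.frequent_patterns import apriori, association_rules

states = pd.read_csv("states.csv").astype(bool)          # hypothetical one-hot states

frequent = apriori(states, min_support=0.05, use_colnames=True)
rules = association_rules(frequent, metric="confidence", min_threshold=0.8)

# Keep strong rules and inspect what co-occurs with a key performance variable.
strong = rules.sort_values("lift", ascending=False)
print(strong[["antecedents", "consequents", "support", "confidence", "lift"]].head())
```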
EN
In this paper, we look closely at the issue of contaminated data sets, where apart from legitimate (proper) patterns we encounter erroneous patterns. In a typical scenario, the classification of a contaminated data set is always negatively influenced by garbage patterns (referred to as foreign patterns). Ideally, we would like to remove them from the data set entirely. The paper is devoted to the comparison and analysis of three different models capable of performing classification of proper patterns with rejection of foreign patterns. It should be stressed that the studied models are constructed using proper patterns only, and no knowledge about the characteristics of foreign patterns is needed. The methods are illustrated with a case study of handwritten digit recognition, but the proposed approach itself is formulated in a general manner. Therefore, it can be applied to different problems. We have distinguished three structures: global, local, and embedded, all capable of eliminating foreign patterns while classifying proper patterns at the same time. A comparison of the proposed models shows that the embedded structure provides the best results, but at the cost of a relatively high model complexity. The local architecture provides satisfying results and at the same time is relatively simple.
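The sketch below shows one simple way to obtain classification-with-rejection behaviour: training only on proper patterns and rejecting inputs the classifier is not confident about. It illustrates the general idea, not the global, local or embedded architectures studied in the paper; the threshold value is an assumption.

```python
# Sketch: classify handwritten digits (proper patterns) and reject inputs whose
# maximum class probability falls below a threshold, treating them as foreign.
import numpy as np
from sklearn.datasets import load_digits
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = load_digits(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

clf = LogisticRegression(max_iter=5000).fit(X_train, y_train)

proba = clf.predict_proba(X_test)
confidence = proba.max(axis=1)
threshold = 0.7                                # tunable rejection threshold
predictions = np.where(confidence >= threshold, proba.argmax(axis=1), -1)   # -1 = rejected

rejected = np.mean(predictions == -1)
accepted_acc = np.mean(predictions[predictions != -1] == y_test[predictions != -1])
print(f"rejected: {rejected:.1%}, accuracy on accepted patterns: {accepted_acc:.1%}")
```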
EN
Clustering is an attractive technique used in many fields to deal with large-scale data. Many clustering algorithms have been proposed so far. The most popular algorithms include density-based approaches, which can identify clusters of arbitrary shapes in datasets. The most common of them is Density-Based Spatial Clustering of Applications with Noise (DBSCAN). The original DBSCAN algorithm has been widely applied in various applications and has many different modifications. However, there is a fundamental issue of the right choice of its two input parameters, i.e. the eps radius and the MinPts density threshold. The choice of these parameters is especially difficult when the density variation within clusters is significant. In this paper, a new method that determines the right values of the parameters for different kinds of clusters is proposed. This method detects sharp increases of a function which computes the distance between each element of a dataset and its k-th nearest neighbor. Experimental results have been obtained for several different datasets, and they confirm a very good performance of the newly proposed method.
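A common practical form of the idea described above, choosing eps from sharp increases in the sorted k-th nearest-neighbour distances, is sketched below. This is a generic illustration with a crude jump detector, not the specific detection procedure proposed in the paper.

```python
# Sketch: pick the DBSCAN eps radius from the sorted k-distance curve,
# then cluster with the chosen parameters.
import numpy as np
from sklearn.datasets import make_blobs
from sklearn.neighbors import NearestNeighbors
from sklearn.cluster import DBSCAN

X, _ = make_blobs(n_samples=500, centers=4, cluster_std=0.6, random_state=0)

min_pts = 5
# Distance from each point to its k-th nearest neighbour (k = MinPts).
dists, _ = NearestNeighbors(n_neighbors=min_pts).fit(X).kneighbors(X)
k_dist = np.sort(dists[:, -1])

# Crude "sharp increase" detector: the largest jump in the sorted curve.
eps = k_dist[np.argmax(np.diff(k_dist))]

labels = DBSCAN(eps=eps, min_samples=min_pts).fit_predict(X)
print(f"eps = {eps:.3f}, clusters found: {len(set(labels)) - (-1 in labels)}")
```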
PL
The methodology of automatic knowledge discovery about metal production and processing processes covers problems related to (1) data acquisition and integration with a view to further exploration, (2) selection and adaptation of machine learning methods – rule induction, prediction of quantitative and qualitative variables, (3) formalization of knowledge in appropriate representations: rules, fuzzy sets, rough sets and, finally, description logic, and (4) integration of knowledge in repositories described by semantic models, i.e. ontologies. The author presented the possibility of achieving a balance between ease of use and precision when acquiring knowledge from small data sets. The research showed that decision trees are a convenient knowledge discovery tool and cope well with strongly non-linear problems, and that introducing discretization improves their performance. The use of cluster analysis methods also made it possible to draw more general conclusions, which proved the thesis that granulation of information allows patterns to be found even in small data sets. As part of the research, a procedure was developed for analysing small experimental data sets for multistage, multivariate & multivariable models, which can greatly simplify such research in the future.
EN
The methodology of automatic knowledge discovery about metal production and processing processes includes problems related to (1) data acquisition and integration with a view to further exploration, (2) selection and adaptation of machine learning methods – rule induction, prediction of quantitative and qualitative variables, (3) formalization of knowledge in appropriate representations: rules, fuzzy sets, rough sets and, finally, description logic, and (4) integration of knowledge in repositories described by semantic models, i.e. ontologies. The author presented the possibility of achieving a balance between ease of use and precision when acquiring knowledge from small data sets. The research showed that decision trees are a convenient tool for knowledge discovery and deal well with strongly non-linear problems, and that the introduction of discretization improves their performance. The use of cluster analysis methods also made it possible to draw more general conclusions, which proved the thesis that granulation of information allows patterns to be found even in small data sets. As part of the research, a procedure was developed for analyzing small experimental data sets for multistage, multivariate & multivariable models, which can greatly simplify such research in the future.
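As a hedged illustration of the combination highlighted above, decision trees with prior discretization on a small data set, the sketch below compares a tree trained on raw features with one trained on binned features. The experiments.csv file and its columns are assumptions, not the author's experimental data.

```python
# Sketch: decision tree on a small experimental data set, with and without
# discretization of the inputs. Assumed input: "experiments.csv" with numeric
# process parameters and a categorical "result" column.
import pandas as pd
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import KBinsDiscretizer
from sklearn.tree import DecisionTreeClassifier

df = pd.read_csv("experiments.csv")                     # hypothetical small data set
X, y = df.drop(columns="result"), df["result"]

raw_tree = DecisionTreeClassifier(max_depth=3, random_state=0)
binned_tree = make_pipeline(
    KBinsDiscretizer(n_bins=4, encode="ordinal", strategy="quantile"),
    DecisionTreeClassifier(max_depth=3, random_state=0),
)

for name, model in [("raw features", raw_tree), ("discretized features", binned_tree)]:
    scores = cross_val_score(model, X, y, cv=5)
    print(f"{name}: mean accuracy {scores.mean():.3f}")
```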
EN
We prove in this paper that the expected value of the objective function of the k-means++ algorithm for samples converges to the population expected value. As k-means++ for samples provides a constant-factor approximation of the k-means objective, such an approximation can be achieved for the population by increasing the sample size. This result is of potential practical relevance when one is considering using subsampling to cluster large data sets (large databases).
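The practical reading of this result, that the k-means objective obtained from a k-means++ run on a subsample approaches the value obtained on the whole data set as the sample grows, can be checked empirically with a sketch such as the one below (an illustration on synthetic data, not the paper's proof).

```python
# Sketch: fit k-means++ on a random subsample and evaluate the k-means
# objective (sum of squared distances to the nearest centre) on the full data.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.datasets import make_blobs
from sklearn.metrics import pairwise_distances_argmin_min

X, _ = make_blobs(n_samples=100_000, centers=10, random_state=0)
rng = np.random.default_rng(0)

def objective(centers, data):
    _, dists = pairwise_distances_argmin_min(data, centers)
    return np.sum(dists ** 2)

for n in (500, 5_000, 50_000):
    sample = X[rng.choice(len(X), size=n, replace=False)]
    km = KMeans(n_clusters=10, init="k-means++", n_init=10, random_state=0).fit(sample)
    print(f"sample size {n}: full-data objective = {objective(km.cluster_centers_, X):.3e}")

km_full = KMeans(n_clusters=10, init="k-means++", n_init=10, random_state=0).fit(X)
print(f"full data fit:    objective = {objective(km_full.cluster_centers_, X):.3e}")
```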
EN
Power big data contains a lot of information related to equipment faults. The analysis and processing of power big data can realize fault diagnosis. This study mainly analyzed the application of association rules in power big data processing. Firstly, association rules and the Apriori algorithm were introduced. Then, aiming at the shortcomings of the Apriori algorithm, an IM-Apriori algorithm was designed, and a simulation experiment was carried out. The results showed that the IM-Apriori algorithm had a significant advantage over the Apriori algorithm in running time. When the number of transactions was 100,000, the IM-Apriori algorithm ran 38.42% faster than the Apriori algorithm. The IM-Apriori algorithm was little affected by the value of the minimum support (support_min). Compared with the Extreme Learning Machine (ELM), the IM-Apriori algorithm had better accuracy. The experimental results show the effectiveness of the IM-Apriori algorithm in fault diagnosis, and it can be further promoted and applied to power grid equipment.
EN
Is it possible to predict the location, time and magnitude of earthquakes by identifying their precursors in remotely sensed data? Earthquakes are usually preceded by unusual natural incidents that are considered earthquake precursors. With the recent advances in remote sensing techniques, which have made it possible to monitor the Earth's surface with different sensors, scientists are now able to study earthquake precursors better. Thus, the present study aims at developing the algorithm of classic PS-InSAR processing for obtaining crustal deformation values at the epicenters of earthquakes with magnitude larger than 5.0 on the Richter scale and with oblique thrust faulting; then, after calculating temperature values from remotely sensed thermal imagery at the epicenters of the same earthquakes, thermal and crustal deformation anomalies were detected using data mining techniques prior to earthquake occurrence. In the next stage, taking into account the correlation between thermal anomalies and crustal deformation anomalies at the epicenters of the studied earthquakes, an integrated technique was proposed to predict the probable magnitude and time of oblique thrust earthquakes over earthquake-prone areas. Eventually, the validity of the proposed algorithm was evaluated for an earthquake with a different focal mechanism. The analysis of the thermal and crustal deformation anomalies at the epicenter of the April 16, 2016, Japan-Kumamoto earthquake of magnitude 7.0 with strike-slip faulting showed completely different trends than the patterns suggested by the proposed algorithm.
PL
Nowadays, there are many methods and good practices in software engineering aimed at ensuring high quality of the developed software. However, despite the efforts of software developers, projects often contain defects whose removal frequently involves considerable financial and time expenditure. The article presents an example approach to defect prediction in IT projects, based on predictive models built on historical information and product metrics collected from various data repositories.
EN
Nowadays, there are many methods and good practices in software engineering that are aimed at ensuring high quality of the created software. However, despite the efforts of software developers, there are often defects in projects, the removal of which is often associated with large financial and time expenditure. The article presents an example approach to defect prediction in IT projects, based on predictive models built on historical information and product metrics collected from various data repositories.
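A hedged sketch of the kind of predictive model described (not the article's actual models) is shown below: a classifier trained on historical product metrics to flag defect-prone modules. The modules.csv file and the metric/label column names are assumptions.

```python
# Sketch: software defect prediction from product metrics gathered in repositories.
# Assumed input: "modules.csv" with metric columns (e.g. loc, complexity, churn)
# and a binary "defective" label derived from historical bug data.
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

df = pd.read_csv("modules.csv")                      # hypothetical metrics export
X, y = df.drop(columns="defective"), df["defective"]

clf = RandomForestClassifier(n_estimators=200, random_state=0)
scores = cross_val_score(clf, X, y, cv=5, scoring="roc_auc")
print(f"mean ROC AUC: {scores.mean():.3f}")
```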
Statistical Modelling of Emergency Service Responses
EN
Aim: The aim of this article is to demonstrate the applicability of historical emergency-response data – gathered from decision-support systems of emergency services – in emergency-response statistical modelling. Project and methods: Building models of real phenomena is the first step in making and rationalising decisions regarding these phenomena. The statistical modelling presented in this article applies to critical-event response times for emergency services – counted from the moment the event is reported to the beginning of the rescue action by the relevant services, and then until the action is completed and the services are ready for a new rescue action. The ability to estimate these time periods is essential for the rational deployment of rescue services, taking into account the spatial density of (possible) critical events, and for the critical assessment of the readiness of these services. It also allows the assessment of the availability of emergency services, understood as the number of emergency teams which ensure operational effectiveness in the designated area. The article presents the idea of modelling emergency response times, the methods to approximate the distributions of the random variables describing the individual stages, and practical applications of such approximations. Due to editorial limitations, the article includes the results for only one district (powiat – second-level unit of local government and administration in Poland). Results: A number of the solutions proposed in the article can be considered innovative, but special attention should be given to the methodology of isolating random variables recorded in the analysed database as a single random variable. This methodology was repeatedly tested with a positive result. The study was based on data on critical events and emergency response times collected in the computerised decision-support system of the State Fire Service (PSP) in Poland. Conclusions: The method presented in this article of approximating the duration of individual stages of emergency response with theoretical distributions of random variables is largely consistent with the empirical data. It also makes it possible to predict how the system will work in the short term (over a time span of several years). The predictive property of such modelling can be used to optimise the deployment and to determine the capabilities of individual rescue teams. These studies were conducted between 2012 and 2015 as part of a project funded by the National Centre for Research and Development (NCBR), agreement No. DOBR/0015/R/ID1/2012/03.
PL
Aim: The aim of this article is to present the possibility of using historical data on emergency service responses, collected in the decision-support systems of their dispatchers, for statistical modelling of the responses of these services. Project and methods: Building models of real phenomena is the first step in making and rationalising decisions concerning these phenomena. The phenomenon whose (statistical) modelling we present in this article is the response time of emergency services to critical incidents – counted from the moment the event is reported until rescue operations are undertaken by the relevant services, and then until they are completed and readiness for another response is regained. The ability to estimate these times is essential for the rational deployment of emergency services against the spatial density of (possible) critical events and for assessing the readiness of these services. It also allows the availability of emergency services to be estimated, understood as the number of rescue teams ensuring the effectiveness of operations in a designated area. The article presents the idea of modelling emergency response times, methods of approximating the distributions of the random variables describing its individual stages, and their practical use. Due to editorial limitations, the results of analyses are presented for one district (powiat) only. Results: A number of the solutions proposed in the article can be considered innovative, and the methodology for separating random variables recorded in the analysed database as a single random variable deserves particular attention. This methodology was tested repeatedly with a positive result. The study was based on data on critical incidents and emergency response times collected in the computerised decision-support system of the dispatchers of the State Fire Service (PSP) in Poland. Conclusions: The method indicated in this article of approximating the duration of individual stages of the emergency response process with theoretical distributions of random variables allows the operation of this system to be predicted over a short (several-year) time horizon. The predictive property of such modelling can be used to optimise the deployment and determine the capabilities of individual rescue units. This research was carried out as part of a project funded by the National Centre for Research and Development (agreement No. DOBR/0015/R/ID1/2012/03, 2012–2015).
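The core statistical step described above, approximating the duration of a response stage with a theoretical distribution of a random variable, can be sketched as below. The response_times.csv file and the log-normal choice are assumptions for illustration, not the distributions fitted in the project.

```python
# Sketch: fit a theoretical distribution to one stage of the emergency response
# (e.g. travel time in minutes) and check its agreement with the empirical data.
# Assumed input: "response_times.csv" with a "travel_minutes" column.
import pandas as pd
from scipy import stats

times = pd.read_csv("response_times.csv")["travel_minutes"].dropna()

# Fit a log-normal distribution (a common choice for positive, skewed durations).
shape, loc, scale = stats.lognorm.fit(times, floc=0)
print(f"fitted log-normal: shape={shape:.3f}, scale={scale:.3f}")

# Kolmogorov-Smirnov test of the fitted distribution against the sample.
stat, p_value = stats.kstest(times, "lognorm", args=(shape, loc, scale))
print(f"KS statistic = {stat:.3f}, p-value = {p_value:.3f}")
```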
EN
Proper water resources planning and management is based on reliable hydrological data. Missing rainfall and runoff observation data, in particular, can cause serious risks in the planning of hydraulic structures. The hydrological modeling process is quite complex, so using alternative estimation techniques to forecast missing data is reasonable. In this study, two data-driven techniques, Artificial Neural Networks (ANN) and data mining, were investigated in terms of their applicability in hydrological studies. For the ANN part, Feed Forward Back Propagation (FFBPNN) and Generalized Regression Neural Network (GRNN) methods were applied to rainfall-runoff modeling. In addition, hydrological drought analysis was carried out using a data mining technique. The Seyhan Basin was selected for applying these techniques; it is thought that the application of different techniques in the same basin could make a great contribution to the present work. Consequently, FFBPNN was found to be the best ANN model, giving the highest R2 and lowest MSE values. The Multilayer Perceptron (MLP) algorithm was used to predict the drought type according to limit values. This system has been applied to show the relationship between hydrological data and to measure the prediction accuracy of the drought analysis. According to the obtained data mining results, the MLP algorithm gives the best accuracy for the flow observation stations when using 3-month SRI data.
PL
A fundamental aspect of machine learning is the evaluation of the quality of the built models. Careful planning of experiments therefore becomes necessary. It is important to understand the consequences of potential mistakes and oversights. The article presents techniques that can be used in a machine learning experiment. Among others, simple and cross validation – including model selection – as well as a time-based split are described. The advantages and disadvantages of these techniques are presented, taking into account, among other factors, the size of the input database and the type of data.
EN
A key aspect of machine learning is model performance evaluation. Therefore, it is necessary to carefully plan the experiments. There is a need to understand the consequences of potential mistakes or omissions. This paper presents various techniques that can be used in a machine learning experiment. Simple split and cross validation – with or without model selection – as well as time split have been described. The advantages and disadvantages of these techniques have been presented, for example in terms of input database size or data type.
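The techniques discussed above translate directly into standard tooling. The sketch below shows a simple train/test split, cross-validation with model selection, and a time-ordered split on a synthetic dataset; it illustrates the procedures, not the article's experiments.

```python
# Sketch: simple split, cross-validation with model selection, and a time split.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import (GridSearchCV, TimeSeriesSplit,
                                     cross_val_score, train_test_split)

X, y = make_classification(n_samples=1000, random_state=0)

# 1) Simple (hold-out) split.
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=0)
model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print("hold-out accuracy:", model.score(X_test, y_test))

# 2) Cross-validation with model selection (hyper-parameter search inside each fold).
search = GridSearchCV(LogisticRegression(max_iter=1000), {"C": [0.1, 1.0, 10.0]}, cv=5)
print("cross-validation accuracy:", cross_val_score(search, X, y, cv=5).mean())

# 3) Time split: folds respect the temporal order of the samples.
print("time-split accuracy:",
      cross_val_score(LogisticRegression(max_iter=1000), X, y,
                      cv=TimeSeriesSplit(n_splits=5)).mean())
```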
EN
The problem of decision evaluation is considered, which consists in selecting, from the set of possible decisions, those that meet the decision-maker's preferences. The added value of solving this problem lies in reducing the number of decisions one can choose from. The evaluation of decisions is based on their complete characteristics, rather than on a pre-defined quality indicator. The basis for the quality assessment is a set of given pattern examples of decisions made – decisions that the decision-maker has found to be exemplary or acceptable. They are used as defining his preferences. The methods proposed in this article concern the ordering and clustering of decisions based on their characteristics. The set of decisions selected by an algorithm is interpreted as recommended for the decision-maker. The presented solutions can find a variety of applications, for example in investment planning, routing, diagnostics or searching through multimedia databases.
PL
The problem of decision evaluation is considered, which consists in selecting, from among the possible decisions, those that meet the decision-maker's preferences. The usefulness of solving this problem lies in reducing the number of decisions that can be chosen. Decision evaluation is based on their complete characteristics rather than on a previously defined quality indicator. The basis for the quality assessment is provided by exemplary (pattern) decisions – decisions that the decision-maker has found to be excellent or acceptable. The examples indicated by the decision-maker are used as defining his preferences. The methods proposed in the article concern the ordering and clustering of decisions based on their characteristics. The selected set of decisions is interpreted as recommended for the decision-maker. The presented solutions can find various applications, e.g. in investment planning, routing, diagnostics or searching multimedia databases.
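One elementary way to realize the ordering step described above (a hedged sketch, not the article's algorithms) is to rank candidate decisions by the distance of their characteristics to the nearest exemplary decision indicated by the decision-maker; the arrays below are hypothetical.

```python
# Sketch: order candidate decisions by the similarity of their characteristics
# to exemplary (pattern) decisions accepted by the decision-maker.
import numpy as np
from sklearn.metrics import pairwise_distances

# Hypothetical characteristics: rows = decisions, columns = normalized criteria.
candidates = np.array([[0.2, 0.9, 0.4],
                       [0.8, 0.1, 0.7],
                       [0.3, 0.8, 0.5],
                       [0.9, 0.2, 0.1]])
exemplars = np.array([[0.25, 0.85, 0.45]])       # decisions the decision-maker accepted

# Distance of each candidate to its nearest exemplar; smaller = more preferred.
dist_to_nearest = pairwise_distances(candidates, exemplars).min(axis=1)
ranking = np.argsort(dist_to_nearest)
print("recommended order of decisions:", ranking)   # candidate indices, best first
```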
PL
The article presents the possibility of using statistical methods that automate the selection of explanatory variables, on the example of the daily load of the National Power System (NPS). Automation allows the cost of purchasing input forecasts to be optimised by minimising their number, and the results obtained additionally reduce the workload associated with selecting input parameters (explanatory variables) for the subsequent preparation of NPS daily load forecasts.
EN
The paper presents the possibility of using statistical methods to automate the selection of explanatory variables, on the example of the daily load of the National Power System (NPS). With automation, the cost of purchasing input forecasts may be optimized by minimizing their number, and the results also allow for a reduction in the effort required to select input parameters (explanatory variables) for later forecasting of NPS daily loads.
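A hedged sketch of such automated selection of explanatory variables (not the paper's procedure) is shown below, using an L1-regularized regression whose cross-validated fit drops uninformative inputs; the load.csv file and candidate variable names are assumptions.

```python
# Sketch: automatic selection of explanatory variables for daily load forecasting.
# Assumed input: "load.csv" with the target "daily_load" and candidate explanatory
# variables (e.g. temperature forecasts, calendar features, lagged loads).
import pandas as pd
from sklearn.linear_model import LassoCV
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

df = pd.read_csv("load.csv")                       # hypothetical data set
X, y = df.drop(columns="daily_load"), df["daily_load"]

model = make_pipeline(StandardScaler(), LassoCV(cv=5))
model.fit(X, y)

# Variables with non-zero coefficients are kept as explanatory variables.
coefs = pd.Series(model.named_steps["lassocv"].coef_, index=X.columns)
selected = coefs[coefs != 0].sort_values(key=abs, ascending=False)
print("selected explanatory variables:")
print(selected)
```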