Results found: 174

Search results
Searched for:
keywords: eksploracja danych (data mining)
EN
Purpose: Diabetes is a chronic disease that accounts for a large share of national healthcare expenditure, since people with diabetes require continuous medical care. Several complications occur if the polygenic disorder remains untreated and unrecognized, so diagnosis typically requires a diagnostic centre and a physician's attention, and detecting the first phase of the disorder is an essential real-world problem. This work surveys studies that have analysed several parameters of polygenic disorder diagnosis, showing that classification algorithms play an important role in automating the analysis, alongside other machine learning methods. Design/methodology/approach: This paper provides an extensive survey of different approaches that have been used for the analysis of medical data for the purpose of early detection of polygenic disorder. It takes into consideration methods such as J48, CART, SVM and k-NN, conducts a formal survey of all the studies, and provides a conclusion at the end. Findings: The survey shows that classification algorithms play an important role in automating polygenic disorder analysis, as do other machine learning methods. Practical implications: This paper will help future researchers in the field of healthcare, specifically in the domain of diabetes, to understand the differences between classification algorithms. Originality/value: This paper will help in comparing machine learning algorithms by going through published results and selecting an appropriate approach based on requirements.
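The classifiers the survey compares (J48, CART, SVM, k-NN) all follow the same train-then-predict pattern; the simplest of them, k-NN, can be sketched in a few lines. Below is a minimal NumPy sketch on synthetic two-class data, not any dataset from the surveyed studies:

```python
import numpy as np

def knn_predict(X_train, y_train, X_test, k=3):
    """Classify each test point by majority vote among its k nearest neighbours."""
    # Euclidean distance from every test point to every training point
    d = np.linalg.norm(X_test[:, None, :] - X_train[None, :, :], axis=2)
    nn = np.argsort(d, axis=1)[:, :k]          # indices of the k closest points
    votes = y_train[nn]                        # their class labels
    return np.array([np.bincount(v).argmax() for v in votes])

# toy two-class data: class 0 clustered near (0, 0), class 1 near (5, 5)
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 1, (20, 2)), rng.normal(5, 1, (20, 2))])
y = np.array([0] * 20 + [1] * 20)
pred = knn_predict(X, y, np.array([[0.2, 0.1], [4.8, 5.1]]))
```

In practice the algorithms are taken from libraries (e.g. Weka's J48, or scikit-learn's decision trees, SVMs and nearest-neighbour classifiers); the survey's contribution is comparing them, not implementing them.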
EN
Purpose: The main purpose of the article was to present the results of an analysis of the after-sales service process using data mining, based on data gathered in an authorized car service station. Following the completed literature review and the identification of cognitive gaps, two research questions (RQ) were formulated. RQ1: Does after-sales service meet the parameters of the business process category? RQ2: Is after-sales service characterized by trends, or is it seasonal in nature? Design/methodology/approach: The following research methods were used in the study: quantitative bibliographic analysis, systematic literature review, participant observation and statistical methods. The theoretical and empirical study used the R programming language and Gretl software. Findings: Based on a relational database designed for the purpose of the research procedure, the article presents the results of analyses of the service sales structure, sales dynamics, and trend and seasonality. As a result of the research procedure, the effects of the after-sales service process were presented in terms of quantity and value (amount). In addition, it has been shown that after-sales service should be identified in the business process category. Originality/value: The article uses data mining and the R programming language to analyze the effects generated in after-sales service on a complete sample of 13,418 repairs completed in 2013-2018. On the basis of the empirical study, the structure of the customer-supplier relationship was reconstructed in external and internal terms for the examined organization. In addition, the possibilities of using data generated from the domain system were characterized, and further research directions as well as application recommendations in the area of after-sales services were presented.
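The trend and seasonality analyses above were carried out in R and Gretl, but the core idea of a seasonal index is tool-independent. A minimal NumPy sketch on invented monthly repair counts (illustrative only, not the paper's data):

```python
import numpy as np

def seasonal_indices(series, period=12):
    """Mean of each within-cycle position relative to the overall mean (>1 = busy month)."""
    series = np.asarray(series, dtype=float)
    n = len(series) // period * period         # drop an incomplete final cycle
    cycles = series[:n].reshape(-1, period)
    return cycles.mean(axis=0) / series[:n].mean()

# synthetic monthly repair counts: upward trend plus a winter peak
months = np.arange(48)
repairs = 100 + 2 * months + 30 * np.cos(2 * np.pi * months / 12)
idx = seasonal_indices(repairs)
# idx[0] (January) exceeds idx[6] (July): the winter peak survives the trend
```

A production analysis would first remove the trend (e.g. by differencing or moving averages) before computing the indices; the sketch keeps both effects to show that a strong seasonal signal is still visible.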
3
Analiza dużych zbiorów danych (Analysis of large data sets)
EN
Nowadays, almost every process, including production, generates huge amounts of data. We collect the data, but do we know how to use it properly? This is particularly important in the context of Industry 4.0, where data is the most important "raw material" and its effective use is crucial, mainly because of the knowledge that can be extracted from it.
EN
The article discusses possible applications of logic synthesis methods in data mining tasks. In particular, the method of attribute reduction and the method of decision rule induction are considered. It is shown that by applying specialized logic synthesis methods, these procedures can be effectively improved and successfully used for solving more general data mining tasks. To justify this approach, the diagnosis of patients with the possibility of eliminating troublesome tests is discussed.
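The flavour of decision-rule induction with attribute elimination can be illustrated with OneR, one of the simplest rule-induction schemes; this is a generic stand-in, not the logic-synthesis method the article develops. OneR keeps the single attribute whose value-to-class rule makes the fewest errors and implicitly discards the rest, much like eliminating troublesome tests:

```python
from collections import Counter, defaultdict

def one_r(rows, target):
    """Pick the attribute whose value -> majority-class rule makes the fewest errors."""
    attrs = [a for a in rows[0] if a != target]
    best = None
    for a in attrs:
        groups = defaultdict(Counter)          # attribute value -> class counts
        for r in rows:
            groups[r[a]][r[target]] += 1
        # an example is an error when its class is not the majority for its value
        errors = sum(sum(c.values()) - max(c.values()) for c in groups.values())
        rule = {v: c.most_common(1)[0][0] for v, c in groups.items()}
        if best is None or errors < best[1]:
            best = (a, errors, rule)
    return best

# invented diagnostic table: symptom/test values -> diagnosis
table = [
    {"fever": "yes", "cough": "no",  "flu": "yes"},
    {"fever": "yes", "cough": "yes", "flu": "yes"},
    {"fever": "no",  "cough": "yes", "flu": "no"},
    {"fever": "no",  "cough": "no",  "flu": "no"},
]
attr, errs, rule = one_r(table, "flu")
```

On this toy table the "fever" test alone predicts the class perfectly, so the "cough" test could be dropped, which is the intuition behind attribute reduction.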
EN
In this paper we tackle the problem of vehicle re-identification in a camera network utilizing triplet embeddings. Re-identification is the problem of matching appearances of objects across different cameras. With the proliferation of surveillance cameras enabling smart and safer cities, there is an ever-increasing need to re-identify vehicles across cameras. Typical challenges arising in smart city scenarios include variations of viewpoints, illumination and self-occlusions. Most successful approaches for re-identification involve (deep) learning an embedding space such that vehicles of the same identity are projected closer to one another than vehicles of different identities. Popular loss functions for learning an embedding space include the contrastive and triplet losses. In this paper we provide an extensive evaluation of the triplet loss applied to vehicle re-identification and demonstrate that recently proposed sampling approaches for mining informative data points outperform most existing state-of-the-art approaches for vehicle re-identification. Compared to most existing state-of-the-art approaches, our approach is simpler and more straightforward to train, utilizing only identity-level annotations, and has one of the smallest published embedding dimensions for efficient inference. Furthermore, in this work we introduce a formal evaluation of a triplet sampling variant (batch sample) into the re-identification literature. In addition to the conference version [24], this submission adds extensive experiments on newly released datasets, cross-domain evaluations and ablation studies.
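The triplet loss at the heart of this line of work is easy to state: the anchor-positive distance must undercut the anchor-negative distance by a margin. A minimal NumPy sketch with hand-made 2-D embeddings (real systems use learned deep embeddings and batch-level sampling such as the batch-sample variant above):

```python
import numpy as np

def triplet_loss(anchor, positive, negative, margin=0.2):
    """Hinge on the gap between positive and negative embedding distances."""
    d_pos = np.linalg.norm(anchor - positive, axis=-1)  # same identity
    d_neg = np.linalg.norm(anchor - negative, axis=-1)  # different identity
    return np.maximum(0.0, d_pos - d_neg + margin)

a = np.array([[0.0, 0.0]])
p = np.array([[0.1, 0.0]])   # same identity, embedded close to the anchor
n = np.array([[1.0, 0.0]])   # different identity, embedded far away
loss = triplet_loss(a, p, n)
```

When the embedding already separates the identities by more than the margin, as here, the loss is zero and the triplet contributes no gradient; informative-triplet mining exists precisely to avoid wasting batches on such easy triplets.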
EN
The methodology of automatic knowledge discovery about metal production and processing includes problems related to (1) data acquisition and integration with a view to further exploration, (2) selection and adaptation of machine learning methods - rule induction, quantitative and qualitative variable prediction, (3) formalization of knowledge in appropriate representations: rules, fuzzy sets, rough sets and, finally, description logic, and (4) integration of knowledge in repositories described by semantic models, i.e. ontologies. The author presented the possibility of achieving a balance between ease of use and precision when acquiring knowledge from small data sets. Research has shown that decision trees are a convenient tool for discovering knowledge and deal well with strongly non-linear problems, and that introducing discretization improves their operation. The use of cluster analysis methods also made it possible to draw more general conclusions, which proved the thesis that granulation of information allows patterns to be found even in small data sets. As part of the research, a procedure was developed for analyzing small experimental data sets for multistage, multivariate & multivariable models, which can greatly simplify such research in the future.
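The discretization step reported to improve the decision trees can be as simple as equal-width binning, sketched below on made-up process temperatures; the study's actual discretization scheme is not specified here:

```python
import numpy as np

def equal_width_bins(x, n_bins=4):
    """Replace continuous values with integer bin labels over [min, max]."""
    edges = np.linspace(x.min(), x.max(), n_bins + 1)
    # compare against interior edges only, so labels run 0 .. n_bins-1
    return np.digitize(x, edges[1:-1])

# invented process temperatures granulated into four equal-width bins
temps = np.array([20.0, 450.0, 480.0, 900.0, 910.0, 1200.0])
labels = equal_width_bins(temps)
```

Granulating 450 and 480 into the same bin is exactly the kind of information granulation that lets a tree split on "medium temperature" rather than memorising individual measurements from a small data set.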
EN
Is it possible to predict the location, time and magnitude of earthquakes by identifying their precursors in remotely sensed data? Earthquakes are usually preceded by unusual natural incidents that are considered earthquake precursors. With recent advances in remote sensing techniques, which make it possible to monitor the Earth's surface with different sensors, scientists are now able to study earthquake precursors more thoroughly. The present study therefore develops the classic PS-InSAR processing algorithm to obtain crustal deformation values at the epicentres of earthquakes with magnitude larger than 5.0 on the Richter scale and with oblique thrust faulting. After calculating temperature values from remotely sensed thermal imagery at the epicentres of the same earthquakes, thermal and crustal deformation anomalies preceding the earthquakes were calculated using data mining techniques. In the next stage, taking into account the correlation between thermal anomalies and crustal deformation anomalies at the epicentres of the studied earthquakes, an integrated technique was proposed to predict the probable magnitude and time of oblique thrust earthquakes over earthquake-prone areas. Eventually, the validity of the proposed algorithm was evaluated for an earthquake with a different focal mechanism. The analysis of the thermal and crustal deformation anomalies at the epicentre of the April 16, 2016, Japan-Kumamoto earthquake of magnitude 7.0 with strike-slip faulting showed completely different trends than the patterns suggested by the proposed algorithm.
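The anomaly-calculation step can be illustrated with the simplest possible detector, a z-score threshold on a residual series; this is a generic stand-in, not the specific data mining technique used in the study:

```python
import numpy as np

def anomalies(series, z=2.0):
    """Indices where the value deviates from the mean by more than z standard deviations."""
    s = np.asarray(series, dtype=float)
    return np.where(np.abs(s - s.mean()) > z * s.std())[0]

# invented daily thermal residuals with one pre-event spike at day 5
residuals = np.array([0.1, -0.2, 0.0, 0.3, -0.1, 4.5, 0.2, -0.3])
flagged = anomalies(residuals)
```

The same thresholding idea applies to crustal deformation series; the study's contribution is correlating the two anomaly streams, not the per-series detector.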
8
Statistical Modelling of Emergency Service Responses
EN
Aim: The aim of this article is to demonstrate the applicability of historical emergency-response data - gathered from decision-support systems of emergency services - in emergency-response statistical modelling. Project and methods: Building models of real phenomena is the first step in making and rationalising decisions regarding these phenomena. The statistical modelling presented in this article applies to critical-event response times for emergency services - counted from the moment the event is reported to the start of the rescue action by the relevant services, and then until the action is completed and the services are ready for another rescue action. The ability to estimate these time periods is essential for the rational deployment of rescue services, taking into account the spatial density of (possible) critical events, and for the critical assessment of the readiness of these services. It also allows the assessment of the availability of emergency services, understood as the number of emergency teams which ensure operational effectiveness in the designated area. The article presents the idea of modelling emergency response times, the methods to approximate the distribution of the random variables describing the individual stages, and practical applications of such approximations. Due to editorial limitations, the article includes results only for one district (powiat - second-level unit of local government and administration in Poland). Results: A number of solutions proposed in the article can be considered innovative, but special attention should be given to the methodology for isolating random variables recorded in the analysed database as a single random variable. This methodology was repeatedly tested with a positive result. The study was based on data on critical events and emergency response times collected in the computerised decision-support system of the State Fire Service (PSP) in Poland.
Conclusions: The method presented in this article for approximating the duration of individual stages of emergency response with theoretical distributions of random variables is largely consistent with the empirical data. It also makes it possible to predict how the system will work in the short term (over a time span of several years). The predictive property of such modelling can be used to optimise deployment and to determine the capabilities of individual rescue teams. These studies were conducted between 2012 and 2015 as part of a project funded by the National Centre for Research and Development (NCBR), agreement No. DOBR/0015/R/ID1/2012/03.
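Response-time stages are commonly approximated with skewed positive distributions such as the lognormal; below is a minimal sketch of fitting one by log-moments (its maximum-likelihood estimator) on synthetic times. The article does not prescribe this particular distribution for every stage:

```python
import numpy as np

def fit_lognormal(times):
    """Log-moment (maximum likelihood) fit of a lognormal to positive durations."""
    logs = np.log(times)
    return logs.mean(), logs.std()

# synthetic travel-time stage, in minutes, drawn from a known lognormal
rng = np.random.default_rng(1)
sample = rng.lognormal(mean=1.5, sigma=0.4, size=5000)
mu, sigma = fit_lognormal(sample)   # recovers roughly 1.5 and 0.4
```

Once each stage has a fitted distribution, the whole response time can be simulated by summing draws from the stage distributions, which is what makes the short-term predictions mentioned above possible.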
EN
The article considers the problem of classification based on given examples of classes. As the feature vector, a complete characteristic of the object is assumed. The peculiarity of the problem being solved is that the number of examples of a class may be smaller than the dimension of the feature vector, and most of the coordinates of the feature vector may be correlated. As a consequence, the feature covariance matrix calculated for the cluster of examples may be singular or ill-conditioned. This prevents the direct use of metrics based on this covariance matrix. The article presents a regularization method involving the additional use of statistical properties of the environment.
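The core difficulty (fewer examples than features makes the covariance matrix singular) and a shrinkage-style cure can be sketched directly; the blend-with-identity regularizer below is a common generic choice, not necessarily the environment-based regularization the article proposes:

```python
import numpy as np

def shrunk_covariance(X, alpha=0.1):
    """Blend the sample covariance with a scaled identity so it stays invertible."""
    S = np.cov(X, rowvar=False)
    target = np.trace(S) / S.shape[0] * np.eye(S.shape[0])
    return (1 - alpha) * S + alpha * target

# fewer examples (3) than features (5): the plain covariance is rank-deficient
rng = np.random.default_rng(2)
X = rng.normal(size=(3, 5))
S = np.cov(X, rowvar=False)
Sr = shrunk_covariance(X)
```

With three examples the sample covariance has rank at most two, so a Mahalanobis-style metric cannot be formed from it; the shrunk matrix is full-rank and invertible.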
EN
The problem of decision evaluation is considered, which consists in selecting from the set of possible decisions those that meet the decision-maker's preferences. The added value of solving this problem lies in reducing the number of decisions one can choose from. Evaluation of decisions is based on their complete characteristics, rather than on a pre-defined quality indicator. The basis for the quality assessment is a set of given pattern examples of decisions made - decisions that the decision-maker has found to be exemplary or acceptable. They are used to define the decision-maker's preferences. The methods proposed in this article concern the ordering and clustering of decisions based on their characteristics. The set of decisions selected by an algorithm is interpreted as recommended for the decision-maker. The presented solutions can find a variety of applications, for example in investment planning, routing, diagnostics or searching through multimedia databases.
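Ordering candidate decisions by similarity to the decision-maker's pattern examples can be sketched as nearest-exemplar ranking over the decision characteristics; this is an illustrative reduction of the article's methods, with invented 2-D characteristics:

```python
import numpy as np

def rank_by_preference(candidates, exemplars):
    """Order candidate decisions by distance to the nearest exemplar decision."""
    d = np.linalg.norm(candidates[:, None, :] - exemplars[None, :, :], axis=2)
    score = d.min(axis=1)          # closeness to the closest pattern example
    return np.argsort(score)       # most similar (most recommended) first

exemplars = np.array([[1.0, 1.0]])                          # decisions the decision-maker accepted
candidates = np.array([[0.9, 1.1], [3.0, 3.0], [1.5, 0.5]])  # decisions to be ranked
order = rank_by_preference(candidates, exemplars)
```

Truncating the ranking at a similarity threshold yields the reduced, recommended subset of decisions; clustering the characteristics instead gives the grouping variant mentioned above.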
EN
The underground mining process can be analysed with a data-oriented or a process-oriented approach. The first is popular and widely known as data mining, while the second is still rarely used in the conditions of mining companies. The aim of this paper is an overview of data mining and process mining applications in the underground mining domain, and an investigation of the most popular analytic techniques used in the defined analytic perspectives ("Diagnostics and machinery", "Geomechanics", "Hazards", "Mine planning and safety"). Two research questions are formulated: RQ1: What are the most popular data mining/process mining tasks in the analysis of the underground mining process? RQ2: What are the most popular data mining/process mining techniques applied in the multi-perspective analysis of the underground mining process? Sixty-two published articles on data mining tasks and analytic techniques in this domain have been analysed. The results show that predominantly predictive tasks were formulated for the analysed phenomena, with a strong overrepresentation of the classification task. The most frequently used data mining algorithms comprise artificial neural networks, decision trees, rule induction and regression. Only a few applications of process mining to the analysis of the underground mining process have been found; they are briefly described in the paper.
EN
Finding clusters in high dimensional data is a challenging research problem. Subspace clustering algorithms aim to find clusters in all possible subspaces of the dataset, where a subspace is a subset of dimensions of the data. But the exponential increase in the number of subspaces with the dimensionality of data renders most of the algorithms inefficient as well as ineffective. Moreover, these algorithms have ingrained data dependency in the clustering process, which means that parallelization becomes difficult and inefficient. SUBSCALE is a recent subspace clustering algorithm which is scalable with the dimensions and contains independent processing steps which can be exploited through parallelism. In this paper, we aim to leverage the computational power of widely available multi-core processors to improve the runtime performance of the SUBSCALE algorithm. The experimental evaluation shows linear speedup. Moreover, we develop an approach using graphics processing units (GPUs) for fine-grained data parallelism to accelerate the computation further. First tests of the GPU implementation show very promising results.
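The property SUBSCALE exploits - that the per-dimension work is independent - is what makes parallelization natural. A toy sketch: each dimension's 1-D dense groups are found independently and mapped across a thread pool. The gap-based splitting below is a simplification standing in for SUBSCALE's actual density computation:

```python
from concurrent.futures import ThreadPoolExecutor
import numpy as np

def cluster_1d(column, eps=0.5):
    """Sizes of dense 1-D groups: split the sorted values at gaps larger than eps."""
    s = np.sort(column)
    breaks = np.where(np.diff(s) > eps)[0]
    return [len(g) for g in np.split(s, breaks + 1)]

# dimension 0 has two dense groups (around 0 and around 5); dimension 1 has one
rng = np.random.default_rng(3)
data = np.column_stack([
    np.concatenate([rng.normal(0, 0.1, 50), rng.normal(5, 0.1, 50)]),
    rng.normal(0, 0.1, 100),
])
# each dimension is processed independently, so the step parallelizes trivially
with ThreadPoolExecutor() as pool:
    sizes = list(pool.map(cluster_1d, data.T))
```

Because no dimension's result depends on another's, the same map distributes across processes or, as in the paper's extension, across GPU threads for fine-grained data parallelism.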
EN
The paper describes a system for monitoring and diagnosing a gantry crane. The main goal of the system is to acquire, visualize and monitor the vibration levels of the crane's crucial elements. The system is also equipped with a computing and analytical part which enables predictive maintenance based on vibration level assessment. The system architecture can be used in other applications too, e.g. those which require a wireless network of vibration sensors to carry out diagnostic tasks.
EN
Detecting and distinguishing vehicles with a maximum permissible weight of up to 3.5 tonnes, as required among others by the TLS 8+1 classification, is often a relatively complex process due to the similar dimensions of selected vehicle groups, and it requires extensive classification methods. Detection of commercial vans is particularly important: their parameters are similar to those of lorries, and their incorrect classification, e.g. in weigh-in-motion systems, results in a lack of information on exceeding the permissible total weight. The article presents the selected classification method and its effectiveness.
EN
The properties of a hypoeutectic Al-Si alloy (silumin) with the addition of elements such as Cr, Mo, V and W are described. Changes in the silumin microstructure under the influence of these elements result in changed mechanical properties. The research includes a procedure for acquiring knowledge about these changes directly from experimental results using mixed data mining techniques. A procedure for analyzing small sets of experimental data for multistage, multivariate and multivariable models has been developed; its use can greatly simplify such research in the future. An interesting achievement is the development of a voting procedure based on the results of classification trees and cluster analysis.
EN
Blended food is currently a common menu item in fast-food restaurants. Sales in the fast-food industry grow thanks to several sales strategies, including "combos", and specialty, regional, family and buffet restaurants are joining combo promotions as well. This paper presents the implementation of a system that supports composing combos according to diners' preferences, using data mining techniques to find relationships between the different dishes offered in a restaurant. The software resulting from this research is used by the mobile application Food Express, with which it communicates through web services.
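Finding dishes that sell together is classic market-basket analysis; a minimal frequent-pair counter with a support threshold sketches the idea (the orders and dish names below are invented, and a real system would use a full association-rule miner such as Apriori):

```python
from itertools import combinations
from collections import Counter

def frequent_pairs(orders, min_support=0.3):
    """Count co-occurring dish pairs and keep those above a support threshold."""
    pairs = Counter()
    for order in orders:
        pairs.update(combinations(sorted(order), 2))
    n = len(orders)
    return {p: c / n for p, c in pairs.items() if c / n >= min_support}

# toy order history: each set is one diner's ticket
orders = [
    {"burger", "fries", "soda"},
    {"burger", "fries"},
    {"salad", "soda"},
    {"burger", "fries", "salad"},
]
combos = frequent_pairs(orders)
```

Pairs that clear the support threshold (here burger and fries, appearing together in 3 of 4 orders) are natural candidates for combo promotions.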
EN
This study utilizes citation analysis and automated topic analysis of papers published in the International Conference on Agile Software Development (XP) from 2002 to 2018. We collected data from the Scopus database, finding 789 XP papers. We performed topic and trend analysis with R/RStudio utilizing a text mining approach, and used MS Excel for the quantitative analysis of the data. The results show that the first five years of the XP conference cover nearly 40% of the papers published until now, and almost 62% of the XP papers are cited at least once. Mining XP conference paper titles and abstracts yields these hot research topics: "Coordination", "Technical Debt", "Teamwork", "Startups" and "Agile Practices", thus strongly focusing on practical issues. The results also highlight the most influential researchers and institutions. The approach applied in this study can be extended to other software engineering venues and applied to large-scale studies.
18
Medical prescription classification: a NLP-based approach
EN
The digitization of healthcare data has been consolidated in the last decade as a must for managing the vast amount of data generated by healthcare organizations. Carrying out this process effectively is an enabling resource that will improve healthcare service provision, as well as leading-edge related applications, ranging from clinical text mining to predictive modelling, survival analysis, patient similarity, genetic data analysis and many others. The application presented in this work concerns the digitization of medical prescriptions, whether to provide authorization for healthcare services or to grant reimbursement for medical expenses. The proposed system first extracts text from scanned medical prescriptions; Natural Language Processing and machine learning techniques then provide effective classification, exploiting embedded terms and categories covering patient/doctor personal data, symptoms, pathology, diagnosis and suggested treatments. A RESTful Web Service is introduced, together with the results of prescription classification over a set of 800K+ diagnostic statements.
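A drastically simplified version of such a prescription classifier - per-class token counts with a highest-overlap decision - can be sketched as follows. The categories and text snippets are invented, and the real system uses proper NLP features rather than raw token counts:

```python
from collections import Counter

def train(examples):
    """Accumulate per-class token counts from labelled prescription snippets."""
    model = {}
    for text, label in examples:
        model.setdefault(label, Counter()).update(text.lower().split())
    return model

def classify(model, text):
    """Pick the class whose known tokens overlap the input the most."""
    tokens = text.lower().split()
    return max(model, key=lambda c: sum(model[c][t] for t in tokens))

# invented labelled snippets standing in for digitized prescriptions
examples = [
    ("chest x-ray two projections", "diagnostics"),
    ("mri scan of the knee", "diagnostics"),
    ("refund for insulin purchase", "reimbursement"),
    ("refund request for hearing aid", "reimbursement"),
]
model = train(examples)
```

Even this crude scorer routes "insulin refund claim" to reimbursement and "x-ray of the chest" to diagnostics; at the scale of 800K+ statements, TF-IDF weighting and a trained classifier replace the raw counts.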
19
EN
The paper presents the implementation and use of an IT system deployed in the Department of Pulmonology of the University Hospital in Cracow. The system integrates data from heterogeneous sources on the therapy, diagnosis and medical test results of patients with Obstructive Sleep Apnea (OSA). The article presents the main architectural assumptions of the system, as well as an example of data mining analyses based on the data served by the system. The example aims to present the possibilities offered by the integration of clinical data in telemedicine and in the diagnosis of patients with sleep-disordered breathing, which may lead to certain comorbidities and premature death.
EN
Online registers contain a large amount of data about healthcare providers in the Czech Republic. The information is available to all citizens and can be useful to patients, governmental organisations and employers. Based on these data, we are able to create a high-quality snapshot of the current state of healthcare providers. Interconnecting data from multiple data sources is an interesting task, and accomplishing it enables us to ask more complex questions. This paper focuses on answering several questions about dentists in our country. A dataset from one online database was created using automated data mining methods, followed by analysis. The results are presented via an online tool, which was provided to the owners of the data. They reviewed our results and decided to use our findings for a presentation to the Czech government and subsequent negotiations. The paper describes the methods used, shows selected results and outlines possibilities for further work.