Results found: 136

Search results
Searched for:
in keywords: Machine learning
How do socio-economic change and the technological revolution alter the way we manage people? How does the development of artificial intelligence (AI) affect the process of talent acquisition? The author presents the concepts of technological unemployment, the creative class, millennials (Generation Y), humanistic management, sustainable development, CSR and new managerial models in light of current social changes. Humanistic management as a broader concept, and humanistic talent attraction as its direct implication, are presented as an answer to current technological development. The author addresses the narrower topic of human resources management but sees potential in it to develop a broader discussion on the future of work. (original abstract)
2
Supervised learning methods are powerful techniques to learn a function from a given set of labeled data, the so-called training data. In this paper the support vector machines approach is applied to an image classification task. Starting with the corresponding Tikhonov regularization problem, reformulated as a convex optimization problem, we introduce a conjugate dual problem to it and prove that, whenever strong duality holds, the function to be learned can be expressed via the dual optimal solutions. Corresponding dual problems are then derived for different loss functions. The theoretical results are applied by numerically solving a classification task using high dimensional real-world data in order to obtain optimal classifiers. The results demonstrate the excellent performance of support vector classification for this particular problem.
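The primal counterpart of the dual problem discussed above can be sketched in a few lines. The following is a minimal illustration (not the paper's method, which works with the conjugate dual and various loss functions): a linear soft-margin SVM trained by subgradient descent on the Tikhonov-regularized hinge loss, min_w C * sum_i max(0, 1 - y_i <w, x_i>) + ||w||^2 / 2.

```python
# Minimal sketch of a linear soft-margin SVM, trained by per-sample
# subgradient descent on the regularized hinge loss (illustrative only).

def train_linear_svm(X, y, C=1.0, lr=0.01, epochs=200):
    """X: list of feature lists, y: labels in {-1, +1}."""
    w = [0.0] * len(X[0])
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            margin = yi * sum(wj * xj for wj, xj in zip(w, xi))
            if margin < 1:
                # hinge term is active: subgradient is w - C * y_i * x_i
                w = [wj - lr * (wj - C * yi * xj) for wj, xj in zip(w, xi)]
            else:
                # only the regularizer contributes: shrink w
                w = [wj - lr * wj for wj in w]
    return w

def predict(w, x):
    return 1 if sum(wj * xj for wj, xj in zip(w, x)) >= 0 else -1

# Toy linearly separable data
X = [[2.0, 2.0], [1.5, 2.5], [-2.0, -1.0], [-1.0, -2.0]]
y = [1, 1, -1, -1]
w = train_linear_svm(X, y)
print(predict(w, [2.0, 1.0]))   # 1
print(predict(w, [-2.0, -2.0])) # -1
```

In practice the dual formulation studied in the paper is preferred because it admits kernels; the sketch above only shows the linear primal view.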
3
Computer-aided talent identification in sport
This paper presents the concept of a computer decision support system for talent identification in sport. The concept assumes the use of two methods: pattern recognition based on multi-criteria optimization, and a supervised machine-learning classification algorithm, the decision forest. The data for building sport-discipline patterns were obtained from a publication (Santos, Dawson, Matias et al. 2014); these data were also used to generate test data sets of athletes for the research experiments. The experiments were carried out in the author's own application and in the Microsoft Azure Machine Learning Studio cloud environment. The results show that both methods can be successfully used for talent identification in sport. (original abstract)
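A decision forest of the kind the abstract mentions can be illustrated in miniature. The sketch below is my own toy illustration, not the paper's system: a "forest" of depth-1 decision trees (stumps), each fitted on a bootstrap sample, combined by majority vote; the feature names are invented.

```python
# Tiny decision forest: majority vote over decision stumps fitted on
# bootstrap samples (illustrative sketch, binary labels 0/1).
import random

def fit_stump(X, y):
    best = None  # (error, feature, threshold, left_label, right_label)
    for f in range(len(X[0])):
        for t in sorted(set(row[f] for row in X)):
            for left in (0, 1):
                right = 1 - left
                err = sum((left if row[f] <= t else right) != yi
                          for row, yi in zip(X, y))
                if best is None or err < best[0]:
                    best = (err, f, t, left, right)
    return best[1:]

def stump_predict(stump, x):
    f, t, left, right = stump
    return left if x[f] <= t else right

def fit_forest(X, y, n_trees=31, seed=0):
    rng = random.Random(seed)
    forest = []
    for _ in range(n_trees):
        idx = [rng.randrange(len(X)) for _ in range(len(X))]
        forest.append(fit_stump([X[i] for i in idx], [y[i] for i in idx]))
    return forest

def forest_predict(forest, x):
    votes = [stump_predict(s, x) for s in forest]
    return max(set(votes), key=votes.count)

# Toy "athlete profiles": [sprint_speed, endurance]; class 1 = sprinter
X = [[9.0, 3.0], [8.5, 2.5], [4.0, 9.0], [3.5, 8.0]]
y = [1, 1, 0, 0]
forest = fit_forest(X, y)
print(forest_predict(forest, [9.0, 3.0]))  # 1 (matches the training label)
print(forest_predict(forest, [3.5, 8.0]))  # 0
```

A production decision forest grows deeper randomized trees, but the bootstrap-plus-voting structure is the same.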
Feature selection plays a vital role in the processing pipeline of today's data science applications and is a crucial step of the overall modelling process. Given the multitude of possibilities for extracting large and highly structured data in various fields, it remains a serious issue in machine learning, with no optimal solution proposed so far. In recent years, methods based on concepts derived from information theory have attracted particular attention, eventually introducing a general framework to follow. The criterion developed by the author et al., IIFS (Interaction Information Feature Selection), extended state-of-the-art methods by adopting interactions of higher order, both 3-way and 4-way. In this article, data from an industrial site were carefully selected in order to benchmark this approach against others. The results clearly show that including side effects in IIFS can significantly reorder the output set of features and improve the overall error estimate for the selected classifier. (original abstract)
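The first-order building block that interaction-information criteria such as IIFS extend is the mutual information I(X; Y) between a feature and the class label. A minimal sketch (my illustration, not the IIFS code) of ranking discrete features by mutual information:

```python
# Rank discrete features by mutual information with the class label,
# I(X;Y) = sum_{x,y} p(x,y) * log2( p(x,y) / (p(x) * p(y)) ).
from collections import Counter
from math import log2

def mutual_information(xs, ys):
    n = len(xs)
    px, py, pxy = Counter(xs), Counter(ys), Counter(zip(xs, ys))
    return sum((c / n) * log2((c / n) / ((px[x] / n) * (py[y] / n)))
               for (x, y), c in pxy.items())

# Toy data: feature A reproduces the label, feature B is independent of it
labels = [0, 0, 1, 1]
feat_a = [0, 0, 1, 1]   # identical to the label -> I = H(Y) = 1 bit
feat_b = [0, 1, 0, 1]   # independent of the label -> I = 0 bits
print(mutual_information(feat_a, labels))  # 1.0
print(mutual_information(feat_b, labels))  # 0.0
```

IIFS goes further by scoring 3-way and 4-way interaction information among feature subsets, which this pairwise sketch deliberately omits.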
This paper presents an overview of the effects of applying cognitive automation to tasks which do not require physical activity. The influence of intelligent algorithms is presented in terms of building and maintaining competitive advantage in the marketplace. The following methods are listed as stages in the development of artificial intelligence research: machine learning, natural language processing, deep learning and neural networks, which make it possible to go beyond fixed algorithmic sequences. It is indicated that cognitive computing (conclusions drawn by algorithms, technologies and systems which operate like a human mind) changes the rules of the market game, allowing for predictive conclusions by offering verifiable hypotheses concerning the expected development of a situation. In summary, the analysed results of the study regarding changes in the job market confirm empirically that robotic automation of service processes has reached into tasks which so far have been of a genuinely human, mental nature. The potential for the further algorithmization of non-manual tasks has been created. (original abstract)
In order to meet the growing demands of the market, modern manufacturing and service environments must offer an increasingly broad range of services or products, as well as ensure the required volumes and short lead times. This can be done by employing universal machines or workers who are able to perform different tasks. On the other hand, environments with human activity are often affected by learning. Therefore, in this paper we analyse a related problem, which can be expressed as a makespan-minimization scheduling problem on identical parallel machines with variable setup times affected by the learning of workers. To provide an efficient schedule, we propose metaheuristic algorithms. Their potential applicability is verified numerically. (author's abstract)
The paper contributes to problem solving in the semantic browsing and analysis of scientific articles. With reference to the presented visual interface, the four most popular mapping methods, including the authors' own approach, MDS with spherical topology, were compared. For the comparison, quantitative measures were applied, which allowed the most appropriate mapping to be selected, with an accurate reflection of the dynamics of the data. For the quantitative analysis the authors used machine learning and pattern recognition algorithms and described the clusterization degree, fractal dimension and lacunarity. Local density differences, clusterization, homogeneity and gappiness were measured to show the most acceptable layout for analysis, perception and exploration. A visual interface for analysing how computer science has evolved over the last two decades is presented on a website. The results of both the quantitative and the qualitative analysis revealed good convergence. (original abstract)
While we would like to predict exact values, the information available, being incomplete, is rarely sufficient - usually allowing only conditional probability distributions to be predicted. This article discusses hierarchical correlation reconstruction (HCR) methodology for such a prediction using the example of bid-ask spreads (usually unavailable), but here predicted from more accessible data like closing price, volume, high/low price and returns. Using HCR methodology, as in copula theory, we first normalized marginal distributions so that they were nearly uniform. Then we modelled joint densities as linear combinations of orthonormal polynomials, obtaining their decomposition into mixed moments. Then we modelled each moment of the predicted variable separately as a linear combination of mixed moments of known variables using least squares linear regression. By combining these predicted moments, we obtained the predicted density as a polynomial, for which we can e.g. calculate the expected value, but also the variance to determine the uncertainty of the prediction, or we can use the entire distribution for, e.g. more accurate further calculations or generating random values. 10-fold cross-validation log-likelihood tests were conducted for 22 DAX companies, leading to very accurate predictions, especially when individual models were used for each company, as significant differences were found between their behaviours. An additional advantage of using this methodology is that it is computationally inexpensive; estimating and evaluating a model with hundreds of parameters and thousands of data points by means of this methodology takes only a second on a computer. (original abstract)
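The first step of the HCR pipeline, normalizing each marginal to be nearly uniform on [0, 1], can be done with the empirical CDF: replace each value by (rank + 0.5) / n. A minimal sketch (my illustration; the article's later steps, fitting orthonormal-polynomial mixed moments, are not shown):

```python
# Normalize one marginal to a nearly uniform distribution on [0, 1]
# using the empirical CDF: value -> (rank + 0.5) / n.

def normalize_marginal(xs):
    order = sorted(range(len(xs)), key=lambda i: xs[i])
    u = [0.0] * len(xs)
    for rank, i in enumerate(order):
        u[i] = (rank + 0.5) / len(xs)
    return u

# Toy closing prices: the heavy outlier 150.0 simply becomes the top rank
prices = [101.2, 99.8, 100.5, 150.0, 100.1]
u = normalize_marginal(prices)
print(u)  # [0.7, 0.1, 0.5, 0.9, 0.3]
```

This rank transform is why HCR, like copula methods, is insensitive to the shape of the marginals; only the dependence structure remains to be modelled.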
Machine learning has received increased interest from both the scientific community and industry. Most machine learning algorithms rely on distance metrics that can only be applied to numeric data, which becomes a problem in complex datasets containing heterogeneous data consisting of numeric and nominal (i.e. categorical) features; hence the need to transform nominal data into numeric. Weight of evidence (WoE) is one of the parameters that can be used to transform nominal features into numeric ones. In this paper we describe a method that uses WoE to transform the features. Although the applicability of this method has been researched to some extent, we extend it to multi-class problems, which is a novelty. We compared it with the method that generates dummy features, testing both on binary and multi-class classification problems with different machine learning algorithms. Our experiments show that the WoE-based transformation generates a smaller number of features than the dummy-feature technique while also improving classification accuracy, reducing memory complexity and shortening execution time. Be that as it may, we also point out some of its weaknesses and make recommendations on when to use the dummy-feature method instead. (original abstract)
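In the binary case, WoE replaces each category c with ln(P(c | positive) / P(c | negative)), turning one nominal feature into a single numeric column instead of several dummy columns. A minimal sketch (my implementation; the smoothing constant is an assumption to avoid log(0)):

```python
# Weight-of-evidence encoding for a single nominal feature (binary labels).
from math import log

def woe_encode(categories, labels, smoothing=0.5):
    pos_total = sum(labels)
    neg_total = len(labels) - pos_total
    table = {}
    for c in set(categories):
        pos = sum(1 for ci, yi in zip(categories, labels) if ci == c and yi == 1)
        neg = sum(1 for ci, yi in zip(categories, labels) if ci == c and yi == 0)
        # smoothing avoids log(0) for categories seen in only one class
        table[c] = log(((pos + smoothing) / (pos_total + smoothing)) /
                       ((neg + smoothing) / (neg_total + smoothing)))
    return table

colors = ["red", "red", "blue", "blue", "blue", "red"]
labels = [1, 1, 0, 0, 1, 0]
table = woe_encode(colors, labels)
# "red" is more common among positives -> positive WoE; "blue" -> negative
print(table["red"] > 0, table["blue"] < 0)  # True True
```

The paper's multi-class extension generalizes this one-versus-rest log-odds idea; the binary sketch above shows why the result is one compact numeric feature.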
11
Variable selection in classification: a proposed algorithm
Selection of variables in classification is important both for single and for aggregated methods. The simplest way of selecting variables is to check their correlation with the proper classification of objects on the training set. This natural approach, however, has serious limitations stemming from the fact that the weaker the measurement scale of a variable, the harder it is to measure the strength of correlation. The paper proposes a method of measuring the strength of correlation by means of the linear correlation coefficient between the distances between pairs of observations on an arbitrary single attribute and on the class-label attribute. Attributes whose correlation falls below a certain threshold are rejected. The efficiency of the method is investigated on data sets from the UCI Machine Learning Repository. The results are compared with the stepclass and Boruta procedures available in the R language. (original abstract)
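The proposed criterion can be sketched directly from the abstract (my implementation, not the authors' code): for every pair of objects compute the distance on a single attribute and the distance on the class labels (0 for the same class, 1 otherwise), then take the Pearson correlation of the two distance lists.

```python
# Score one attribute by correlating pairwise attribute distances with
# pairwise class-label distances (0 = same class, 1 = different class).
from itertools import combinations
from math import sqrt

def pearson(a, b):
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    sa = sqrt(sum((x - ma) ** 2 for x in a))
    sb = sqrt(sum((y - mb) ** 2 for y in b))
    return cov / (sa * sb)

def attribute_score(values, labels):
    pairs = list(combinations(range(len(values)), 2))
    dv = [abs(values[i] - values[j]) for i, j in pairs]
    dl = [0.0 if labels[i] == labels[j] else 1.0 for i, j in pairs]
    return pearson(dv, dl)

labels = [0, 0, 0, 1, 1, 1]
values_good = [1.0, 1.1, 0.9, 5.0, 5.2, 4.8]  # separates the classes well
values_bad = [1.0, 5.0, 0.9, 1.1, 5.2, 4.8]   # shuffled, uninformative
print(attribute_score(values_good, labels) > attribute_score(values_bad, labels))  # True
```

Attributes scoring below a chosen threshold would then be rejected, as in the paper.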
This paper reviews the existing literature on combining metaheuristics with machine learning methods and then introduces the concept of learnheuristics, a novel type of hybrid algorithm. Learnheuristics can be used to solve combinatorial optimization problems with dynamic inputs (COPDIs). In these COPDIs, the problem inputs (elements located either in the objective function or in the constraint set) are not fixed in advance as usual. On the contrary, they may vary in a predictable (non-random) way as the solution is partially built according to some heuristic-based iterative process. For instance, a consumer's willingness to spend on a specific product might change as the availability of this product decreases and its price rises. Thus, these inputs might take different values depending on the current solution configuration. These variations in the inputs may require coordination between the learning mechanism and the metaheuristic algorithm: at each iteration, the learning method updates the input model used by the metaheuristic.
13
Automatic diagnosis of primary headaches by machine learning methods
Primary headaches are a common disease of modern society, with a highly negative impact on the productivity and quality of life of the affected person. Unfortunately, precise diagnosis of the headache type is hard and usually imprecise, so methods of headache diagnosis remain the focus of intense research. The paper introduces the problem of primary headache diagnosis and presents its current taxonomy. The considered problem is simplified to a three-class classification task, which is solved using advanced machine learning techniques. Experiments carried out on a large dataset collected by the authors confirmed that computer decision support systems can achieve high recognition accuracy and can therefore be a useful tool in everyday physician practice. This is the starting point for future research on the automation of primary headache diagnosis.
The paper presents an improved sample-based estimation of rule probability, an important indicator of rule quality and credibility in machine learning systems. It concerns rules obtained, e.g., with the use of decision trees and rough set theory. Particular rules are frequently supported by only a small or very small number of data pieces. Rule probability is mostly investigated with global estimators such as the frequency, Laplace or m-estimator, constructed for the full probability interval [0, 1]. The paper shows that the precision of rule probability estimation can be considerably increased by using m-estimators specialized for an interval [ph_min, ph_max] given by a problem expert. The paper also presents a new interpretation of the m-estimator parameters, which can be optimized in the estimators. (original abstract)
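The classical estimators the paper improves upon are simple arithmetic: with n samples covered by a rule and k of them confirming it, the frequency estimate is k/n, the Laplace estimate is (k+1)/(n+2), and the m-estimate is (k + m*p0)/(n + m) with prior p0 and pseudo-count m. The interval-restricted variant below is only a crude stand-in (a clip) for the paper's specialized estimator, which I do not reproduce:

```python
# Classical rule-probability estimators for a rule covering n samples,
# k of which confirm it (illustrative sketch).

def m_estimate(k, n, m=2.0, p0=0.5):
    return (k + m * p0) / (n + m)

def interval_m_estimate(k, n, p_min, p_max, m=2.0, p0=0.5):
    # assumption: simple clipping to the expert interval, standing in
    # for the paper's estimator specialized for [p_min, p_max]
    return min(max(m_estimate(k, n, m, p0), p_min), p_max)

# A rule supported by only 3 of 4 samples: the frequency estimate says
# 0.75, but the m-estimate shrinks toward the prior for small samples.
print(m_estimate(3, 4))                      # (3 + 1) / 6 = 0.666...
print(interval_m_estimate(3, 4, 0.7, 0.95))  # 0.7 (clipped up to p_min)
```

With m = 2 and p0 = 0.5 the m-estimate reduces to the Laplace estimate, which shows how the m-estimator generalizes it.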
15
Analysis of the knowledge base of the 2005 presidential election
Based on an analysis of the 2005 presidential campaign, its most important elements were identified and used to build a knowledge base. Analyzing this knowledge base with machine learning methods made it possible to create rules modelling the programme profiles and media image of candidates who gained substantial voter support and of those whose support was considerably smaller. The created rules were analyzed and the conclusions drawn from this analysis are presented. (original abstract)
16
Pattern recognition approach to classifying CYP 2C19 isoform
Open Medicine | 2012 | vol. 7 | no. 1 | pp. 38-44
In this paper a pattern recognition approach to classifying quantitative structure-property relationships (QSPR) of the CYP2C19 isoform is presented. QSPR is correlative computer modelling of the properties of chemical molecules and is widely used in cheminformatics and the pharmaceutical industry. Predicting whether or not a particular chemical will be metabolized by 2C19 is of primary importance to the pharmaceutical industry, and the task poses certain challenges. First of all, the analyzed data are characterized by significant biological noise. Additionally, the training set is unbalanced, with objects from the negative class outnumbering the positives four to one. The presented solution deals with these problems, additionally incorporating thorough feature selection to improve the stability of the received results. A strong emphasis is put on outlier detection and proper model validation to achieve the best predictive power.
17
Objective: The technology developing before our eyes is entering many areas of life and has an increasing influence on human behavior. Undoubtedly, one such area is trading on stock exchanges and other markets that offer investors the opportunity to allocate their capital. Thanks to widespread access to the Internet and the computing capabilities of the computers used in investors' daily activities, the nature of their work has changed significantly compared to what we observed even 10-15 years ago. At present, stock exchange orders may be placed using various types of brokerage investment accounts, which give the investor real-time quotations and open up a whole new range of opportunities; their skillful use during the stock market game can positively influence a player's investment performance. Machine learning is a branch of artificial intelligence and computer science that focuses on using data and algorithms to solve decision-making problems based on large amounts of information: algorithms find patterns and relationships in large data sets and make decisions and predictions based on this analysis.
Methodology: The main objective of this paper is to investigate and evaluate the applicability of machine learning to investment decisions in equity markets. The analysis focuses on so-called day trading, i.e. investing for very short periods of time, often only a single trading session. The hypothesis adopted is that the use of machine learning can contribute to a positive return for a stock market player making short-term investments.
Findings: The paper uses the Microsoft Azure Machine Learning Studio tool, a widely available cloud computing platform, to perform machine learning-based calculations and to allow an investor to create and test a model. The calculations were made according to two schemes: the first teaches the model on 50% of the companies selected at random from all companies, while the second teaches it on 80%.
Value Added: The results indicate that investors can use machine learning to earn returns that are attractive to them. Depending on the teaching model (50% or 80% of companies), daily returns can range from 1.07% to as much as 4.23%.
Recommendations: The results offer investors the prospect of using the presented method in their capital management strategies, which of course requires adapting the techniques used so far to the specifics of machine learning. However, the presented method requires that the data on which the forecast is made be updated each time. Further research is needed to determine the impact of the number of companies on the effectiveness of the learning process. (original abstract)
The objective of this paper is to find new trends in the literature on the internationalization of small and medium enterprises (SMEs) using machine learning and reference management systems. With the help of topic modeling software, 857 articles on the internationalization of SMEs from 2012 to 2021 were analyzed and ranked by citation index through the Endnote® library management system. The search was restricted to the fields of social science and management, and 85 documents were shortlisted from the original cluster for text mining. The results show promising areas of research within the internationalization of SMEs; stand-out topics include resource-based theory, dynamic capabilities, international entrepreneurship and ambidexterity, among others. The Endnote® subject bibliography identified the most popular words and topics in the original database. The results showed that using Endnote® and the MALLET topic modeling tool it is possible to analyze large numbers of publications and find new trends within a specific field. However, MALLET needs experts in the field to identify and translate results into meaningful ideas. Endnote® seems to have a higher level of sophistication and a better visual interface, but its disadvantages include the price of the tool and the fact that it works better with its own libraries or partner journals. (original abstract)
The main objective of this paper is to identify the benefits of applying artificial intelligence (AI) in the audit sector. The study employed a questionnaire for a research sample of 206 auditing and accounting practitioners and students; data were collected via an online survey. A principal axis factor analysis with Promax rotation was conducted to assess the underlying structure of the questionnaire items. The research outcomes indicate that, in the opinion of the respondents, AI adoption increases audit efficiency and enhances client communication and service. Finally, AI can also automate time-consuming and routine tasks. The three indicated factors account for 62.223% of the variance. The findings reveal the advantages of AI adoption and could support managers in deploying new technology in their organizations. A research limitation is that the study focused only on respondents from Poland. (original abstract)
The goal of this paper was to investigate the use of the Extreme Gradient Boosting (XGBoost) algorithm as a forecasting tool. The data provided by the Rossmann company, with a request to design an innovative prediction method, were used as the basis for this case study; they contain details about the micro- and macro-environment, as well as the turnover of 1115 stores. The performance of the algorithm was compared to the classical forecasting models SARIMAX and Holt-Winters, using time-series cross-validation and tests of the statistical significance of differences in prediction quality. The metrics of root mean squared percentage error (RMSPE), Theil's coefficient and the adjusted coefficient of determination were analyzed. The results were then passed to Rossmann for verification on a separate validation set via the Kaggle.com platform. The study confirmed that XGBoost, with proper data preparation and training, achieves better results than the classical models.
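One of the classical baselines from the comparison can be shown in a minimal form (my sketch, with arbitrary smoothing constants): Holt's linear trend method, i.e. Holt-Winters without the seasonal component. A level and a trend are smoothed separately and extrapolated to produce the forecast.

```python
# Holt's linear trend method (Holt-Winters without seasonality):
#   level_t = alpha * x_t + (1 - alpha) * (level_{t-1} + trend_{t-1})
#   trend_t = beta * (level_t - level_{t-1}) + (1 - beta) * trend_{t-1}
#   forecast(h) = level_t + h * trend_t

def holt_forecast(series, horizon, alpha=0.5, beta=0.5):
    level, trend = series[0], series[1] - series[0]
    for x in series[1:]:
        prev_level = level
        level = alpha * x + (1 - alpha) * (level + trend)
        trend = beta * (level - prev_level) + (1 - beta) * trend
    return [level + (h + 1) * trend for h in range(horizon)]

# A perfectly linear turnover series is extrapolated exactly
sales = [10.0, 12.0, 14.0, 16.0, 18.0]
print(holt_forecast(sales, 2))  # [20.0, 22.0]
```

XGBoost, by contrast, learns the mapping from engineered features (calendar, promotions, store attributes) to turnover, which is why the paper's preprocessing step matters so much for its advantage over such smoothing baselines.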